How will we know when the world has achieved AI?
To be clear, there are plenty of claims nowadays about computer systems purported to contain AI, implying that the machine is the equivalent of human intelligence, but you should be wary of such brash and downright misleading assertions.
The aim of those developing AI is to one day have a computer program able to exhibit human intelligence, in the fullest and most wide-ranging way that human intelligence exists and presents itself.
There is no such AI yet.
Confusion on this point is so rampant that the AI field has been compelled to adopt a new moniker to express AI's loftier goal, now proclaiming that the aim is to achieve Artificial General Intelligence (AGI).
This is done in the hope of signaling to the laity and the public at large that the prized and sought-after form of AI would come equipped with common-sense reasoning and a number of other intelligence-related capabilities that humans possess (for more details on the notion of strong AI versus weak AI, along with narrow AI, see my explanation at this link here).
Given the confusion over what constitutes AI and what does not, you may be wondering how we could nonetheless determine whether AI has been unequivocally achieved.
We would rightly insist on more than a mere brash proclamation, and ought to remain skeptical of anyone proposing an AI system claimed to be the genuine article.
Appearances alone would be insufficient to attest to its arrival.
There are plenty of parlor tricks in the AI bag of tricks that can convince many people they are witnessing an AI with impressive human-like qualities (see my coverage of this kind of deception at this link here).
No, it is not enough to take someone's word about their AI, nor to merely kick the AI's tires to assess its merits.
There has got to be a better way.
AI insiders have tended to view a type of test known as the Turing Test as the gold standard for certifying an AI as bona fide AI or, shall we say, AGI.
Named after its creator, Alan Turing, the famed mathematician and computing pioneer, the Turing Test was devised in 1950 and remains applicable today (here is a link to the original paper).
Parsimoniously stated, the Turing Test is relatively simple to describe and deceptively simple to undertake (for my deeper analysis of this, see the link here).
Here's a look at the nature of the Turing Test.
Imagine that we have a human hidden behind one curtain and a computer hidden behind another, such that you cannot see behind either curtain and cannot directly discern what or who resides there.
The human and the computer are considered contestants in a contest that will be used to ascertain whether AI has been achieved.
Some prefer to call them "subjects" rather than contestants, given the notion that this is more of an experiment than a game show, but in any case they are participants in a form of challenge or contest involving minds and intelligence.
There is no arm wrestling involved, no physical feats of any kind.
The proceedings are entirely a matter of the intellect.
A moderator serves as the interviewer (also referred to as the "judge," given the decisive role involved) and proceeds to ask questions of the two hidden participants.
Based on the answers to those questions, the moderator attempts to indicate which curtain hides the human and which hides the computer. That is the judging facet. Simply put, if the moderator is unable to distinguish between the two contestants as to which is the human and which is the computer, presumably the computer has "proven" that it is the equivalent of human intelligence.
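To make the mechanics concrete, here is a minimal sketch in Python of the imitation game's structure. Everything in it is a hypothetical stand-in of my own (the canned respondents, the toy judge, the sample questions); it is illustrative scaffolding, not a real evaluation harness.

```python
import random

def human_respondent(question: str) -> str:
    # Hypothetical stand-in for the hidden human contestant.
    return f"Well, let me think... about '{question}', I'd say it depends."

def ai_respondent(question: str) -> str:
    # Hypothetical stand-in for the hidden AI contestant.
    return f"Regarding '{question}', the answer is: it depends."

def imitation_game(questions, judge) -> bool:
    """Hide the two respondents behind randomly assigned curtains, pose
    every question to both, and ask the judge to name the AI's curtain.
    Returns True if the judge guesses wrong (i.e., the AI 'passes')."""
    respondents = [human_respondent, ai_respondent]
    random.shuffle(respondents)  # the moderator must not know who is where
    curtains = {"A": respondents[0], "B": respondents[1]}
    transcript = {label: [(q, answer(q)) for q in questions]
                  for label, answer in curtains.items()}
    guess = judge(transcript)  # the judge returns "A" or "B" as the suspected AI
    return curtains[guess] is not ai_respondent

def naive_judge(transcript) -> str:
    # A toy judging heuristic: assume the terser respondent is the machine.
    totals = {label: sum(len(ans) for _, ans in qa)
              for label, qa in transcript.items()}
    return min(totals, key=totals.get)

if __name__ == "__main__":
    questions = ["How do you make a bean burrito?", "Are you human?"]
    print("AI passed:", imitation_game(questions, naive_judge))
```

The key structural point the sketch surfaces is that the judge sees only a transcript of labeled answers, never the contestants themselves; everything rides on what the questions manage to elicit.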
Turing originally dubbed this the "imitation game," since the AI seeks to imitate the intelligence of humans. Keep in mind that the AI does not necessarily have to be designed in the same manner as humans, and there is no requirement that the AI have a brain or use neurons and the like. Thus, those devising AI are welcome to use Lego bricks and duct tape if doing so achieves the equivalent of human intelligence.
To pass the Turing Test, the computer has to answer the questions with the same semblance of intelligence as a human. The Turing Test is failed if the moderator can declare which curtain houses the computer, implying that there was some kind of telltale clue that gave the AI away.
In general, this seems a handy and sensible way to figure out whether an AI has reached the vaunted AGI versus being a lesser AI.
Of course, as with most things in life, there are difficulties and twists on this matter.
Imagine that we set up a stage with two curtains and a podium for the moderator. The contestants are hidden from view.
The moderator takes the podium and asks one of the contestants how to make a bean burrito, then asks the other contestant how to make a mortadella sandwich. Suppose the answers are apt and duly describe the steps of making a bean burrito and a mortadella sandwich, respectively.
The moderator decides not to ask any further questions.
Voila, the moderator announces, this AI is indistinguishable from human intelligence, and the AI is declared forthwith to be the best of AI, the long-sought AGI.
Should we settle for this decree?
I don’t think so.
This highlights a vital detail of the Turing Test, namely that the moderator must ask a sufficient diversity and depth of questions to ferret out the embodiment of intelligence. When the questions are shallow or insufficient, any conclusion reached is dubious at best.
Also note that there is no specific set of questions that has been vetted and accepted as the "right" ones for conducting a Turing Test. Admittedly, some researchers have tried to recommend the kinds of questions that ought to be asked, but this remains an ongoing debate and, to some degree, reveals that we are not even entirely sure what intelligence itself is (it is hard to devise metrics and measures for something that is relatively ill-defined and ontologically squishy).
There is another quandary, this one regarding the contestants and their behavior.
For example, suppose the moderator asks the contestants whether they are human.
The human can presumably answer yes, doing so honestly. The AI might say that it is not a human, opting to be honest too, but that decidedly ruins the test and undermines the spirit of the Turing Test.
Perhaps the AI will instead lie and claim to be human. There are ethicists, though, who would decry such a response, arguing that we do not want AI to be a liar, and therefore the AI should never be permitted to lie.
Of course, the human can also lie and deny being the human in this contest. If we seek to make AI the equivalent of human intelligence, and if humans lie, which we all know humans do from time to time, shouldn't the AI be allowed to lie too?
In any case, the point is that the contestants can go along with the Turing Test or opt to undermine or distort it, which some say is fine, and it is up to the moderator to figure out what to make of it all.
All's fair in love and war, as they say.
And how crafty can the moderator be?
Suppose the moderator asks each contestant to calculate the answer to a complex mathematical equation. The AI might instantly produce a precise answer of 8.27689459, while the human struggles to do the calculation by hand and comes up with an answer of 9.
Aha, the moderator has tricked the AI into revealing itself, and the human into revealing that he is the human, by posing a question that an automated AI can answer with ease and that a human would have difficulty answering.
Believe it or not, for this very reason, AI researchers have proposed the advent of what some describe as artificial stupidity (for detailed aspects of this topic, see my coverage here). The idea is that the AI will intentionally appear to be "dumb" by offering answers as though they had been prepared by a human. In this case, the AI could simply state that the answer is 8, making its response quite similar to the human's.
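As a purely illustrative sketch of that trick (the function, thresholds, and phrasing below are hypothetical choices of mine, not any real system's behavior), the dumbing-down might look like this:

```python
import random

def plausibly_human_answer(true_value: float) -> str:
    """Toy illustration of 'artificial stupidity': degrade a machine-precise
    result into the rough answer a human doing mental arithmetic might give.
    (Hypothetical sketch; no claim that real systems work this way.)"""
    rough = round(true_value)           # drop the telltale decimal precision
    if random.random() < 0.3:           # occasionally miss by one, as humans do
        rough += random.choice([-1, 1])
    return f"Hmm, roughly {rough}, I think."

# The machine-precise 8.27689459 becomes something like "Hmm, roughly 8, I think."
print(plausibly_human_answer(8.27689459))
```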
You might believe that having the AI intentionally aim to make mistakes or play dumb (this has been coined the "dimwit" gambit by AI insiders, see my explanation at this link here) is unseemly and disturbing, and not everyone necessarily agrees that it is a wise move.
We let humans play the fool, but having an AI do so, especially when it "knows better," could be a pernicious and harmful slippery slope.
The reverse Turing Test rears its head
So far, I have described the conventional version of the Turing Test.
Next, consider a variant that some like to call the reverse (or inverted) Turing Test.
Here's how it works.
The human contestant decides that he will pretend to be the AI. As such, he attempts to provide answers indistinguishable from the kind of answers the AI would give.
Recall that in the conventional Turing Test, the AI tries to appear indistinguishable from a human. In the reverse Turing Test, the human contestant tries to flip that notion, acting as though he were the AI and indistinguishable from it.
Well, that sounds interesting, but why would the human do that?
It might be done just for fun, a kind of lark among people who enjoy developing AI systems. It might also be done as a challenge, attempting to imitate or mimic an AI system and seeing whether one can do so effectively.
Another rationale, one with more skill or merit to it, involves doing what is called a Wizard of Oz.
When a programmer is developing software, they sometimes pretend to be the program, putting up a facade or interface for others to interact with the budding system, though those users do not realize that the programmer is monitoring their interactions and is able to interject as well (doing so covertly, behind the screen, without revealing their presence).
Doing this kind of walkthrough can reveal how readily end users are able to use the software, and meanwhile keep them within the software's flow, because the programmer quietly intervenes to overcome any deficiency of the program that might otherwise have derailed the effort.
This presumably explains why it is called a Wizard of Oz, invoking the hidden human who knowingly and secretly plays the role of the wizard.
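For flavor, here is a bare-bones sketch of the pattern in Python (everything named here is hypothetical). The end user types to what appears to be software, while a concealed operator supplies the replies; in a real study the operator would sit at a separate hidden terminal, collapsed here to one console purely for brevity.

```python
def wizard_of_oz_session() -> None:
    """Bare-bones Wizard of Oz sketch: the user believes they are chatting
    with software, while a hidden human operator types the replies.
    (Hypothetical demo; a real setup would fully conceal the operator.)"""
    print("Welcome to the prototype assistant. Type 'quit' to exit.")
    while True:
        user_text = input("You: ")
        if user_text.strip().lower() == "quit":
            break
        # The hidden operator reads the user's input and crafts a reply.
        reply = input("[hidden operator types]: ")
        print(f"Assistant: {reply}")

if __name__ == "__main__":
    wizard_of_oz_session()
```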
Returning to the reverse Turing Test, the human contestant might pretend to be the AI in order to discover where the AI falls short, thereby becoming better able to improve the AI and continue the pursuit of AGI.
In this manner, a reverse Turing Test can be used for laughs and for profit alike.
The upside-down Turing Test turns things topsy-turvy
Some people think we can go further still, toward what might be called the upside-down Turing Test.
Yes, it's true, there is yet another variant.
In the upside-down Turing Test, the moderator is replaced with the AI.
Come again?
This lesser-discussed variant has the AI serve as the judge or interrogator, rather than a human. The AI poses questions to the two contestants, consisting of an AI and a human, and then renders an opinion as to which is which.
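Structurally, the change to the earlier toy harness is tiny: swap a machine judge into the moderator's seat. The heuristic below is again a hypothetical stand-in of mine, not a claim about how an AI interrogator would actually reason.

```python
def ai_judge(transcript) -> str:
    """Toy stand-in for an AI serving as the interrogator: it flags as the
    machine the respondent whose answers contain the fewest human-style
    hedges. (Hypothetical heuristic, purely illustrative.)"""
    hedges = ("well", "i think", "let me", "maybe")
    def hedge_count(qa):
        return sum(any(h in answer.lower() for h in hedges)
                   for _, answer in qa)
    return min(transcript, key=lambda label: hedge_count(transcript[label]))

# Reusing imitation_game() and questions from the earlier sketch:
# print("AI passed:", imitation_game(questions, ai_judge))
```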
Your first qualm might be that the AI gets two seats in this game, and as such, this is cheating or simply an absurd arrangement. Those who favor this variant are quick to point out that Turing's original test has a human as the moderator and a human as a contestant, so why not allow the AI to do likewise?
The knee-jerk answer is that humans are each different from one another, while the AI is presumably all the same and undifferentiated.
This is where those keen on the upside-down Turing Test would say you are mistaken in that assumption. They argue that we will have multitudes of AI, each of which will be its own differentiated instance, akin to how humans are each distinct (in short, the argument is that AI will be polylithic and heterogeneous, rather than monolithic and homogeneous).
The counterargument is that AI is presumably just software and machinery, all of which can be readily melded with other software and machines, whereas you cannot seamlessly meld humans and their brains. Each of us has a brain sealed within our skull, and there is no known way to directly merge them or network them together.
Anyway, this back-and-forth continues, each retort prompting a rejoinder, and it is not apparent that the upside-down variant can be summarily dismissed as invalid.
As you might guess, there is also an upside-down reverse Turing Test, mirroring the pairing of the traditional Turing Test and its counterpart, the reverse Turing Test (some, by the way, dislike the upside-down label altogether and insist that this added variant is merely another offshoot of the reverse Turing Test).
Suppose you reluctantly agree to let the AI occupy two positions at once, having one AI as the interrogator and another AI as a contestant.
What good does that do anyway?
One idea is that it potentially helps to further gauge whether the AI is intelligent, which could be evidenced in the nature of the questions asked and in how the AI digests the answers provided, illustrating the AI's capacity to act as the equivalent of a human judge or interrogator.
That is the mundane or banal explanation.
Are you ready for the scary version?
It has to do with superintelligence, as I will describe next.
Some assert that AI will eventually exceed human intelligence and attain artificial superintelligence (ASI).
The word "super" is not meant to imply the powers of Superman or Superwoman; instead, it means that the AI's intelligence would be beyond our human intelligence, though not necessarily able to leap tall buildings or move faster than a speeding bullet.
No one can say what such an ASI or superintelligence might think, and we humans have sufficiently limited intelligence that we cannot see beyond our own limits. As such, the ASI might be intelligent in ways we cannot even predict.
That is why some regard AI or AGI as a potential existential risk to humanity (this is what Elon Musk has repeatedly been vocal about, see my coverage at this link here), and ASI is presumed to be an even greater risk.
If you are intrigued by this existential-risk argument, note that, as I have pointed out many times (see link here), there are plenty of ways in which AI or AGI or ASI might aid humanity and help us prosper, rather than only the apocalyptic scenarios in which we are squashed like bugs. In addition, there is a rising wave of interest in AI ethics, thankfully, which can help avert or mitigate any long-term AI calamities (for more on AI ethics, see my discussion at this link here).
That said, it certainly makes sense to prepare for the doomsday scenario, given the obvious discomfort and unhappy outcome that would otherwise result. I assume none of us wants to be summarily squashed out of existence like a bothersome pest and swept tidily away.
Returning to the upside-down Turing Test, an ASI might sit in the moderator's seat and judge whether a "conventional" AI has yet reached the vaunted level of AI that would allow it to pass the Turing Test and be indistinguishable from human intelligence.
Depending on how far down the rabbit hole you care to go, at some point the Turing Test could have two seats for ASI and one seat for AI. That is, the moderator would be an ASI, while a conventional AI serves as one contestant and another ASI serves as the other contestant.
Note that there is no human in the mix at all.
Maybe we would call it the Turing Takeover Test.
No humans needed; none allowed.
Conclusion
AI is unlikely to be devised merely for the sake of creating AI; instead, there will be purpose-driven reasons for why humans craft AI.
One of those purposes is the desire to have self-driving cars.
A true self-driving car is one in which the AI drives the car and there is no need for a human driver. The only role of a human is as a passenger, and never as a driver.
A thorny question right now is what level or caliber of AI is needed to attain self-driving cars.
Some people believe that until AI reaches the vaunted AGI, we will not have true self-driving cars. Indeed, those of that opinion would likely say the AI must achieve sentience, perhaps via a moment of transition from automation to a spark of being, dubbed the moment of singularity (to learn more, see my analysis at this link here).
Hogwash, some counter, insisting that we can have AI that is not necessarily up to the Turing Test and yet can nonetheless drive cars in a pleasant and safe manner.
To be clear, at this time there is no self-driving car whose AI comes anywhere near AGI, so for now we are finding out whether plain-vanilla AI can be sufficient to drive a car. As an aside for AI insiders, some refer to the classic symbolic techniques of AI as GOFAI, or Good Old-Fashioned Artificial Intelligence, a moniker that is endearing and, to some degree, a small dig, all at once (see more in my explanation here).
When you ponder the situation, from one viewpoint you could say that we are conducting a Turing Test on our streets today, allowing self-driving cars to cruise among human-driven cars, and if the AI-driven car is indistinguishable in terms of driving properly, it is passing a driver-oriented Turing Test.
Critics worry that we are allowing a Turing Test to take place before our very eyes, one that could unknowingly endanger the rest of us by dragging us into a dicey experiment, while others argue that with the use of backup human drivers in the vehicles we are probably fine (to learn more about the qualms on this facet, see my discussion here).
In any case, the Turing Test is a vital tool in the AI toolkit, and whether it is the classic Turing Test, the reverse Turing Test, or the upside-down Turing Test, let's aim to create an AI that wants to be our friend and not our foe.
That might be the ultimate test of all.
Dr. Lance B. Eliot is a world-renowned expert on artificial intelligence (AI) with over 3 million accumulated views of his AI columns. As a seasoned high-tech executive and entrepreneur, he combines practical industry experience with in-depth academic research to provide cutting-edge insights into the present and future of AI and ML technologies and applications. A former USC and UCLA professor and head of a pioneering AI lab, he speaks at major AI industry events. Author of more than 40 books, 500 articles, and 200 podcasts, he has appeared in media outlets such as CNN and co-hosted the popular Technotrends radio show. He has served as an advisor to Congress and other legislative bodies and has received numerous awards and recognitions. He sits on several boards of directors, has worked as a venture capitalist, angel investor, and mentor to founder entrepreneurs and startups.