Strong AI.
Weak AI.
Strong AI versus weak AI.
Or, if you prefer, weak AI versus strong AI (you can list them in any order).
If you read about AI in the popular press, you are likely to notice references to so-called strong AI and so-called weak AI, and yet it is likely that either expression has been misused, offering misleading and confusing impressions.
It’s time to make things right.
First, let’s cover what is wrong.
Some speak of weak AI as though it were feeble AI, lacking the capabilities of strong AI, adding that weak AI is decidedly slower, or much less optimized, or in some other manner unavoidably and undoubtedly weaker in its AI capabilities.
No, that’s not it at all.
Another distortion is to characterize “narrow” AI, which refers to AI that works only in a strictly delimited domain, such as a specific medical application or a particular financial analysis use, as being the same as weak AI, while supposedly strong AI is broader and more general.
No, it’s not that either.
And on and on the wrongness goes.
One can be sympathetic and admit that the words “weak” and “strong” carry connotations that would lead you to assume those interpretations in an AI context might well be correct.
Unfortunately, that is not the case.
Simply put, that is not what weak AI and strong AI mean.
Those versed in the AI field will probably lament that the words “weak” and “strong” were anointed as labels, since doing so has generated a muddled mess about what people assume or believe those words mean in an AI context.
Over time, many have never bothered to closely examine and become informed about what weak AI and strong AI were intended to describe (the meanings getting twisted and distorted via lazy or misinformed use of the lexicon), or have chosen to repurpose the phraseology for myriad other uses (doing so willfully, adding fuel to the fire of misinterpretation).
I’m not saying that you can’t repurpose and redefine terminology; however, doing so creates added confusion and needless argument over what one person means versus what someone else means.
In essence, yes, a rose by any other name is still a rose, but an apple is not an orange, even if you spend all day calling oranges apples.
Now that I’ve covered what those AI terms don’t mean, let’s look at what they do mean (or at least what their original meaning was).
The meaning of strong AI and weak AI
Go back to an earlier era of AI, the late 1970s and early 1980s, a period characterized as the first AI boom, which you may know as the time when knowledge-based systems (KBS) and expert systems (ES) were popular.
The latest era, the one underway today as the second rise of AI, is known as the time of Machine Learning (ML) and Deep Learning (DL).
Using a season-oriented metaphor, the current era is depicted as the AI Spring, while the period between the first era and this present second era has been called the AI Winter (so named to suggest that things were dormant or slowed down, much as a winter season can clamp down via snow and other dampening weather conditions).
The first era consisted of quite a bit of hand-wringing about whether AI was going to become sentient and, if so, how we would get there.
Similar discussions and debates are still taking place in this second era, though the first era really seemed to take the matter fully in hand, and slews of philosophers jumped onto the AI bandwagon to weigh in on what the future might hold and whether AI could or could not become truly intelligent.
Into that fray came the birth of the monikers of weak AI and strong AI.
Most would agree that the verbiage originated in a paper by philosopher John Searle entitled “Minds, Brains, And Programs” (see link here), though not everyone at the time remembers it quite that way, in the sense that some suggest the wording of “weak” and “strong” was already floating around; in any case, Searle’s paper put it firmly into writing and became a handy and tangible flash-point on the matter (and, for that, he certainly deserves due credit).
So what was weak AI and what was strong AI?
They are philosophical differences about how AI might ultimately be achieved, assuming that you agree as to what it means to achieve AI (more on this in a moment).
Let’s see what Searle said about defining the terminology of weak AI:
“According to weak AI, the principal value of the computer in the study of the mind is that it gives us a very powerful tool. For example, it enables us to formulate and test hypotheses in a more rigorous and precise fashion.”
And, furthermore, he indicated this about strong AI:
“But according to strong AI, the computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states.”
With this added clarification:
“In strong AI, because the programmed computer has cognitive states, the programs are not mere tools that enable us to test psychological explanations; rather, the programs are themselves the explanations.”
The rest of his famous (now infamous) paper then proceeds to indicate that he has “no objection to the claims of weak AI,” and thus he doesn’t particularly tackle the weak AI side of things; instead, his focus goes mainly toward the portent of strong AI.
In short, he doesn’t have much faith or belief that strong AI is anything worth writing home about either; he says this:
“On the argument advanced here only a machine could think, and only very special kinds of machines, namely brains and machines with internal causal powers equivalent to those of brains. And that is why strong AI has little to tell us about thinking, since it is not about machines but about programs, and no program by itself is sufficient for thinking.”
Here’s what that signifies, at least as has been interpreted by some.
Conventional AI is doomed in its strivings toward true AI if it adheres to the use of “computer programs,” because those programs are never going to cut it, lacking the capacity to embody the things we associate with thinking and sentience.
Humans and animals have a kind of intentionality, arising from the use of our brains, and for those who believe that true AI demands this intentionality, the pursuit of “computer programs” is barking up the wrong tree (programs are not the right stuff and cannot climb the intelligence ladder).
All of this rests on two key assumptions or propositions made by Searle:
1. “Intentionality in human beings (and animals) is a product of causal features of the brain…”
2. “Instantiating a computer program is never by itself a sufficient condition of intentionality.”
If your goal is to devise a computer program that can think, you are on a fool’s errand and won’t ever get there; it isn’t a completely foolish quest, since you might well learn a lot along the way and garner some really interesting results and insights, but it isn’t going to produce a thinker.
I think it is clear that this is a deeply intriguing philosophical consideration, worthy of academics and other pontificators.
Does it, though, make a difference in the day-to-day work of AI, such that those who build AI-based systems like Alexa or Siri, or robots running on a production line, will worry and lose sleep over it?
No.
To be clear, we are a long, long, long way from designing AI systems capable of exhibiting intelligence at the human level, in any genuine scope, breadth, and depth of human intelligence.
That’s a shocker to some who keep hearing about AI systems said to be as adept as humans.
Take a slow and measured breath and keep reading herein.
Achieving true AI is the key question
As mentioned earlier, some AI programs today work well in narrow domains, though they ought to carry a Surgeon General–style fine-print label identifying the many caveats and limitations of what the AI can actually do.
Today’s AI systems cannot embody or render common-sense reasoning, which I think we can all agree humans generally have (for those who quip that some humans don’t seem to have any common sense, yes, we all know people who appear to lack it, but that is not the same thing as the human capacity commonly thought of as common-sense reasoning, so don’t conflate the two).
To AI insiders, today’s AI applications are narrow AI and not yet AGI (Artificial General Intelligence), the latter being yet another term coined to get around the fact that “AI” has been watered down as terminology and gets used for anything that people want to call AI; meanwhile, others are striving mightily to reach the purists’ version of AI, which would be AGI.
The debate about weak AI and strong AI is aimed at those that wonder whether we will be able to someday achieve true AI.
True AI is a loaded term that needs some clarification.
One version of true AI would be an AI system that can pass the Turing Test, a seemingly simple but revealing test in which you pose questions to an AI system and pose questions to a human being, the two contestants hidden from view in a kind of intelligence imitation game, and if you cannot discern which is which, the AI is presumably the “equivalent” of human intelligence, being indistinguishable from a human exhibiting intelligence.
Although the Turing Test is handy and serves as a tool for judging AI’s strivings toward true AI, it has its drawbacks and problematic considerations (see my analysis at this link here).
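For concreteness, here is a minimal sketch of the imitation-game structure in code (a purely illustrative toy of my own devising; the two respondent functions are hypothetical stand-ins, not real systems):

```python
import random

# Purely illustrative sketch of the Turing Test's imitation-game structure.
# Both respondent functions are hypothetical stand-ins, not real systems.

def human_respondent(question: str) -> str:
    return "Hmm, I suppose it depends on the context."  # a person typing an answer

def ai_respondent(question: str) -> str:
    return "Hmm, I suppose it depends on the context."  # a machine generating text

def imitation_game(questions, judge) -> bool:
    # Hide which respondent is which; the judge sees only the transcripts.
    contestants = [human_respondent, ai_respondent]
    random.shuffle(contestants)
    transcripts = [[answer(q) for q in questions] for answer in contestants]
    guess = judge(transcripts)  # index of the transcript the judge thinks is the AI
    return guess == contestants.index(ai_respondent)

# A judge who can only guess at random will be right about half the time;
# if no judge can reliably beat chance, the AI is said to have "passed."
result = imitation_game(["Can machines think?"], judge=lambda t: random.randint(0, 1))
print("judge identified the AI:", result)
```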
Anyway, how can we craft AI to succeed at the Turing Test and have AI be ostensibly indistinguishable from human intelligence?
One belief is that we will have to imbue the AI system with the same kind of intentionality, thinking, and essence of sentience that exists in humans (and, to some degree, in animals).
As a side note, the day that we reach AI sentience is often referred to as the singularity (see my explanation here), and some believe that it will inevitably be reached and we’ll have then the equivalent of human intelligence, whilst others believe that the AI will exceed human intelligence and we will actually arrive at a form of AI super-intelligence (see this analysis at the link here).
Keep in mind that not everyone buys into the precondition of needing to achieve or recreate artificial intentionality; some claim that we can arrive at an AI exhibiting human intelligence without tossing this fuzzy thing called intentionality and its variants into the mix.
Anyway, setting that aspect aside, the other big question is whether “computer programs” will be the right vehicle to take us there (wherever there is).
This raises a definitional consideration.
What do we mean by computer programs?
At the time when this debate first flourished, computer programs generally meant hand-crafted coding using both conventional and somewhat unconventional programming languages, exemplified by programs such as ELIZA by Weizenbaum and SHRDLU by Winograd.
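To give a flavor of what hand-crafted coding meant in that era, here is a tiny ELIZA-style pattern-matching sketch (my own illustrative toy, not Weizenbaum’s actual program):

```python
import re

# A toy ELIZA-style pattern matcher, illustrating the hand-crafted,
# rule-based programs of the first AI era (not Weizenbaum's actual code).
RULES = [
    (r"i am (.*)",   "Why do you say you are {0}?"),
    (r"i feel (.*)", "What makes you feel {0}?"),
    (r"my (.*)",     "Tell me more about your {0}."),
    (r".*",          "Please go on."),  # catch-all fallback rule
]

def respond(user_input: str) -> str:
    text = user_input.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Please go on."

print(respond("I am worried about AI"))    # Why do you say you are worried about ai?
print(respond("My computer seems smart"))  # Tell me more about your computer seems smart.
```

The trick, of course, is that there is no understanding anywhere in there, merely string matching, which is partly why such programs fueled the weak versus strong AI debate in the first place.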
Today, we are using Machine Learning and Deep Learning, so the obvious question on the minds of those that are still mulling over weak AI and strong AI would be whether the use of ML/DL constitutes “computer programs” or not.
Have we progressed past the old-time computer programs and advanced into whatever ML/DL is, such that we no longer have this albatross around our necks, namely the claim that computer programs aren’t the rocket ship that can get us to the desired moon?
Well, that opens another can of worms, though most would pretty much agree that ML/DL is still a “computer program” in the meaning of even the 1980s expression; so, if you buy into the argument that any use of computer programs, or a variant thereof, is insufficient to arrive at thinking AI, we are still in the same doom-and-gloom state of affairs.
Searle somewhat anticipates the ML/DL topic, in that he mentions this about whether an artificially made machine might think:
“Assuming it is possible to produce artificially a machine with a nervous system, neurons with axons and dendrites, and all the rest of it, sufficiently like ours, again the answer to the question seems to be obvious, yes. If you can exactly duplicate the causes, you could duplicate the effects. And indeed it might be possible to produce consciousness, intentionality, and all the rest of it using some other sorts of chemical principles than those that human beings use.”
Note that today’s ML/DL is by no means the same as human neurons and the human brain.
At best, it is a crude and incredibly simplified simulation, deploying artificial neural networks (ANNs) that are far below anything close to a biological human equivalent. We may one day get nearer and, in fact, some think we will achieve the equivalent, but don’t hold your breath for the moment.
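To make the “incredibly simplified” point concrete, here is roughly the entire core unit of a typical ANN, a single artificial neuron (a minimal sketch with illustrative hand-picked values; real frameworks add layers, training, and many refinements, but the basic unit really is this spare):

```python
import math

# A single artificial "neuron": a weighted sum plus a squashing function.
# This is roughly the entire unit that ANNs stack into layers -- a far cry
# from the electrochemical complexity of a biological neuron.
def artificial_neuron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # sigmoid squashing

# Example: three inputs and hand-picked weights (illustrative values only).
output = artificial_neuron([0.5, 0.1, 0.9], [0.4, -0.2, 0.7], bias=0.1)
print(f"neuron output: {output:.3f}")
```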
Going back to the weak AI and strong AI argument, no matter how you slice things, this is where you land according to Searle:
“But could something think, understand, and so on solely in virtue of being a computer with the right sort of program? Could instantiating a program, the right program of course, by itself be a sufficient condition of understanding?”
And his clear-cut answer is: “This I think is the right question to ask, though it is usually confused with one or more of the earlier questions, and the answer to it is no.”
Ouch!
That smarts.
However, there is a ray of hope for strong AI, in that it could potentially be recast into something that might succeed at the AI thinking milestone, per Searle: “Any attempt literally to create intentionality artificially (strong AI) could not succeed just by designing programs but would have to duplicate the causal powers of the human brain.”
Practical aspects for today
I hope it is clear that the original meanings associated with weak AI and strong AI are a long way from how the popular press tends to use those catchy phrases today.
As mentioned, some use weak AI to refer to narrow AI, but that’s neither the spirit nor the significance of what weak AI means in its original context.
Some use weak AI to suggest an AI system is feeble, but that’s also not at all what the original meaning of weak AI is about.
When you try to point out to others that their use of weak AI and strong AI is not aligned with the original meanings, sometimes they get irked and tell you not to be such a stickler.
Or, they tell you to shake the cobwebs from your brain and get with the modern age.
Fine, I suppose, you can change up the meaning if you want, just please be aware that it is not the same as the original.
This comes up in many applied uses of AI.
For example, consider the emergence of AI-based true self-driving cars.
True self-driving cars are ones in which the AI drives the car entirely on its own, with no human assistance during the driving task.
These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered Level 2 or Level 3. The cars that co-share the driving task are described as semi-autonomous and typically contain a variety of automated add-ons referred to as ADAS (Advanced Driver-Assistance Systems).
There is not yet a true self-driving car at Level 5, and we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.
Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out).
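As a quick reference, here is a minimal sketch of that level taxonomy in code (the one-line descriptions are my own condensed paraphrase of the SAE J3016 categories, not the official wording):

```python
# Condensed shorthand of the SAE J3016 driving-automation levels
# (my own abbreviated wording, not the official definitions).
SAE_LEVELS = {
    0: "No automation: human does all driving",
    1: "Driver assistance: one automated aid (e.g., cruise control)",
    2: "Partial automation: ADAS co-shares; human must stay engaged",
    3: "Conditional automation: car drives, human must take over on request",
    4: "High automation: no human needed within a limited domain",
    5: "Full automation: no human needed anywhere a person could drive",
}

def is_semi_autonomous(level: int) -> bool:
    # Levels 2-3 co-share the driving task with a human driver.
    return level in (2, 3)

def is_true_self_driving(level: int) -> bool:
    # Levels 4-5 are the driverless, AI-only cars discussed above.
    return level in (4, 5)
```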
Some media accounts describe the semi-autonomous ADAS as weak AI, while the fully autonomous AI is strong AI.
Well, that doesn’t comport with the original definitions of weak AI and strong AI.
You’d have to be willing to depart from the original definitions if you are going to use the terms in that way.
Personally, I don’t like it.
Similarly, I don’t like it when weak AI and strong AI are used to characterize the different levels of autonomous driving.
For example, some say that Level 4 is weak AI while Level 5 is strong AI; again, this doesn’t comport with what the terms originally meant.
If you do want to apply the weak AI and strong AI argument to true self-driving cars, there is an ongoing dispute over whether driverless cars will need to have “intentionality” to be safe enough for our public roads.
In other words, can we create an AI without any apparent embodiment of intentionality, yet have that AI be smart enough for us to entrust AI-based self-driving cars to our highways, byways, and daily streets?
It is a complex debate (see my elaboration here), and no one knows yet whether the driving task can be considered narrow enough in scope that such intentionality is not a necessity; wrapped within that question is what can be deemed safe enough for society to be comfortable with autonomous cars in our midst.
Conclusion
Those wishing to deepen their knowledge on this subject should also familiarize themselves with the Chinese Room Argument (CRA), a mainstay of Searle’s argument and something that has long been a punching bag in the hallways of AI and philosophy.
But that’s a story for another day.
Some AI practitioners may view all this discussion about weak AI and strong AI as academic and much ado about nothing.
Use those words however you want, some say.
Take it easy.
Perhaps we should heed the words of William Shakespeare: “Words without thoughts never to heaven go.”
The words we use do matter, especially in the high-stakes pursuits and outcomes of AI.
Dr. Lance B. Eliot is a world-renowned expert on Artificial Intelligence (AI) with over 3 million amassed views of his AI columns. As a seasoned high-tech executive and entrepreneur, he combines practical industry experience with in-depth academic research to provide cutting-edge insights into the present and future of AI and ML technologies and applications. A former USC and UCLA professor and head of a pioneering AI lab, he speaks at major AI industry events. Author of over 40 books, 500 articles, and 200 podcasts, he has appeared in media outlets such as CNN and has co-hosted the popular Technotrends radio show. He has served as an advisor to Congress and other legislative bodies and has received numerous awards and recognitions. He serves on several boards, and has worked as a venture capitalist, an angel investor, and a mentor to founder entrepreneurs and start-ups.