What an AI intelligence explosion might look like, and what it means for AI self-driving cars

Sometimes you set an action in motion and, like falling dominoes, it feeds on itself, churning along almost unstoppably. For example, you may be familiar with those popular YouTube videos of a beaker that, when filled with a special liquid, suddenly erupts in foam, a kind of chain reaction.

History indicates that when the atomic bomb was first developed, some of the scientists involved worried that detonating it might trigger a chain reaction, igniting the atmosphere itself and setting the sky ablaze.

There is another arena in which researchers and scientists now invoke the notion of a chain reaction.

It’s in AI.

Some vehemently claim that we will one day experience an “intelligence explosion” of AI, and various bets peg this as happening sometime between 2050 and 2100.

To be clear, no one knows.

Frankly, it could all be rubbish and there might never be an AI intelligence explosion, ever. Or, there might be an AI intelligence explosion, but it won’t occur until the year 2500, far enough out that none of us will be around to see it happen (unless, I suppose, a miracle anti-aging cure comes along in time, or you have yourself cryogenically frozen so that you can be thawed later and witness the wonder of the AI explosion).

Or maybe those forecasters are right, and it will happen within the next thirty to eighty years.

Get ready.

In any case, let’s take a closer look at the alleged AI intelligence explosion.

Today’s AI systems decidedly do not live up to human intelligence. Current-era computational AI can do remarkable and impressive things, but it is, at best, “narrow AI” that operates only within a limited scope or domain, and it lacks any semblance of the common-sense reasoning that we associate with human intelligence.

True AI, the kind that could pass the vaunted Turing Test (a popularized approach to gauging whether something exhibits intelligence, see my analysis at this link here), is referred to as AGI (Artificial General Intelligence).

There’s no AGI today.

Many are hard at work trying to attain AGI.

Here’s the rub.

Suppose humans work mightily to create AGI, true AI, and no matter how long we toil at it, and no matter how hard we try, in the end we fail to achieve the much-vaunted AGI.

Sad face. Or, smiley face if you’re worried that true AI would seek to annihilate or enslave us (for conjecture about how AI might become our nemesis, or instead our most helpful friend of all time, see my discussion at this link here).

Well, suppose for the moment that humans don’t have what it takes to invent true AI.

Does this mean we’ll be without true AI forever?

This is where the notion of a chain reaction comes in. In theory, maybe we could craft a “smart enough” AI system that then sparks an intelligence explosion, the likes of which we’ve never seen, and lo and behold, true AI emerges. Keep in mind that, in this scenario, we didn’t have what it took to attain AGI directly, but we merely set up the right AI to trigger the chain reaction that led to true AI.

You could say that we would have indirectly brought about the advent of true AI.

Sure, we might deliberately plan for the chain reaction, in which case, kudos to us for figuring it out. Then again, we might have no idea how to spark an intelligence chain reaction, and our AI efforts could suddenly, of their own accord, tumble into an intelligence typhoon.

By design, or by happenstance.

If it happens, it could happen in an instant, in a flash, so quickly that we wouldn’t even realize it was occurring. Or, some believe, it might be a slower emergence, one that takes its time, in which case we might watch, stunned with amazement, as it unfolds.

What might an AI intelligence explosion look like?

Some assert that it will top out once it reaches the full equivalent of human intelligence.

Why would it stop there? Because, it is claimed, perhaps human intelligence is the greatest intelligence possible, and there is nothing beyond it. An opposing view is that the AI intelligence explosion will surpass human intelligence and land somewhere in the realm of so-called superintelligence, also referred to as ultraintelligence.

How high is up? I can’t say. I don’t know.

The assumption is that this superintelligence would certainly exceed our level of intelligence, yet it would still have a cap or upper bound. It presumably would not be an amorphous, all-consuming intelligence. Meanwhile, there are those who suggest that the AI intelligence explosion could be a never-ending chain reaction. The AI would go on and on, getting smarter and smarter, doing so for the rest of eternity.
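The bounded-versus-unbounded question can be made concrete with a toy model of recursive self-improvement. To be clear, this is purely an illustrative sketch of my own devising (the update rule, the starting values, and the function name are all assumptions, not anything drawn from the AI literature): if each generation of self-improvement yields a gain that is some multiple r of the previous gain, the trajectory plateaus when r < 1 and runs away when r > 1.

```python
def intelligence_trajectory(r: float, start: float = 1.0, generations: int = 50) -> list[float]:
    """Toy model of recursive self-improvement (illustrative only).

    Each generation adds an increment that is r times the previous
    increment (a simple geometric series). With r < 1 the gains shrink
    and the level plateaus at start + first_gain / (1 - r); with r > 1
    the gains compound and the level grows without bound.
    """
    levels = [start]
    gain = 0.5  # size of the first self-improvement step (arbitrary)
    for _ in range(generations):
        levels.append(levels[-1] + gain)
        gain *= r  # diminishing (r < 1) or compounding (r > 1) returns
    return levels

bounded = intelligence_trajectory(r=0.5)    # plateaus near 1.0 + 0.5 / (1 - 0.5) = 2.0
unbounded = intelligence_trajectory(r=1.5)  # a runaway "explosion"
```

The point the sketch makes is that the difference between a capped superintelligence and an endless chain reaction hinges on a single knob: whether the returns to self-improvement diminish or compound.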

What would it mean for humanity if the AI intelligence explosion were something that, once started, kept expanding forever?

That might be good, or it might be bad.

Perhaps the AI intelligence begets AI superintelligence, which begets AI super-duper intelligence, which begets something beyond even that, ad infinitum, forever improving.

Those pondering the emergence of AI superintelligence are apt to wonder what a superintelligence might think or do or see or say that those of us with ordinary intelligence cannot think, do, see, or say. The usual answer is that a superintelligence would presumably cure cancer, solve global hunger, figure out how the universe began, and ultimately answer all the vexing puzzles and questions that we humans, possessing merely everyday intelligence, have yet to crack.

One twist is that our ordinary intelligence might not even be sufficient to discern what a superintelligence could think about, meaning that we have no conception of what a superintelligence might conceive, let alone how it would conceive it.

It’s above our pay grade, as they say.

By comparison, we might simply be intellectual ants, and the AI superintelligence might tolerate our existence, perhaps aid our existence, perhaps seek to nudge us up the intelligence scale, or keep us as “pets” (maybe on Earth, or maybe relocated elsewhere), maybe teach us new tricks, or maybe decide we aren’t worth the bother and dispose of us.

Are we being too bold in ultimately seeking to spark an AI intelligence explosion?

There are two sides to that coin.

The AI superintelligence might simply wipe us out, a decidedly negative outcome, or it might save us by showing us how to be peaceful and coexist with one another and with AI.

A cornerstone of the AI literature on the topic of an AI intelligence explosion was first published in 1965 by Irving John Good in a paper titled “Speculations Concerning the First Ultraintelligent Machine” (see link here). He stated that it is more probable than not that, within the twentieth century, an ultraintelligent machine will be built, and that it will be the last invention that man need make, since it will lead to an “intelligence explosion” that will transform society in unimaginable ways.

Well, we’re not there yet, though plenty of people are trying.

It’s intriguing that people interpret those remarks in different ways.

For example, the facet that an AI superintelligence (ultraintelligence, in his parlance) would be the last invention man need make is open to various interpretations.

Such as:

· Does it imply that the AI superintelligence will henceforth make all new inventions, without any aid or contribution from humanity?

· Or does it mean that we won’t need any further inventions at all, because the AI superintelligence will satisfy all our desires?

Another oddity is whether we humans might somehow be swept into the AI intelligence explosion ourselves.

Here’s the idea. If AI becomes super smart, perhaps it will rub off on humans. Consider how, sometimes, being around someone who is really intelligent seems to raise your own game, as you absorb and amplify your own intelligence through the interaction and exposure to theirs. Doesn’t it seem reasonable to expect that if we attained AI superintelligence, it would inevitably lead to a boost in our own intelligence as a species?

Maybe, though if you believe the human brain has a capped intelligence capacity, and if we’re already at that cap, the counterargument is that AI superintelligence can’t do much about our physical limitations, and we aren’t going to get any smarter.

Perhaps, though, the AI superintelligence could devise thinking caps that, sitting atop our heads, would augment our intelligence in ways not otherwise attainable by the brain in its restricted form.

Another quick value-added point here involves the vaunted arrival of something called the AI singularity. The singularity is said to be the point in time at which AI becomes sentient (see my analysis at this link here). You might assume that the singularity must coincide with the moment of the AI intelligence explosion, the two going hand in hand.

Maybe not.

Suppose we never get an AI intelligence explosion and instead reach the singularity some other way, via our own handcrafted AI development, to that degree.

Another scenario is that we reach the singularity, and at that juncture, the AI itself realizes there is more to be done, and is able to spark or provoke an intelligence explosion on its own to continue its evolution (choosing to do so at its own discretion).

The point is that the singularity and the AI intelligence explosion are not necessarily one and the same, and though they obviously have many potential touchpoints in common, they are not necessarily conjoined (though, despite this seeming distinction, some claim they are in fact co-occurring and co-dependent).

Autonomous AI cars and the intelligence explosion

Let’s add to this discussion the role of AI-based true self-driving cars.

True self-driving cars are ones where the AI drives the car entirely on its own and there is no human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to share the driving effort is considered Level 2 or Level 3. Cars that share the driving task are described as semi-autonomous and typically contain a variety of automated add-ons referred to as Advanced Driver Assistance Systems (ADAS).
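For reference, those level distinctions can be jotted down as a small lookup. This is a hedged sketch based on the commonly cited SAE J3016 level names; the dictionary and helper function are my own illustrative constructs, not an official API.

```python
# SAE J3016 driving-automation levels, summarized (illustrative sketch).
# Levels 0-2: a human must supervise at all times (ADAS territory).
# Level 3: a human must be ready to take over on request.
# Levels 4-5: the AI handles the entire driving task within its domain.
SAE_LEVELS = {
    0: "No automation",
    1: "Driver assistance",
    2: "Partial automation (semi-autonomous, ADAS)",
    3: "Conditional automation (semi-autonomous)",
    4: "High automation (true self-driving, limited domains)",
    5: "Full automation (true self-driving, anywhere)",
}

def human_driver_required(level: int) -> bool:
    """True if a human must remain engaged in the driving task."""
    return level <= 3
```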

There isn’t yet a true self-driving car at Level 5, and we don’t even know whether it will be achievable or how long it will take to get there.

Meanwhile, Level 4 efforts are gradually trying to gain some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether such testing should be allowed at all (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out).

For Level 4 and Level 5 true autonomous vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

The AI is driving.

Let’s consider three facets of AI self-driving cars and the AI intelligence explosion:

(1) Could AI self-driving cars be the ingredient placed into the beaker that ferments and yields the AI intelligence explosion?

(2) If AI self-driving cars are not in the beaker per se, and some other AI system beats them to it and becomes the AI intelligence explosion, would AI self-driving cars get swept up, secondhand, into the AI intelligence explosion anyway?

(3) Could AI self-driving cars as we’ve devised them be cast aside as archaic and quaint by an AI intelligence explosion that produces an AI superintelligence, one able to conjure some other mode of transportation so astoundingly impressive that autonomous cars seem like mere toys in comparison?

There’s a lot to talk about and debate.

One view is that the AI self-driving cars being devised today are unlikely to produce the AI intelligence explosion. There’s simply not enough there for one to occur.

Indeed, this spurs a debate about whether the effort to achieve true self-driving cars will require us to first devise and craft true AI or AGI (see my discussion at this link here). If that’s the case, we might be waiting a long time to see true self-driving cars.

Still, the notion of an intelligence explosion does capture the imagination. Think of the astonishment we would all experience if self-driving cars suddenly had their AI driving systems undergo an intelligence explosion, ceasing to be merely our chauffeurs and becoming the drivers of our lives and our future.

It’s almost mind-blowing.

Dr. Lance B. Eliot is a world-renowned artificial intelligence (AI) expert with over 3 million cumulative views of his AI columns. As a seasoned executive and high-tech entrepreneur, he combines hands-on industry experience with in-depth academic research to provide cutting-edge insights into the current and future state of AI and ML technologies and applications. A former professor at USC and UCLA, and head of a pioneering AI lab, he speaks at major AI industry events. Author of over 40 books, 500 articles, and 200 podcasts, he has appeared in media outlets such as CNN and co-hosted the popular Technotrends radio show. He has served as an adviser to Congress and other legislative bodies and has received numerous awards and recognitions. He serves on several boards, has worked as a venture capitalist and angel investor, and mentors founders and startups.
