AI Will Have To Master The Remarkable MacGyver Knack, Including For AI Autonomous Cars

MacGyver solved it again!

Who or what is a MacGyver, you might be wondering?

Well, most people have heard of MacGyver, the TV series whose main character manages to find a clever way out of difficult situations, using his ingenuity to fashion a solution from fairly everyday items.

Fans know he carries a Swiss Army knife instead of a firearm, believing that his creativity and inventiveness will allow him to cope with any untoward circumstance (the knife comes in handy when he has to defuse a bomb, or when he wants to disassemble a toaster and reuse its electronic components for an entirely different purpose that ultimately saves his life).

It turns out that you don’t necessarily need to have seen the show or watched YouTube clips to know what it means to “MacGyver” your way through a difficult task (the term has become part of our everyday lexicon).

In short, we now refer to any especially clever fix as a MacGyver-like approach, meaning an elegant solution to a seemingly unsolvable problem.

Let’s look at that statement.

A very important detail is that the problem itself has to be vexing.

If the problem is straightforward and not fraught with complications, you can solve it in your normal fashion and don’t need to don a MacGyver-style thinking cap.

Another important aspect is that the solution cannot be obvious.

In other words, if even a monkey could see right away how to solve the problem, there is no need to enter the problem-solving stratosphere; you simply take the obvious action and solve the problem.

Okay, so the problem has to be complicated or seemingly unsolvable, and the solution must be non-obvious, requiring real mental effort to find.

What else?

The problem also has to be solvable.

This is vital, and hard to know at the start of the problem-solving process.

Often, when a problem arises, you aren’t sure whether there is any tactic that will solve it.

As such, you might explore a variety of prospective answers and, in doing so, discover whether any of those prospective answers is actually feasible for solving the problem at hand.

In the MacGyver tradition, he always unearths a solution, which is comforting, but in real life you cannot count on a solution being found.

We can say that it is probably helpful to assume a solution is findable, since that belief can buoy you while tackling a thorny problem and keep you motivated.

Those who give up immediately and assume there is no solution have, in effect, thrown in the towel, and therefore will not put in the requisite effort to seek out and find a solution.

That said, there is also the real-world facet that, in the end, there might not be a solution at all (unlike the TV show, which reliably provides a fairy-tale happy ending).

Another peculiarity is that a solution may exist, but only the passage of time will allow it to play out, so you may not be able to immediately resolve the predicament, even if you have already figured out how it can be solved.

How can you have found a solution and yet still face a long wait before the problem is solved?

An example would be a lit candle that slowly burns through a rope, and once the rope is severed by the flame, it frees you from a trap.

In this example, you knew a viable solution, but implementing it took time.

Suppose, however, that you have no apparent means of lighting the candle?

This becomes another form of problem, one that is ancillary to the larger problem of being trapped. It is a “new” problem in the sense that it arises from your chosen solution, and may or may not be a direct component of the original problem of being trapped.

Perhaps there is a box of matches in the other room, which, if you can manage to reach it, would allow you to light the candle and have it burn through the rope to free you from the trap (reminiscent of a Rube Goldberg contraption).

Your effort now turns to the quest of getting hold of those matches.

Time passes, and it turns out that the matchbox gets knocked off a table by a gust of wind that arises, scattering the matches, one of which rolls into your trapped area.

Anyway, the point is that you can’t necessarily pull off a MacGyver right away, and you sometimes have to let time move forward for a solution to become viable or to emerge (on a TV show, the solution has to wrap up in a timely manner, since the episode lasts only thirty minutes or an hour, while in real life things can drag on much longer).

For a true MacGyver scenario, we expect the solution to be both ingenious and elegant at the same time.

This criterion of elegance can be difficult to pin down in words. It is one of those things where, upon seeing it, you can judge whether it was elegant or not (akin to beauty being in the eye of the beholder).

In the TV show, MacGyver is nearly always faced with a life-or-death predicament, but for most real-world applications of the MacGyver-like approach, you usually aren’t dealing with life-or-death matters. The point being that sometimes the MacGyver is handy for ordinary matters involving difficult problems, while in other instances greater matters might be on the line.

This brings us to how we think as humans, as well as how AI systems are designed and the limits of what they have achieved to date. Keep in mind that today’s AI isn’t even close to being anything equivalent to true human intelligence, which could be a shocking point for some, but is nonetheless the case.

Sure, there are pockets of situations whereby an AI application has seemingly been able to perform a task as well as a human might; these are constantly in the news. That though is a far cry from being able to exhibit a full range of intelligence and pass any kind of Turing Test (a famous method in AI to try and ascertain whether an AI system is able to exhibit human intelligence, see my analysis at the link here).

Current AI systems tend to be classified as narrow AI, meaning they can ostensibly “solve” a narrowly defined problem, while such AI is not AGI (Artificial General Intelligence) and lacks human qualities such as common-sense reasoning (see link here).

In fact, a vital concern about the widespread use of machine learning (ML) and deep learning (DL) is that the computational models tend to be brittle, most likely faltering when faced with rare exceptions or edge cases. Any scenario that requires or invites a MacGyver is almost by definition an exceptional or rare case (otherwise, some brute-force rule set or conventional solution method would suffice).
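To make the brittleness point concrete, here is a minimal Python sketch (all names and numbers are hypothetical, not drawn from any real driving stack) of the underlying issue: a model that only recognizes its training distribution can guard against edge cases by routing low-confidence inputs to a separate handler rather than acting on a shaky guess.

```python
# Illustrative sketch of ML brittleness on edge cases (hypothetical names).
# A toy "classifier" is confident only on scenes it was trained on.

TRAINED_SCENES = {"clear_highway", "city_traffic", "parking_lot"}

def classify_scene(scene: str) -> tuple[str, float]:
    """Return a (label, confidence) pair; anything unseen gets low confidence."""
    if scene in TRAINED_SCENES:
        return scene, 0.95   # in-distribution: the model is confident
    return "unknown", 0.10   # edge case: confidence collapses

def handle_scene(scene: str, threshold: float = 0.5) -> str:
    """Route low-confidence (rare/edge) cases to a separate handler."""
    label, confidence = classify_scene(scene)
    if confidence < threshold:
        return "escalate_to_fallback"   # where a MacGyver-like component might live
    return f"proceed:{label}"
```

The point of the sketch is merely the control flow: the routine case proceeds normally, while the rare case is detected and escalated instead of mishandled.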

Here’s an intriguing question to ask: “Will the arrival of true AI-based self-driving cars possibly be delayed by edge cases or exceptional circumstances and, if so, might the use of a MacGyver-type approach aid in overcoming those obstacles?”

The answer is yes on both counts: so-called edge cases (another term for exceptions or rare instances) are a primary safety concern for true self-driving cars, and yes, if AI systems could wield MacGyver-type capabilities, this could help in dealing with those difficult moments.

Let’s unpack the matter and see.

The Levels Of Self-Driving Cars

It is important to clarify what I mean when referring to true AI-based self-driving cars.

True self-driving cars are ones that the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is considered Level 2 or Level 3. Cars that co-share the driving task are described as semi-autonomous, and typically contain a variety of automated add-ons referred to as Advanced Driver Assistance Systems (ADAS).
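As a quick illustrative aid, the level scheme just described (it follows the SAE J3016 taxonomy) can be sketched in Python; the names and helper function are my own shorthand, not an official API:

```python
# A sketch of the SAE J3016 driving automation levels discussed above.
# Labels are my own paraphrase; only the 0-5 numbering is standard.
from enum import IntEnum

class SaeLevel(IntEnum):
    NO_AUTOMATION = 0
    DRIVER_ASSISTANCE = 1
    PARTIAL_AUTOMATION = 2      # semi-autonomous: human co-shares the driving
    CONDITIONAL_AUTOMATION = 3  # semi-autonomous: human must stand ready
    HIGH_AUTOMATION = 4         # true self-driving within a bounded domain
    FULL_AUTOMATION = 5         # true self-driving everywhere

def human_driver_required(level: SaeLevel) -> bool:
    """Levels 0 through 3 keep a human responsible for the driving task."""
    return level <= SaeLevel.CONDITIONAL_AUTOMATION
```

The helper captures the article’s key dividing line: at Levels 4 and 5 no human driver is involved, whereas at Level 2 or 3 the human remains responsible.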

There is not yet a true self-driving car at Level 5, and we don’t even know whether this will be possible or how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different from driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points made next generally apply).

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately, namely that despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take away their attention from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.

Self-Driving Cars And MacGyver

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

The AI is driving.

To date, efforts to develop self-driving cars have generally involved AI that can drive in relatively ordinary driving situations.

This makes sense, i.e., tackle the “easier” things first (for clarity, none of this is easy), meaning the AI driving system can drive in a quiet neighborhood or in everyday traffic conditions.

In addition, if the collected driving data is used to train an ML/DL system, the odds are that most of the driving data reflects everyday driving, and the system will be skewed toward those driving instances.

Think of your own efforts.

Most of the time, you drive while thinking about what you’ll eat for dinner that night, or replaying in your head the difficult conversation you had with your boss the other day, without seemingly paying much attention to the roadway.

This kind of inattentive driving happens all the time.

Then there are those oddball (rare, hopefully) moments where something out of the ordinary snaps you out of your complacency and you must respond instantly.

It may be a life-or-death scenario involving a complicated, real-time challenge amid traffic, in which you must assess what your options are and then undertake those options soon enough to avert death or destruction.

All in a split second.

Most would concede that today’s AI driving systems are decidedly not yet in a position to cope with those moments when the situation is one that the AI driving system has not “seen” before or been explicitly programmed to handle.

A novel or unexpected scenario bodes poorly for today’s AI driving systems, and likewise bodes poorly for the human passengers, pedestrians, and nearby human-driven cars involved.

What can be done about these on-road difficulties?

One common response is to keep pressing ahead with roadway trials and collect lots of driving data, with the hope that eventually all the permutations and conceivable possibilities of driving situations will have been captured, and can then be analyzed and contended with.

We should be cautious about this approach.

Waymo has amassed about 20 million miles of roadway driving in total, and while at first glance that is an impressive number, keep in mind that humans in the U.S. drive more than 3 trillion miles per year, and therefore the odds of finding a needle in a haystack of so comparatively few miles are probably slim.

Experts in the self-driving car industry also know that miles are not just miles, regardless of which automaker is doing the roadway trials, meaning that if you drive the same places over and over, those miles aren’t necessarily as revealing as driving on more radically varied and diverse roads and roadway situations (this is partially a complaint about the so-called driverless car disengagement reports, see my analysis at this link here).

Another proposal is to do simulations.

Automakers and self-driving tech firms tend to use simulations in addition to roadway driving; there is an ongoing debate about whether simulations should be performed before allowing public roadway use or whether it is acceptable to do both concurrently, and there is also debate about whether simulations are good enough to substitute for miles driven (once again, it depends on the nature of the simulation and how it is devised and utilized).

One belief is that AI driving systems ought to have a MacGyver-like component, able to cope with unusual problems that arise while driving.

In particular, it would not be based on having beforehand seen the oddball or edge cases, but would instead be a generalized component that could be invoked whenever the rest of the AI driving system was unable to handle a driving situation.
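A rough sketch of that architectural idea, with entirely invented names and toy logic: a set of handlers for known situations, plus a generalized fallback that is invoked only when the known handlers come up empty.

```python
# Conceptual sketch (hypothetical names): specialized handlers for trained
# situations, and a generalized MacGyver-like fallback invoked otherwise.

def handle_known_maneuvers(event: str):
    """Covers situations the AI driving system was trained/programmed for."""
    known = {"merge": "adjust_speed_and_merge", "red_light": "brake_to_stop"}
    return known.get(event)   # None signals "I can't handle this"

def macgyver_fallback(event: str) -> str:
    """Placeholder for a generalized, improvisational problem solver."""
    return f"improvise_response_to:{event}"

def ai_driving_decision(event: str) -> str:
    action = handle_known_maneuvers(event)
    if action is None:                 # the normal system has hit its limits
        action = macgyver_fallback(event)
    return action
```

The design choice being sketched is the dispatch: the fallback is not another trained case, but a catch-all layer that takes over when the trained cases fail to match.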

In some manner of speaking, it would be like AGI but specifically in the domain of driving cars.

Is that even possible?

Some argue that AGI is either AGI or it is not, thus, trying to suggest that you might construct an AGI for a specific domain is counter to the AGI notion overall.

Others argue that when seated in a car, a human driver is making use of AGI in the domain of driving a car, not solving world hunger or having to deal with just any problems, and thus we ought to be able to focus attention to an AGI for the driving domain alone.

Hey, maybe we should apply MacGyver to the problem of solving edge cases and find an elegant solution to doing so, which might or might not consist of employing a MacGyver into the AI of the driving system.

That’s a mind twister, for sure.

Conclusion

An insightful article on the AI challenges of solving demanding MacGyver-type situations was written by Tufts University researchers Sarathy and Scheutz (here’s the link). The authors note that an AI system would likely need to undertake many arduous tasks and subtasks in carrying out any MacGyver-esque effort, including being able to do impasse detection, domain transformation, problem restructuring, experimentation, discovery detection, domain extension, and so on.
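Purely as a schematic (this is my own illustrative sequencing, not the authors’ actual algorithm), the subtasks the paper enumerates could be arranged as a loop that proceeds until some stage resolves the impasse:

```python
# Schematic rendering of the MacGyver-style subtasks listed above.
# The ordering and stopping rule are illustrative assumptions on my part.

MACGYVER_SUBTASKS = [
    "impasse_detection",      # notice that the usual plan has failed
    "domain_transformation",  # re-represent objects/actions in new terms
    "problem_restructuring",  # recast the goal under the new representation
    "experimentation",        # try candidate uses of available objects
    "discovery_detection",    # recognize when a tryout actually works
    "domain_extension",       # fold the discovery back into the domain model
]

def run_macgyver_loop(solved_by_subtask: dict[str, bool]) -> list[str]:
    """Walk the subtasks in order, stopping once one resolves the impasse."""
    executed = []
    for task in MACGYVER_SUBTASKS:
        executed.append(task)
        if solved_by_subtask.get(task, False):
            break
    return executed
```

Even in this toy form, the loop conveys why the problem is hard: each stage is itself an open research challenge, and all of them may need to run before a workable improvisation emerges.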

Essentially, it is a very hard problem to get an AI system to act like MacGyver, regardless of whether there is a Swiss Army knife available or not.

In the case of an AI driving system, realize too that the MacGyver component would need to act in real-time, having only split seconds to ascertain what action to take.

In addition, the actions taken are of the utmost consequence, likely akin to life-or-death outcomes, including the thorny qualms associated with the Trolley Problem (this involves having to make choices between deaths or injuries versus other sets of deaths or injuries, see my explanation at the link here).

If you say that we ought not to seek a MacGyver-like capability, it raises the obvious question as to what alternatives do we have, and meanwhile, self-driving cars are proceeding along, absent such an ingenious or even similar capacity.

There is also the belief that if we can devise a MacGyver for the driving domain, we could then extend it into other domains, allowing a gradual attainment of an AGI across all domains, though that is a rather debatable contention and a story for another day.

MacGyver is known for saying that you can do anything you want if you put your mind to it.

Can we get AI to do anything we want it to do, if we put our minds to it?

Time will tell.

Dr. Lance B. Eliot is a world-renowned expert in artificial intelligence (AI) with over 3 million accumulated views of his AI columns. As an experienced high-tech executive and entrepreneur, he combines practical industry experience with in-depth academic research to provide cutting-edge insights into the current and future state of AI and ML technologies and applications. A former USC and UCLA professor, and head of a pioneering AI lab, he speaks at major AI industry events. Author of more than 40 books, 500 articles, and 200 podcasts, he has appeared in media outlets such as CNN and co-hosted the popular Technotrends radio show. He has served as an advisor to Congress and other legislative bodies and has received many awards and recognitions. He sits on several boards of directors, and has worked as a venture capitalist, angel investor, and mentor to founders and start-ups.
