AI Needs To Gain Famed MacGyver Shrewdness, Including For AI Self-Driving Cars

 MacGyver solved it again!

Who or what is a MacGyver, you might wonder?

Well, most people have heard of MacGyver, the TV series and its main character, who always manages to find a clever means to extricate himself from some puzzling predicament, using his wits to devise a solution out of rather everyday items.

Fans know that he carries a Swiss Army knife, rather than a gun, since he believes that using his creativity and inventiveness will always allow him to deal with any untoward circumstance (the knife is handy when you need to defuse a bomb, or when you need to take apart a toaster and reuse its electronics for a completely different purpose and ultimately save your life accordingly).

It turns out that you don’t necessarily need to have watched the show or seen the YouTube videos to know what it means to “MacGyver” your way out of a difficult situation (the phrase has become part of our everyday lexicon).

In short, we now refer to just about any especially clever solution as a MacGyver-like approach, assuming it’s an elegant and ingenious answer to an otherwise seemingly unsolvable problem.

Let’s look at that statement.

A very important element is that the challenge itself has to be a daunting one.

If the problem is straightforward and not filled with complications, you presumably could solve it with just your ordinary noggin and not need to put on a MacGyver-like thinking cap.

Another important aspect is that the solution cannot be an obvious one.

In other words, if anyone can immediately see how to solve the problem, you don’t need to venture into the problem-solving stratosphere; you can simply proceed to solve the problem.

Okay, so the challenge has to be complicated or seemingly insoluble, and the solution must not be obvious, requiring mental exertion to find it.

What else?

The challenge has to be solvable.

This is crucial, and something difficult to know at the outset of the problem-solving process.

Oftentimes, when a challenge arises or emerges, you aren’t sure whether there is any means to solve it.

As such, you might explore a variety of potential solutions and, in doing so, discover whether any of those potential solutions are feasible for solving the problem at hand.

In the MacGyver tradition, he always unearths a solution, which is comforting, but in real life you cannot count on finding a solution.

That said, it is perhaps helpful to assume that a solution is findable, which can boost your spirits when in the throes of trying to solve a hairy problem and keep you inspired.

For those who give up right away and assume there is no solution, it is as though they have thrown in the towel, and therefore they won’t put in the requisite effort to seek out and find a solution.

That said, there is also the genuine real-world facet that, in the end, there might not be a solution at all (unlike the TV show, which always provides a satisfying storybook ending).

Another wrinkle is that there may be a solution, but one that takes time to be realized, so you may not be able to immediately resolve the predicament, even if you have figured out how it can be solved.

How can you have found a solution and yet still face a long wait before the problem is actually solved?

An example would be a candle that, when lit, will slowly burn through a rope, and once the rope is burned through, it frees you from a trap.

In this example, you knew of a viable solution, but implementing the solution took some time.

Suppose, however, that you have no apparent means of lighting the candle?

This becomes another form of challenge, one adjacent to the larger challenge of being trapped. It is a “new” challenge in the sense that it arises from your chosen solution and may or may not be a direct component of the original challenge of being trapped.

Perhaps there is a box of matches in the other room, which, if you can manage to reach it, you could use to light the candle that will then burn through the rope and free you from the trap (reminiscent of a Rube Goldberg contraption).

Your problem-solving now turns to the quest for those matches.

Time can play a part too; it could turn out that the matchbox gets knocked off a table by a rising gust of wind, scattering the matches, one of which rolls into your trapped area.

Anyway, the point is that it isn’t necessarily the case that you can pull off a MacGyver right away, and you might have to allow time to pass for a solution to become viable or to emerge (on a TV show, the solution has to wrap up in a timely manner, since an episode only lasts thirty minutes or an hour, while in real life things can take much longer).

To be a true MacGyver scenario, we expect the solution to be at once ingenious and elegant.

This criterion of elegance can be difficult to pin down in words. It is one of those things whereby, once you see it, you can judge whether it was elegant or not (akin to beauty being in the eye of the beholder).

In the television show, MacGyver almost always faces a dire life-or-death situation; in most real-world applications of the MacGyver approach, however, life-and-death problems are not routinely at stake. The point is that sometimes a MacGyver is apt for ordinary instances involving difficult problems, while in other instances much weightier matters may be on the line.

This brings us to how we think as humans, as well as how AI systems are designed and the limits of what they have achieved to date. Keep in mind that today’s AI isn’t even close to being anything equivalent to true human intelligence, which might be a shocking point for some, yet it is the case.

Sure, there are pockets of situations whereby an AI application has seemingly been able to perform a task as well as a human might; these feats are constantly in the news. That though is a far cry from being able to exhibit a full range of intelligence and pass any kind of Turing Test (a famous method in AI to try and ascertain whether an AI system is able to exhibit human intelligence, see my analysis at the link here).

Today’s AI systems tend to be classified as having narrow AI, meaning that they can possibly “solve” a narrow problem, meanwhile such an AI system is not AGI (Artificial General Intelligence) and lacks human qualities such as common-sense reasoning (see link here).

In fact, one significant concern about the rampant use of Machine Learning (ML) and Deep Learning (DL) is that those computationally based pattern-matching algorithms tend to be brittle, susceptible to falling out-of-step when faced with exceptions or unusual cases. The odds are that any situation requiring or invoking a MacGyver is by definition bound to be an exceptional or unusual case (otherwise, you’d use some other brute force algorithm or ordinary solving methods).
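To illustrate the brittleness point, here is a deliberately toy pattern matcher (my own sketch, with made-up feature vectors and labels, not any production system) that will confidently classify anything unless it is given an explicit way to flag unfamiliar inputs:

```python
# Toy illustration of ML/DL brittleness: a nearest-neighbor matcher that,
# by default, labels ANY input, even one far outside its training data.
# Adding a distance threshold turns silent failure into an explicit flag.
import math

TRAIN = {  # hypothetical training examples: feature vector -> label
    (0.9, 1.1): "stop_sign",
    (1.0, 0.8): "stop_sign",
    (5.0, 5.2): "speed_limit",
    (4.8, 5.1): "speed_limit",
}

def nearest_label(x, threshold=None):
    """Return the label of the closest training point.

    With threshold=None the matcher always answers (brittle behavior).
    With a threshold, inputs too far from anything seen during training
    are reported as an exception rather than force-classified.
    """
    dist, label = min((math.dist(x, pt), lab) for pt, lab in TRAIN.items())
    if threshold is not None and dist > threshold:
        return "EXCEPTION: unfamiliar input"
    return label

print(nearest_label((50.0, -3.0)))               # brittle: confidently answers anyway
print(nearest_label((50.0, -3.0), threshold=2))  # hedged: flags the edge case
```

The point of the sketch is that recognizing "I have never seen this before" is itself a capability that has to be designed in; by default, a pattern matcher just picks the nearest thing it knows.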

Here’s an intriguing question to ponder: “Will the advent of AI-based true self-driving cars potentially be stymied by exceptional or unusual circumstances, and if so, could the use of MacGyver-like approaches help overcome those impediments?”

The answer is yes: so-called edge cases (another term for exceptions or unusual instances) are a primary concern for achieving safe true self-driving cars, and yes, if AI systems could employ MacGyver-like capabilities, this could help in coping with those difficult moments.

Let’s unpack the matter and see.

The Levels Of Self-Driving Cars

It is important to clarify what I mean when referring to AI-based true self-driving cars.

True self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons referred to as ADAS (Advanced Driver-Assistance Systems).
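As a quick reference, the levels can be captured in a simple lookup table (the one-line glosses below are my own paraphrases for illustration, not official SAE wording):

```python
# Minimal lookup of the SAE driving-automation levels summarized above.
# Descriptions are informal paraphrases, not the official SAE J3016 text.
SAE_LEVELS = {
    0: "No automation: the human does all of the driving",
    1: "Driver assistance: one automated function aids the human",
    2: "Partial automation: ADAS steers/brakes, human must stay engaged",
    3: "Conditional automation: human must take over when requested",
    4: "High automation: AI drives itself within a limited domain",
    5: "Full automation: AI drives itself anywhere, no human driver",
}

def is_true_self_driving(level: int) -> bool:
    """Per the definition used here, only Levels 4 and 5 count as true self-driving."""
    return level >= 4

for lvl, desc in SAE_LEVELS.items():
    tag = "true self-driving" if is_true_self_driving(lvl) else "semi-autonomous/manual"
    print(f"Level {lvl} ({tag}): {desc}")
```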

There is not yet a true self-driving car at Level 5, and we don’t yet even know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different from driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points made next generally apply).

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately, namely that despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take away their attention from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.

Self-Driving Cars And MacGyver

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

The AI is driving.

To date, efforts to devise self-driving cars have largely focused on AI that can drive in relatively ordinary driving situations.

This makes sense, namely, get the “easier” stuff done first (for clarity, none of this is especially easy), involving having the AI driving system be able to drive in a somewhat calm neighborhood or a relatively everyday freeway traffic setting.

In addition, if you are using collected driving data to train an ML/DL system, the odds are that most of that driving data reflects everyday driving, and the system will be skewed toward those commonplace driving instances.

Think of your own efforts.

Most of the time that you drive, you are likely thinking about what to eat for dinner that night, or replaying in your mind the complicated conversation you had with your boss the other day, supposedly without paying much attention to the roadway.

This somewhat “mindless” driving behavior occurs endlessly.

Then there are those oddball (hopefully rare) moments when something out of the ordinary jolts you out of your complacency and you have to respond immediately.

It might be a life-or-death scenario entailing having to assess in real-time a complicated challenge confronting you in traffic, figuring out what your options are, and then having to undertake one of those options quickly enough to avert death or destruction.

All in the blink of an eye.

Most would concede that today’s AI driving systems are decidedly not yet able to cope with those moments if the challenge is one that the AI driving system has not “seen” before or been previously programmed to handle.

A novel or surprise situation is not good for AI driving systems, right now, and thus not good for human passengers, nor pedestrians, nor other nearby human-driven cars.

What can be done about these looming problems?

The usual answer is to keep pushing along on roadway trials and collecting lots of driving data, and hopefully, eventually, all possible permutations and possibilities of driving situations will have been captured, and then presumably analyzed so they can be dealt with.

One has to be dubious about such an approach.

Consider Waymo, which has racked up around 20 million miles of roadway driving in total; at first glance that’s an impressive number, but keep in mind that humans in the U.S. drive over 3 trillion miles each year, and thus the odds of finding the needle in a haystack with so comparatively few miles are probably slim.
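A quick back-of-the-envelope calculation, using commonly cited rough figures (Waymo on the order of 20 million miles driven in total, U.S. drivers logging on the order of 3 trillion miles per year), shows just how small that sample really is:

```python
# Rough scale comparison of fleet test mileage vs. annual human driving.
# Both figures are order-of-magnitude approximations, not exact counts.
WAYMO_MILES_TOTAL = 20_000_000              # ~20 million miles, cumulative
US_HUMAN_MILES_PER_YEAR = 3_000_000_000_000  # ~3 trillion miles, per year

share = WAYMO_MILES_TOTAL / US_HUMAN_MILES_PER_YEAR
print(f"Total test mileage is {share:.6%} of ONE year of U.S. human driving")
# Well under a thousandth of one percent -- a tiny sample in which to hope
# that every rare edge case has already made an appearance.
```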

Insiders in the autonomous car industry also know that miles are not merely miles, regardless of which automaker is doing the roadway trials, meaning that if you drive over and over in the same locales, those miles aren’t necessarily as revealing as driving in more radically varying and diverse roadway settings and situations (this is partly a criticism of the so-called driverless car disengagement reports, see my analysis at this link here).

Another proposal is to do simulations.

Automakers and self-driving tech firms tend to use simulations in addition to roadway driving; there is an ongoing debate about whether simulations ought to be performed before allowing public roadway use or whether it is acceptable to do both concurrently, and there is also debate about whether simulations are good enough to substitute for actual miles traveled (once again, it depends upon the type of simulation performed and how it is devised and used).

Another avenue would be for AI driving systems to have a MacGyver-like component, able to contend with uncommon snags that arise while driving.

It would not especially be based on prior oddball or edge-case situations; instead, it would be a generalized component that could be invoked when the rest of the AI driving system has been unable to resolve an unfolding circumstance.
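In a rough pseudocode spirit, such a two-tier arrangement might look like this (a sketch of my own, with hypothetical scene names and actions, not any automaker’s actual design):

```python
# Sketch of the invocation pattern described above: a narrow driving policy
# handles the situations it recognizes, and a generalized "MacGyver" fallback
# is consulted only when the narrow policy comes up empty.
from typing import Optional

def narrow_policy(scene: str) -> Optional[str]:
    """Handles only situations seen in training; returns None otherwise."""
    known = {
        "clear_highway": "maintain_lane_and_speed",
        "car_braking_ahead": "brake_and_increase_gap",
    }
    return known.get(scene)

def macgyver_fallback(scene: str) -> str:
    """Generalized-reasoner placeholder; here it merely degrades to a safe stop."""
    return f"unrecognized scene '{scene}': slow down and pull over safely"

def drive(scene: str) -> str:
    action = narrow_policy(scene)
    return action if action is not None else macgyver_fallback(scene)

print(drive("car_braking_ahead"))          # resolved by the narrow policy
print(drive("mattress_flying_off_truck"))  # falls through to the fallback
```

The hard part, of course, is that a real fallback would have to do far better than "pull over safely" in circumstances where pulling over is itself infeasible; the sketch only shows where such a component would sit, not how to build it.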

In some manner of speaking, it would be like AGI but specifically in the domain of driving cars.

Is that possible?

Some argue that AGI is either AGI or it is not, thus, trying to suggest that you might construct an AGI for a specific domain is counter to the AGI notion overall.

Others argue that when seated in a car, a human driver is making use of AGI in the domain of driving a car, not solving world hunger or having to deal with just any problem whatsoever, and thus we ought to be able to focus attention on an AGI for the driving domain alone.

Hey, maybe we should apply a MacGyver to the problem of solving edge cases and find an elegant solution for doing so, which might or might not consist of embedding a MacGyver into the AI driving system.

It’s a twist, of course.

Conclusion

A useful article on the challenges for AI in solving MacGyver-esque problems was written by Tufts University researchers Sarathy and Scheutz (here’s the link). The authors note that an AI system would likely have to undertake many arduous tasks and subtasks in working through any MacGyver-esque situation, including being able to do impasse detection, domain transformation, problem restructuring, experimentation, discovery detection, domain extension, and so on.
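To make a couple of those tasks more concrete, here is a toy rendering (my own illustration, not the researchers’ formalism) of an impasse-detection and domain-extension loop, using hypothetical object “affordances” loosely drawn from the candle-and-rope example earlier:

```python
# Toy MacGyver loop: try to plan with the objects as currently understood;
# on an impasse, "experiment" to discover latent uses of objects (domain
# extension), then retry planning with the expanded affordances.

AFFORDANCES = {  # hypothetical world: object -> currently known uses
    "candle": {"emit_light"},
    "rope": {"tie"},
    "matches": {"ignite"},
}

LATENT = {  # hidden uses discoverable only by experimentation
    "candle": {"burn_through"},
    "rope": {"be_burned"},
}

def plan(goal, affordances):
    """Trivial 'planner': the goal is met if some object affords it."""
    for obj, acts in affordances.items():
        if goal in acts:
            return f"use {obj} to {goal}"
    return None  # impasse

def macgyver_solve(goal):
    affordances = {o: set(a) for o, a in AFFORDANCES.items()}
    steps = []
    while True:
        action = plan(goal, affordances)
        if action is not None:
            steps.append(action)
            return steps
        steps.append("impasse detected: extending domain")
        extended = False
        for obj, hidden in LATENT.items():
            new = hidden - affordances[obj]
            if new:
                affordances[obj] |= new
                extended = True
        if not extended:  # nothing left to discover; give up
            steps.append("no solution found")
            return steps

print(macgyver_solve("burn_through"))
```

Running it on the goal `burn_through` first hits an impasse (no known object burns through anything), then the domain extension reveals the candle’s latent use, after which planning succeeds; an unattainable goal terminates with "no solution found". The real research problem is doing this with rich physics and perception rather than hand-listed sets.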

In essence, it is very hard to get an AI system to act like MacGyver, with or without a Swiss Army knife at hand.

In the case of an AI driving system, realize too that the MacGyver component would need to act in real-time, having only split seconds to ascertain what action to take.

Furthermore, the actions undertaken are, in the main, likely to have life-or-death consequences, including the ethical qualms entailed in the Trolley Problem (this is about having to make choices between one set of deaths or injuries versus another set of deaths or injuries, see my explanation at the link here).

If you assert that we shouldn’t seek a MacGyver-like capability, that raises the obvious question of what alternatives we have, since meanwhile self-driving cars continue to advance in the absence of any such inventive capacity.

There is also the belief that if we can craft a MacGyver for the driving domain, we could begin extending it into other domains, enabling a gradual attainment of AGI across all domains, though that’s a rather debatable contention and a story for another day.

MacGyver is known for saying that you can do anything you need to if you put your mind to it.

Can we get AI to do anything it needs to, if we put our minds to it?

Time will tell.


Dr. Lance B. Eliot is a world-renowned expert in Artificial Intelligence (AI) with over 3 million views amassed of his AI columns. As an experienced high-tech executive and entrepreneur, he combines practical industry experience with in-depth academic research to provide cutting-edge insights on the present and future of AI and ML technologies and applications. A former USC and UCLA professor, and head of a pioneering AI lab, he speaks at major AI industry events. Author of over 40 books, 500 articles, and 200 podcasts, he has appeared in media outlets such as CNN and co-hosted the popular Technotrends radio show. He has served as an advisor to Congress and other legislative bodies and has received numerous awards/recognitions. He sits on several boards of directors, has worked as a venture capitalist, an angel investor, and a mentor to startup founders and entrepreneurs.
