Several videos captured an unlikely scene on Saturday when a truck was seen driving down a Florida road with a man frantically clinging to the hood, hitting the windshield with his head and fists. The rarity of the scene seemed to defy comprehension and evoked something a Hollywood studio might conjure up for a film production.
It was not a movie stunt.
According to news reports, the man had reportedly been spotted walking along the tollway, and once the trucker came to a stop, the guy shockingly jumped onto the hood and inexplicably began pounding on it. The truck driver instinctively drove forward, assuming the guy would decide to get off, but instead the intruder stayed firmly put, so the driver kept going down the road, swerving sharply back and forth and alternating braking and acceleration, seeking to dislodge the alleged marauder.
Motorists driving near the truck were astonished by the incident and, of course, pointed their smartphone cameras at the spectacle, filming the kind of moment that rarely happens in a lifetime, akin to catching a glimpse of Halley's comet.
For roughly nine kilometers, the truck behaved like a bull trying to buck off an unwanted rider. Reports indicate that the police eventually arrived, the truck stopped, and the man was removed from the hood, questioned, and taken into custody. To date, no clear reason or justification for his behavior has been identified.
It would be easy to shrug off the matter as a passing curiosity, yet there is more here than meets the eye.
Consider this intriguing question: in the era of true AI-based self-driving cars, what if someone chooses to cling to the hood of an AI-driven vehicle?
We already know what a human driver might do, such as driving in a way that tries to shake off the clinging person (rightly or wrongly), so it is worth considering what an AI driving system would do in the same circumstance.
Let's unpack the matter and see.
Understanding Self-Driving Cars
To be clear, true self-driving cars are ones in which the AI drives the car entirely on its own and there is no human assistance during the driving task.
These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered Level 2 or Level 3. Cars that co-share the driving task are described as semi-autonomous and typically contain a variety of automated add-ons referred to as ADAS (Advanced Driver-Assistance Systems).
There is not yet a true self-driving car at Level 5, and we don't yet know whether this will be achievable or how long it will take to get there.
Meanwhile, Level 4 efforts are gradually trying to gain traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether such testing should be allowed at all (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out; see my coverage at this link here).
Since semi-autonomous cars require a human driver, the adoption of those types of cars won't be markedly different from driving conventional vehicles, so there's not much new per se to cover about them on this topic (though, as you'll see in a moment, the points made next are generally applicable).
For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that's been arising lately, namely that despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.
You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.
Self-Driving Cars And Unwelcomed Hood Riders
For the true self-driving vehicles of Level 4 and Level 5, there won't be a human driver involved in the driving task.
All occupants will be passengers.
The AI is driving.
Suppose someone gets onto the hood of a true AI-based self-driving car.
Before getting to what the AI might do, in case you are wondering why someone would opt to get onto the hood of a car, especially a self-driving car, there are probably plenty of somewhat sensible explanations and an equally plentiful number of absurd ones.
On the reasonable side of things, perhaps a person cannot get inside the car and is desperately trying to escape a dire situation, such as being confronted by a robber intent on harming them, so the frightened person leaps onto the hood hoping to get away from the immediate danger. The dicey gambit could backfire, since the AI might abruptly bring the driverless car to a halt, leaving the person a sitting duck, so to speak, but it is also possible that the AI would keep going and potentially allow the terrified person to escape.
Another possibility, marginally within the realm of reasonableness, is a person looking for a joyride. Someone might find it fun or thrilling to hitch a ride on a self-driving car, either as a prank or perhaps on a dare, which is of course dangerous and ill-advised. You might recall the craze a couple of years ago of people jumping out of a moving car as part of the viral Shiggy challenge tied to Drake's song In My Feelings, which momentarily became a popular but foolish trend (for my coverage at the time about the phenomenon as related to self-driving cars, see the link here).
Let's hope people don't attempt reckless stunts involving self-driving cars.
Yet another angle could be someone who wants to stop an autonomous car and believes that by jumping onto the hood, the AI will bring the vehicle to a halt. This might be done by a protester seeking to obstruct traffic. More troubling, there is a concern that if AI driving systems are too easily fooled, this could invite acts of robo-jacking, which is akin to carjacking but involves tricking the AI into ultimately handing over a vehicle and its passengers to a thief (see my explanation at this link here).
Shifting gears, an example of a nonsensical reason might be someone who is not in their right mind and gets onto the hood due to hallucinations or other irrational thoughts. We might never know what possessed someone to take the action and only know that they chose of their own volition to do so.
So, all told, it is quite conceivable that a person could end up riding on the hood of an AI-based self-driving car. I think we can all agree that it's not a far-fetched notion.
You may wonder why this hasn’t happened before.
It presumably hasn't happened much due to the scarcity of self-driving cars on our roads, plus there is typically a human backup driver inside the self-driving car, so it is unlikely that we would have seen many such incidents to date.
Once self-driving cars become widespread, you should assume, and indeed expect, that all kinds of oddball antics will come out of the woodwork. There have already been reported instances of people trying to game AI driving systems, such as jaywalkers who "know" they can cross the street illegally because the AI driving system won't challenge them the way a human driver might (it's a risky gambit, since the AI is not foolproof and those human interlopers are unwise to bet their lives on such matters).
Overall, as quirky as it might seem to discuss the act of riding on the hood of a car, this is a good-faith topic that AI developers working on self-driving cars ought to consider.
A knee-jerk reaction by some AI developers is that the odds of someone getting onto the hood, for whatever reason, are so infinitesimal that the matter isn't worth a moment of reflection. Those holding this rather dogmatic view would apparently point to how rarely such an act occurs with today's human-driven cars, for which there are no statistics suggesting it happens with any meaningful frequency.
Of course, this may simply be comparing apples and oranges, in the sense that what happens today with human-driven cars could turn out very differently in an era of AI-driven cars. Acts that people won't attempt when a human driver is at the wheel may well occur once there is no longer a human at the wheel.
We don't yet know how people will react to a world abundant with self-driving cars, so one can convincingly argue that all bets are off; rather than relying on what people do today, we need to think about what people will do in the future, and especially how they will behave with lots of autonomous vehicles around them.
I mention this because there are automakers and self-driving car companies that would brush aside uncontrollable aspects such as people climbing onto the hood of a self-driving car. Besides being convinced that this is an unlikely scenario, they already have their hands full and are simply trying to produce autonomous cars that can drive safely from point A to point B, from a home to the grocery store, doing so without getting into a car crash or other trouble.
From that viewpoint, the hood-clinging use case is admittedly an edge or corner case, meaning that it is something placed way down on the priority list of things deserving attention. AI developers for self-driving cars already have a lot on their plate, and trying to chew too many things at once can end up diluting their efforts, leading to delays in getting the core stuff done.
Let's go ahead and concede that the hood clinger is a rarity and an edge-case problem. Once so stipulated, we can take a moment and explore what the AI might or might not do in such an odd or unlikely circumstance (though, as I said, it may regrettably turn out to be less rare than assumed).
What The AI Might Or Might Not Do
We can begin this assessment by first noting that the hood clinging could occur while the self-driving car is stopped, or it could occur once the autonomous vehicle is already in motion.
If a self-driving car is not moving and someone opts to get onto the hood, the question arises as to whether the AI can detect that a person has climbed onto the hood of the vehicle. Furthermore, once that detection is made, if it can be made at all, the AI has to determine whether it should put the car into motion or whether it should remain stationary.
Set aside, for now, the use case of a parked self-driving car. While someone might crawl onto the hood of a parked self-driving car, let's not fret about that scenario, and instead focus on the dicier cases of a self-driving car that is only temporarily halted, such as at a red light or a stop sign. To be clear, I'm not suggesting that having someone atop a parked self-driving car is acceptable, and also to be clear, the AI ought to contend with that scenario the moment the self-driving car tries to get underway, so it still has to be taken into account in these machinations.
The other primary and quite important use case is when the self-driving car is already on the move and someone opts to get onto the hood. For those who think it's nearly impossible to get onto the hood of a moving car, keep in mind that if the self-driving car is, say, rolling down an alley at 3 miles per hour, it would be relatively easy for someone to run alongside and hop onto the hood.
Thus, the in-motion variant is clearly a real-world possibility and should not be summarily dismissed.
We are now primed to ask a seemingly simple question, namely whether the AI can detect that someone is on the hood of the self-driving car.
A first reaction by many would be that surely this has to be detectable, and it would seem highly unlikely or even unthinkable that the AI would simply not perceive that a person has landed on the hood of the vehicle.
Don't be so sure of that assumption.
Depending upon where the sensors are arrayed on the self-driving car, there could very well be a kind of blind spot in terms of someone clinging to the hood. The cameras are usually aimed at the street ahead, thus, someone standing directly in front of the car is likely to be detected, but a person laying down on the hood is not so readily seen. The same could be said of the radar and LIDAR, suggesting that they too might not detect a person that is straddled on the hood of the vehicle.
A key factor would be how the person got onto the hood.
There is a good chance that the sensors would detect the person approaching the vehicle, so the AI could ascertain that a person was near the car. If the person suddenly disappears, so to speak, and is no longer alongside or in front of the car, the AI as typically designed today would not be "curious" about where the person went. As long as the person is no longer an impediment to proceeding, that's all the AI would ordinarily be programmed to consider.
Keep in mind that the AI is not yet sentient, and we are a long way from getting there. Also, keep in mind that the AI does not have common-sense reasoning, at least not of the kind that humans have today, and therefore the AI does not "reason" that if a person was nearby and has now "disappeared," the hood of the vehicle ought to be checked.
A human driver sitting behind the wheel would see a person on the hood and react one way or another. A self-driving car's cameras don't necessarily view the road from the same vantage point as a human driver. Often the cameras are positioned at the very front of the vehicle, beyond the viewing frame of the hood. There might be cameras on the roof of the car, in which case there is a greater chance of detecting the person.
This discussion about detection matters because the AI needs to discover that someone is on the hood before it can do anything about it.
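To make the "disappearing pedestrian" point a bit more concrete, here is a minimal, purely hypothetical Python sketch. It does not reflect any vendor's actual perception stack; the class, threshold, and field names are my own assumptions. It simply shows the kind of extra bookkeeping a tracker might need so that a person who vanishes from view while right next to the front bumper is flagged as a possible hood rider rather than silently dropped.

```python
# Hypothetical sketch only: flag a possible hood rider when a close-in
# pedestrian track disappears from the sensor feed.
from dataclasses import dataclass

HOOD_ZONE_METERS = 1.5  # assumed "near the front bumper" threshold

@dataclass
class Track:
    track_id: int
    distance_m: float        # last known distance from the front of the car
    frames_missing: int = 0  # consecutive sensor frames with no detection

class PedestrianMonitor:
    """Detects the case where a nearby pedestrian track vanishes abruptly."""

    def __init__(self, max_missing_frames: int = 5):
        self.tracks: dict[int, Track] = {}
        self.max_missing_frames = max_missing_frames

    def update(self, detections: dict[int, float]) -> list[int]:
        """detections maps track_id -> distance; returns ids flagged as possible hood riders."""
        flagged = []
        # Refresh tracks that are still being detected this frame.
        for tid, dist in detections.items():
            self.tracks[tid] = Track(tid, dist)
        # Age out tracks that stopped being detected.
        for tid, trk in list(self.tracks.items()):
            if tid not in detections:
                trk.frames_missing += 1
                if trk.frames_missing > self.max_missing_frames:
                    if trk.distance_m <= HOOD_ZONE_METERS:
                        # Person vanished while right next to the hood:
                        # treat as a possible hood rider, not a closed track.
                        flagged.append(tid)
                    del self.tracks[tid]
        return flagged
```

The point of the sketch is that, absent such logic, the default behavior described above, namely dropping the track once the person is no longer an obstacle, is exactly what lets a hood rider go unnoticed.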
But even if detection occurs, because the AI still lacks common sense and is by no means sentient, the AI of today is unlikely to have been programmed to figure out what to do if a person is on the hood. In other words, even upon detection, the AI might not have anything suitable to do. Indeed, the hood-riding case is presumed to be so rare that nothing has yet been programmed into the AI to handle the problem.
As such, the AI would probably drive the car as if no one were on the hood. That would seem undesirable and quite dangerous.
If there are passengers inside the self-driving car, they could call out to the AI's Natural Language Processing (NLP) system, which is intended to collect riders' driving preferences such as where to go, and those riders could insist that the AI stop the car to allow the hood rider to get off.
This brings up a rather mind-bending topic, venturing into the realm of AI ethics, namely whether those riders are in some sense obligated to alert the AI to such a situation.
You might assume that basic human decency would of course prompt the rider to alert the AI, but suppose the rider inside the vehicle happens to know the person on the hood and wishes them ill. In that case, the rider might remain silent and wait to see what happens. Or suppose the person on the hood got up there to reach the rider and do them harm, and the rider therefore wants the AI to keep driving the car so that the hood-clinging assailant falls off (to learn more about these kinds of AI ethics conundrums, see my coverage at this link here).
These are scenarios to keep in mind.
What else can happen?
It is conceivable that a person on the hood could end up covering some of the sensors, blocking their use, or damaging the sensors while thrashing about on the hood. In that case, the AI is typically built to handle sensor malfunctions or sensors obstructed by dirt or debris. The AI will not necessarily stop the car if it can continue reasonably safely, though generally any truly extensive loss of sensing would be a signal to the AI that the self-driving car ought to be brought to a safe stop.
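As a rough illustration of that fallback behavior, here is a toy Python sketch in the spirit of what was just described. The sensor names, scores, and thresholds are assumptions for illustration, not any real system's parameters; the idea is simply that widespread sensor degradation, such as a hood rider blocking the forward camera and radar, tips the decision toward a safe, minimal-risk stop.

```python
# Illustrative sketch only: map sensor health to a driving fallback decision.
def choose_fallback(sensor_health: dict[str, float]) -> str:
    """sensor_health maps a sensor name to a 0.0-1.0 usability score."""
    degraded = [name for name, score in sensor_health.items() if score < 0.5]
    if not degraded:
        return "continue_normal_driving"
    # A single partially blocked sensor (e.g., a smudged camera) may be tolerable.
    if len(degraded) == 1 and sensor_health[degraded[0]] > 0.25:
        return "continue_with_caution"
    # Widespread or severe degradation (as a person sprawled on the hood
    # might cause) leads to bringing the car to a safe stop.
    return "execute_minimal_risk_stop"

# Example: forward camera and radar largely blocked by someone on the hood.
print(choose_fallback({"front_camera": 0.1, "front_radar": 0.2, "roof_lidar": 0.9}))
```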
Conclusion
As may be evident, there are many permutations and combinations of what can happen when someone clings to the hood of a self-driving car.
A key point is that, unlike a human driver, today's AI driving systems are not yet proficient at coping with the act, however rare, of an interloper clinging to the hood.
There is a good chance that the clinging human would eventually be detected and that the AI would go into a default mode of stopping the car, not necessarily out of any understanding of the significance of having someone on the hood, but merely as a reactive, mechanized response to something being amiss.
Some have suggested that self-driving cars should have microphones on the outside of the vehicle, allowing the AI to "listen" to what people might say to the vehicle, such as a person clinging to the hood screaming for the car to stop. As food for thought, suppose the AI hears such a plea, but meanwhile a rider inside the car is vehemently yelling to keep going, leaving the AI with two human-issued commands or requests that are diametrically opposed to each other.
Which one should the AI abide by?
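There is no settled answer, but one can at least sketch what an arbitration policy might look like. The following Python fragment is a thought experiment, not a real product feature; the request format and the safety-first rule are my own assumptions, chosen to show that some explicit policy has to exist, whatever it turns out to be.

```python
# Hypothetical sketch: arbitrate contradictory spoken requests from an
# exterior microphone versus an in-cabin passenger.
def arbitrate(requests: list[dict]) -> str:
    """Each request: {'source': 'exterior'|'cabin', 'intent': 'stop'|'continue'}."""
    exterior_stop = any(r["source"] == "exterior" and r["intent"] == "stop" for r in requests)
    cabin_continue = any(r["source"] == "cabin" and r["intent"] == "continue" for r in requests)
    if exterior_stop:
        # Safety-first assumption: a plea from outside the car to stop outranks
        # a cabin request to continue, pending review by a remote operator.
        return "slow_and_stop_then_escalate"
    if cabin_continue:
        return "continue_route"
    return "maintain_current_plan"

print(arbitrate([
    {"source": "exterior", "intent": "stop"},
    {"source": "cabin", "intent": "continue"},
]))
```

Whether stopping really is the safer default in every such conflict is precisely the kind of question that follows.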
Welcome to the world of AI ethics, which is going to be an increasingly prominent and important element in the adoption of AI.
Dr. Lance B. Eliot is a world-renowned expert in artificial intelligence (AI) with over 3 million cumulative views of his AI columns. As a seasoned high-tech executive and entrepreneur, he combines hands-on industry experience with deep academic research to provide cutting-edge insights into the present and future of AI and ML technologies and applications. A former USC and UCLA professor and head of a pioneering AI lab, he speaks at major AI industry events. Author of more than 40 books, 500 articles, and 200 podcasts, he has appeared in media outlets such as CNN and co-hosted the popular Technotrends radio show. He has served as an advisor to Congress and other legislative bodies and has received numerous awards and recognitions. He serves on several boards of directors, has worked as a venture capitalist and angel investor, and mentors entrepreneurial founders and start-ups.