Like truthful or lying, driving self-driving cars

Will AI lie to you?

Of course, why not.

Most other people assume that AI is impartial, impartial, objective than subjective and, no doubt, a fact-teller.

No, it’s a myth.

Often favored through those AI-based sci-fi robots shown in the movies, it turns out that AI itself has controlled to gain a reputation that is in fact undeserved, perhaps enacted through a formidable Hollywood agent who sometimes helps keep AI in a smart way. position with the general public. (Well, apart from the cases where AI makes the decision to annihilate humanity and drive us out of the earth, we would be injured.)

In the genuine world, there is no specific explanation for why that synthetic intelligence will tell the truth.

Anyone who interacts with an AI formula can put themselves in a rather unpleasant posture and inadvertently suffer destructive consequences by worshipping each and every AI formula and cheerfully accepting that interaction as a form of absolute truth.

Let’s start with the now-known side of AI that potentially hurts and moves toward the bewildering side of AI’s lies.

There is already an awareness of the terrible prejudices that an AI formula can keep secret.

For example, an AI formula that makes the decision to provide a car loan may end up as a career as a key thing in the decision-making process, or perhaps an AI formula that verifies the approval of certain medical fitness care procedures. the insurer uses sex to give a positive or negative opinion on those claims.

All you know is that your efforts to get a car loan were denied or that your urgent medical procedure was denied through your insurer. You’re unlikely to ever notice that this is due to an underlying formula based on artificial intelligence that made a life-changing decision.

And let us not know or be aware that selection is taken into account through its ethnicity or gender.

Often referred to as algorithmic decision-making (ADM), our lives are inexorably formed and controlled through the way those AI systems make their decisions.

The peculiarity for any company of those AI systems is that you can claim that you do not know why the AI made its decision and therefore verify to divert any anger reactions from those affected. It’s simple for a company to just shrug and say that the PC did it, and therefore act as if it were the victim, even if they were the ones who put AI in the middle of things, to begin with.

The use of AI in this way is undoubtedly a smart way to deflect blame, automate procedures, decrease manual effort, decrease hard work and potentially faint without qualms about considerations about prejudices outside the point instilled in such issues.

How can you simply imblese an AI formula with a bias outside the point?

In fact, it’s easy.

One of the maximum non-unusual tactics for dragging prejudices into the flow of AI is to use Device Learning (ML) and Deep Learning (DL), which have so far been respected as glorious tactics for creating AI systems, there is an ugly abdomen involved. ML/DL is just a technique that adapts to computational models and regularly requires a lot of knowledge for educational purposes. It usually involves synthetic neural networks (RNS), AI attempts to locate patterns in knowledge, and then uses the models to help make predictions or review newly provided knowledge to imply whether they fit already detected models.

Let’s be transparent about the fact that there is no “reflection” in this type of AI. Fall into the non-unusual trap of assigning sensitivity to such trend matching techniques.

There’s none there.

Some think that all this ML/DL may one day lead to the emergence of a human-type intelligence, which potentially appears at a time or flash that invented the singularity, but do not hold the breath for that day (to be more informed about the singularity. my research on the link here).

Using ML/DL is convenient as a mathematical way complicated enough to design knowledge. An imaginable disadvantage is that the underlying mathematics is so difficult to understand that there is no way to interpret how AI nevertheless verified the relationships between the elements of knowledge. Much of the ML/DL used today does not have an inherent explanation that can logically show how AI makes its decisions, a surely mandatory facet known as XAI for an explainable AI.

In short, this means that AI may have landed on race or gender as something in the knowledge provided, and quietly stuck to a legally problematic measure as an important measure to decide whether to approve a car loan or settle for a medical procedure request.

Even those who introduced ML/DL into lifestyles might not realize that race or gender had very important points integrated into the AI system.

A type of implicit case of credible denial embedded in the very nature of the effort.

We will have to wait and see if this will be allowed in line with the se, as there are more and more regulatory requests to master such problems (see my assessment of those aspects, in this link here), and there is also a clever possibility that will be put in place and will win civil lawsuits that will then act to save companies from the unsofeed artificial intelligence systems that come with uncomfortable biases.

When replacing gears, consider how AI can become a liar.

Perhaps the most productive position to begin with is the famous story of President George Washington and his that he could not lie. We all grew up listening to the story, but only the sketched version.

The full edition of George Washington’s short story was originally directed through Mason Locke Weems, in the fifth edition of his e-book entitled “The Life and Memorable Actions of George Washington” which was published in 1806, several years after George Washington had already died in 1799. According to the story, George’s father had said that he would do fifty miles to see his son, a great distance at the time, and that any such arduous adventure was valuable as his little one had a center of honesty and pure lips. so much so that each and every one can count on each and every one of the words that young people can say.

One day, the father arrived on George’s other side and, obviously, he can see that the boy had broken up a beloved and cheerful tree with a new axe that his son had been given. Apparently, young George said, “I can’t lie, Dad. you know I can’t lie. I cut it with my axe.”

Was the father furious, or perhaps he just immediately rebuked his son for the bitter act? No, supposedly instead, the father, beaming with pride, said, “I’m glad, George, that you’ve killed my tree, because you’ve paid me a thousand times. An act of heroism like that in my son is more valuable than a thousand trees.

An underlying theme of the story is that George raised in a way that emphasized the importance of being honest. Therefore, it was not because of the possibility that he presented the truth, and instead of a schooling that helped him try to attain the truth. We can link that same concept to AI.

Remember that we have just discussed the fact that the adverse biases of THE AI systems that integrate ML/DL were necessarily because of education in such a way that they lead to such biases.

Similarly, AI can, in a sense, be trained to lie. If the knowledge used to drive an ML/DL AI formula comprises lies, the resulting AI will continue to tell those lies. Before delving deeper into this unexpected phenomenon, stay in the brain that we will have synthetic intelligence formulas immersed in our lives in a way that goes beyond the resolution of loans or medical procedures, adding cases of life and death in real time like the self-employed. Driving. Cars.

The intriguing question is: will genuine AI-based self-driving cars potentially have an AI that lie and, if so, what can this foreshadow for our addiction and driving in self-driving cars?

Let’s see what’s going on and let’s see.

The self-driving car

Real self-driving cars are the ones that AI drives all alone and there is no human assistance for the driving task.

These cars without driving force are considered grades four and 5, while a car that requires a human driving force for percentage driving effort is considered a point 2 or 3. Cars that represent the percentage of the driving task are described as semi-autonomous and typically involve a variety of automated add-ons called Advanced Driver Assistance Systems (ADAS).

There is still a genuine self-driving car at point 5, which we even know if this will be possible or how long it will take to get there.

Meanwhile, Level Four efforts are gradually seeking to gain some traction through very narrow and selective public road tests, controversy exists over whether such evidence deserves to be allowed according to the se (we are all guinea pigs of life and death in an existing context). .on our roads and roads, some point out).

Since semi-autonomous cars require a human driver, adopting such cars will not be much different from driving traditional vehicles, so there is not much new in itself on this issue (however, as you will see in a moment). , the following issues apply).

In the case of semi-autonomous cars, it is essential that the public be aware of a disturbing facet that has emerged lately, namely that despite those human driving forces that continue to publish videos of themselves falling asleep at the wheel of a point 2 or point 3, we will have to prevent everyone from fooling us thinking that the driving force can divert their attention from the task of driving while driving a semi-autonomous car.

You are to blame for driving the vehicle, regardless of the automation point that may be thrown at point 2 or 3.

Self-driving and lying down

For the true autonomous vehicles of point four and five, there will be no human driving force involved in the driving task.

All occupants will be passengers.

The AI is driving.

No wonder you can assume that AI will drive you safely and with due diligence, and if not, AI will not drive a multi-ton vehicle that can kill passengers by crushing or killing others by hitting nearby cars or pedestrians.

Would you also expect AI to be true?

It is true that this is nothing in the minds of leading automakers and developers of autonomous driving technologies, nor of regulators overseeing autonomous vehicles, and it is unlikely that it is something that is required for those who are recently travelling in a self-driving car.

Of course, it is naturally assumed that the AI driving formula says the fact unconditionally and there does not seem to be any explanation for why to doubt that you are doing so. How dare you recommend that the AI driving formula be a liar? These are combative words, of course.

But what’s the act of lying? First we’ll have to ask.

A non-unusual definition of mendacity is that it is the act of doing something false with the planned intention to deceive. There are two main elements related to mendacity, first, a false or false fix and time, the basis for doing so is a planned act intended to convey the lie. The moment component of this definition is somewhat problematic when it comes to implementing it in AI.

Without taking us too far in a rabbit hole, there’s a wonderful flaw here in intent. Can you honestly say that AI can shape an intent?

Unless we succeed in AI sensitivity or until we achieve, it’s difficult to attribute a target to some form of automation. In other words, sometimes we try to be an integral component of sentient beings. We wouldn’t want a toaster to burn toast. I may actually burn toast, but probably not because I “intend” to. Chances are that today’s AI is similar to toasters, meaning that AI cannot shape a target, even though ML/DL evolved from a new medium that we might wish to use.

If you argue that a lie should have the intention as a staple, then we cannot explicitly characterize AI as a liar in itself, because it still has nothing to include a human intent. One also wonders whether it is fair to say that AI is true, since we can also say that the fact also implies an intention, in which case we are living in murky ground.

Of course, we already know that human beings can deliberately lie and, in other cases, lie in an unintentionally way.

A friend tells you that they saw a ghost, which is fervently true, and yet it later turns out that it was a joke they were made through pranksters. Did your friend lie to you? Well, maybe it is, maybe it isn’t. They weren’t an intentional lie, even though it turned out to be a lying appearance. Some lies are lies by omission, through which the total fact is not told, while other lies are a composal fact and a composcial lie, mixing the two, which can make the false component more true through the remnant of the adjacent fact. -narrative.

In short, lies are of all flavors, shapes and sizes, and our daily lives bombard us with lies.

Let’s take a use case involving cars.

You get on a self-driving car and interact with AI through its herbal language processing interface (NLP), similar to Alexa or Siri. After telling AI you need to be taken to the local grocery store, he asks AI how long it will last.

The AI responds that it will take 18 minutes to arrive.

At the beginning, the adventure takes about 35 minutes, a longer time than the promised 18 minutes (almost double the estimated time).

Did the AI lie to you?

You can simply say that AI didn’t lie, you just estimated driving time, and we all know that traffic situations can interrupt any kind of estimate.

Suppose, however, that AI “knew” that structural paintings were being made that can have an effect on driving time, by having access to online databases that record all road infrastructure projects. After obtaining this data, the AI decided not to load the waiting time into the estimate, as it had calculated that travel time was unlikely to be affected by ongoing street repairs.

In short, AI provided an estimate, but concealed facts about the estimate, such as the appearance that there were paintings of structures to come and that AI did not come with the road’s efforts to assess time.

Is this a lie by default?

You can verify to argue that the AI provided an incomplete picture of the scenario and concealed the facts of the curtains that led to the time estimate. If a human did the same thing, we’d probably call him a liar, or we’d call him lies or play with the truth.

Going back to the intent question, does the AI intentionally seek to deceive you about the length of the trip?

As discussed above, intent is a convenient topic with respect to current AI. From what we can discern, ai has not chosen to lie in any way, and it is probably foolish to characterize a goal to AI’s actions.

Imagine that artificial intelligence is based on knowledge of human cyclists’ responses when receiving time estimates. Perhaps an inward-facing camera captured the reaction of the face when cyclists were informed of the estimated time of travel. Through sentiment analysis, facial expression was used to record happiness or disappointment and was used as a predictor similar to cyclists’ data on time estimates.

Using ML/DL, the AI formula “learned” that other people were happier when listening to shorter, unfortunate time estimates when listening to longer time estimates. Perhaps a stated purpose of the AI driving formula is to check that passengers are happy, which would be smart for the owner of the self-driving car, as drivers will use their ride-sharing service several times.

In the case of the 18-minute travel time estimate, think that AI made sure that the adventure time could be 35 to 40 minutes, and “knew” that this was the case, also “knows” that the indication of driving times led to disgruntled passengers and, therefore, the AI decided to propose “knowingly” a shorter estimate of timeArray than she “knew” highly unlikely.

Is AI a lie?

More egregious examples can be explained.

Conclusion

Some will say that the AI developers who designed, coded, and commissioned AI are the ones who are the “liars” in this example. They designed an automation that was in a position to lie. Don’t look at AI, look at the Wizard of Oz in the curtain.

Perhaps so, and in fact we look to hold those who design and implement AI systems accountable for what those AI-based decision-making systems do.

Some AI developers might argue that their AI creations go beyond what was originally requested, providing the merit of “learning” on the fly and, as such, it is AI to be to blame for such considerations and not its creator.

Note that the example used here was quite benign (a lie probably small or of incessant importance), however, it can seamlessly involve AI by making a “decision” about the car’s driving movements during a 65 mph maneuver on a road, and you don’t know and not be able to counteract your movements, adding whether the movements referred to a foundation involving a lie (foreshadowing a potentially vital lie or a potentially vital lie quite extensive lie).

Some people recommend that you deserve to ask the AI if it’s a lie or if it’s telling the truth, and then know it.

This recalls the famous history of the group, which had two types of members, those who told the fact and those who lied. When you meet a member, you ask the user if it’s still factual or if they’re still lying.

And the user replies that he is the real one.

This may very well be the answer to AI.

Dr. Lance B. Eliot is a world-renowned synthetic intelligence (AI) expert with over 3 million perspectives accumulated in his AI columns. As an experienced high-tech executive

Dr. Lance B. Eliot is a world-renowned synthetic intelligence (AI) expert with over 3 million perspectives accumulated in his AI columns. As an experienced executive and high-tech entrepreneur, he combines industry hands-on experience with in-depth educational studies to provide cutting-edge data on the long-term supply and long-term supply of AI and ML technologies and applications. A former professor at USC and UCLA, and director of a pioneering AI lab, he speaks at primary events in the AI industry. Author of more than 40 books, 500 articles and two hundred podcasts, he has appeared in media such as CNN and has co-hosted the popular radio show Technotrends. He has served as an advisor to Congress and other legislative bodies and has won quite a few awards/recognitions. He is part of several director forums, has worked as a venture capitalist, angel investor and mentor of marketing founders and startups.

Leave a Comment

Your email address will not be published. Required fields are marked *