AI ethics and AI law reveal troubling concerns about Tesla’s AI Day and Elon Musk’s growing AI ambitions

Tesla AI Day 2022 is over and has gone down in the history books, ready for you to reexamine and analyze to your liking.

You see, on Friday evening, September 30, 2022, the latest installment of Tesla’s annual AI Day events took place in Silicon Valley with a global audience watching online. This long-awaited showcase gave Tesla and Elon Musk the opportunity to strut their stuff and show off their latest advances in artificial intelligence. Musk has emphasized that these events are primarily for recruiting purposes, hoping to whet the appetite of AI developers and engineers around the world who might be tempted to apply for a position at Tesla.

I have been inundated with requests to provide in-depth analysis that goes beyond the multitude of cheery news recaps already posted around the web about this latest Tesla AI Day, and that specifically touches on vital considerations such as AI ethics and AI law, which so far don’t seem to have garnered any notable attention online.

And, to be clear, these are matters that need to be examined in an earnest and meaningful way. For my ongoing and extensive coverage of AI ethics and AI law, see the link here and the link here, to name a few.

Let’s unpack what took place at this year’s Tesla AI Day.

A NOD TO AI DEVELOPERS AND ENGINEERS

Before we get into the meat of the presentations, let me say something very important about the AI developers and engineers who presented or who, in Herculean fashion, did the behind-the-scenes AI work at Tesla. They deserve credit for trying to make sense of Musk’s outlandish directives about what they should work on and how quickly they should finish it.

I have discussed Musk’s leadership style and technical acumen in AI in several of my previous articles, such as the ones linked here and here. On the one hand, he is savvy enough to know what is happening in AI in general, and he is a super-inspiring force in aiming for the highest of AI achievements. No doubt about it.

At the same time, he at times seems untethered from practical reality and is packed with indomitable desires and AI dreams galore. Deadlines seem to come out of nowhere. Hunches are the norm rather than any reasoned attempt to derive real-world estimates. He conjures up fanciful visions of how world-changing AI will miraculously come into being, and he announces unattainable timelines seemingly without a decisive touch of systematic, conscious thought (his multitude of predictions about the advent of AI-based fully autonomous self-driving cars has repeatedly proven far-fetched and indefensible).

Some insist that he is a genius and geniuses are like that. This, as it were, is the nature of the beast.

Others retort that a shoot-from-the-hip, calls-the-shots leader is inevitably going to stumble, and potentially do so at a heavy cost that otherwise just wasn’t necessary.

It’s not that it’s impractical to have a senior leader who insists on being on top of things. That can be incredibly useful. But when the wide-eyed visionary goes a bit off the rails, it can be difficult, even perilous, for underlings to rein things in. As discussed on social media, some of the AI developers and engineers on stage with Musk appeared silently embarrassed by his various exaggerated claims. Their minds were probably racing to figure out what they could do or say to save face, keeping the whole exercise on a realistically minimal track and not derailing it completely.

Kudos to those AI developers and engineers.

THREE MAIN THEMES THIS TIME

Okay, with this key preface, we can get into the main points of Tesla’s AI Day.

Three main themes were principally addressed:

(1) Devising a robot with humanoid features (i.e., Bumble C, Optimus)

(2) Advances related to Tesla’s Autopilot and so-called Full Self-Driving (FSD)

(3) Efforts related to Tesla’s supercomputer called Dojo

Here are some quick facts in case you’re not familiar with these Tesla initiatives.

First, Elon Musk has touted that Tesla’s next big breakthrough will involve the development and deployment of a walking robot with humanoid features. At Tesla AI Day 2021 last year, there was an embarrassing “demonstration” of the envisioned robot that involved a person dressed in a robot-looking costume jumping and dancing on stage. I say embarrassing because it was one of the cringiest moments of any AI exhibition. It wasn’t any kind of mock-up or prototype. It was a human in a flimsy costume.

Imagine if those who have toiled tirelessly in AI and robotics research labs throughout their careers were overshadowed by someone dressed in a costume and prancing in front of cameras broadcast around the world. What made this especially galling was that much of the mainstream media swallowed it hook, line, and sinker. They posted photographs of the “robot” on their covers and seemed, gushingly and unmistakably, to gloat that Musk was about to produce the much-sought-after talking and walking sci-fi robots.

Not even close.

Anyway, this year, the costume wearer was no longer needed (though, perhaps, they were waiting in the wings in case an urgent reappearance was suddenly necessary). A somewhat humanoid robot-like system was brought onto the stage at the opening of the session. This robot was called Bumble C. After we were shown this initial edition of the envisioned future robot, a second somewhat humanoid robot system was brought out. That second edition was called Optimus. Bumble C was indicated as being the first prototype attempt and is further along in terms of currently working features than Optimus. Optimus was indicated as the more likely long-term edition of the envisioned humanoid robot and might eventually become a commercial production model.

Overall, most of the action and attention at Tesla AI Day 2022 focused on those walking robots. The headlines followed suit. Advances related to Autopilot and FSD didn’t attract the same attention, and the details about Dojo haven’t appeared in many news stories either.

Speaking of Autopilot and FSD, we ought to make sure that component of Tesla’s AI Day gets some airtime. As avid readers know, I’ve covered Tesla’s Autopilot and its Full Self-Driving (FSD) features many times and in detail.

In short, Tesla cars are currently rated at Level 2 on the autonomy scale. This means that a licensed human driver must be behind the wheel at all times and remain attentive while driving. The human is the driver.

I mention this vital point about the level of autonomy because many non-technical people mistakenly believe that today’s Teslas are at Level 4 or Level 5.

False!

A Level 4 is a self-driving car that does not need or expect a human driver at the wheel. A Level 4 is delineated relative to a specific operational design domain (ODD). For example, an ODD might stipulate that the AI can only drive the car in a particular city such as San Francisco, and only in stated conditions such as daytime, nighttime, and light rain (but not in snow, for example). A Level 5 is an AI-based self-driving car that can operate autonomously virtually anywhere and in any conditions in which a human driver could manageably drive a car. For my detailed explanation of Level 4 and Level 5, check out the link here.
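To make the distinction concrete, here is a minimal sketch of how an ODD-bounded Level 4 system might gate where the AI is allowed to drive. The class names, cities, and weather conditions are purely illustrative assumptions, not any automaker’s actual software.

```python
from dataclasses import dataclass

# Hypothetical sketch: encoding an operational design domain (ODD) check
# for a notional Level 4 system. All names and conditions are made up
# for illustration.

@dataclass
class OperationalDesignDomain:
    cities: set      # where the AI driving system is validated to operate
    weather: set     # conditions the AI driving system is validated for

    def permits(self, city: str, weather: str) -> bool:
        # The AI may only engage when both the place and the conditions
        # fall inside the validated domain.
        return city in self.cities and weather in self.weather

# Example ODD: limited to San Francisco, not rated for snow.
odd = OperationalDesignDomain(
    cities={"San Francisco"},
    weather={"clear", "night", "light rain"},
)

print(odd.permits("San Francisco", "light rain"))  # True
print(odd.permits("San Francisco", "snow"))        # False
print(odd.permits("Oakland", "clear"))             # False
```

A Level 5 system, by contrast, would have no such domain restriction at all, which is what makes it so much harder to achieve.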

You might be surprised to learn that Teslas with Autopilot and the so-called FSD are only Level 2. The phrase “full self-driving” would seem to imply that the cars must be at least Level 4, if not Level 5. The outcry has been that Tesla and Musk call their AI driving system “Full Self-Driving” when that is plainly not the case. Lawsuits have followed. Some jurisdictions have criticized them for the name.

The usual counterargument is that “full self-driving” is an aspirational goal, and thus there is supposedly nothing wrong with naming the AI driving system for what it aims to eventually become. The counter to that counterargument is that people who buy or drive a Tesla with FSD are being misled (or, as critics charge, duped) into believing that the vehicle is in fact Level 4 or Level 5. I won’t elaborate further on that here; check out the link here for more on Autopilot and FSD.

The third topic involved the Tesla supercomputer known as Dojo.

As useful background, a large portion of today’s AI systems use machine learning (ML) and deep learning (DL). These are computational pattern-matching techniques and technologies. The technology under the hood of ML/DL uses artificial neural networks (ANNs). Think of artificial neural networks as a kind of crude simulation that attempts to mimic how our brains use interconnected biological neurons. Don’t mistakenly believe that ANNs are the same as genuine neural networks (i.e., the wetware in your head). They aren’t even close.
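To see just how crude the simulation is, here is a toy sketch of a single artificial neuron, assuming nothing beyond the standard textbook formulation: weighted inputs, a bias, and a squashing activation. Real ML/DL systems stack millions of these units, but each one is just this arithmetic.

```python
import math

def sigmoid(x: float) -> float:
    # Squash any real number into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    # Weighted sum of inputs plus bias, passed through the activation.
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(total)

# Two toy "neurons" looking at the same input pattern with different weights.
features = [0.5, -1.0, 2.0]
print(round(neuron(features, [0.4, 0.3, 0.2], 0.1), 3))
print(round(neuron(features, [-0.4, 0.3, -0.2], 0.1), 3))
```

Nothing here resembles biological neurons beyond the loose metaphor of weighted connections, which is exactly the point of the caution above.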

When devising AI for autonomous driving, the use of artificial neural networks is widespread. Most self-driving cars contain specialized onboard computer processors adapted to run ANNs. To develop and train the ANNs, an automaker or self-driving tech maker typically uses a larger computer that allows for large-scale training and testing. The devised ANNs can then be loaded into the self-driving cars via over-the-air (OTA) update capabilities.

In Tesla’s case, they devised their own supercomputer tailored to training ANNs. This provides an in-house capability that can potentially give the AI developers the kind of computing horsepower they need to devise the AI that will run in the autonomous vehicles.
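The train-on-a-big-machine, then deploy-over-the-air loop described above can be sketched in miniature. Everything here, including the file format, version field, and function names, is an illustrative assumption and not Tesla’s actual pipeline.

```python
import json
import os
import tempfile

def train_on_supercomputer():
    # Stand-in for large-scale training: pretend we fit a tiny model
    # and stamp it with a version number.
    return {"version": 7, "weights": [0.42, -1.3]}

def publish_update(model, path):
    # Serialize the trained model so the fleet can fetch it.
    with open(path, "w") as f:
        json.dump(model, f)

def vehicle_apply_ota(path, current_version):
    # A vehicle only applies the update if it is newer than what it runs.
    with open(path) as f:
        model = json.load(f)
    if model["version"] > current_version:
        return model
    return None

update_file = os.path.join(tempfile.gettempdir(), "ota_model.json")
publish_update(train_on_supercomputer(), update_file)
applied = vehicle_apply_ota(update_file, current_version=6)
print(applied["version"])  # 7
```

The design point is the asymmetry: heavyweight training happens once, centrally, while each car only needs to load the resulting artifact.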

One more thing about artificial neural networks and ML/DL.

Beyond using this technology for autonomous vehicles, the same kind of technology can be used to program robots, including humanoid-style systems such as Bumble C and Optimus.

All in all, I hope you can now see how the three main themes of Tesla’s AI Day relate to one another.

There is a specialized Tesla supercomputer called Dojo that enables the development and large-scale testing of ML/DL/ANN processing. Those ML/DL/ANNs can be devised to serve as the AI driving system and loaded accordingly into Tesla cars. In addition, the programming for the Bumble C and Optimus robot systems can also be devised on Dojo and loaded into the robots. Dojo thus performs double duty. AI developers assigned to Autopilot and FSD can use Dojo for their work efforts, and AI developers assigned to Bumble C and Optimus can use Dojo for theirs.

As you can guess, there is a potential overlap or synergy between the ML/DL/ANN efforts of the AI developers working on Autopilot/FSD and those working on Bumble C and Optimus. I’ll say more about that, so stay on the edge of your seat.

Now that you’re officially up to speed on the basics, we can delve into the salient points of Tesla AI Day 2022.

Congratulations on having come this far.

SOME KEY QUALMS AND ISSUES

Many AI-related qualms and considerations arose while watching Tesla AI Day 2022.

I can’t cover them all here due to space limitations, so let’s pick at least a few to dig into.

In particular, here are the overarching questions I’d like to address:

1) AI and legal issues related to intellectual property (IP) rights

2) Emerging AI-related laws, such as the California Age-Appropriate Design Code Act (AADC)

3) The ethics of AI and robotics

4) AI laws for autonomous driving are not the same as for walking robots

5) AI legal exposures due to team comingling

I’ll go through them one by one and then make a summary.

AI AND LEGAL INTELLECTUAL PROPERTY (IP) RIGHTS

We’ll start with a legal entanglement that has not yet gotten attention and that could prove quite remarkable.

First, note that Bumble C and Optimus were presented as would-be walking robot systems that appeared to have mechanical legs, feet, arms, hands, and some semblance of a main torso and head-like structure. They resemble the humanoid robots you’ve seen in all sorts of sci-fi movies.

During the presentations it was stated that Bumble C consists of semi-off-the-shelf parts. This makes sense in that, to quickly devise a first prototype, the fastest path is to make use of parts that are already known and proven. Doing so lets you get up and running rapidly. It postpones the time spent designing proprietary parts, if that’s what you ultimately want to achieve.

The presentations also seemed to imply that Optimus was mostly composed of native or proprietary parts. It was unclear to what degree the showcased Optimus had that preponderance. In any case, whatever the mix, the implicit suggestion was that the aim is to be as in-house as possible. This can make sense, since it means having full control over the parts and not relying on third parties to supply them.

So far, so good.

However, a nagging problem may arise.

Let me explain.

You may be vaguely aware that Musk has mocked the use of patents, a form of intellectual property (IP). His tweets have implied that patents are for the weak. They have implied that IP is mainly used for trolling purposes. And they have implied that intellectual property rights, such as patents, slow down or obstruct technological progress.

Given this philosophy emanating from Tesla’s top executive, we ought to ask some pointed questions.

Will Tesla seek patents for the proprietary parts of Bumble C and Optimus?

If so, doesn’t that mean Tesla and Musk are “weak,” in the same sense that Musk has ridiculed others seeking intellectual property protection?

If the aim is not to obtain patents for the robotic systems, one wonders how they’ll feel when others start devising walking robots of a similar nature and do so by imitation or outright copying. Will Tesla and Musk legally pursue those who do, claiming the parts are trade secrets and proprietary in nature?

Or might they patent the technology and then openly make the patents available to everyone? They did this as a vaunted means of spurring the adoption of electric vehicles. Will the same hold for robotic systems?

Perhaps even more alarming for Tesla and Musk would be if they were found to be infringing on other robotic systems that have patents and established IP.

One might doubt that, at the frenetic pace of Tesla’s AI developers and engineers, they have been diligently searching patents to ensure their parts do not infringe existing ones. Most likely it isn’t a priority, or such discussions have been shelved for the time being. Why worry now, when the potential legal issue of intellectual property can be kicked down the road? When facing tight deadlines, you make do with the moment and assume that someone else, perhaps years from now, will pay the price for today’s negligence.

There are a great many patents in the AI field. There are multitudes of patents for robot hands, robot arms, robot legs, robot feet, robot torsos, robot heads, and so on. It is a legal minefield. I’ve served as an expert witness in intellectual property cases in the AI realm, and the massive glut of patents, along with their overlapping nature, makes for treacherous territory.

For those of you who hold patents on robot limbs and other walking-robot components, perk up. Start taking a closer look at Bumble C and Optimus. Get in line with your intellectual property attorneys. As the days go by, a gold mine might be in the making for you, one that, if built upon your intellectual property, could yield a handsome payday from a gigantic corporation with gloriously deep pockets.

You can shrug off the stinging “weak” label and still laugh all the way to the bank.

EMERGING AI LAWS AND RELATED LEGAL COMPLICATIONS

You might be wondering what Bumble C and Optimus will be used for. Since Bumble C appears to be a quick-and-dirty prototype, let’s focus on Optimus, which is slated to be the ongoing, long-term robot of interest to Tesla.

What will Optimus be used for?

According to Musk, with these types of robots we will never again need to lift a hand to perform arduous physical tasks or labor. At home, the walking robot will take out the garbage, put the clothes in the washing machine, fold them after taking them out of the dryer, prepare your dinner, and do all manner of household chores.

In the workplace, Musk suggested that such robots could work assembly lines. Besides operating in factories or potentially harsh working conditions, these robots could also handle everyday workplace tasks. During the presentation, a short video clip showed the robot moving a box as if handing it to a human worker in a factory setting. We were even teased with a short clip of the robot watering a plant in an office.

I’m sure we can readily come up with a litany of ways to use a walking robot that has a human-like set of capabilities.

I have a twist for you.

Imagine Optimus being used in a home. The robot performs household tasks. Naturally, we’d assume that Optimus will have some form of conversational interactivity, akin to Alexa or Siri. Without a viable means of communicating with the robot, you would struggle to have it move comfortably around your home amongst you, your partner, your children, your pets, and so on.

For those of us watching online, there was no demonstration of Optimus being able to speak or verbally interact. Nor was there any indication of natural language processing capabilities.

Instead, we only saw Bumble C haltingly walk (wobbly and unsteady; I’d bet the engineers were nearly in cardiac arrest, praying that the darned thing wouldn’t topple over). Optimus, meanwhile, was wheeled or carried onto the stage. No walking took place. We were told that Optimus is close to being able to walk.

A classic underwhelming demonstration, compounded by the fact that much of the mainstream media seemingly accepted it anyway.

Dancing robots around the world must have cringed at what happened on that stage.

But I digress. Let’s get back to Optimus as a walking robot in a household, and assume that children are present in this home.

California recently passed a new law known as the California Age-Appropriate Design Code Act (AADC). I’ll be covering this new law in my column, and you can bet that other states will soon pass similar laws. This is a law anyone devising AI should know about (indeed, anyone devising any kind of computing that might come into contact with children should know about it too).

The essence of the law is that any system likely to be used by children must comply with provisions devised to safeguard aspects of children’s privacy. Whatever personal data an AI system or other computing system collects about a child must respect children’s specific data privacy and their rights. Penalties and other legal consequences for failing to comply with the law are specified.

If Optimus is used in a household that happens to have children, the robot could readily collect personal data about a child. Verbal utterances might be recorded. The child’s location might be tracked. The robot could detect all manner of detailed data about the child.
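To make the compliance concern tangible, here is a hedged sketch of what a data-collection gate inside such a robot might look like: detect whether the sensed person is a minor and, if so, minimize what gets stored. This is purely illustrative, not legal advice and not any statute’s actual requirements; the function names and data fields are invented for the example.

```python
from datetime import date

def is_minor(birthdate: date, today: date) -> bool:
    # Compute age in whole years, then compare against majority age.
    age = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day)
    )
    return age < 18

def record_observation(person, observation, today):
    # Data minimization: for minors, keep only a non-identifying event type
    # and drop location, audio, and other detailed data.
    if is_minor(person["birthdate"], today):
        return {"event": observation["event"]}
    return observation

obs = {"event": "entered_room", "location": "kitchen", "audio": "raw-clip"}
child = {"birthdate": date(2015, 6, 1)}
adult = {"birthdate": date(1980, 6, 1)}
print(record_observation(child, obs, date(2022, 9, 30)))
print(record_observation(adult, obs, date(2022, 9, 30)))
```

Even this toy version shows the design question a robot maker would face: the gate has to exist at the point of collection, not be bolted on afterward.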

Does the Optimus team have a plan for how to comply with this new law and the plethora of other emerging AI laws?

Again, this is probably near the bottom of the priority list. The thing is, this law and other AI-related legislation are spreading like wildfire. An AI-based walking robot is going to step into a hornet’s nest of legislation. Tesla could bring in the lawyers now and anticipate what lies ahead legally, hoping to steer clear of legal quagmires and provide guidance to the AI developers and engineers, or it can do the usual tech-centric thing and just wait to see what happens (i.e., the usual thing, dealing with it only after getting mired in a legal quagmire).

Pay me now or pay me later.

Many times the techies don’t pay now, and they end up surprised and paying dearly later.

THE ETHICS OF AI AND THE PROBLEM OF ROBOTICS

In previous columns, I have covered the various national and international efforts to craft and enact laws regulating AI; see the link here, for example. I have also covered the AI ethics principles and guidelines that many nations have identified and adopted, including, for example, the United Nations effort such as the UNESCO set of AI ethics principles that nearly 200 countries have adopted; see the link here.

Here is a helpful keystone list of ethical AI criteria or characteristics of AI systems that I have previously explored closely:

These AI ethics principles are meant to be earnestly applied by AI developers, along with those who manage AI development efforts, and even those who ultimately field and maintain AI systems.

All stakeholders throughout the AI lifecycle of development and use are considered within the scope of abiding by the established norms of ethical AI. This is an important point, since the usual assumption is that “only coders,” those who program the AI, are subject to AI ethics notions. It takes a village to devise and field AI, and the entire village has to know and abide by AI ethics precepts.

Have Tesla and Elon Musk been giving serious and devoted attention to the AI-ethics ramifications of a walking robot?

Based on what was said during the presentations, it appears this issue has so far received only superficial attention.

During the Q&A, Musk was asked whether they had examined the overall societal impacts of walking robots. We all know that Musk has repeatedly stated that he views AI as an existential threat to humanity; see the link here. Indeed, one would assume that if we are making robots that will walk among us, and we expect many millions of those robots to be sold for public and personal use, this naturally raises weighty AI-ethics issues for humanity.

The answer given to the question suggested that the current efforts are too early-stage to be deeply exploring the AI ethics particulars.

This is the same old, depressing refrain from techies.

Many AI developers and engineers see AI ethics as an afterthought. Don’t bog down the current AI work efforts. Just keep building. Someday, sure, AI ethics will rear its head, but until then, press on hurriedly and at full speed.

Regrettably, the head-in-the-sand approach to ethical AI is bad news for everyone. Once the AI, or in this case the robotic system, is far down the development path, it becomes increasingly complicated and costly to retrofit AI ethics precepts into the system. This is a shortsighted way to deal with ethical AI considerations.

Suppose they wait until the walking robot is already installed in homes. At that juncture, the risks of harm to people are heightened, and besides the chance of causing demonstrable harm, a company that waited until those later stages is going to face enormous legal challenges. You can bet that pointed questions will be asked about why these ethical AI aspects weren’t given due attention and addressed earlier in the AI development lifecycle.

The fact that Musk has himself raised AI ethics considerations when discussing the existential dangers of AI makes this apparent oversight, or lack of present concern for ethical AI in his walking robots, an even more vexing question.

Given Musk’s professed awareness, this is puzzling.

Some top leaders don’t even realize there are ethical AI issues to be dealt with. I have fervently discussed the importance of companies establishing AI ethics boards; see the link here.

AI LAWS FOR AUTONOMOUS DRIVING ARE NOT THE SAME AS FOR WALKING ROBOTS

I mentioned earlier that today’s Teslas using Autopilot and FSD are at Level 2 autonomy.

This is convenient for Tesla and Musk, since they can cling to the notion that because a Level 2 requires an attentive human driver at the wheel, almost everything the AI driving system does escapes scrutiny from a liability standpoint. They can simply insist that the human driver bears responsibility for the driving.

Note that this will not be the case at Levels 4 and 5, where the automaker or fleet operator will likely have to step in as the responsible party for the actions of the autonomous vehicle.

Also note that the claim that the human driver is liable can only be stretched so far at Level 2 and Level 3, and we will soon see legal cases probing how far that stretch can go.

During the presentation, several points were made about how the work on AI-based autonomous vehicles can be seamlessly transferred or reapplied to the walking robot realm. That’s partially true. However, it is also a misleading and, in some respects, even dangerous portrayal.

We can start with the obvious carryover involving AI-based vision processing. Self-driving cars use video cameras to collect images and video of the vehicle’s surroundings. ML/DL/ANNs are used to computationally find patterns in the collected data. You do this to identify where the roadway is, where the other cars are, where the buildings are, and so on.

In theory, you could reuse those same ML/DL/ANNs, or similar ones, to try to perceive what a walking robot is encountering. In a home, the robot’s vision system would scan a room. The collected video and images could be computationally analyzed to figure out where the doors are, where the windows are, where the sofa is, where people are, and so on.
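The reuse idea can be sketched as one shared “vision backbone” feeding two task-specific heads, one for driving scenes and one for household scenes. This is a toy stand-in assuming nothing about Tesla’s actual networks; the feature extractor here is deliberately trivial to make the sharing structure visible.

```python
def backbone(pixels):
    # Toy shared feature extractor: summarize an "image" as
    # (average brightness, brightness spread). A real backbone would
    # be a deep neural network producing thousands of features.
    mean = sum(pixels) / len(pixels)
    spread = max(pixels) - min(pixels)
    return (mean, spread)

def driving_head(features):
    # Task-specific head for driving scenes (illustrative rule).
    mean, spread = features
    return "road" if spread > 0.5 else "sky"

def household_head(features):
    # Task-specific head for household scenes (illustrative rule).
    mean, spread = features
    return "window" if mean > 0.6 else "sofa"

image = [0.1, 0.9, 0.4, 0.8]   # same input pixels for both tasks
feats = backbone(image)        # shared computation done once
print(driving_head(feats))     # driving task reuses the features
print(household_head(feats))   # household task reuses the same features
```

The shared backbone is where the claimed synergy lives; the catch, as the next paragraphs argue, is that the legal context around the two heads is not shared at all.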

That is reasonable.

But here’s the twist.

For Level 2 autonomous vehicles, the driving is dependent upon a human driver. The legal duty for what the car does falls on the shoulders of that human driver. There will most likely be no such legal cover in the case of a walking robot.

In other words, there’s a walking robot in your house. Assume the robot is not being teleoperated by an adult. The robot moves freely through the space via the AI built into the walking robot.

The robot bumps into a fish tank. The fish tank crashes to the floor. A child who was nearby gets cut by the flying glass. Fortunately, the child is fine and the cuts are minor.

Who is liable for what happened?

I’d venture that we would all agree the robot was “at fault” in the sense that it struck the fish tank (all else being equal). There is an ongoing and heated debate about whether we will assign legal personhood to AI and thereby potentially hold the AI itself accountable for adverse acts. I’ve covered this at the link here.

For this scenario, I don’t want to get bogged down in whether this AI has legal personhood. I’ll say it doesn’t. We’ll assume this AI has not reached a degree of autonomy meriting legal personhood.

The liable party would appear to be the maker of the walking robot.

What did they do to devise the robot so that it would not strike objects? Was it foreseeable that the robot might do this? Was there an internal fault of the robot that caused this action? On and on we can legally probe what happened.

Have Tesla and Musk realized that the legal cover they enjoy with their cars will not carry over to the robots they intend to make?

Walking robots are a different animal, so to speak.

Once again, legal and AI-ethics repercussions arise.

AI LEGAL EXPOSURES DUE TO TEAM COMINGLING

The presentations suggested a lot of crossover between the AI autonomous driving team and the walking robot team. Per my earlier points, this makes sense. Many facets of the hardware and software have similarities, and wherever feasible, they can serve double duty. Additionally, this can boost the robotics side as it frantically tries to spin up from a standing start and catch up with Musk’s ambitious pronouncements.

There is a difference, however.

There’s a twist, but then again, life is like that.

Let’s say the AI self-driving team is tapped to aid the walking robot team. We can certainly envision this happening. There they are, with their hands full trying to advance Autopilot and FSD toward greater degrees of autonomy, and meanwhile they are yoked to the walking robot team to jumpstart its efforts.

How distracted or depleted does the AI autonomous driving team become through this two-hats approach? Will it impact the autonomous driving ambitions?

And not only the ambitions; one could logically anticipate that the depletion of the autonomous driving team might also lead to bugs cropping up in the autonomous driving system. Maybe they didn’t triple-check things as they usually would. Perhaps they took feedback from the walking robot team and changed the autonomous driving code, and that change wasn’t as thoroughly tested and vetted as it should have been.

In short, anyone seeking to sue Tesla over its autonomous driving would now have a ready opportunity to argue that whatever flaws are claimed or discovered in Autopilot or FSD would not have existed but for management’s decision to comingle the two otherwise disparate teams.

Imagine how that would look to a jury.

The autonomous driving team was once heads-down and focused exclusively on autonomous driving. Then they got roped into this new walking robot effort. The argument could readily be made that this led to errors and omissions on the autonomous driving side. The company wanted to have its cake and eat it too, but ended up splitting the cake, and some of the icing fell to the floor.

We don’t know whether the comingling created any such vulnerabilities. It’s merely a possibility. But for intrepid attorneys looking to go after Tesla on the autonomous vehicle side, the door has been cracked open, offering a legal opening.

CONCLUSION

Tesla AI Day 2022 attracted a lot of attention.

For example, Musk indicated that the walking robots will produce economic output on the order of twice that of humans. He even went further, suggesting the productivity multiplier could be boundless.

Where are the underlying numbers that could shed light on productivity improvements of two times, or N times?

I’m not saying that 2x or Nx is wrong. The problem is that such unsubstantiated claims arriving out of nowhere are plain hyperbole until substance is provided to back them up. The more troubling facet is that the headlines report that he made such claims, and those claims in turn get repeated over and over until they become “factual,” with no one realizing they were perhaps conjured out of thin air.

Another eyebrow-raising aspect was Musk’s statement that the walking robots might cost around $20,000.

First, if that turns out to be the case, it would be remarkable given the likely cost of the parts and the expenses tied to developing and fielding walking robots, plus the desire to turn a tidy profit. Where did that number come from? Was it chosen because it sounds good, or is it based on bona fide analysis?

We also don’t know, and there was no discussion of, the maintenance associated with these walking robots. Maintaining a car is very different from maintaining a walking robot. How will the robot get to a maintenance site, given its size and weight? Will human repair technicians need to come to your home for upkeep? How much will maintenance cost? What frequency of maintenance will be required?

Let’s say the price is $20,000 or thereabouts. Sure, $20,000 might be pocket change for Musk, but how many everyday people would buy a walking robot at $20,000? I dare say not many. You could try to argue that this is the price of a car (a low-end one). But it could be said that a car is far more useful than a walking robot.

With a car, you can get to work and earn money to pay your bills. You can use a car to go shopping. A car can take you to the hospital or take you out for fun. A humanoid robot that waters your houseplants or makes your bed does not seem to have the same usefulness.

To be clear, yes, many people with higher incomes could readily have a humanoid robot in their home. In that sense, there would indeed be a market for humanoid robots. The question, however, is whether this will be equitable for our society. Will there be those who can afford humanoid robots and those who cannot?

We can also doubt that humanoid robots will enjoy the same sense of social respect and seriousness as electric cars. Electric cars can be sold by pointing out that they are better for the environment than conventional cars. Governments may also offer incentives to that end. Does any of this apply to humanoid robots? They seem a harder sell.

A few more comments and we’ll close this discussion for now.

A notable aspect of the humanoid robot segment was the unapologetic anthropomorphization of the robots.

Anthropomorphization refers to representing AI as being equivalent to humans. People can be fooled into believing that AI can do what humans can do, or even surpass what humans can do. People thus deceived risk finding themselves in dire situations, because they will assume the AI can act as a human would.

When the Bumble C came onto the stage, it waved its arms. The arm movement was precisely what you would expect of a human. Your first instinctive reaction would surely be that the humanoid robot was “thinking” and realized it was walking onto a stage in front of people, and that the robot was being polite and sociable in greeting the assembled crowd.

I can just about guarantee you that the robot did not “think” in any manner akin to human thinking.

For all we know, there was an operator somewhere nearby, or perhaps working remotely, controlling the robot’s arms. In that case, the robot did not even have software operating the arms on its own.

Suppose there was software operating the arms. That software was probably incredibly simplistic: once activated, it raised the arms, waved the hands, and kept doing so for a short period of time. It is highly unlikely that the software consisted of a vision-processing system that captured video imagery of the audience and then performed computational “reasoning” that opted to wave the robot’s arms.
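To make the distinction concrete, the kind of pre-scripted routine described here could be as trivial as replaying a fixed sequence of joint angles. This is purely an illustrative sketch; the function name and the joint-angle scheme are hypothetical and have nothing to do with Tesla’s actual software:

```python
import math

def scripted_wave(duration_s=3.0, hz=10):
    """Replay a fixed arm-wave sequence: no sensing, no reasoning.

    Returns a list of (shoulder_deg, elbow_deg) angle pairs, one per
    control tick. The shoulder stays raised while the elbow oscillates,
    producing a wave -- exactly the sort of canned playback that can
    look deceptively 'thoughtful' to an audience.
    """
    frames = []
    steps = int(duration_s * hz)
    for i in range(steps):
        t = i / hz
        shoulder = 90.0  # arm held raised, constant the whole time
        elbow = 30.0 * math.sin(2 * math.pi * t)  # swing +/- 30 degrees
        frames.append((round(shoulder, 1), round(elbow, 1)))
    return frames

frames = scripted_wave()
```

The point of the sketch is that a loop like this contains no perception of the audience at all; a vision-based system that decided when and whether to wave would be an entirely different order of complexity.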

My point is that having the robot wave is a false or misleading representation of what the robot can do, and it fools people into thinking the robot is human-like. I have expressed the same concerns about dancing robots: it is cute and makes headlines to have waving robots and dancing robots, but unfortunately it also overstates what those robots can do.

Referring to the humanoid robot’s processor as a “Bot Brain” is another example of anthropomorphization. These processors are not brains in the sense of human brains. It is a poor choice of wording.

You might exclaim that everyone, or at least many, in AI exploit anthropomorphization to try to stand out and garner praise and attention for their AI. Yes, I agree with you. Does that make two wrongs a right? I don’t think so. It is an improper practice, and we ought to try to curb it, or at least reduce its prevalence. Admittedly, doing so is like pushing a giant boulder up a steep, endless hill.

Let us now bring this discussion to a close.

Elon Musk has previously said the following about the direction AI is taking: “Mark my words, AI is far more dangerous than nukes . . . why do we have no regulatory oversight?” He made similar remarks at Tesla AI Day.

I agree with him on the need for regulatory oversight, though with the small caveat that it has to be the right kind of regulatory oversight. I have covered proposed regulatory oversight of AI that unfortunately misses the mark, as explained in the link here.

One would hope that Tesla and Musk will not only support the coming of prudent and proper AI laws, but also lead the way in demonstrating the importance of soft-law approaches such as AI ethics alongside the hard laws on the books.

As the old saying goes, our words serve as a lamp unto our feet and a light unto the path ahead.

That covers things.
