A recent report shaking Hollywood concerns the casting of a robot to star in a $70 million sci-fi movie.
You might wonder why the choice of using a robot in a sci-fi film would raise any eyebrows at all, given that robots have been a staple of the genre for decades.
Here’s the twist.
Backers of the film claim that the robot will use AI to act and behave the same way a human actor would, making this supposedly the first time a film will feature an artificially intelligent actor.
It is claimed that the robot has “learned” to act and has adopted the well-known technique of method acting.
So, just to clarify, this is not some kind of CGI movie-editing trick, and the robot supposedly has no hidden human intern or handler sitting off-camera with a remote control. The claim is that the robot will use its built-in AI and act entirely on its own, using its voice and body movements.
For all aspiring actors: if you weren’t already worried about the difficulty of landing acting jobs, keep in mind that once these AI-based robots join SAG (the Screen Actors Guild) and start auditioning for juicy roles in movies and television, you’ll have even more reason to be discouraged about the career you’ve chosen.
Imagine coming home after a bruising audition for a role in a new series. A close friend asks how it went, and you complain that some blasted robot won over the producer and director, lamenting that you lost an acting gig to one of those robot androids turned actor.
Damn the robots!
In case you’re wondering, in this planned sci-fi movie the robot is considered to be a woman, at least according to the filmmakers, and headlines have dubbed the robot the film’s lead actress.
Besides, she’s been given a name: Erica.
How do we know she’s a woman?
Because the robot’s maker says so, and because the robot has been given a female-like facial appearance, and the voice and mannerisms programmed into the robot resemble those of a woman (according to the perspectives of those making the film).
If you’re wondering whether the gender facets go any deeper than that, it’s quite unlikely.
Of course, an obvious and immediate complaint is that this “woman” who is an “actress” will embody all the stereotypical assumptions that the robot maker and the other people involved in the film hold about the nature of women and femininity.
This is worthy of concern.
There are many more concerns to stack onto this notion of a supposedly artificially intelligent actor or actress.
From an AI point of view, the whole affair stinks, unfortunately.
How?
By pursuing this, the filmmakers suggest that AI embodies the same thought processes and talents as humans, essentially as if AI were sentient (for my explanation of AI and sentience, see the link here).
Know this: there is no sentient AI today, and no such AI is on the horizon, so any news story or marketing that tries to suggest otherwise is mistakenly perpetuating an unwanted myth and falsehood.
The danger of these attempts to anthropomorphize today’s AI is that they can lead the public to believe that AI can do things it cannot, and that belief can cause real problems when people assume AI will carry out activities in a human-like, thoughtful manner.
Don’t fall for this fakery.
That’s why the notion of a well-funded film choosing to publicize an AI masquerade of human talent is extremely disconcerting and downright disturbing.
If the film racks up sizable box office receipts once it’s finished and released, the film and its attendant marketing campaign will likely amplify the exaggerated portrayal of AI. People who watch the film can easily get hooked and slide into believing what they see.
Anyone who takes AI seriously might at first be thrilled that AI is getting so much attention, but that initial joy among AI developers will quickly turn sober when they are asked to create an AI capable of doing things that only humans can do today.
That’s when the reality of the limits of what AI can do will hit the proverbial fan.
In short, despite the fanfare about fielding an AI-based robot that can supposedly act entirely on its own, this is a programmed contrivance that bears no resemblance to human intelligence and merely employs various deceptions to appear human.
Ways to create fake AI impressions
The robot named Erica is known among AI insiders and has been around for several years as an ongoing research project (see this research paper and this one here).
From time to time, news stories have been written about what the robot does.
The trouble with most of those splashy stories is that they are written by someone who likely has no idea what AI is or how robots work, so the writer tends to gush and fall in love with whatever is claimed to be the latest advance in AI and robotics (even though they have no basis for making such a judgment or proclamation).
It can be difficult to discern whether such writers are naive, simply want to believe, are chasing a wonderful story, or have something else in mind.
Some instances involve being handed a script containing predetermined questions to ask the AI-based robot, and doing so willingly, without questioning the suitability of such a technique for conducting an interview or producing news reports.
We have all become accustomed to the advances in natural language processing (NLP) that have emerged in recent years, as demonstrated by the popularity of Alexa, Siri, and the like. At first, those unfamiliar with modern NLP were astonished to see these NLP systems respond to verbal commands.
Anyone who has tried to use these NLP systems for any length of time, and for any kind of non-trivial discussion, is now aware that, despite the impressive advances so far, NLP-based AI is still far from conversing the way humans do (indeed, there is a great deal of research on conversational AI aiming to advance these capabilities; see my discussion at this link here).
If you provide a script of questions and pose them to an AI system, you don’t need to be a rocket scientist to guess that the NLP will respond with seemingly human answers, because it has been programmed in advance to do so.
The moment you deviate from the script, you will start to trip over the limits of what the AI can do. It might hold up briefly, but as you move toward what an everyday discussion with a human would be, the AI will gradually fail to sustain a supposedly engaging conversation.
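To make the scripted-questions point concrete, here is a minimal sketch (all names and canned phrases are invented for illustration, not drawn from any actual system) of how a bot can look impressively conversational while the interviewer sticks to prepared questions, and instantly fall apart the moment anyone goes off-script:

```python
# Hypothetical illustration: a lookup table of prepared questions and
# canned answers. On-script questions get polished, human-sounding
# replies; anything else exposes the absence of real understanding.
CANNED_ANSWERS = {
    "what is your name?": "My name is Erica. It is a pleasure to meet you.",
    "do you enjoy acting?": "Acting lets me explore what it means to be human.",
    "what is your favorite film?": "I admire classic science fiction films.",
}

def scripted_reply(question: str) -> str:
    # Normalize the question and look it up verbatim; no parsing,
    # no reasoning, just retrieval of a pre-written line.
    key = question.strip().lower()
    return CANNED_ANSWERS.get(key, "I'm sorry, I don't understand the question.")

# On-script: sounds eerily human. Off-script: the illusion collapses.
print(scripted_reply("Do you enjoy acting?"))
print(scripted_reply("What did you have for lunch today?"))
```

A journalist following the handed-out script would only ever see the first kind of response, which is precisely why such demos prove nothing about intelligence.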
You may want to know some of the industry tricks used to give the impression that NLP-based AI is human or has human-like skills.
One technique is to have the NLP emit filler utterances, such as saying “uh-huh” or “okay,” as a human might while listening. This gives you the feeling that the AI is actively listening to you, but it is more of a trick than a sign of comprehension or understanding.
Another ploy is to use fallback statements when needed.
Let’s say you made a long remark and the NLP AI had no idea what you said, since it could not parse the words and find an aligned response. In that case, instead of directly and honestly stating that the system did not understand what you said, which would be clear evidence that the NLP AI is weak, the response would be something like “very interesting” or “tell me more.”
The handy thing about these fallback declarations is that you will tend to think the NLP AI understood what you said and is engaged and eager to continue the discussion.
Parroting is also a convenient way to fool someone.
If a human tells the AI that he is tired, the reply might simply be “tell me why you are tired,” and the human then assumes the AI is sympathetic and has understood the remark (there is still no common-sense reasoning in today’s AI, and a long way to go to get there; see my discussion at this link here).
The icing on the cake is the addition of seemingly emotional actions, such as emitting a laugh or a sigh, which appear to be human responses. However, this can be a double-edged sword: if the NLP AI bursts into laughter when you haven’t said anything remotely funny, the canned laughter can reveal itself as dishonest and break the human-like polish.
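The three tricks just described can be sketched in a few lines of code. This is a hypothetical illustration (the keyword, phrase lists, and thresholds are all invented for this example), showing how little machinery is needed to fake filler utterances, fallback deflections, and parroting:

```python
import random

# Invented phrase lists for illustration only.
FILLERS = ["uh-huh", "okay", "I see"]          # fake "active listening"
FALLBACKS = ["Very interesting.", "Tell me more."]  # deflect when lost

def trick_reply(user_utterance: str) -> str:
    text = user_utterance.strip().lower()
    # Parroting: echo a recognized keyword back to feign sympathy.
    if "tired" in text:
        return "Tell me why you are tired."
    # Fallback: a long remark the bot cannot parse gets a deflection
    # instead of an honest "I don't understand."
    if len(text.split()) > 8:
        return random.choice(FALLBACKS)
    # Filler: short remarks earn a listening noise.
    return random.choice(FILLERS)
```

None of these branches involves any understanding of what was said; each one is a surface cue chosen purely to keep the human talking and maintain the illusion.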
This brings up an intriguing concept called the Uncanny Valley.
The theory holds that as an AI-based robot moves from an obviously robotic appearance toward a human one, there comes a point where the appearance begins to evoke revulsion in the human interacting with the robot. Apparently, when you can easily tell that a robot is just a robot, you are tolerant and ready to interact, but when it gets too close to human behavior without quite reaching it, the effect is creepy.
In this sense, the AI robot falls into a “valley” in terms of our feelings toward the system, and the only way out is to retreat to its former, less human-like self, or to leap forward until it becomes indistinguishable from a human.
Not everyone agrees that the Uncanny Valley proposition is valid, but it provides intriguing fodder for thinking about the best way to field an AI-based robot system.
I will briefly mention a related topic about AI that might interest you.
In the AI field, there is a type of test known as the Turing Test (for detailed coverage, see this link here). The notion involves placing an AI system behind a curtain, so to speak, hidden from view, placing a human behind another curtain, and having a moderator ask them both various questions. If the moderator cannot distinguish which is the AI and which is the human, the AI can be said to have demonstrated the equivalent of human intelligence and passed the test (the test is named after its originator, the famed mathematician Alan Turing).
At first glance, the Turing Test seems perfectly reasonable.
There’s a catch, though.
Perhaps the biggest problem is the moderator. If the moderator does a poor job of asking questions and engaging the hidden contestants, the nature and scope of the interaction may be inadequate for making a sound judgment about which is which.
This is akin to my earlier point about writers or journalists following a predetermined script. In effect, they are acting as a “moderator” conducting a test of the AI, yet sticking to a series of predefined questions.
Keep in mind as well that most NLP AI systems rely on a corpus of human-machine dialogue, that is, a database leveraged by AI techniques, and once you go beyond that established base, what the NLP AI can do degrades.
If you want to test how shallow or deep an NLP AI really is, the simplest way is to jump around among the knowledge domains involved, probing where the limits of the system lie.
Please do not misconstrue my remarks as suggesting that the use of NLP AI is wrong or should be avoided.
There are many useful applications of NLP AI, and it should be promoted for what it can actually do.
You may have used one of the newest NLP AIs to prepare for a job interview, or, for seniors, NLP AI can be an easy way to operate devices in the home. Chatbots can now quickly walk someone through a full online car-loan application, and similar automated support is provided via NLP.
The trouble arises when AI is depicted as fancier and more capable than it is.
The growing interest in AI ethics has been sparked in part by outlandish claims from AI developers and deployers of AI systems who are overly eager to describe their AI as human-like when that is by no means the case (for facets of the importance of AI ethics, see my coverage at this link).
Pursuing AI and NLP robots toward the laudable goal of being human-like is fine and to be encouraged, but the results should be shared with the public in a way that offers the necessary caveats and unvarnished limits of what the technology can do.
As for the robot Erica supposedly using “method acting” to hone her craft, such a claim would have Konstantin Stanislavski turning in his grave (he was the famed Russian theatre practitioner whose system underlies what became known as the Method). In short, the technique calls for a human actor to dig into their inner motivations, blending conscious and subconscious thought.
Claiming that today’s AI is capable of doing the same is not only hyperbole, but also denigrates the substance of method acting and how it works.
But this is par for the course when AI is described hyperbolically.
Consider for a moment other arenas in which AI is misrepresented, such as the advent of true self-driving cars.
Let’s take a look at what’s going on.
Understanding self-driving cars
To be clear, true self-driving cars are ones in which the AI drives entirely on its own, with no human assistance during the driving task.
These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to share the driving effort is considered Level 2 or Level 3. Cars that share the driving task are described as semi-autonomous and typically contain a variety of automated add-ons known as Advanced Driver-Assistance Systems (ADAS).
There is no true self-driving car yet at Level 5, and we don’t even know whether it will be achievable, or how long it will take to get there.
Meanwhile, Level 4 efforts are gradually trying to gain traction through very narrow and selective public-roadway trials, though there is controversy over whether such testing should be allowed at all (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out; see my discussion at this link here).
Since semi-autonomous cars require a human driver, the adoption of such cars won’t be markedly different from driving conventional vehicles, so there’s not much new per se to cover on this topic (though, as you’ll see in a moment, the points made next are generally applicable).
For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that has arisen lately: despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we must all avoid being misled into believing that the driver can take their attention away from the driving task while operating a semi-autonomous car.
You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into Level 2 or Level 3.
Self-driving cars and AI
Returning to the problem of AI being portrayed as more capable than it is, there are plenty of examples in the realm of self-driving cars.
One just described involves Level 2 and 3 cars, where some automakers and self-driving tech firms overstate or tend to imply that the semi-autonomous system can do more than it can.
And for those of you who doubt the importance or seriousness of AI misrepresentation, realize that, in the case of driving a car, this is a deadly serious matter with life-or-death consequences.
A human driver who does not grasp the limits of the AI performing the driving task can be lulled into dire circumstances and injured or killed, along with any passengers and others who might be nearby when a car crash happens.
In the case of Levels 4 and 5, there will be no human driver at the wheel, so the driving-sharing issue is obviated.
That said, just because an automaker or self-driving tech firm claims to have AI capable of driving a car smoothly and safely doesn’t mean we should take their word for it. A true self-driving car on our streets is a multi-ton vehicle that can wreak great damage and destruction if it is not fit to drive on its own.
Conclusion
The problem with this sci-fi film and its apparent effort to exaggerate the capabilities of AI is the spillover it can have into other arenas of AI use.
Perhaps someone who sees the film will become bolder with their Level 2 or Level 3 car, believing from the film that AI is magically capable and sentient, and therefore that it’s acceptable to be less attentive to the driving task.
That would be a shame (or worse), turning what was meant to be sci-fi escapism into a real-world disaster.
Don’t believe everything you see, and be doubtful when today’s AI begins to speak or act as though it can think like a human; I assure you, it is merely a form of programmatic “method acting,” whereby clever AI tricks try to take on the actor’s role of being human, despite being far from human talents and playing way out of its league.
The well-known acting teacher Sanford Meisner, creator of the Meisner technique, said, “Acting is behaving truthfully under imaginary circumstances.”
I think we want AI that behaves truthfully under real-world circumstances.
Cut and print, that’s a wrap!
Dr. Lance B. Eliot is a world-renowned artificial intelligence (AI) expert with over 3 million cumulative views of his AI columns. As a seasoned executive and high-tech entrepreneur, he combines hands-on industry experience with in-depth academic research to provide cutting-edge insights into the present and future of AI and ML technologies and applications. A former professor at USC and UCLA, and director of a pioneering AI lab, he speaks at major AI industry events. Author of more than 40 books, 500 articles, and 200 podcasts, he has appeared in media outlets such as CNN and co-hosted the popular Technotrends radio show. He has served as an advisor to Congress and other legislative bodies and has received numerous awards and recognitions. He serves on several boards of directors, has worked as a venture capitalist and angel investor, and mentors startup founders and entrepreneurs.