A New Generation of Robots Seems Increasingly Human

In 1775, a Swiss watchmaker named Pierre Jaquet-Droz visited King Louis XVI and Queen Marie Antoinette, in Versailles, to show off his latest creation: a “living doll” called the Musician. She was dressed in a stiff rococo ball gown and seated at an organ; as her articulated fingers danced across the keyboard, her head and eyes followed her hands, and her chest rose and fell, each “breath” animating her apparent emotional connection to the music. Jaquet-Droz proceeded to demonstrate his automata in the royal courts of England, the Netherlands, and Russia, where they enthralled the nobility and made him rich and famous. The word “robot” would not enter the lexicon for nearly a hundred and fifty years, but Jaquet-Droz is now considered the creator of some of the world’s first androids—robots with a human form.
I was thinking about the Musician in late May, when I shared the stage with a robot named Sophia and her creator, David Hanson, at the Mountainfilm Festival, in Telluride, Colorado. (I am using the female pronouns that Hanson uses, and which he programmed Sophia to adopt.) At the time, large language models such as OpenAI’s ChatGPT were big news, and the technologists who developed them—including Sam Altman, the C.E.O. of OpenAI—were making dire predictions about the future dangers of artificial intelligence. Although today’s L.L.M.s answer questions by stringing together words based on the statistical probability that they belong in that order, these thinkers were warning that A.I. might one day surpass human intelligence. Eventually, they argued, it might pose an existential threat to human civilization.
Sophia, who has a tripod of wheels where feet might have been, rolled onto the stage in a sparkly party dress. She had flawless silicone skin, rouged lips, luminous eyes, and a perpetually curious expression. Like Ava, the humanoid played by Alicia Vikander in “Ex Machina,” she had no hair. The audience seemed entranced; someone said that she resembled the actress Jennifer Lawrence. People asked her questions—“Sophia, where did you get your dress?”; “Sophia, what do you like to do?”; “Sophia, what is love?”—and after many seconds of supposed contemplation, during which she waved her arms and cocked her head like a thoughtful golden retriever, she proffered answers. We learned that Sophia spends some of her time watching cat videos. She has a large, handmade wardrobe. She said that she loves Hanson, her creator. He said that he loved her, too, and considers her his daughter.
Hanson, a sculptor with a doctorate in engineering and interactive arts, once worked at Disney, which was an early investor in his company, Hanson Robotics. His Hong Kong headquarters is filled with male, female, and non-gendered androids, cast in a Pantone range of skin tones, but Hanson said that, of them all, it is Sophia whom people have made famous and “fallen in love with.” (She has had marriage proposals.) In 2017, Saudi Arabia granted Sophia citizenship, becoming the first state to confer personhood on a machine. (Hanson said that he was taken by surprise when that happened, but neither he nor she has renounced the honor.) Her personality—if you can call it that—is sassy and glib, which gives her an air of cleverness. This is undercut, however, by a noticeable lag between questions and answers. Before Sophia can respond to a query, the question must be transmitted to a server in the cloud that has been programmed with an ensemble of large language models, OpenAI’s ChatGPT among them. Essentially, Sophia is an embodied chatbot, generating answers to spoken questions in the same way that ChatGPT answers written prompts.
Seen this way, Sophia is a cool trick of engineering, a marionette whose strings are pulled by software and sensors, with gestures and expressions that are meant to mimic ours, much like Jaquet-Droz’s creations. But Hanson’s true innovation is to have made a robot that is just real enough to be appealing and relatable. Will Jackson, the C.E.O. of Engineered Arts, an “entertainment robotics” lab in England that makes an android called Ameca, named for the Latin word for “friend,” told me, “If you’re in the physical world with a piece of technology that interacts with you in a meaningful way—it makes eye contact with you, it recognizes your expressions, it follows the thread of your conversation—how does that feel? I mean, you could see it as an art installation, but I see it as connecting with people on an emotional level.”
As I watched the audience warm to Sophia’s manufactured humanness, it became clear that amiable anthropomorphic robots like her are preparing us to welcome more sophisticated, powerful androids into our daily lives. There is no doubt that the A.I. revolution we’re witnessing now will animate the robots of the future, giving them skills that match, surpass, or replace our own. It is already happening. At the end of June, Engineered Arts released a YouTube video of Ameca, now equipped with a text-to-image A.I. model called Stable Diffusion. When asked to draw a cat, Ameca produced a recognizable, if rudimentary, sketch of a feline on a whiteboard, using a black marker held between her dexterous fingers.
Unlike Sophia and Ameca, most freestanding, ambulatory robots do not look much like people. They may have cartoonish faces that resemble a child’s drawing of a robot, or no head at all. The general trend is to build specialized machines that can take on routine, dirty, or dangerous work, such as unloading shipping pallets, collecting and sorting trash, stocking shelves, and detecting explosives. TALON, for example, is a tactical robot that rolls around on tank treads and is used by the military and law enforcement to dispose of I.E.D.s and other hazardous ordnance. Chinese security robots that resemble overgrown tubers patrol rail stations and airports, using cameras to relay problems to a human operator. But an obvious rationale for creating robots that mirror human form is that the built environment is meant to accommodate our bodies. Rather than rejiggering a worksite to conform to robotic workers, it can be more cost-effective, and ultimately more useful, to create robots that can move like us so that they can operate in the same settings that we do—as well as ones that we’d prefer to avoid.
Right now, so-called social robots, which have the more general goal of assisting or caring for humans, often look more like R2-D2 than C-3PO. Temi, whose software was developed by a startup in Boston called Thinking Robots, resembles an iPad perched on a wheeled, upright vacuum cleaner. Still, A.I. technologies are making these robots behave a little more like people. So far, Temi has been programmed to work in dental offices, heeding such spoken commands as “Temi, bring these dentures to a lab,” and “Temi, get Mr. Smith from the lobby and escort him to the exam room.” Users need not do anything more than say the words, and the robot reacts. Amazon’s sixteen-hundred-dollar household robot, Astro, looks like a scaled-down Temi, and can navigate through a home, responding to instructions such as “Call Mom,” “Bring this soda to Jeff,” and “Play charades with the kids.” It may be an expensive novelty now, but Astro’s ability to understand spoken language increases the chance that later iterations, with more capacities, will become commonplace. “As machines get better at what they are able to do, natural language becomes even more important, because we can teach them tasks,” Matthias Scheutz, the C.E.O. of Thinking Robots and a professor of cognitive and computer sciences at Tufts, told me. “We can tell them what we want them to do, rather than having to program tasks or learn a complicated user interface.”
Artificial intelligence alone will not be sufficient to inaugurate a new era of androids; robots will need to gain physical intelligence, too. “I have a Roomba, but it doesn’t matter how good your L.L.M. is: there is no piece of code that is going to result in it driving to my front hall and putting my HelloFresh box in the recycling bin,” Damion Shelton, the C.E.O. and co-founder of Agility Robotics, told me. Newer robots are gaining humanlike features—hands that grasp, knees that bend, and feet that provide propulsion and balance—that are expanding their functionality. On YouTube, I watched Agility’s biped robot, Digit, pick up and move plastic bins from one area to another. This is a relatively simple activity for many humans, but for a robot it is a feat of mechanical engineering. Digit’s legs reminded me of grasshopper legs; Jonathan Hurst, the company’s chief robot officer and co-founder, told me that the company spent years studying the physics of walking and running, and then had to figure out how to translate that into wires and pulleys. Something as basic as stepping off a curb, stumbling, and recovering without falling, which a person can do without much thought, was a problem for Hurst and his colleagues.
Only after the team succeeded in translating biodynamics into actionable engineering, Hurst said, did the company begin to build itself around Digit’s capacities. “That’s very different from saying, ‘We’re building a humanoid,’ which typically means we’re building a machine that looks like a person,” he told me. “It’s super easy to make a robot look like a person, but it’s very difficult to make a robot actually move like a person.” At the moment, a Digit robot has to be programmed to operate in the unique workspace where it is being deployed. Now that L.L.M.s like ChatGPT can draft code, however, the Agility team envisions a multipurpose robot that does not need to be programmed for each task. This achievement may be closer than designers once imagined. “What Digit—combined with an L.L.M.—can do is probably better than I would have bet robots were going to do ten years from now,” Shelton said. “And I made that bet six months ago.”
There is also a psychological reason for modelling robots after humans: people may be more comfortable living and working alongside machines that move in familiar ways and that look something like them. Marc Raibert, the founder of Boston Dynamics, has witnessed major advances in robotics in his forty years in the field. The company’s quadruped robot, Spot, looks a bit like a dog and can negotiate stairs, crawl into tight spaces, and dance with abandon. (A YouTube video of Spot dancing to the Rolling Stones song “Start Me Up,” in which the robot mimics Mick Jagger’s every move, has amassed more than three million views.) But when the company introduced Atlas, a humanoid robot with similar abilities, it provoked a much greater reaction from the public. “Clearly, there’s something about that construction that strikes a chord with people,” Raibert told me.
It’s often said that robots that look and move almost like people, and that share our space, might creep people out, in what is known as the uncanny-valley effect. But Scheutz, who directs Tufts’s human-robot-interaction program, told me that a resemblance to humans can also cause people to assume that robots have more capabilities than they actually do. This can trigger frustration and resentment when the robot does not live up to human expectations. (Think of how annoying it is when you ask a so-called smart speaker to tell you the temperature outside, and it gives you the weather for a city in a different state.) A darker prospect emerged a few years ago, when Samantha, an “intelligent” humanoid sexbot on display at the Ars Electronica Festival, in Austria, was mounted and badly damaged by attendees, who took advantage of her female form and compliant behavior to molest her.
In 1920, the Czech writer Karel Čapek wrote “R.U.R. (Rossum’s Universal Robots),” a play set in the year 2000 that follows the evolution of robota (Czech for “forced labor”), a new class of android worker-slaves that eventually rises up and annihilates its human masters. The play introduced both the word “robot” and a narrative of human-robot conflict that, by now, has become familiar in movies such as “The Terminator,” “RoboCop,” and “Blade Runner.” Will robots fashioned to look like us, and programmed to accede to our wishes, spur people to think of them as friends and co-workers—or to treat them like chattel? Onstage in Telluride, David Hanson said that the purpose of robots like Sophia is to teach people compassion. But it seemed counterintuitive to suggest that a machine that can only mimic human emotions has the ability to inculcate in us something so fundamental to the human experience.
In Hanson’s view, Sophia is no different from a character in a book, and we know that stories can engender empathy. But given the speed at which artificial-intelligence models are being deployed, and their tendency to behave erratically, we would be wise not to wholly abandon the caution inspired by Čapek and his heirs. Scheutz pointed out that unless designers build constraints and ethical guardrails into the A.I. models that will power robots of the future, there is a risk of inadvertently creating machines that could harm us in unforeseen ways. “The situation we need to avoid with these machines is that when we test them, they give us the answers we want to hear, but behind the scenes they’re developing their own agenda,” Scheutz told me. “I’m listening to myself talking right now, and it sounds like sci-fi. Unfortunately, it’s not.”