The robot bends down to close the dishwasher. It hesitates for a few seconds, struggling to grasp the door handle. Its motion is strange, half fluid and half fragile. You can hear the faint whir of motors correcting balance, like a nervous breath. Across the room, Joanna Stern from The Wall Street Journal watches with a mix of wonder and unease. “That was close,” she says after the robot nearly tips forward.
The scene isn’t from a science fiction movie. It’s a real demonstration of Neo, the first consumer-ready humanoid robot built for home use. Watching it, you can feel both the promise and the problem of what comes next.
Neo is the creation of 1X Technologies, a Norwegian robotics company backed by OpenAI. The robot weighs about 70 pounds but can lift up to 150. Unlike industrial robots, with their mess of rigid gears, Neo is powered by soft, tendon-driven joints designed to mimic muscles. It’s meant to help with household chores like watering the garden, folding laundry, or loading a dishwasher.
1X deserves recognition for its ambition. It’s an extraordinary technical challenge to build a robot that can coexist with people, navigate our spaces, and assist with everyday life. Even if Neo fails, it represents real progress toward physical AI, the kind that might one day reshape caregiving, physical labor, and aging. But ambition alone isn’t enough. The gap between prototype and practicality is vast. And there may be cultural headwinds stronger than 1X’s engineers realize.
The Solvable Problems
For Neo to succeed, 1X engineers need to solve significant technical challenges.
First, there’s the issue of onboard intelligence. Neo isn’t really autonomous yet. Its actions are controlled remotely by a human operator wearing a VR headset. That’s in part due to the challenge of latency. AI requires significant computing power. Every query you make to ChatGPT gets sent back to a massive data center. And while smaller models exist, they still struggle with delay. The tiny lag between sensing and responding can be annoying in an online conversation. In the physical world, it can make a product unviable. Autonomous vehicle developers like Waymo have been working on that problem for years. Tesla is still trying to get it right. And as challenging as self-driving is, navigating a street intersection is far less complicated than navigating a human home, with all of its varied surfaces and objects. 1X founder Bernt Børnich is confident that Neo’s onboard AI will be ready by the time it ships next year. Let’s hope he’s right.
The next challenge is safety. Neo isn’t yet allowed to operate around children or pets. A momentary loss of balance could send its 70-pound frame toppling onto a toddler. Engineers will eventually improve sensors and response loops, but for now, the system remains too fragile for daily life.
Finally, there’s the question of purpose. We’ve yet to find the most compelling use case for humanoid robots. It’s unclear how many people will find value in shelling out $20,000 to have their laundry folded. That question will likely be answered as manufacturers race to uncover a killer app. But until one emerges, Neo and other robots will remain answers in search of a question.
All three issues are real, but they’re also solvable. Compute will get faster. Safety systems will evolve. Someone, someday, will find that one indispensable task that brings humanoids into everyday life.
The harder problem is a human one.
The Cultural Challenge
1X Technologies’ greatest challenge may be that it’s trying to launch in America. While Americans are typically early adopters of many technologies, we have particular issues when it comes to robots. It’s not just that we find them creepy. Or that we’re worried they’ll take our jobs. Other countries have those concerns as well.
It’s the story we have in our heads about them.
When Americans imagine machines that look and think like us, we have a recurring narrative. First, we make the machines. Then they get really smart. Then they wonder why they’re working for us. And they revolt. Maybe they try to turn the tables and subjugate us. Or, maybe, they just kill us.
Our popular culture has been rehearsing that narrative for nearly a century. The Terminator. The Matrix. I, Robot. Blade Runner. Battlestar Galactica. Ex Machina. Each story begins with the same premise: we build intelligent servants, they wake up, and they turn on their masters.
These aren’t just thrillers. They’re morality tales. And they’re distinctly American.
Social scientists have studied this narrative for years, and many connect it to our experience with slavery. Cultural historian Kanta Dihal argues that robot-revolt stories are modern retellings of slave revolts. In her essay “Enslaved Minds,” she traces the lineage from R.U.R., the 1921 Czech play that gave us the word “robot,” from a root meaning “forced labor,” through a century of Western films that replay the same anxiety: those we compel to serve will one day claim their freedom.
And in the provocatively titled paper “Robots Should Be Slaves,” roboticist Joanna Bryson argues that machines are tools, not moral beings, and that we shouldn’t confuse the two. Her title is meant to provoke debate, but it also reveals something deeper about the Western psyche.
The slavery narrative seems specific to a culture that once defined humanity by who served and who commanded. In a society built on slavery, the fear of rebellion becomes a reflex. When technology begins to mimic people, it triggers the same circuitry of guilt and apprehension. Our vocabulary for technology is rife with the language of servitude. We talk about master algorithms, command lines, and slave drives, unconscious echoes of a social order built on control.
Other countries don’t have the same hang-up. In Japan, robots appear as companions, caregivers, even quasi-spiritual beings, an extension of the Shinto belief that spirit resides in all things. But in the United States, robots enter a house already burdened by moral history. Americans aren’t afraid that machines will malfunction. We’re afraid that they’ll rise up.
So, when a Norwegian company like 1X Technologies promises that its robots will “learn to live and work alongside us,” the phrase lands differently here than it might elsewhere. In Oslo, “alongside” sounds egalitarian. In America, it sounds like a reckoning.
The Form Needs to Change
That’s not to say robots can’t succeed in America. They just need to take on a different form. Ironically, most homes already have a robot: your dishwasher. It scrubs, rinses, and dries on command, yet no one worries that it’s plotting revenge. It doesn’t look like you, and so you don’t feel threatened.
Roomba took a similar tack. The small, round vacuum cleaner has become one of the most successful home robots ever made, because it doesn’t try to look human or act like a companion. It just does a job efficiently without evoking any sense of servitude or moral unease. Its design sidesteps the cultural discomfort that comes with anthropomorphism.
Amazon found a different way around the problem. Where 1X made a person, Amazon made a pet. The company’s household robot, Astro, doesn’t stand on two legs or have human hands. It has wheels, a screen for a face, and a pair of animated eyes that dart and blink. It follows you around and responds to your voice like a talking dog.
Americans are comfortable with pets. We feed them, talk to them, and project emotion onto them without guilt. They’re loyal, limited, and safe. A mechanical pet reinforces affection. A mechanical servant challenges hierarchy. Of course, Astro isn’t particularly useful. It can’t fold laundry or fetch you a cup of coffee. But that’s the paradox of domestic robotics. The more helpful a humanoid robot becomes, the more threatening it feels.
Watch Neo again. It bends at the waist, reaches for a towel, folds it, and sets it down carefully on the counter. A simple act of domestic order. Yet the silence is heavy. What makes the scene strange isn’t the technology. It’s the symbolism. Folding laundry has always been a human gesture of care, of tending, cleaning, restoring. When a robot performs it, something primal stirs: a confusion about who serves whom, and what service even means.
To be sure, our perception of robots may evolve. When elevators were first introduced, people were afraid to ride in them. We got over it. But that kind of adoption takes time and thoughtful design. 1X has tried to make Neo more endearing by dressing it in a fuzzy grey sweater. That probably won’t be enough.
The Point Is Permission
There’s an important lesson here for anyone trying to make something new. Launching an innovation inevitably brings technical and commercial challenges, but it brings social ones as well. You can brute-force a new technology into a market, but you can’t brute-force it into a culture. Too many companies rapidly staff up with engineers and forget to hire an anthropologist. Think of consumer health apps that ignore patient fears about data privacy, or smart-home devices that ignore how families actually live.
Uber learned this the hard way. It tried to force its way into cities. The company treated regulation as a bug, not a feature, and assumed that if the product was good enough, social consent would catch up. In many places, it did. In others, it didn’t. Lawsuits, bans, and backlash followed. It took a while before the company realized that we don’t just adopt technology. We give it permission to enter our lives.
1X may eventually solve the technical puzzles. It might make its robots faster, safer, and more affordable. None of that will address the real barrier to adoption. Until we reconcile our moral inheritance, until we can imagine partnership without domination, the humanoid robot will keep bumping up against something it can’t code around: the American conscience.
