On The Net
Tomorrow’s Bots
by James Patrick Kelly
in fictiontime
The future is way behind schedule, at least according to the Big Three science fiction writers whom I grew up reading. Those would be Robert Heinlein, Arthur C. Clarke, and Isaac Asimov. You see, while the timeline of Heinlein’s famous Future History series imagines the first Moon landing (1976) a little later than in reality, he thought we would have spread throughout the Solar System by now. His Future History stories, mostly written during the 1940s, imagine that triumphant humanity would have established a thriving city on the Moon and colonies on Mars and Venus by 2026 fictiontime. (Noted without comment: his Future History novella “If This Goes On” (1940) forecasts the presidency in the early 2000s of a populist, anti-intellectual, and anti-science politician who appeals to racists and fundamentalist Christians.)
In Clarke’s story “The Sentinel” (1951), a scientific expedition exploring the Moon from a well-established base in the Mare Serenitatis comes across an anomalous structure protected by an invisible shield that could only have been made by aliens. The date of this strange discovery? In Clarke’s fictiontime of 1999. However, in 1963, he and Stanley Kubrick reimagined the enigmatic monolith and moved the lunar expedition forward two years, giving us both a novel and a film called 2001: A Space Odyssey. One of the film’s most memorable sequences is the approach of a Pan Am spaceliner (Pan American Airlines ceased operations in 1991) to an orbiting wheel space station as the Blue Danube Waltz plays in the background.
The stories in Isaac Asimov’s classic 1950 collection I, Robot are set in the early part of his fictiontime twenty-first century. His famous Three Laws of Robotics first appear in one of them, “Runaround” (1942), set in 2015. The nine stories in this collection are linked as reminiscences by his continuing character Dr. Susan Calvin (1982–2064), chief robopsychologist at U.S. Robots and Mechanical Men, Inc. In Asimov’s early twenty-first century, robots were blocky and machine-like, but as the stories progress in time they become more humanoid, with two arms, two legs, heads, and faces. For instance, meet Herbie, a telepathic (!) robot in fictiontime 2020: “RB 34’s photoelectric eyes lifted from the book at the muffled sound of hinges turning and he was upon his feet when Susan Calvin entered.” (She brings him books to read.) “‘Hm-m-m! Theory of Hyperatomics.’ He mumbled inarticulately to himself as he flipped the pages and then spoke with an abstracted air, ‘Sit down, Dr. Calvin! This will take me a few minutes.’”
humanoids
Leaving aside the plausibility of telepathy, rest assured that there are no robots like Herbie in our 2026, nor is there a city on the Moon nor colonies on Mars. However, the Big Three were not necessarily mistaken, just overly optimistic. That’s the risk of writing near-future science fiction. My own backlist is rife with predictions that are dead wrong or have yet to happen. Near-future stories have the shelf life of lettuce. However, we humans have sent our probes throughout the Solar System, if not ourselves. And, unseen by all but a few, robots weld, paint, pack, sort, assemble, drill, and cut in factories and warehouses around the world. Ironically, our spacecraft are some of the most sophisticated robots ever made.
The current generation of robots has no need of the services of a robopsychologist like Dr. Calvin. That’s because we have yet to devise a useful humanoid robot that can hold a conversation, walk at speed through an office or a home, sit in a chair, flip the pages of a physical book and read it for understanding. Not that there aren’t any number of companies trying! Take for instance Tesla’s Optimus. Elon Musk has been promoting this slick five-and-a-half-foot-tall humanoid bot with a white plastic and black metal body and five-fingered hands for years. According to Tesla, it will be capable of basic industrial and home tasks and could serve as a nurse or companion to the likes of you and me. New and improved conversational abilities powered by Grok! In 2024 Musk promised that production would begin in 2025 with a goal of selling a million units within five years. This has turned out to be yet another Musk overpromise, as Tesla is currently bucking economic, technological, and political headwinds. Because Optimus has problems with battery power, hand dexterity, overheating, durability, and navigation in dynamic environments (Oops! Sorry, Rover!), it struggles with real-world tasks and decision making.
Here’s a fairly comprehensive list of humanoid robots. They are a mixed bag: full-sized bots and torso-only models, stationary and mobile (wheels or legs), hands with fingers or powerful claws, heads with human faces or plastic visors. Each is capable of doing some of the things an Asimov robot could do, but none is capable of doing all of those things. Consider one of the most advanced talking bots with an expressive human face: the Ameca, from Engineered Arts. Unlike Optimus, Ameca is immobile and billed as a “platform for development into future robotics technologies.” This is where I think most of today’s humanoid robots really are—not all that far from the starting point on our path to a machine like Herbie. Designers of humanoid robots have yet to come up with solutions to all of the problems of safe and expeditious navigation, fine motor control, battery life, and maintenance. As Will Jackson, founder of Engineered Arts, says, “The biggest blocker is having something that is equivalent to human muscle. There is no electric motor that has those kinds of properties.”
agency
Designing robots to the humanoid form factor is a tough problem, but there’s a reason why it is so attractive. The built infrastructure of our civilization is designed to accommodate bodies like ours. At the fundamental level, a robot consists of two systems, an artificial intelligence (AI) and an artificial body (AB). Computer scientists have produced AI that has far outpaced the capabilities of the humanoid ABs created by today’s mechanical engineers. Robots that can bake a cake, fold the laundry, and mind the kids exist only in the pages of Asimov’s. But increasingly in 2026 we live mediated lives in a ubiquitous digital infrastructure where bodies are not required. This is why technology companies are in a headlong race to design disembodied robots called agents that can act for us online.
Imagine if you could tell your phone to write an email to your mom, order dim sum takeout, or book your next Disney World vacation on its own? What if someday you asked your computer to research AI agents, write a 1,700-word column about them for a major science fiction magazine, and then create an AI-narrated podcast based on that column that you could post to your AI-designed website? Just kidding—no current AI agent is that capable! But that’s the promise that agents like Perplexity’s Comet, Anthropic’s Claude, and OpenAI’s Operator, among others, are making. While that someday is not necessarily tomorrow, these agents are much closer to reality than Tesla’s robot butlers.
AI agents are applications built on top of LLMs that can understand, plan, and execute tasks that you give them. Most important, they are autonomous. How do they do this without your guidance? First they try to understand the context of your request by gathering input and consulting databases. Some call this perceiving. They then weigh their options and form a plan, perhaps using reasoning engines or machine learning to recognize patterns. Some call this thinking (don’t blame me!). Having arrived at a course of action, they take whatever steps they need to complete the task. If the task is sufficiently complex, agents may break it down into subtasks, delegating each to specialized agents or other digital tools, and then combining all outputs to finish the job.
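For the programmers in the audience, the loop described above—perceive, think, act, and decompose—can be sketched in a few lines. To be clear, this is a toy illustration, not any vendor’s actual agent framework; in a real agent, each of these stubbed functions would call out to an LLM or a digital tool, and all the names here are made up for the example.

```python
# Toy sketch of an agent loop: perceive context, plan (split a complex
# task into subtasks), then act on each subtask. Real agents replace
# these stubs with LLM calls and tool invocations.

def perceive(task: str, context: dict) -> dict:
    """Gather input relevant to the task (here, a simple lookup)."""
    return {"task": task, "facts": context.get(task, [])}

def plan(state: dict) -> list[str]:
    """'Think': break a compound task into ordered subtasks."""
    if " and " in state["task"]:
        return [t.strip() for t in state["task"].split(" and ")]
    return [state["task"]]

def act(subtask: str) -> str:
    """Execute one subtask with a specialized tool (stubbed out)."""
    return f"done: {subtask}"

def run_agent(task: str, context: dict) -> list[str]:
    """Combine the outputs of every subtask to finish the job."""
    state = perceive(task, context)
    return [act(sub) for sub in plan(state)]

print(run_agent("order takeout and book a trip", {}))
# → ['done: order takeout', 'done: book a trip']
```

The point of the sketch is the shape, not the code: autonomy comes from the agent deciding for itself how to split the job and which tool handles each piece.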
There is a lot of computation under the surface of this oversimplified explanation, and language purists may well object to using words like “perceive” and “understand” and “think” when we are talking about LLMs. For those interested in more detail, click over to What are AI agents? or Understanding AI Agents: A Beginner’s Guide. Visual learners might like Jeff Su’s AI Agents, Clearly Explained.
Or just prompt ChatGPT to explain AI agents to you. Pro hint: tell it you’re a sophomore in high school.
exit
If the era of useful if not brilliant agents is indeed around the corner, what should we be worrying about? In addition to old concerns about AI bias and hallucinations, new kinds of privacy and security attacks are likely. As we saw, an agent completing a task may interface with other agents and various digital tools, leaving it vulnerable to malicious prompt injection and credential theft. More malware nightmares to keep you awake at night, especially since the bad guys will be using their agents to attack ours! Also, given their autonomy, some agents may behave aggressively or unpredictably, with embarrassing or even harmful overreach possible. Studies are already showing a correlation between AI usage and erosion of critical thinking among humans; widespread acceptance of agents might contribute to this problem and even affect our ability to plan and execute tasks due to our overreliance on them. Finally, agents are already disrupting the workplace and eliminating jobs. They will continue to do so.
Speaking of unemployment, while I am confident that no agent could write a better James Patrick Kelly column than me, it’s entirely possible that next year’s model could credibly fill this space—although not with all my witty asides and penetrating insights.
Let’s hope Sheila doesn’t read this!
