When Rodney Brooks talks about robotics and artificial intelligence, you should listen. Currently the Panasonic Distinguished Professor of Robotics at MIT, he has also co-founded three key companies: Rethink Robotics, iRobot, and his current venture, Robust.ai. Brooks also led MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) for a decade starting in 1997.
He also likes to make predictions about the future of AI, and he keeps a dashboard on his blog to track how those predictions are holding up.
He knows what he’s talking about, and he thinks it might be time to put a stop to the screaming hype around generative AI. Brooks thinks it’s an impressive technology, but perhaps not as powerful as many suggest. “I’m not saying LLMs aren’t important, but we need to be careful about how we evaluate them,” he told TechCrunch.
The problem with generative AI, he says, is that even if it’s perfectly capable of performing a certain set of tasks, it can’t do everything a human can do, and humans tend to overestimate its abilities. “When a human sees an AI system perform a task, they immediately generalize to similar things and make an estimate of the AI system’s proficiency; not just performance in that domain, but competence in that domain,” Brooks said. “And they’re usually very optimistic, and that’s because they’re using a model of how a person performs on a task.”
He added that the problem is that generative AI is not human, or even human-like, and that it is wrong to try to attribute human capabilities to it. He says people see it as so capable that they even want to use it for applications that make no sense.
Brooks cites his latest venture, Robust.ai, a warehouse robotics company, as an example. Someone recently suggested to him that it would be interesting and efficient to tell his warehouse robots where to go by building an LLM into his system. In his view, however, this is not a reasonable use case for generative AI and would actually slow things down. It is much simpler, he says, to connect the robots to a data feed from the warehouse management software.
“When you have 10,000 orders that just came in and you need to ship in two hours, you need to optimize your production for that. Language is not going to help you, it will only slow things down,” he said. “We have massive data processing and massive AI optimization and planning techniques. And this is how we can fulfill orders quickly.”
Another lesson Brooks has learned when it comes to robots and AI is that you shouldn’t try to do too much. You need to solve a solvable problem in which robots can be integrated easily.
“We need to automate in environments that have already been cleaned up. My company does quite well in warehouses, and the warehouse aisles are actually quite constrained. The lighting doesn’t change in these big buildings. There’s no stuff lying around on the floor that people pushing carts would bump into. There are no plastic bags floating around. And it’s generally not in the interest of the people working there to be malicious toward the robot,” he said.
Brooks explains that it’s also about robots and humans working together. His company designed its robots around practical warehouse operations rather than building a human-looking machine; the result resembles a shopping cart with a handle.
“So the form factor we’re using is not humanoids walking around — even though I’ve built and delivered more humanoids than anyone else. It’s like a shopping cart,” he said. “It has a handlebar, so if there’s a problem with the robot, a person can grab the handlebar and do whatever they want with it.”
After all these years, Brooks has learned that it’s about making technology accessible and purpose-built. “I always try to make the technology easy to understand so people can deploy it at scale, and I always look at the business case; the return on investment is also very important.”
Even with that, Brooks says we need to accept that there will always be hard-to-solve outlier cases in AI, which could take decades to resolve. “Without careful analysis of how an AI system is deployed, there is always a long list of outliers that take decades to discover and resolve. Paradoxically, all these fixes are carried out by the AI itself.”
Brooks adds that there is a mistaken belief, mainly due to Moore’s Law, that technological growth will always be exponential. The idea is that if ChatGPT 4 is this good, imagine what ChatGPT 5, 6 and 7 will be like. He sees a flaw in this logic: Technology doesn’t always grow exponentially, despite Moore’s Law.
He uses the iPod as an example. Over a few iterations, its storage doubled again and again, from 10GB up to 160GB. If that trajectory had continued, he reasoned, we would have had an iPod with 160TB of storage by 2017 — but of course we didn’t. The models sold in 2017 actually came with 256GB or 160GB because, as he pointed out, no one really needed more than that.
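Brooks’s extrapolation is easy to check with a little arithmetic. The sketch below assumes one capacity doubling per model generation (roughly yearly, which matches the 10GB-to-160GB run he describes); the function name and the 2007 starting point are illustrative assumptions, not figures from the article.

```python
def extrapolate_doubling(start_gb: int, generations: int) -> int:
    """Project storage capacity if it kept doubling once per generation."""
    return start_gb * 2 ** generations

# Starting from the 160GB iPod (circa 2007), ten more annual doublings
# lands at 2017:
projected_gb = extrapolate_doubling(160, 10)
print(projected_gb)        # 163840 GB, i.e. 160 TB
```

Ten doublings is a factor of 1,024, which is exactly how 160GB balloons into the 160TB figure Brooks uses to show where naive exponential extrapolation leads.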
Brooks acknowledges that LLMs could be useful at some point with household robots, where they could perform specific tasks, especially with an aging population and not enough people to care for them. But even that, he says, could come with its own set of unique challenges.
“People say, ‘Oh, large language models are going to allow robots to do things they couldn’t do.’ That’s not where the problem is. The problem of being able to do things is related to control theory and all sorts of other hardcore mathematical optimization,” he said.
Brooks says this could eventually lead to robots with language interfaces that are useful for people in care situations. “It’s not useful in the warehouse to tell an individual robot to go out and get an item for an order, but it could be useful in nursing homes because people can say things to the robots,” he said.