We were promised “Star Trek,” so why did we settle for these lousy chatbots?
We need more science fiction-inspired thinking in how we approach AI research, argues AI expert Gary Marcus.
At some point in the first week of January, I came down with a self-diagnosed case of AI fatigue.
There are many variations of AI fatigue. The first I call “new model confusion.” This is where you find yourself dizzy trying to keep up with the latest alphanumeric rebrand out of California: “Today, we’re introducing 4-oB pro-mini high X. It’s the best model yet.” It’s easy to get lost with the Anthropic Gemini twins dancing with Claude in Perplexity.
The second form of fatigue is “AI-everything.” You will have noticed by now that every company is desperately flailing about to offer their clients or customers something AI-ish: “We provide AI-powered results, using LLM-filtered analytics, crafted by the best prompt engineers in the world.” And every service I used to enjoy is suddenly laced with or framed by AI. Amazon offers to search reviews for me. Google Sheets suggests formulas I’ll never need. My email provider reassures me that it’s using the latest in spam-detecting AI to keep phishers from my inbox.
Earlier this year, I spoke with Gary Marcus, the popular and outspoken AI skeptic, about my AI fatigue. And as we were talking, he pointed out something I needed to hear: Most of this isn’t the AI I was promised. These new models and tools are not AI in the sense that I used to imagine it. Amazon has rebranded a search feature. Google Sheets is doing what Excel was doing in the 2000s. Spam filters are not AI; they’re just glorified if-else coding.
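To make the "glorified if-else coding" jab concrete, here is a deliberately crude, hypothetical sketch of what a purely rule-based filter looks like in Python. The phrases, the blocklisted domain, and the checks are all invented for illustration; real spam filters combine hand-written rules like these with statistical models, but the point stands that a pile of conditions is not "intelligence."

```python
# A caricature of a purely rule-based spam check: hand-written conditions,
# no learning anywhere. Every phrase, domain, and rule below is made up.

SUSPICIOUS_PHRASES = ["you have won", "verify your account", "wire transfer"]

def looks_like_spam(subject: str, sender: str) -> bool:
    lowered = subject.lower()
    # Rule 1: the subject contains a known scammy phrase.
    if any(phrase in lowered for phrase in SUSPICIOUS_PHRASES):
        return True
    # Rule 2: the subject is all-caps shouting with an exclamation mark.
    if subject.isupper() and "!" in subject:
        return True
    # Rule 3: the sender is on a (fictional) blocklist.
    if sender.endswith("@example-suspicious.biz"):
        return True
    return False

print(looks_like_spam("YOU HAVE WON A PRIZE!", "promo@example-suspicious.biz"))  # True
print(looks_like_spam("Meeting notes for Tuesday", "colleague@example.com"))     # False
```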
This is not the AI I was promised. And, according to Marcus, if we accept all of this as AI, then we risk missing the bigger thing.
The AI we’ve settled for
When I was growing up, I read and watched a lot of science fiction. My sick days were filled with Star Trek: Voyager and my weekends were filled with Isaac Asimov. Much of my understanding of what “artificial intelligence” means came from science fiction. I grew up thinking AI meant something almost indistinguishable from a human — but with the technological omnipotence of a globally connected mainframe. I asked Marcus whether our collective understanding of AI from science fiction has given us unrealistic expectations for the current LLM explosion.
“I’ll go the opposite way,” Marcus said, “and say that the Star Trek computer is actually what we should be building. We are freaking lost if we think that prompt writing and prompt engineering are the right way to AI. Like, you didn’t have to tell the Star Trek computer in some obscure way how to get a job done. Imagine an episode where Captain Kirk says to Scotty:
‘We’re about to crash onto this planet. Help us, Scotty!’
And when Scotty gets the wrong answer from the Star Trek computer, Kirk says:
‘Goddammit, can’t you prompt this thing correctly?’
Like, that’s just not how it went, you know? In science fiction, the AI actually understands. I mean, there are some, you know, weird episodes where things go anomalous or something like that, but I think the vision of the Star Trek computer should actually be the standard, and you shouldn’t be satisfied when your ‘AI’ gives you brainstorming material 60% of the time and hallucinations 15% of the time. Like, that’s not what we’re aiming for here. We should actually look to science fiction for that.”
Marcus argues that while these deep learning models excel at pattern recognition, they fail at true understanding. We call them “intelligent,” but they lack the fundamental and general intelligence needed to be genuinely useful.
“The truth is,” Marcus told me, “AI systems still have a long way to go. They’re still not really understanding the world with the richness that, say, my ten- and twelve-year-olds do…. You can add all this reinforcement learning, a lot of patching of special cases, but there’s still something really deeply missing.”
The problem with settling
It would be ridiculous to suggest that the current crop of AI tools and LLMs hasn’t changed the world. Many people around the world now use these models in some way to make their lives easier. But there are two deeper problems with settling for this kind of AI.
First, it misses the bigger prize.
“We have to recognize that the current approaches — as amazing as they are — aren’t really the correct ones,” Marcus said. “And we need to put more resources into studying other things. In an article called ‘The Next Decade in AI,’ which was on arXiv, I said we need to focus on neuro-symbolic AI, which is finally starting to happen, knowledge representation, reasoning, and cognitive models or world models.”
In other words, we should be aiming for those science fiction promises. We should want to build the Star Trek computer and a kind of artificial general intelligence (AGI). There are those with skin in the game who believe that the current approach will eventually lead to AGI. Marcus, though, thinks that we’re going about this the wrong way.
“There’s just too much emphasis on black-box techniques that are not interpretable, that don’t work well with explicit symbolic knowledge,” he said. “We want systems with deeper comprehension, and LLMs are not really getting there. We need neuro-symbolic approaches. We need to not just use neural networks but borrow ideas from classical AI. Putting so much emphasis on just ever-bigger models is not really teaching us that much.”
Second, it’s serving an elite few.
The problem is that the more we see these AI models as being what we want from AI, the more we risk spiraling into a vicious circle. Money is increasingly pumped into these AI companies. More resources are consumed. And so these companies are resorting to increasingly desperate measures to appease their investors. The AI products of tomorrow will serve shareholders more than the average user.
“None of [these AI companies] are making enough profits to justify their infrastructure bets, so they’re getting increasingly desperate and increasingly badly behaved,” Marcus said.
“We just saw in the last few weeks, for example, that Google — which used to say ‘don’t do evil’ — is now doing military contracting, surveillance, and so forth. OpenAI is another example. They once said, ‘We’re not going to do military contracting.’ And they had an about-face, you know, 12 months later. The enormous cost of labs is really driving companies to places I wish they wouldn’t go. None of these companies can really be trusted, and they’re all chasing the money.”
For Marcus, the current AI hype is damaging the other, more fruitful, paths to an AI that might actually change the world. The problem is not that we expect too much science fiction from our technology, but that we’ve given up the Star Trek dream and settled for a chatbot reality.
Jonny is the creator of the Mini Philosophy social network. He’s an internationally bestselling author of three books and the resident philosopher at Big Think. He’s known all over the world for making philosophy accessible, relatable, and fun.