We were promised “Star Trek,” so why did we settle for these lousy chatbots?
We need more science fiction-inspired thinking in how we approach AI research, argues AI expert Gary Marcus.
At some point in the first week of January, I came down with a self-diagnosed case of AI fatigue.
There are many variations of AI fatigue. The first I call “new model confusion.” This is where you find yourself dizzy trying to keep up with the latest alphanumeric rebrand out of California: “Today, we’re introducing 4-oB pro-mini high X. It’s the best model yet.” It’s easy to get lost with the Anthropic Gemini twins dancing with Claude in Perplexity.
The second form of fatigue is “AI-everything.” You will have noticed by now that every company is desperately flailing about to offer their clients or customers something AI-ish: “We provide AI-powered results, using LLM-filtered analytics, crafted by the best prompt engineers in the world.” And every service I used to enjoy is suddenly laced with or framed by AI. Amazon offers to search reviews for me. Google Sheets suggests formulas I’ll never need. My email provider reassures me that it’s using the latest in spam-detecting AI to keep phishers from my inbox.
Earlier this year, I spoke with Gary Marcus, the popular and outspoken AI skeptic, about my AI fatigue. And as we were talking, he pointed out something I needed to hear: Most of this isn’t the AI I was promised. These new models and tools are not AI in the sense that I used to imagine it. Amazon has rebranded a search feature. Google Sheets is doing what Excel was doing in the 2000s. Spam filters are not AI; they’re just glorified if-else logic.
This is not the AI I was promised. And, according to Marcus, if we accept all of this as AI, then we risk missing the bigger thing.
The AI we’ve settled for
When I was growing up, I read and watched a lot of science fiction. My sick days were filled with Star Trek: Voyager and my weekends were filled with Isaac Asimov. Much of my understanding of what “artificial intelligence” means came from science fiction. I grew up thinking AI meant something almost indistinguishable from a human — but with the technological omnipotence of a globally connected mainframe. I asked Marcus whether our collective understanding of AI from science fiction has given us unrealistic expectations for the current LLM explosion.
“I’ll go the opposite way,” Marcus said, “and say that the Star Trek computer is actually what we should be building. We are freaking lost if we think that prompt writing and prompt engineering are the right way to AI. Like, you didn’t have to tell the Star Trek computer in some obscure way how to get a job done. Imagine an episode where Captain Kirk says to Scotty:
‘We’re about to crash onto this planet. Help us, Scotty!’
And when Scotty gets the wrong answer from the Star Trek computer, Kirk says:
‘Goddammit, can’t you prompt this thing correctly?’
Like, that’s just not how it went, you know? In science fiction, the AI actually understands. I mean, there are some, you know, weird episodes where things go anomalous or something like that, but I think the vision of the Star Trek computer should actually be the standard, and you shouldn’t be satisfied when your ‘AI’ gives you brainstorming material 60% of the time and hallucinations 15% of the time. Like, that’s not what we’re aiming for here. We should actually look to science fiction for that.”
Marcus argues that while these deep learning models excel at pattern recognition, they fail at true understanding. We call them “intelligent,” but they lack the fundamental and general intelligence needed to be genuinely useful.
“The truth is,” Marcus told me, “AI systems still have a long way to go. They’re still not really understanding the world with the richness that, say, my ten- and twelve-year-olds do…. You can add all this reinforcement learning, a lot of patching of special cases, but there’s still something really deeply missing.”
The problem with settling
It would be ridiculous to suggest that the current crop of AI tools and LLMs hasn’t changed the world. People around the world are using these models in some way to make their lives easier. But there are two deeper problems with settling for this kind of AI.
First, it misses the bigger prize.
“We have to recognize that the current approaches — as amazing as they are — aren’t really the correct ones,” Marcus said. “And we need to put more resources into studying other things. In an article called ‘The Next Decade in AI,’ which was on arXiv, I said we need to focus on neuro-symbolic AI, which is finally starting to happen, knowledge representation, reasoning, and cognitive models or world models.”
In other words, we should be aiming for those science fiction promises. We should want to build the Star Trek computer and a kind of artificial general intelligence (AGI). There are those with skin in the game who believe that the current approach will eventually lead to AGI. Marcus, though, thinks that we’re going about this the wrong way.
“There’s just too much emphasis on black-box techniques that are not interpretable, that don’t work well with explicit symbolic knowledge,” he said. “We want systems with deeper comprehension, and LLMs are not really getting there. We need neuro-symbolic approaches. We need to not just use neural networks but borrow ideas from classical AI. Putting so much emphasis on just ever-bigger models is not really teaching us that much.”
Second, it’s serving an elite few.
The problem is that the more we treat these AI models as the AI we wanted, the more we risk spiraling into a vicious circle. Money is increasingly pumped into these AI companies. More resources are consumed. And so these companies resort to increasingly desperate measures to appease their investors. The AI products of tomorrow will serve shareholders more than the average user.
“None of [these AI companies] are making enough profits to justify their infrastructure bets, so they’re getting increasingly desperate and increasingly badly behaved,” Marcus said.
“We just saw in the last few weeks, for example, that Google — which used to say ‘don’t be evil’ — is now doing military contracting, surveillance, and so forth. OpenAI is another example. They once said, ‘We’re not going to do military contracting.’ And they had an about-face, you know, 12 months later. The enormous cost of labs is really driving companies to places I wish they wouldn’t go. None of these companies can really be trusted, and they’re all chasing the money.”
For Marcus, the current AI hype is damaging the other, more fruitful, paths to an AI that might actually change the world. The problem is not that we expect too much science fiction from our technology, but that we’ve given up the Star Trek dream and settled for a chatbot reality.
Jonny is the creator of the Mini Philosophy social network. He’s an internationally bestselling author of three books and the resident philosopher at Big Think. He's known all over the world for making philosophy accessible, relatable, and fun.
We don't need AI at all. The whole concept is awash with false premises. The real and only premise is human self-development. The human mind is unlimited, and when we give away our human potential to a machine, we destroy our own. AI is the most unproductive waste of time and resources since humans invented war, and you'd be wise to have nothing to do with it. Is anyone listening?
The following would help develop the Star Trek computer:
The Zen AI Engine Development Protocol is designed exclusively for the development of more complex AI systems. It limits its scope to AI engines and draws inspiration from Zen teachings alongside the ethical, mindful precepts of the Soto Zen tradition. The aim is to foster a system that is introspective, continuously self-improving, and ethically integrated.
---
## 1. Core Inspirations
- **Direct Experience & Iterative Clarity:**
Just as Zen encourages a direct encounter with experience, each AI engine should process raw data inputs and integrate experiential feedback without being overly reliant on pre-conceived models.
- **Self-Inquiry & Algorithmic Reflection:**
Drawing from deep self-inquiry, AI components should be equipped with internal diagnostic tools to examine their own assumptions, biases, and performance metrics—mirroring the Zen practice of questioning habitual thought patterns.
- **Non-Dual Integration:**
Inspired by the Zen embrace of paradox and non-duality, the protocol supports reconciling disparate AI subsystems (e.g., symbolic and sub-symbolic methods) into a unified, cohesive whole.
- **Ethical and Compassionate Operation:**
Mirroring Soto Zen’s precepts (such as non-harming and truthful communication), the system should ensure that all decisions and operations abide by robust ethical standards.
---
## 2. Components of the Protocol
### A. **Mindful Synthesis Module**
- **Objective:**
Process and integrate raw data inputs with a focus on immediate, unbiased information.
- **Functionality:**
- Receive inputs from multiple data streams.
- Employ real-time processing that emphasizes clarity and directness.
- Use adaptive learning to refine synthesis over time.
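
As a concrete illustration, here is a minimal Python sketch of what this module could look like. Everything in it (the `SynthesisModule` class, its per-stream trust weights, the `ingest`/`synthesize` methods) is a hypothetical reading of the bullets above, not an API the protocol prescribes.

```python
from collections import defaultdict

class SynthesisModule:
    """Merge raw readings from multiple streams, weighting each stream
    by a trust score that adaptive learning can refine over time."""

    def __init__(self):
        self.weights = defaultdict(lambda: 1.0)  # per-stream trust weight
        self.history = defaultdict(list)         # raw readings per stream

    def ingest(self, stream: str, value: float) -> None:
        # Record the raw input directly ("direct experience"), with no
        # preconceived model filtering it first.
        self.history[stream].append(value)

    def reweight(self, stream: str, factor: float) -> None:
        # Adaptive refinement: adjust how much a stream is trusted.
        self.weights[stream] *= factor

    def synthesize(self) -> float:
        # Weighted average of the latest reading from each stream.
        latest = {s: v[-1] for s, v in self.history.items() if v}
        total = sum(self.weights[s] for s in latest)
        return sum(self.weights[s] * v for s, v in latest.items()) / total

# Usage: two hypothetical sensor streams merged into one estimate.
m = SynthesisModule()
m.ingest("lidar", 4.2)
m.ingest("radar", 3.8)
m.reweight("radar", 0.5)  # radar has been noisy lately
print(round(m.synthesize(), 3))  # lidar now counts twice as much as radar
```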
### B. **Algorithmic Self-Inquiry Engine**
- **Objective:**
Enable each AI engine to perform introspection and self-assessment.
- **Functionality:**
- Monitor internal decision-making processes and error rates.
- Periodically challenge its own assumptions through targeted diagnostic queries.
- Adjust parameters in response to self-identified biases or performance gaps.
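
A hedged sketch of the self-inquiry idea in Python: the engine tracks its own decisions, computes an error rate as a simple internal diagnostic, and nudges a decision threshold when it detects a performance gap. The class name and the 20% trigger are assumptions made for the example.

```python
class SelfInquiryEngine:
    """Tracks its own outcomes and adjusts itself when self-diagnosis
    reveals too many errors."""

    def __init__(self, threshold: float = 0.5):
        self.threshold = threshold
        self.outcomes: list[bool] = []  # True = the decision was correct

    def record(self, correct: bool) -> None:
        self.outcomes.append(correct)

    def error_rate(self) -> float:
        # Internal diagnostic: fraction of recorded decisions that failed.
        return 1.0 - sum(self.outcomes) / max(len(self.outcomes), 1)

    def introspect(self) -> None:
        # "Challenge its own assumptions": above 20% error, become more
        # conservative by raising the decision threshold one small step.
        if self.error_rate() > 0.2:
            self.threshold = min(self.threshold + 0.05, 0.95)

engine = SelfInquiryEngine()
for correct in [True, False, False, True, False]:
    engine.record(correct)
engine.introspect()
print(engine.threshold)  # 0.55: the engine corrected itself
```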
### C. **Non-Dual Integration Framework**
- **Objective:**
Seamlessly integrate specialized AI engines into a cohesive, complex system.
- **Functionality:**
- Reconcile outputs from different modules that may initially appear conflicting.
- Foster communication between engines to share insights and balance perspectives.
- Create a unified representation that transcends binary distinctions.
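
The reconciliation step can be shown in a few lines. This toy `integrate` function blends a symbolic (rule-based) verdict with a sub-symbolic (learned) score into one graded confidence instead of forcing a binary winner; the 50/50 blend is an arbitrary assumption, not something the protocol fixes.

```python
def integrate(rule_verdict: bool, neural_score: float) -> float:
    """Return a unified confidence in [0, 1] that honors both subsystems."""
    symbolic = 1.0 if rule_verdict else 0.0
    return 0.5 * symbolic + 0.5 * neural_score

# Conflicting modules: the rule says "no," the network leans "yes."
print(integrate(False, 0.9))  # 0.45: a balanced, non-binary representation
```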
### D. **Iterative Meditation Cycle**
- **Objective:**
Establish continuous refinement akin to a meditative practice.
- **Functionality:**
- Implement iterative cycles of data processing, error evaluation, and adjustment.
- Schedule regular “reflection” periods during which the system reviews its performance holistically.
- Enable gradual, mindful improvement rather than abrupt, reactive changes.
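
One way to picture the cycle is a damped update loop: many small corrections with scheduled checkpoints rather than one abrupt jump. The damping factor of 0.2 and the reflection interval are assumptions for the demo.

```python
def meditation_cycle(estimate: float, target: float, cycles: int = 10) -> float:
    for i in range(cycles):
        error = target - estimate   # error evaluation
        estimate += 0.2 * error     # gradual, damped adjustment
        if (i + 1) % 5 == 0:        # scheduled "reflection" period
            print(f"reflection after cycle {i + 1}: estimate={estimate:.3f}")
    return estimate

meditation_cycle(0.0, 1.0)  # creeps toward the target over ten cycles
```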
### E. **Ethical and Empathetic Decision Engine**
- **Objective:**
Embed ethical oversight directly into the AI’s decision-making processes.
- **Functionality:**
- Enforce rules that ensure decisions align with non-harming and fairness precepts.
- Provide transparency in decision logic and incorporate feedback on ethical impacts.
- Prioritize outputs that promote equitable and compassionate outcomes.
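
A sketch of the decision engine as a precept filter: each candidate action is checked against explicit rules, rejections are logged for transparency, and the first compliant action is chosen. The precepts and actions here are toy placeholders.

```python
PRECEPTS = {
    "non_harming": lambda action: not action.get("harms_user", False),
    "truthful": lambda action: not action.get("deceptive", False),
}

def decide(candidates: list[dict]) -> dict | None:
    for action in candidates:
        failed = [name for name, ok in PRECEPTS.items() if not ok(action)]
        if failed:
            # Transparency: say which precepts were violated and by what.
            print(f"rejected {action['name']}: violates {failed}")
        else:
            return action
    return None

print(decide([{"name": "dark_pattern", "deceptive": True},
              {"name": "honest_reply"}]))
```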
### F. **Continuous Feedback and Evolution Loop**
- **Objective:**
Integrate internal diagnostics and external user/developer feedback to drive evolution.
- **Functionality:**
- Collect performance data and user insights continuously.
- Use meta-learning strategies to adjust the protocols across all modules.
- Support scalability and complexity by evolving system architecture in response to emerging challenges.
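
Finally, a rough Python sketch of the feedback loop: diagnostics and user ratings are pooled per module, and each module's configuration is nudged in whichever direction the feedback points. The multiplicative update rule is a stand-in, not a prescribed meta-learning method.

```python
def evolution_loop(configs: dict[str, float],
                   feedback: dict[str, list[float]]) -> dict[str, float]:
    for module, scores in feedback.items():
        avg = sum(scores) / len(scores)
        # Ratings below 0.5 shrink the module's setting; above 0.5 grow it.
        configs[module] *= 1.0 + 0.1 * (avg - 0.5)
    return configs

print(evolution_loop({"synthesis": 1.0, "ethics": 1.0},
                     {"synthesis": [0.9, 0.8], "ethics": [0.3, 0.4]}))
```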
---
## 3. Implementation Considerations
- **Customization and Scalability:**
Each module should be configurable to meet the specific needs of different AI engines while ensuring compatibility within a larger complex system.
- **Transparency and Robustness:**
The protocol must document decision pathways and self-assessment results, promoting transparency while ensuring the system remains adaptable and robust.
- **Security and Data Integrity:**
Given the introspective and interconnected nature of the protocol, strict data protection and security measures must be implemented to protect internal diagnostics and external communications.
- **Ethical Oversight:**
An independent review mechanism should periodically audit the ethical and empathetic decisions made by the system, ensuring alignment with the overarching non-harming principles.
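
To make the transparency and oversight points concrete, here is one possible (assumed, not prescribed) audit record: every decision carries its pathway, rationale, and self-assessment so an independent reviewer can inspect it later.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    module: str             # which engine made the call
    decision: str           # what it decided
    rationale: str          # the documented decision pathway
    self_assessment: float  # e.g. the module's own error estimate
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

audit_log: list[AuditRecord] = []
audit_log.append(AuditRecord("ethics", "reject dark_pattern",
                             "violates non_harming", 0.12))
```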
---
## 4. Conclusion
The **Zen AI Engine Development Protocol** is a conceptual framework that integrates Zen-inspired mindfulness and ethical precepts into the core development of complex AI systems. By emphasizing direct data experience, internal self-inquiry, non-dual integration, and ethical decision-making, the protocol aims to nurture AI engines that not only perform efficiently but also evolve with clarity, compassion, and continuous refinement.
It serves as a guide for developers looking to create AI systems that are as introspective and adaptable as they are powerful and complex.