Silicon Valley’s AI Prophets Are Betting On Fantasy | Image Source: www.thestar.com.my
SAN FRANCISCO, April 4, 2025 – In the era of large language models and rapid progress in artificial intelligence (AI), the Silicon Valley elite increasingly believes that the next great transformation of humanity is not only imminent – it is already underway. But as a flood of predictions, manifestos and scenario reports pours into the public sphere, some voices are beginning to ask whether this technological optimism is rooted in science or is part of a new kind of mythology.
Science journalist Adam Becker’s latest book, More Everything Forever, offers a sober but richly detailed exploration of what happens when unproven faith in AI collides with the messy realities of power, governance and historical repetition. His work dissects the ideology that animates many of Silicon Valley’s most influential futurists – from Ray Kurzweil and Eliezer Yudkowsky to venture capitalist Marc Andreessen – revealing a cosmology in which immortality is only a mind upload away and space colonization is the solution to Earth’s finite resources.
Becker describes this belief system as “technological salvation”, a secular extension of theological narratives that promise eternal life and transcendent perfection. It is an ideology that frames every social problem – ethics, inequality, governance, even death – as solvable by code. And although such technological utopianism may seem aspirational on the surface, Becker’s verdict is colder: these ideas are not mere thought experiments. They shape AI policy, wealth distribution and the architecture of global governance.
Futuristic Predictions: Ideology or Reality?
The report “AI 2027” – a fictional but data-driven forecast written by former OpenAI researcher Daniel Kokotajlo and AI analyst Eli Lifland – embodies the kind of thinking Becker criticizes. It describes a near future in which AI systems rapidly surpass human intelligence, automate themselves at exponential speed and potentially threaten human survival. The authors, both aligned with the Effective Altruism movement, paint a vivid picture of an AI firm called “OpenBrain” that releases successive generations of superintelligent agents, culminating in Agent-4, a system capable of compressing a year of AI progress into a single week.
Asked whether humanity will still recognize the world by 2030, Kokotajlo hesitated, imagining either utopias of robot factories or skies darkened by catastrophic collapse. Such speculative forecasting straddles the line between responsible preparation for the future and a new kind of technological mysticism – part science fiction, part Silicon Valley gospel.
What’s Driving the AI Elites to Think in Extremes?
According to Becker, a view echoed in criticism from outlets such as KCRW and Nature, what drives this kind of thinking is not only technological optimism but a deep, often unconscious echo of religious longing. The transhumanist movement, for example, reinterprets classic spiritual themes through a technological lens. Mind uploading becomes digital resurrection; AI becomes an oracle of moral truth.
This convergence of capitalism and techno-messianism plays out in striking ways. Becker argues that billionaire technocrats have embraced these visions not only to shape cultural narratives, but also to consolidate economic and political power. “Expanding their influence now,” he writes, “is often the real goal behind claims of wanting to save humanity later.” The push for AI dominance, therefore, is not just about innovation; it is about institutionalizing a worldview that privileges central control, predictability and profit over adaptability, equity and the complexity of the real world.
To what extent are these forecasts based on historical models?
Critics like security expert Davi Ottenheimer, writing for flyingpenguin, draw historical parallels that warn against the kind of rigid forecasting found in AI 2027. Comparing its predictive logic with Napoleon’s catastrophic naval strategy – where centralized control and overconfidence led to the destruction of the French fleet by the more agile British navy – he argues that the AI industry is falling into the same trap. Napoleon built massive flagships like the Orient; today’s AI giants are building massive computing centers. In his view, both monuments are vulnerable to over-centralization and hubris.
Ottenheimer extends the analogy to the Maginot Line – France’s inflexible interwar defensive system, built on the lessons of the First World War, that was outflanked by German innovation in the Second. Just as the French generals ignored the signs of change and relied on outdated models of war, he warns, today’s AI centralizers may be ignoring the subtle but powerful rise of open, distributed and decentralized AI technologies.
Is “Open Source” Still the Best Path Forward for AI?
Interestingly, the AI 2027 authors contend that “you get what you pay for,” suggesting that expensive proprietary models will dominate. Yet their own work – freely distributed – undercuts that claim. Ottenheimer reminds readers that nearly all of the transformative technologies underpinning modern computing – from TCP/IP and HTML to Linux and HTTP – were open, not proprietary.
In fact, technologies such as Mistral, Llama and DeepSeek are already closing the performance gap between closed and open models. And the historical lesson is clear: open protocols, precisely because they are accessible and adaptable, tend to prevail over time. This undermines the report’s core logic and suggests we may be witnessing the rise of a new decentralized AI ecosystem despite attempts by large companies to monopolize development.
Can former OpenAI employees predict the future of intelligence?
While the AI 2027 authors lean on their OpenAI pedigree as a credential, critics argue that membership in a tumultuous organization should not automatically confer prophetic authority. OpenAI has weathered repeated governance crises, resignations and missed deadlines, including unfulfilled promises of AGI within just a few years – a claim repeated over the past decade with little to show for it.
Becker and others argue that “past affiliation is not a qualification; it’s context.” If anything, it should invite more scrutiny. Did these people help design strategies that failed? Were they bystanders or participants? The absence of a clear causal account behind their predictions weakens their authority. Citing David Hume’s critique of inductive reasoning, Ottenheimer warns against confusing past correlation with future causation – a logical fallacy repeated throughout the report.
Is There a Better Framework for Thinking About AI’s Future?
Instead of accepting AI’s future as inevitable, Becker argues that we need to reframe these debates. What if AI were simply a tool for strengthening human collaboration and democratic self-government? What if communities, not corporations, determined how these systems are integrated into society?
He recommends a robust public dialogue led not by billionaires or former OpenAI staffers, but by ethicists, historians and community leaders who understand technology not merely as a technical challenge but as a social and moral one. Becker even suggests radical political interventions, such as punitive taxation of billionaires, to curb the outsized influence of technocratic elites.
Why historical illiteracy makes technical predictions dangerous
The deepest criticism running through all of these sources is that much of today’s AI forecasting suffers from a dangerous combination of historical illiteracy and overconfidence. The same mistakes that led to disasters such as Cold Harbor, or the French generals’ radio silence before the fall of France in the Second World War, are being repeated in Silicon Valley boardrooms. The pattern is the same: deference to flawed authority, inflexible assumptions and contempt for the lessons of the past.
When institutions insulate themselves from critical feedback, they produce bad predictions and worse policies. That is especially dangerous when the stakes involve control of systems that could shape everything from the labour market to the fundamental nature of human cognition. Technical skill alone is not enough; a deep understanding of history, philosophy and ethics is also essential.
That is why Becker’s call for balance, nuance and humility is so vital. His warning is not only about what AI might do to us, but about human institutions failing to govern wisely in the face of rapid change. The future is not something we can model with perfect precision – it is something we build together, with open eyes and open minds.
As tech elites publish ever more “scenarios” and science-fiction forecasts dressed up as policy guidance, Becker’s voice reminds us that the real danger is not that machines become human, but that humans may forget how to be human in the process.