We should prepare for "mass intelligence", not just "superintelligence"
Policymakers need to prepare for potentially powerful AI systems without overregulating for a future that may not come
In the first televised presidential debate in September 1960, presidential candidate John F. Kennedy challenged his opponent, Vice President Richard Nixon: “The Soviet Union now has in the field more missiles than we do … and unless we are willing to reappraise our whole defence policy, we are in danger of losing our position in the world.”
The “missile gap” became a central theme of Kennedy’s campaign and contributed to Nixon’s narrow defeat. But it was a myth: the U.S. had always maintained a clear lead in intercontinental ballistic missiles.
Yet the “missile gap” narrative had already taken hold as accepted fact in public discourse. Unable to walk back his campaign rhetoric, Kennedy doubled down and justified increased defence spending with the need to maintain U.S. superiority.
The “missile gap” is a striking example of how a flawed narrative can set the course for real-world decisions. Today, a similar dynamic is playing out in AI policy, where speculation about “superintelligence” shapes the policy agenda despite deep disagreement about what it is or what it would mean for society.
Towards the AI frontier
Shortly before his promotion to UK Trade Minister this month, Technology Minister Peter Kyle expressed confidence that we would achieve “superintelligence” in the very near future: “If my dream of winning a second term happens, I think it’s an inevitability”, he told TIME magazine.
Among industry representatives, this enthusiasm is met with scepticism: POLITICO quotes a lobbyist who says Kyle’s statement feels “disconnected” from the corporate debate, where many companies struggle to get a return on their AI investments.
AI critic Gary Marcus dismissed the “superintelligence” narrative after what he viewed as a disappointing release of GPT-5. According to Marcus, the transformer technology behind current models is not capable of reaching “superintelligence”.
What does the fox say?
Policymakers should approach both extremes – the superintelligence evangelists and the AI sceptics – with caution. Both narratives are strongly held beliefs rather than empirically tested theses. While there is some evidence that the pace of AI progress is slowing, AI labs are also investing in new approaches that may yet deliver breakthrough results.
In his book “Expert Political Judgment”1, Philip Tetlock finds that while people with one big idea (“hedgehogs”) often get a lot of attention because of their storytelling skills, they are not very good forecasters. By contrast, experts who build their knowledge on multiple, diverging perspectives and draw conclusions based on probabilities, not world views (“foxes”), are more accurate forecasters. Their predictions, however, are less headline-grabbing.
When it comes to “superintelligence”, a “fox” might advise policymakers that it does not really matter whether we are on that path or not. Even if AI progress came to a halt today, there would still be an enormous opportunity to integrate the technology into our education systems and our economies. And if AI progress continued at scale, the diffusion of the technology would still take time, allowing our societies and economies to adjust.
Towards a “mission economy”
The value of the race to “superintelligence” does not necessarily lie in reaching a final frontier of AI research. It may be more useful to view “superintelligence” through the lens of Mariana Mazzucato’s “Mission Economy”, an approach where governments define bold public policy goals and then mobilize resources to achieve them.
Without any doubt, “superintelligence” is a powerful, inspiring mission, just like the Apollo program was in the 1960s.
But importantly, the U.S. did not strive to put a man on the moon because the moon was such an attractive destination. It launched the moon mission for all the technology that had to be invented along the way.
Some of this technology will be highly specialised, but some will reach the broad public. Business professor Ethan Mollick recently coined the term “mass intelligence” for powerful AI that is “as accessible as a Google search”. While “superintelligence” will surely require policy responses, “mass intelligence” also needs the attention of policymakers, both to manage its risks and to accelerate its adoption.
So instead of asking whether we are close to “superintelligence”, perhaps policymakers should ask how artificial intelligence can help improve our societies today. If and when “superintelligence” arrives, this will give us a head start – and if it never arrives, we will still have made the best of the technology.
If you want to read more about AI and its impact on how we work and live, subscribe to Work/Code, a new newsletter I am launching with Markus Albers in October. In interviews with technologists, academics, creatives, workers, and managers, we look behind the headlines and uncover the quiet revolutions unfolding inside companies, economies, and countries.
Tetlock, Philip E. (2005): Expert Political Judgment: How Good Is It? How Can We Know? Princeton: Princeton University Press.