What’s The Point of Chasing Superintelligence?
“Putting ‘safe’ next to ‘superintelligence’ is kind of an oxymoron.”

We aren’t close to the singularity. But it’s catching investor attention anyway.
Safe Superintelligence, a startup founded by OpenAI co-founder and former chief scientist Ilya Sutskever, has raised an additional $2 billion, bringing its valuation to $32 billion. Though the company has no product and little to show publicly beyond a bare-bones website, its backers reportedly include Google, Nvidia and a host of major venture capital firms, including Andreessen Horowitz and Lightspeed Venture Partners.
Despite the eye-popping valuation, superintelligence as it stands is only theoretical, said Bob Rogers, chief product and technology officer of Oii.ai and co-founder of BeeKeeper AI. For reference, superintelligence is the concept of an AI system that can far outperform human intelligence in all domains, a benchmark that current generative AI models aren’t even close to.
Though AI hype and the push toward artificial general intelligence have set up superintelligence as a distant goal post for ambitious AI developers, Rogers said scientists aren’t anywhere close to scratching the surface of superintelligence:
- Though superintelligence is limitless in theory, current AI systems run up against walls where accuracy and quality start to deteriorate after a certain point.
- Defining when an AI model reaches superintelligence is going to require “deep interaction with experts across the spectrum,” he said. And because every human thinks differently, “even a single system isn’t going to necessarily prioritize every kind of way of thinking about things.”
“They’re talking about a single system that is smarter than any extremely capable human in every single category, with self-awareness,” said Rogers. “It’s a pretty tall order.”
While the goal of Sutskever’s startup is in the name – “to advance capabilities as fast as possible while making sure our safety always remains ahead,” according to the firm’s website – creating this kind of technology in a safe and ethical way is a lofty goal, Rogers said.
For one, modern AI models are already unpredictable, and can grow more unpredictable the larger they get, he said. Just because a model is intelligent doesn’t mean it’s “reliable or predictable, or the answer to every problem,” he added.
And because superintelligence involves self-awareness, building a “kill switch,” or something to turn off a theoretically super-intelligent model before it spirals out of control, becomes even more tricky, Rogers said. “Putting ‘safe’ next to ‘superintelligence’ is kind of an oxymoron.”
Despite investor excitement, superintelligence is barely on the radar of most enterprises, said Rogers. But with the rate of advancement of AI, the question of “how much power is too much” is preoccupying tech innovators. The idea that bigger, smarter and more powerful is always better for enterprises isn’t entirely true, he said: “Constrained, intelligent tooling that’s reliable and predictable” might be a better bet.
“There’s a whole lot of things that need to be done in this world that don’t actually require massive intelligence,” said Rogers. “We don’t need an army of super-intelligent agents when enterprises aren’t getting all that much value from today’s large language models.”