
AI May Never Completely Conquer Hallucination, Bias

“These things, unsupervised, they still are acting like 4-year-olds.”


How hands-off can we really be with AI? 

Models from two major AI developers went haywire in separate incidents this week. Grok, the chatbot from Elon Musk’s xAI, responded to a variety of queries on X with commentary on “white genocide in South Africa” — something the company blamed on “unauthorized modification” of the model. Meanwhile, Anthropic’s lawyers admitted to using an erroneous citation hallucinated by the company’s Claude chatbot in its legal battle with music publishers. 

Such incidents highlight fundamental problems with AI – ones that developers can limit but may never be able to fully eliminate, said Trevor Morgan, senior vice president of operations at OpenDrives.

Hallucination and bias are core elements of AI, he said. Even if rates of inaccuracy and bias are reduced to minimal levels, AI learns from a “human-centered world,” said Morgan. Often, enterprises using AI don’t know exactly what data a model learned from, or what is missing from its training datasets. Whether the result of negligence or intent, biased data can leak in and become amplified, he said, or models can simply make up answers to queries they weren’t trained to respond to.

That makes taking human hands off the reins incredibly tricky. “These things, unsupervised, they still are acting like 4-year-olds,” said Morgan. “These things can still run around the pool and fall in.” 

While such issues aren’t new, they aren’t always fully considered as enterprises move full steam ahead with AI deployments, Morgan said. Failure to create adequate safeguards up front may create massive risk later, especially as autonomous AI agents continue to sweep the industry. “AI – more and more – is doing the doing within companies,” he said:

  • Tech giants are more than eager to push the agentic narrative, with the likes of Microsoft, Salesforce, Google, Nvidia and more throwing their hats into the ring. 
  • And AI agents are already taking over processes that were once human-dominated: According to Neon, an open source database startup recently acquired by Databricks, 80% of the databases created on its platform were produced by AI agents as of April. 
  • When platforms begin to be optimized for usage by AI “rather than human beings,” said Morgan, “what do human beings do then?” 

The first step in avoiding risk is reckoning with it. As much as we want to let AI Jesus take the wheel, Morgan said, the initial deployments need to be “very intentional and first-level.” The best-case scenario for working side-by-side with AI is that it drives ideation, but “human beings are still in control of the ultimate decision-making,” he said. In the event that something slips through the cracks, these systems also need guardrails and monitoring, he added. 

Amid the speed of adoption and the pressure to keep up, though, the conversation about necessary guardrails, safety and governance often gets lost, said Morgan. “But are we dealing with something that, when we realize we need guardrails, are we too late?” 

In essence, it’s the question underpinning the first Jurassic Park movie, Morgan said. “They went so fast to see if they could do it that they didn’t stop to think, ‘should we do it?’”
