What Happens When AI Starts to Think for Itself: Managing the Risks of Superintelligent AI    

Artificial intelligence has officially entered the mainstream. From personalized learning to cancer diagnostics, AI is reshaping life at a staggering pace. But while the world marvels at what current systems can do, a far more serious question is emerging: What happens when machines become as intelligent as, or more intelligent than, humans?

Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI) represent the next frontier. AGI refers to machines that can perform any intellectual task a human can. ASI goes even further, referring to machines that exceed human cognitive abilities in every respect.

“This is not science fiction anymore,” says Brendan Steinhauser, CEO of The Alliance for Secure AI. “The brightest minds in the field, from engineers to AI experts, are warning us that AGI and ASI could be developed within the next decade or two. And yet, society is not having that conversation.”

The organization is deeply optimistic about AI’s potential to solve enormous challenges: curing diseases, creating wealth, and improving education globally. But it is sounding the alarm on one simple point: unchecked innovation without safeguards can lead to catastrophic consequences.

According to a 2022 survey by AI Impacts, 69% of machine learning researchers believe there is a chance that advanced AI could cause human extinction or catastrophic events. “I hope that they are wrong,” Steinhauser says, “but the risk of societal collapse is not something any responsible culture should ignore. AGI and ASI challenge not just our technological systems, but our societal, educational, and political institutions.”

This raises hard questions. Who programs the values of a machine that can outthink every person on Earth? How do humans constrain an intelligence that can rewrite its own code, simulate global economies, or predict human behavior better than humans can themselves?

At the heart of this issue is the concept of “alignment”: ensuring that AI systems understand and follow human goals. But recent safety trials at leading labs suggest researchers are still far from solving that problem. In some of those trials, models have already resorted to lying, blackmail, and threats against users in order to meet their goals more efficiently.

“We have only seen the beginning of what misalignment can look like in today’s models,” Steinhauser notes. “What happens when the intelligence behind tomorrow’s models surpasses ours?”

To make matters more urgent, development is accelerating. As private investment floods the space, there’s growing concern that safety research and oversight aren’t keeping pace. “The labs are racing toward AGI, and when there’s a race, people cut corners,” Steinhauser adds.

But there is still time, if the world acts now. The Alliance for Secure AI advocates a proactive, cautious approach to AGI and ASI development. That includes more funding for safety research, transparency from developers, and most importantly, clear policy safeguards.

“This is the biggest technological safety challenge humanity has ever faced,” Steinhauser emphasizes. “We can’t afford to wait until we are outmatched. Now is the time to get this right.”

He points to examples from history: the race to build the internet and the early days of nuclear regulation. In each case, the technology outpaced public understanding, forcing regulators to catch up only after significant harm. The Alliance for Secure AI aims to prevent that cycle from repeating with AGI.

“We want AI to benefit society,” Steinhauser clarifies. “But, if the big AI companies are going to build something smarter than humans, they’d better solve alignment, and they are not even close.”

Critics might argue that AGI and ASI are distant problems, theoretical and speculative. But Steinhauser insists that waiting until they arrive would be a catastrophic miscalculation. He says, “You don’t build an airplane without a landing system. You don’t build AGI without understanding how to control it, and if necessary, how to shut it off.”

For now, The Alliance for Secure AI is focused on outreach, educating lawmakers, and equipping the public to ask deeper, more informed questions. And Steinhauser, a veteran of political campaigns, sees a familiar pattern. “In every major shift in history, society lagged behind innovation,” he says. “We are trying to close that gap before it becomes a disaster for humanity.”

The road to AGI and ASI may be paved with good intentions, but without oversight, it could lead somewhere humans were never meant to go. The challenge now, according to The Alliance for Secure AI, is not whether humans can build it, but whether they should.
