AI poses a “material extreme risk” with awful potential fallout if it is misused, former Google CEO Eric Schmidt has warned. Speaking on BBC Radio 4’s Today programme, Schmidt said that extremist groups and rogue states, including North Korea, Iran and Russia, could use AI as a weapon to harm ordinary people.
Schmidt is particularly alarmed by the development of AI for lethal purposes, and even more so by the prospect of AI-aided biological attacks. He drew an analogy to a figure like Osama bin Laden using AI to attack modern society.
He said global leaders must wake up to the pace of AI progress in their own countries and the danger of its deployment by hostile states or groups. “Think about North Korea, or Iran, or even Russia; they could misuse it and do real harm,” he said.
Oversight without stifling innovation
Schmidt has also said governments need to keep a close eye on the private tech companies leading AI research. While he concedes that tech leaders understand they are at a societal crossroads, he notes that the choices they make may reflect values different from those of policymakers.
“We know the tech leaders, and they know their power too—and they might not judge in the same values space as government,” Schmidt said.
He went on to back the US export controls enacted during the Biden administration, which restrict the sale of cutting-edge microchips. The policy aims to slow the AI advances of geopolitical rivals on national security grounds.
Global divisions over preventing AI misuse
At the AI Action Summit in Paris, Schmidt warned that excessive regulation of AI “would kill innovation.” The agreement that emerged from the summit, signed by 57 states, reiterated a commitment to inclusive AI development, but the US and UK declined to sign, citing national security and clarity concerns.
US Vice President JD Vance said that heavy-handed regulations would “kill a transformative industry just as it’s getting off the ground.”
The split illustrates two different worldviews on how AI governance should work: Europe pushes for enhanced consumer protection, while the US and UK are said to favour an innovation-driven approach.
Prioritizing national and global safety
Schmidt points to growing worries over the dual-use nature of AI: its capacity to create, but also to destroy. From deepfakes to autonomous weapons, the power of unfettered AI is dangerous.
Schmidt and many other experts advocate a middle ground that balances the push for innovation with protections against use cases found to be dangerous.
However, reaching international agreement on AI regulation is difficult given widely differing views on how to govern such a fast-evolving technology. Across these divides, one thing is unambiguous: unless it is overseen, the free-for-all development of AI could lead to unintended, and likely unfavourable, consequences.