'If this technology goes wrong, it can go quite wrong': OpenAI's Sam Altman calls for regulation amid growing AI fears
The latest generation of ‘generative AI’ tools, including ChatGPT, has raised worries about misinformation, copyright infringement, and job displacement
At a recent Senate hearing, Sam Altman, the CEO of OpenAI, urged government intervention to mitigate the risks posed by increasingly powerful AI systems. His testimony underscored mounting concerns about the technology and the urgency of addressing its potential harms.

He proposed establishing a licensing agency, national or international in scope, that would regulate the most advanced AI systems and have the authority to enforce safety standards and revoke licenses when necessary.
Addressing Societal Concerns Surrounding AI
Altman's startup, OpenAI, gained significant attention following the release of ChatGPT, a chatbot tool that generates remarkably human-like responses to queries. Educators initially worried about its potential misuse for cheating on homework, but broader concerns soon emerged over the misinformation, copyright infringement, and job displacement that this latest generation of "generative AI" tools could bring.
Government Action on the Horizon
Altman's testimony comes amid rising societal concern and increased scrutiny of AI technologies. Congress has yet to propose comprehensive AI regulations, but recent White House discussions with tech CEOs, along with pledges from U.S. agencies to pursue harmful AI products that violate existing civil rights and consumer protection laws, signal a growing willingness to act.
During the Senate hearing, Senator Richard Blumenthal stressed the importance of requiring AI companies to test their systems and disclose known risks before release. He was particularly concerned that future AI systems could destabilize the job market. Altman shared those concerns but offered a more optimistic view of the future of work.
Altman's Worst Fears and Proposed Safeguards
Pressed about his own worst fears regarding AI, Altman mostly avoided specifics, saying only that the industry could cause “significant harm to the world” and that “if this technology goes wrong, it can go quite wrong.”
But he later proposed that a new regulatory agency should impose safeguards that would block AI models that could “self-replicate and self-exfiltrate into the wild” — hinting at futuristic concerns about advanced AI systems that could manipulate humans into ceding control.
Balancing Immediate Concerns and Long-Term Goals
Some experts caution that focusing on hypothetical, super-powerful AI systems may distract from current challenges around data transparency, discriminatory behavior, and disinformation. Suresh Venkatasubramanian, a computer scientist who co-authored the Biden administration's AI bill of rights, noted that unfounded fears about future systems hinder progress on these present-day concerns.
Congress Takes the First Step
The Senate hearing marked a critical first step for Congress in grappling with the complex questions surrounding AI regulation. Lawmakers from both parties sought Altman's insights on the best path forward. While the exact shape of regulation is still being debated, recognition of its importance in the AI landscape is widespread and growing.