OpenAI Urges Global Policy Reforms as AI Advances

OpenAI, the developer of ChatGPT, is urging global leaders to anticipate and plan for an era dominated by advanced artificial intelligence.

In its recently released paper titled “Industrial Policy for the Intelligence Age: Ideas to Keep People First,” OpenAI emphasizes that rapid advancements in AI could drastically alter economies. The company suggests new strategies for taxation, labor policy, and social protections as society braces for potential superintelligence scenarios.

“The path this transition will take remains uncertain,” OpenAI stated. “At OpenAI, we advocate navigating this through a democratic process that empowers individuals to influence the AI future they desire, while preparing for various outcomes and enhancing adaptability.”

While acknowledging that AI could boost productivity and accelerate scientific breakthroughs, OpenAI cautions that, without appropriate policy adjustments, it could also disrupt labor markets and concentrate wealth. The company recommends that governments start preparing now for changes in work dynamics, income distribution, and economic growth.

The paper proposes several policy initiatives: treating AI access as a fundamental economic resource akin to global literacy efforts, modernizing tax frameworks to reflect automation impacts, and establishing systems that enable citizens to benefit from AI-driven industry gains.

“Advanced AI holds the promise of not just technological advancement but also an enhanced quality of life for everyone,” OpenAI noted. “It is crucial that all individuals can engage with the new opportunities AI presents. Standards of living should improve through reduced costs, better health and education, and increased security and opportunity.”

Additionally, OpenAI suggests reinforcing worker protections and bolstering social support in response to abrupt, technology-driven job losses. The company calls for oversight mechanisms such as audits of frontier models, incident reporting systems, and “model-containment playbooks” for situations where dangerous AI systems can’t be easily retracted once deployed.

“If AI benefits only a select few while the majority lack agency and access to its opportunities, we will have failed in its promise,” OpenAI warned.

This policy initiative emerges amid challenges for OpenAI CEO Sam Altman, who faces renewed scrutiny after an extensive investigation by The New Yorker. The report highlights that Ilya Sutskever, OpenAI’s co-founder and then-chief scientist, had previously accused Altman, in internal memos, of misleading the company about safety protocols and other crucial operations.

According to the magazine, these trust issues led the board to dismiss Altman for not being “consistently candid.” The firing sparked turmoil within the company as employees threatened to depart in protest, while influential investors like Josh Kushner made his reinstatement a condition of continued funding.

The investigation highlights significant internal conflicts over governance and safety, with some former insiders such as Sutskever and Anthropic co-founder Dario Amodei claiming Altman prioritized growth and expansion over the company’s original mission focused on safety. OpenAI did not immediately respond to a request for comment by Decrypt.