Elon Musk’s Lawsuit Puts OpenAI’s AI Safety Practices Under Scrutiny

OpenAI is facing increased scrutiny over its approach to AI safety as part of a legal battle brought by Elon Musk.

During a federal court hearing in Oakland on Thursday, former OpenAI employee and board adviser Rosie Campbell testified that the company gradually shifted away from its original safety-focused mission and became more centered on commercial AI products.

Campbell, who worked on OpenAI’s AGI readiness team from 2021 to 2024, told the court that discussions around artificial general intelligence and safety were once central to the organization. However, she said the culture changed as OpenAI increasingly focused on releasing products to the market.

She also raised concerns over an incident in which a version of GPT-4 was reportedly deployed through Microsoft’s Bing platform in India before it had completed review by OpenAI’s Deployment Safety Board. According to Campbell, the incident underscored the importance of enforcing strict safety procedures as AI systems become more powerful.

The testimony forms part of Musk’s broader lawsuit arguing that OpenAI abandoned its founding mission of developing AI for the benefit of humanity after transitioning into a major for-profit business.

Former OpenAI board member Tasha McCauley also testified about internal governance concerns surrounding CEO Sam Altman. McCauley claimed the board struggled to properly oversee the company’s for-profit arm because it lacked confidence in the information being provided by leadership.

She cited several controversies, including allegations that Altman failed to fully inform the board about the public launch of ChatGPT and potential conflicts of interest within the company.

Despite those concerns, OpenAI’s board reinstated Altman in 2023, shortly after removing him, once employees and Microsoft backed his return.

The case has reignited debate about whether advanced AI development should rely on internal corporate governance or stronger government regulation. McCauley told the court that leaving major AI decisions to a single executive could be “very suboptimal” given the public risks associated with powerful AI systems.

OpenAI has continued to publicly release safety frameworks and model evaluations, but the company declined to comment on its current AGI alignment strategy during the proceedings.