OpenAI, the $80 billion AI company known for its ChatGPT model, is facing a major setback as it dissolved its team focused on preventing AI from going rogue. This decision comes after two key executives leading the effort left the company, raising concerns about AI safety.
The controversy surrounding AI safety emerged shortly after OpenAI unveiled its latest AI model, GPT-4o, which features a voice eerily similar to actress Scarlett Johansson. The company decided to pause the rollout of this particular voice following the backlash.
Sahil Agarwal, a Yale PhD in applied mathematics and founder of Enkrypt AI, emphasized the importance of balancing innovation and safety in the development of AI technologies. He believes that ensuring the safety and security of AI systems is essential for the long-term success of any company.
The departure of former OpenAI chief scientist Ilya Sutskever and research lead Jan Leike, who were in charge of the superalignment team responsible for keeping AI under human control, has raised concerns about the company’s commitment to AI safety. Leike expressed his disappointment with OpenAI’s prioritization of product development over safety measures, calling for a shift in focus.
As AI technologies become more advanced and multifaceted, the risk of implicit bias and toxic content increases. Agarwal’s company recently released a safety leaderboard ranking AI models based on their safety and security features. The findings suggest that the new GPT-4o model may contain more bias and produce more toxic content than its predecessor.
The debate over AI safety continues to be a pressing issue for companies like OpenAI as they strive to balance innovation with responsible development practices. The future of AI technology hinges on the ability to prevent AI from going rogue and ensure its safe integration into society.