Leading artificial intelligence companies have signed up to a new round of voluntary commitments on AI safety, the UK and South Korean governments announced on Tuesday. Tech giants Amazon, Google, Meta, and Microsoft are among the signatories, along with OpenAI, xAI, and Zhipu AI; all have pledged to publish frameworks outlining how they will measure the risks of their “frontier” AI models.
The companies have committed not to develop or deploy a model if its severe risks cannot be mitigated, setting a precedent for global standards on AI safety. The announcement builds on the Bletchley Declaration made at the inaugural AI Safety Summit, hosted by UK Prime Minister Rishi Sunak in November 2023.
According to the agreement, the AI companies will assess the risks posed by their frontier models or systems before deployment and outline thresholds for intolerable risks. They will also provide transparency on their plans to develop safe AI, ensuring accountability and collaboration with other research labs, companies, and governments.
While it remains unclear how companies might be held accountable if they fail to meet their commitments, the 16 firms involved have agreed to report publicly on how they are implementing their pledges. UK science secretary Michelle Donelan expressed confidence in the voluntary agreements, emphasizing that both company and government efforts are needed to ensure AI safety.
The announcement at the global AI summit in Seoul underscores industry leaders’ commitment to prioritizing the safe and transparent development of AI technology. As the field matures, these voluntary commitments may serve as a foundation for future regulation to address AI-related risks effectively.