First State-Backed AI Safety Tool Unveiled in the UK

The U.K. has taken a significant step in the realm of artificial intelligence (AI) safety testing with the launch of a groundbreaking toolset called “Inspect.” This new offering, unveiled by the AI Safety Institute, is a software library that allows testers, including startups, academics, and AI developers, to assess specific capabilities of individual AI models and generate a score based on their results.
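The article does not detail Inspect's actual API, but the workflow it describes (run a model against a set of test cases, score each response, and aggregate the results into a capability score) follows a common evaluation pattern. The sketch below is purely illustrative: all names (Sample, evaluate, the toy dataset and scorer) are assumptions for this example and are not taken from the Inspect library itself.

```python
# Hypothetical sketch of the evaluate-and-score pattern described above.
# None of these names come from Inspect; they only illustrate the idea of
# running a model over test cases and aggregating a capability score.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Sample:
    prompt: str   # input given to the model under test
    target: str   # expected (reference) answer


def evaluate(samples: list[Sample],
             model: Callable[[str], str],
             scorer: Callable[[str, str], float]) -> float:
    """Run the model on every sample, score each response, return the mean."""
    scores = [scorer(model(s.prompt), s.target) for s in samples]
    return sum(scores) / len(scores) if scores else 0.0


if __name__ == "__main__":
    # Toy dataset, stand-in model, and exact-match scorer, for illustration only.
    dataset = [
        Sample("What is the capital of France?", "Paris"),
        Sample("What is 2 + 2?", "4"),
    ]
    fake_model = lambda prompt: "Paris" if "France" in prompt else "5"
    exact_match = lambda response, target: 1.0 if response.strip() == target else 0.0

    print(f"Capability score: {evaluate(dataset, fake_model, exact_match):.2f}")
```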

According to a news release from the institute, Inspect is the first AI safety testing platform overseen by a state-backed body and made available for wider use. Michelle Donelan, the U.K.’s secretary of state for science, innovation, and technology, emphasized the importance of U.K. leadership in AI safety, stating that the release of Inspect as an open-source platform solidifies the country’s position as a global leader in this space.

This announcement comes on the heels of a recent commitment between the British and American governments to collaborate on safe AI development. The two countries have agreed to work together on tests for advanced AI models and form alliances with other nations to promote AI safety worldwide. This partnership underscores the need for global cooperation in addressing the potential risks associated with AI technology.

As the AI landscape continues to evolve rapidly, the responsibility is increasingly placed on companies to ensure that their products are safe, trustworthy, and ethical. This new era of AI safety testing marks a significant milestone in the ongoing effort to harness the power of AI while mitigating its potential risks.
