The National Institute of Standards and Technology (NIST) has announced the launch of a new program called NIST GenAI, aimed at assessing generative AI technologies, including text- and image-generating systems. The initiative responds to growing concern over deepfakes and the need for tools to detect and combat fake or misleading information.
NIST GenAI will release benchmarks, develop content authenticity detection systems, and promote the creation of software to identify the source of fake content. The program will issue challenge problems to evaluate the capabilities and limitations of generative AI technologies, with a focus on promoting information integrity and responsible use of digital content.
The first project under NIST GenAI is a pilot study to differentiate between human-created and AI-generated media, starting with text. Teams from academia, industry, and research labs are invited to submit either AI systems that generate content or systems that identify AI-generated content.
Registration for the pilot study will begin on May 1, with results expected to be published in February 2025. This initiative is part of NIST’s response to President Joe Biden’s executive order on AI, which emphasizes transparency and standards for labeling content generated by AI.
The launch of NIST GenAI also marks the agency's first AI-related announcement since the appointment of Paul Christiano, a former OpenAI researcher, to its AI Safety Institute. Despite some controversy surrounding Christiano's views on AI development, NIST says the program will inform the AI Safety Institute's work in addressing the risks and challenges posed by AI technologies.