Microsoft bars U.S. law enforcement agencies from using enterprise AI tool

Microsoft has made a significant policy change affecting U.S. police departments’ use of generative AI through the Azure OpenAI Service. The company has updated its terms of service to explicitly ban U.S. police departments from using OpenAI’s text- and speech-analyzing models.

The updated terms also prohibit the use of real-time facial recognition on mobile cameras, such as body cameras and dashcams, to identify individuals in uncontrolled environments. The move follows concerns raised by critics about the risks of AI in law enforcement, including the amplification of racial biases and the generation of false information.

It remains unclear whether the change was prompted by the recent announcement from Axon, a maker of technology and weapons for military and law enforcement, that it would use OpenAI’s GPT-4 generative text model in its products. Microsoft’s updated terms apply only to U.S. police departments and do not restrict international law enforcement agencies from using the Azure OpenAI Service.

The decision reflects Microsoft’s and OpenAI’s evolving approach to AI contracts with law enforcement and defense agencies. While OpenAI has previously restricted the use of its models for facial recognition, Microsoft has actively pursued partnerships with government agencies, including the Department of Defense, to deploy AI technology in military operations.

The implications of this policy change on the future of AI technology in law enforcement remain to be seen. Microsoft and OpenAI have not yet provided further details or comments on the updated terms of service.
