Microsoft is now more explicitly banning U.S. police departments from using its artificial intelligence models to identify suspects, under new language added to the code of conduct for its Azure OpenAI Service.
The new language explicitly prohibits use of its artificial intelligence services “by or for” facial recognition by U.S. police departments. It also bans any law enforcement agency worldwide from using mobile cameras “in the field,” such as body-worn or dashboard-mounted cameras carried by officers on patrol, to verify identities. Microsoft likewise disallows identifying individuals against databases of suspects or former inmates.
The company’s Azure OpenAI Service provides API access to OpenAI’s language and coding models through Microsoft’s cloud, and recently added GPT-4 Turbo with Vision, OpenAI’s advanced text-and-image model. In February, the company announced it would make its generative artificial intelligence services available for use by federal agencies.
Microsoft’s code of conduct prohibits use of the Azure OpenAI Service for the following:

- To identify or verify an individual based on their face or other physical, physiological, or behavioral characteristics; or
- To identify or verify an individual’s identity based on media containing a human face or other physical, biological, or behavioral characteristics.
The new language spells out a more specific ban on police agencies using artificial intelligence systems for data collection. A recent ProPublica report documents the extent to which police departments across the country have adopted similar machine learning, including using artificial intelligence tools to analyze millions of hours of footage from traffic stops and other civilian interactions. “Much of the data collected from these analyses, and the lessons learned from them, remain confidential, and findings are often subject to nondisclosure agreements,” the publication writes. “This echoes the same problem with body camera video itself: police departments are still the ones who decide how to use technology that was originally designed to make their activities more transparent and hold them accountable for their actions.”
While some companies have taken similar steps to shield user data from law enforcement investigations, including Google’s recent location-data privacy protections, others are leaning toward cooperation. Last week, police camera and cloud storage provider Axon launched Draft One, an artificial intelligence model that automatically transcribes body camera audio to “significantly improve the efficiency of police report writing.”