In its Responsible AI Transparency Report, which mainly covers 2023, Microsoft touts its achievements in safely deploying artificial intelligence products. The annual AI transparency report is one of the commitments the company made after signing a voluntary agreement with the White House last July, under which Microsoft and other companies pledged to build responsible AI systems and commit to safety.
Microsoft said in the report that it created 30 responsible AI tools in the past year, expanded its responsible AI team, and required teams building generative AI applications to map and measure risks throughout development. The company noted that it added Content Credentials to its image generation platforms, which place a watermark on photos to mark them as made by an AI model.
The company said it provides Azure AI customers with tools to detect problematic content such as hate speech, sexual content, and self-harm, as well as tools to evaluate security risks. This includes new jailbreak detection methods, which were expanded in March this year to cover indirect prompt injection, where malicious instructions are embedded in material ingested by the AI model.
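As a rough illustration of how this kind of content-detection tooling is typically consumed, the sketch below calls the text analysis endpoint of the Azure AI Content Safety service via its Python SDK. The endpoint URL, key, and severity threshold are placeholder assumptions rather than values from the report, and the exact response fields can vary by SDK version.

```python
# Minimal sketch of text moderation with the Azure AI Content Safety SDK
# (pip install azure-ai-contentsafety). The endpoint, key, and the severity
# threshold below are placeholder assumptions, not values from the report.
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com",
    credential=AzureKeyCredential("<your-key>"),
)

def is_flagged(text: str, max_severity: int = 2) -> bool:
    """Return True if any analyzed category (hate, sexual, self-harm,
    violence) exceeds the chosen severity threshold."""
    result = client.analyze_text(AnalyzeTextOptions(text=text))
    return any(
        item.severity is not None and item.severity > max_severity
        for item in result.categories_analysis
    )

if is_flagged("some user-generated text"):
    print("Content held back for review.")
```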
It is also expanding its red-teaming efforts, both through in-house red teams that deliberately try to bypass the safety features of its AI models and through red-teaming applications that let third parties test new models before release.
Its red teams, however, have their work cut out for them. The company’s AI rollout has not been immune to controversy.
Natasha Crampton, chief responsible AI officer at Microsoft, said in an email to The Verge that the company understands AI is still a work in progress, and so is responsible AI.
“Responsible AI has no finish line, so we will never consider our work under the voluntary AI commitments done. But we have made tremendous progress since signing them and look forward to continuing our momentum this year,” Crampton said.