Microsoft is calling on members of Congress to regulate the use of AI-generated deepfakes to prevent fraud, abuse, and manipulation. Microsoft Vice Chair and President Brad Smith called on policymakers to take urgent action to protect elections, guard seniors against fraud, and shield children from abuse.
“While the tech industry and nonprofits have recently taken steps to address this issue, it’s clear that our laws also need to continue to evolve to combat deepfakes,” Smith said in a blog post. “One of the most important things the United States can do is pass a comprehensive deepfake fraud statute to prevent cybercriminals from using this technology to steal from ordinary Americans.”
Microsoft wants to create a “deepfake fraud statute” that would give law enforcement officials a legal framework for prosecuting AI-generated scams and fraud. Smith also called on lawmakers to “ensure that our federal and state laws regarding child sexual exploitation and abuse and non-consensual intimate images are updated to include AI-generated content.”
Microsoft has had to implement more safety controls for its own AI products after a flaw in the company’s Designer AI image creator allowed people to create explicit images of celebrities such as Taylor Swift. “The private sector has a responsibility to innovate and implement safeguards to prevent the misuse of artificial intelligence,” Smith said.
While the FCC has banned the use of AI-generated voices for robocalls, generative AI can easily create fake audio, images, and videos — something we’ve already seen in the run-up to the 2024 presidential election. Elon Musk shared a deepfake video of Vice President Kamala Harris on X earlier this week, a post that appeared to violate X’s own policy against synthetic and manipulated media.
Microsoft wants posts like Musk’s to be clearly labeled as deepfakes. “Congress should require providers of artificial intelligence systems to use state-of-the-art provenance tools to flag synthetic content,” Smith said. “This is critical to building trust in the information ecosystem and will help the public better understand whether content is generated or manipulated by artificial intelligence.”
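Provenance tooling of the kind Smith describes generally works by attaching signed metadata to a file when it is created, so that platforms can later check whether the content declares itself as AI-generated and whether it has been altered since. The sketch below is a minimal, hypothetical illustration using only the Python standard library; the manifest fields, the `attach_provenance` and `verify_provenance` helpers, and the shared-secret signing are simplifications I've assumed for clarity, whereas real standards such as C2PA Content Credentials use certificate-based signatures.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Hypothetical signing key held by the AI provider. A real provenance
# scheme would use an asymmetric certificate chain, not a shared secret.
PROVIDER_KEY = b"example-provider-signing-key"


def attach_provenance(content: bytes, generator: str) -> dict:
    """Build a signed manifest declaring that `content` was AI-generated."""
    manifest = {
        "generator": generator,
        "created": datetime.now(timezone.utc).isoformat(),
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "ai_generated": True,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    return manifest


def verify_provenance(content: bytes, manifest: dict) -> bool:
    """Check the signature and that the manifest matches this exact content."""
    claimed = dict(manifest)
    signature = claimed.pop("signature", "")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(signature, expected)
        and claimed["content_sha256"] == hashlib.sha256(content).hexdigest()
    )


if __name__ == "__main__":
    image_bytes = b"...synthetic image data..."
    manifest = attach_provenance(image_bytes, generator="ExampleImageModel")
    print(verify_provenance(image_bytes, manifest))        # True: intact
    print(verify_provenance(image_bytes + b"x", manifest))  # False: tampered
```

Because the manifest binds a hash of the exact bytes, any edit to the content invalidates the signature, which is what would let a platform flag manipulated media or content whose provenance labels have been stripped.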