Artificial intelligence seems to be everywhere these days, doing good by helping doctors detect cancer and doing harm by helping fraudsters prey on unsuspecting victims. On Wednesday, a day after Microsoft said the U.S. needs new laws to hold people who misuse artificial intelligence accountable, the U.S. Copyright Office released the first part of its report on legal and policy issues related to copyright and artificial intelligence, specifically regarding deepfakes.

The problem
The report recommends that Congress enact a new federal law protecting people from the knowing distribution of unauthorized digital replicas, and it offers recommendations on how such a law should be crafted.
“We believe there is an urgent need for effective nationwide protections to prevent potential damage to reputations and livelihoods,” said Shira Perlmutter, Register of Copyrights and Director of the U.S. Copyright Office. “We look forward to working with Congress as it considers our recommendations and evaluates future developments.”
The report will be released in several parts; upcoming installments will address copyright issues involving AI-generated material, the legal implications of training AI models on copyrighted works, licensing considerations, and the allocation of any potential liability.
Microsoft calls for regulation
Microsoft said in a blog post on Tuesday that U.S. lawmakers need to pass a “comprehensive deepfake fraud statute” to target criminals who use artificial intelligence technology to defraud or manipulate ordinary Americans.
Microsoft President Brad Smith wrote: “AI-generated deepfakes are realistic, can be easily produced by almost anyone, and are increasingly being used for fraud, abuse, and manipulation — especially against children and the elderly. The biggest risk is not that the world does too much to solve these problems. It is that the world does too little.”
Microsoft’s call for regulation comes as artificial intelligence tools spread across the tech industry and become increasingly available to criminals, making it easier for them to gain victims’ trust. Many of these schemes misuse legitimate technologies designed to help people write messages, research projects, and create websites and images. In the hands of fraudsters, these tools can produce convincing fake forms and trustworthy-looking websites that deceive users and steal their information.
“The private sector has a responsibility to innovate and implement safeguards to prevent the misuse of artificial intelligence,” Smith wrote. But he said governments need to develop policies that “promote responsible development and use of artificial intelligence.”
Already behind
Although artificial intelligence chatbot tools from Microsoft, Google, Meta, and OpenAI have only become widely available for free in the past few years, the data on how criminals abuse these tools is already alarming.
Earlier this year, AI-generated pornographic images of global music star Taylor Swift spread “like wildfire” online, racking up more than 45 million views on X, according to a February report from the National Sexual Violence Resource Center.
“While deepfake software was not originally designed to create sexual images and videos, that has become its most common use today,” the group wrote. Yet despite widespread recognition of the problem, the group noted that “victims of deepfake porn content have little legal recourse.”
Meanwhile, a report this summer from the Identity Theft Resource Center found that fraudsters are increasingly using artificial intelligence to help create fake job listings as a new way to steal people’s identities.
“The rapid improvement in the look, feel and messaging of identity fraud is almost certainly a result of the introduction of artificial intelligence-driven tools,” ITRC wrote in its June trend report.
On top of this, AI-manipulated posts have gone viral online in attempts to undermine our shared understanding of reality. The most recent example came shortly after the assassination attempt on former President Donald Trump in early July: Photoshopped photos circulating online appeared to show Secret Service agents smiling as they carried Trump to safety. In the original photo, the agents’ expressions are neutral.
Just in the past week, X owner Elon Musk shared a video that used a cloned voice of Vice President and Democratic presidential candidate Kamala Harris to disparage President Joe Biden and to mock Harris herself as a “diversity hire.” X’s rules prohibit users from sharing manipulated content, including “media that may cause widespread confusion about public issues, affect public safety, or cause serious harm.” Musk argued that his post was parody.
Microsoft’s Smith said that while many experts are focused on deepfakes used in election interference, “their broader role in other types of crime and abuse requires the same attention.”