OpenAI is using its artificial intelligence models to weed out bad actors. For the first time, the company says it has discovered and removed Russian, Chinese, and Israeli accounts used for political influence operations.
The company discovered and terminated accounts tied to five covert influence operations that relied on tools such as propaganda-laden bots, social media scrubbers, and fake article generators, according to a new report from its threat detection team.
“OpenAI is committed to enforcing policies that prevent abuse and increase transparency around AI-generated content,” the company wrote. “This is especially true when it comes to detecting and disrupting clandestine influence operations (IOs) that seek to manipulate public opinion or influence political outcomes without revealing the true identity or intentions of the actors behind them.”
The terminated accounts include those behind a Russian Telegram operation dubbed “Bad Grammar” and those run by the Israeli company STOIC. STOIC was found to be using OpenAI models to generate articles and comments praising Israel’s current military siege, which were then posted on platforms such as Meta, X, and others.
OpenAI says the covert actors used its tools to perform “a range of tasks, such as generating short comments and longer articles in multiple languages, making up names and biographies for social media accounts, conducting open source research, debugging simple code, and translating and proofreading text.”
In February of this year, OpenAI announced that it had terminated multiple “foreign bad actor” accounts found to have engaged in similar suspicious behavior, including using OpenAI’s translation and coding services to support potential cyberattacks. That work was conducted in partnership with Microsoft Threat Intelligence.
As communities prepare for a series of global elections, many are paying close attention to AI-fueled disinformation campaigns. In the United States, deepfake AI videos and audio of celebrities and even presidential candidates have led the federal government to call on tech leaders to stop their spread. A report from the Center for Countering Digital Hate found that, despite election integrity pledges from many AI leaders, AI voice cloning tools remain easily exploited by bad actors.
Learn more about how artificial intelligence may play a role in this year’s elections and what you can do about it.