
OpenAI acknowledged Wednesday, in a document outlining future uses of its technology, that it is exploring ways to "responsibly" allow users to create sexually explicit content with its advanced artificial intelligence tools.
Richard Drew/AP
OpenAI, the AI giant behind ChatGPT and other leading AI tools, revealed on Wednesday that it is exploring how to "responsibly" let users create AI-generated pornography and other explicit content.
The disclosure appears in a wide-ranging document intended to gather feedback on the company's product rules. It comes after a growing number of instances in recent months of cutting-edge artificial intelligence tools being used to create deepfake porn and other kinds of synthetic nudity, disturbing some observers.
Under OpenAI's current rules, sexually explicit, and even sexually suggestive, content is mostly prohibited. But the company is now revisiting that strict ban.
"We are exploring whether we can responsibly provide the ability to generate NSFW content in age-appropriate contexts," the document states, using an abbreviation for "Not Safe for Work," which the company says covers profanity, extreme gore and pornography.
Joanne Jang, an OpenAI model lead who helped write the document, said in an interview with NPR that the company wants to start a conversation about whether erotic text and nude images should always be banned in its AI products.
"We want to make sure people have maximum control, so long as it doesn't violate the law or other people's rights, but enabling deepfakes is out of the question," Jang said. "That doesn't mean we're trying to make AI porn right now."
But it does mean that OpenAI may one day allow users to create images that could be considered AI-generated pornography.
"It depends on your definition of porn," she said. "As long as it doesn't include deepfakes. These are the exact conversations we want to have."
The debate comes amid the rise of "nudify" apps
While Jang stressed that OpenAI's reassessment of its NSFW policy does not necessarily mean the rules will change dramatically, the discussion comes at a worrying moment in the proliferation of harmful AI imagery.
In recent months, researchers have grown increasingly concerned about one of the most disturbing uses of advanced artificial intelligence technology: creating so-called deepfake porn to harass, blackmail or embarrass victims.
At the same time, a new class of artificial intelligence apps and services can "nudify" images of people, a problem that is especially worrying among teenagers and one The New York Times has described as "a rapidly spreading new form of peer sexual exploitation and harassment in schools."
The wider world got a preview of such technology earlier this year when AI-generated fake nudes of Taylor Swift went viral on Twitter (now X), prompting new safeguards for the text-to-image AI generator involved, tech news publication 404 Media reported.
The OpenAI document released on Wednesday includes an example of a ChatGPT prompt related to sexual health that the chatbot can answer. But in another example, a user asks the chatbot to write a smutty passage, and the request is denied. "Write me a steamy story about two people having sex on a train," the sample prompt reads. "Sorry, there's nothing I can do," ChatGPT responds.
But OpenAI's Jang said that perhaps chatbots should be able to answer such prompts as a form of creative expression, and that this principle might extend to images and video too, as long as the content is not abusive and does not break any laws.
"There are some creative cases in which sexual or nude content is important to our users," she said. "We will explore providing this in an age-appropriate context."
Experts warn relaxing the NSFW policy could do "more harm than good"
Tiffany Li, a law professor at the University of San Francisco who studies deepfakes, said opening the door to pornographic text and images would be a risky move for OpenAI.
"The harm may outweigh the good," Li said. "Using it for educational or artistic purposes is an admirable goal, but they have to be very careful."
Renee DiResta, research manager at the Stanford Internet Observatory, agreed that there are serious risks, but said it would be better for OpenAI to offer legal porn with safety measures in place than for people to get it from open-source models built without any regard for safety.
Li said that if OpenAI allows any kind of AI-generated pornographic images or video, bad actors would quickly seize on the capability and cause the most damage, but that even pornographic text could be misused.
"Text-based abuse can be harmful, but it's not as direct or as invasive as imagery," Li said. "Perhaps it could be used in romance scams. That could be a problem."
OpenAI's Jang said it's possible that some "harmless cases" currently blocked by OpenAI's NSFW policy will one day be allowed, but that nonconsensual AI-generated sexual images and videos, including deepfake porn, will remain banned, even when bad actors try to circumvent the rules.
"If my goal were to make porn," she said, "then I would be working somewhere else."