Bandung, IndonesiaSentinel.com — In response to growing concerns over the misuse of artificial intelligence (AI) to create deepfake pornography, tech giants Microsoft and Adobe have announced they will remove nude images from the datasets used to train their AI products. This move is part of a broader industry effort to prevent the spread of deepfake pornographic content.
As part of a voluntary agreement brokered by the Biden administration, several tech companies, including Adobe, Anthropic, Cohere, Microsoft, and OpenAI, pledged to eliminate explicit images from their AI training data when necessary. The White House announced the commitments as part of a larger initiative aimed at combating image-based sexual abuse, including the sexual exploitation of children and the creation of AI-generated pornographic deepfakes of adults.
The White House Office of Science and Technology Policy (OSTP) highlighted the urgency of addressing the issue, noting that the volume of explicit AI-generated images has skyrocketed, with women and children disproportionately targeted. The OSTP described these images as one of the most harmful applications of AI technology today.
According to a report by AP News, Common Crawl, an open-source data repository frequently used for AI training, has joined the tech companies in their commitment to curbing deepfake pornography. Common Crawl collects vast amounts of internet data, and its open-source datasets are widely used to train AI chatbots and image generators.
The organization has now committed to responsibly overseeing its data sources and safeguarding them against image-based sexual exploitation.
In a separate announcement on Thursday, September 12, another group of companies, including Bumble, Discord, Match Group, Meta, Microsoft, and TikTok, unveiled a set of voluntary principles to prevent image-based sexual abuse. The announcement coincided with the 30th anniversary of the Violence Against Women Act (VAWA), reinforcing the companies' commitment to addressing this growing digital threat.
These efforts reflect a growing recognition of the dangers posed by AI-generated deepfake pornography, as the technology continues to evolve and become more accessible. As tech companies commit to better safeguarding their data sources, the focus will now shift to ensuring that AI products are developed responsibly and that image-based sexual abuse is curtailed.
(Raidi/Agung)