MOUNTAIN VIEW, Calif. — In a new paper, Google researchers continue to classify explicit sexual material in the same category as “fake, hateful, or harmful content” that requires filtering.
In the research paper describing the company’s proprietary AI technology, Google researchers write that although generative AI models, such as the text-to-image system Dall-E 2 known for its viral images, have made “tremendous progress,” Google has decided not to release its own text-to-video model, called Imagen Video, until “concerns are mitigated” regarding potential misuse, “for example to generate fake, hateful, explicit or harmful content.”
In other words, as the tech news site TweakTown, which first flagged the Google paper, editorialized, Google “has subtly said that it won't be releasing its new video-generating artificial intelligence system over it producing gore, porn and racism.”
Google’s researchers and policymakers view depictions of human sexuality as part of the “problematic data” that, in their view, presents “important safety and ethical challenges.”
The researchers remain hopeful that they can one day develop better tools for censoring sexual content, but conclude that, at present, “though our internal testing suggests much of explicit and violent content can be filtered out, there still exists social biases and stereotypes which are challenging to detect and filter.”
The company has therefore decided not to release Imagen Video until it can fully censor “problematic content,” including “explicit material.”
The only paper the researchers cite that deals directly with explicit content is titled “Multimodal Datasets: Misogyny, Pornography, and Malignant Stereotypes,” which similarly bundles “pornography” into a category of “troublesome” material such as rape and racist slurs.