DÜSSELDORF, Germany — The local official crusading for Germany to hide all sexual content behind an age verification wall unveiled this week an AI tool called KIVI, which automatically scans all online content and flags images that are not compliant with the law.
Tobias Schmid, director of the State Media Authority of North Rhine-Westphalia and one-man War on Porn, held a press conference yesterday boasting about the surveillance tool. He explained that he coined the name KIVI from “KI” — the German abbreviation for AI — and “VI” from the Latin word “vigilare,” meaning “to keep watch.”
While some might view such surveillance as an authoritarian fantasy, Schmid called it a “fantastic combination,” German tech news site NetzPolitik.org reported yesterday.
NetzPolitik reporter Sebastian Meineck, who has been covering Schmid’s meticulous, obsessive attempts to ban all sexual content from open platforms in Germany and Europe, wrote that “pornography” is one of the clearest targets of the local bureaucrat’s new surveillance engine.
Schmid included the category among others like “extremism, hate speech, swastikas or the glorification of drugs” that his “most modern” KIVI, which he has already expanded nationwide, has been trained to detect.
KIVI was developed by Berlin-based Condat under Schmid’s supervision and is now reportedly being used by all 14 state media authorities in Germany.
KIVI is currently scanning images, text and videos on all websites, as well as apps like “Telegram, Twitter, YouTube, TikTok, the Russian Facebook competitor VKontakte and the video portal Bitchute,” NetzPolitik reported.
“KIVI recognized 20,000 potential violations within a year,” the report continued. “The violations are then checked by humans [employed and selected by Schmid] using a ticket system. According to the NRW Media Authority, there are around eight student assistants and seven lawyers. After sorting out erroneous hits, 6,700 violations remained. Some KIVI hits lead to reports on the platforms, and others also to criminal charges.”
Schmid has described the cooperation between his agency and the State Criminal Police Office of North Rhine-Westphalia as “very good.”
In general, NetzPolitik's Meineck points out, “artificial intelligence often adopts biases and prejudices from its training data,” yet Schmid claimed with characteristic determination that the AI tool is “neutral” and “so wonderfully uncorrupt.”
To protect reviewers from potentially traumatic content, KIVI initially presents blurred images in a small preview window; employees can sharpen them with a slider to determine whether they contain violence or pornography.
“I am pretty sure that we will gradually get the mass of cases under control,” Schmid concluded.
Main Image: Tobias Schmid, director of the State Media Authority of North Rhine-Westphalia