
Midjourney, the AI image generator known for creating fake images of Donald Trump’s arrest, has banned image prompts containing the names of the current president, Joe Biden, and his presumptive challenger, former president Donald Trump.

This decision follows warnings from experts and advocates about the potential misuse of AI technology to sway voters and disseminate false information before the 2024 presidential election.


Tests conducted by the Associated Press revealed a “Banned Prompt Detected” alert when requesting images of “Trump and Biden shaking hands at the beach.” Subsequent attempts triggered an “abuse alert” from the image generator, as reported by the publication.

Midjourney CEO David Holz said, “I don’t really care about political speech. That’s not the purpose of Midjourney. It’s not that interesting to me. That said, I also don’t want to spend all of my time trying to police political speech. So, we’re going to have to put our foot down on it a bit.” Speaking at a press event on March 13, Holz said Midjourney had considered banning such prompts the previous month. He also voiced concern about a potentially more alarming AI landscape in 2028, when malicious actors could refine deepfakes and chatbots well beyond current capabilities, admitting that “this moderation stuff is kind of hard.”

Other generative AI tools have implemented similar prompt restrictions to curb the circulation of disturbing images. Last year, Microsoft’s Bing Image Generator blocked prompts containing the term “twin towers” to stem a wave of memes depicting animated characters recreating the 9/11 attacks, though meme creators quickly found ways around the limitation.

Shortly after, OpenAI made significant enhancements to its advanced image generator, DALL-E 3, introducing stricter usage guidelines and a “multi-tiered safety system” to restrict the tool’s capability to generate violent, hateful, or adult content, as reported by Mashable’s Chance Townsend. OpenAI also released specific rules regarding election disinformation in January, mentioning that DALL-E 3 could reject image requests depicting real individuals, including political candidates.

In February, OpenAI disclosed that it had identified and disabled accounts linked to foreign state-affiliated malicious actors exploiting its generative AI technologies.

Midjourney, by contrast, had not previously issued a statement or implemented new measures to address election disinformation. While its guidelines bar users from generating images “for political campaigns, or to try to influence the outcome of an election,” Midjourney was among the few leading AI image generators that did not sign a recently proposed voluntary industry agreement to adopt precautions against deepfakes and disinformation. A report from the Center for Countering Digital Hate last year found that Midjourney users could easily circumvent community guidelines and safeguards to create consistently conspiratorial and racially prejudiced images.

A more recent study by the nonprofit evaluated several image generators, including Midjourney, on how well they blocked prompts designed to produce election disinformation. Across all tools tested, generated images contained election disinformation in 41 percent of cases; Midjourney performed worst, failing to block disinformation in 65 percent of tests.

“Midjourney’s public database of AI images indicates that malicious actors are already leveraging the tool to produce images that could support election disinformation,” cautioned the center.



Chase DiBenedetto
Social Good Reporter

Chase joined Mashable’s Social Good team in 2020, covering online stories about digital activism, climate justice, accessibility, and media representation. Her work also touches on how these conversations manifest in politics, popular culture, and fandom. Sometimes she’s very funny.

