Once again, Iran is in the spotlight for spreading disinformation. OpenAI has reported the "neutralization" of several accounts linked to the country that were aimed at influencing the upcoming US elections in November. In this operation, Iran used ChatGPT, the world's best-known AI chatbot, to fabricate false content intended to spread misinformation.
OpenAI released a statement confirming that "this week we identified and removed a group of ChatGPT accounts that were generating content for a covert Iranian influence operation called Storm-2035." Sam Altman, CEO of OpenAI, explained that the operation aimed to "generate content on a range of topics, including commentary on candidates on both sides of the US presidential election, which would then be distributed across social media accounts and websites".
This operation is not new and fits into a broader context of foreign influence operations targeting elections in the United States and other countries such as Venezuela, as well as the generation of disinformation about the wars in Ukraine and Gaza.
OpenAI noted that the campaign received "few or no likes, comments or shares" and that in no case was the fake news widely disseminated.
OpenAI stated that a Microsoft investigation led it to these accounts, which had been creating false content since at least 2020 and "actively engaging groups of US voters on opposite ends of the political spectrum with polarizing messages on topics such as the US presidential candidates, LGBTQ rights, and the Israel-Hamas conflict."
Tools
To identify these disinformation campaigns and assess their prevalence, OpenAI used the Brookings Breakout Scale, which rates the impact of covert influence operations on a scale of 1 (lowest) to 6 (highest), and concluded that "this operation ranked at the low end of category 2 (activity on multiple platforms, but with no evidence that real people picked up or shared the content widely)".
However, OpenAI found several long-form articles created with ChatGPT that were not shared via social media but were instead published on websites covering topics such as US politics, current events and world affairs. These articles were distributed on purpose-built websites that posed as conservative or progressive news outlets in order to polarize public opinion.
Among the websites posing as news outlets produced by this influence group are EvenPolitics, Nio Thinker, Savannah Time, Teorator, and Westland Sun. OpenAI also reports that these sites use AI services to copy parts of news articles from US media outlets, and warns of growing attempts by other Russian and Iranian groups to influence US elections.
According to Meta, the modus operandi of these propaganda networks is to pad their output with non-political posts that mimic publications from entertainment outlets such as Cosmopolitan and The New Yorker, in order to avoid detection.
These posts contain links that, when clicked, redirect to political propaganda articles hosted on fake domains. For its part, Meta says it has removed around 100 influence campaigns originating from Russia, China, Vietnam and Iran since 2017.
English and Spanish
OpenAI has confirmed that the Iranian accounts used its platform to generate comments in English and Spanish, which were later posted on X accounts and at least one Instagram account. Some of these comments were produced by asking the chatbot to rewrite messages circulated by other users.
"The operation generated content on a variety of topics: primarily the conflict in Gaza, Israel's presence at the Olympics and the US presidential election, and to a lesser extent politics in Venezuela, the rights of Latino communities in the US (in both Spanish and English) and Scottish independence. They mixed their political content with commentary on fashion and beauty, possibly to appear more authentic or to attract followers," reports OpenAI.
It's clear that the emergence of chatbots such as ChatGPT, Claude and Gemini is giving propaganda networks more resources to generate misinformation faster, while also forcing big tech companies to introduce stricter controls to verify information.