OpenAI said it has removed a network of ChatGPT accounts used by an Iranian threat actor for influence operations. The company has linked the accounts to a group tracked as Storm-2035, first identified in April 2024.
Earlier this month, Microsoft detailed a covert influence operation by the same group involving websites masquerading as news outlets targeting US voter groups.
In the operation discovered by OpenAI, the threat actor used the accounts to generate long-form articles and shorter social media comments. The articles, focusing on US politics and global events, were published on five websites that posed as progressive and conservative news outlets. Meanwhile, the shorter comments, in both English and Spanish, were posted on social media platforms.
The group also leveraged fake accounts on X (formerly known as Twitter) and on Instagram for the same purpose. Some X accounts posed as progressives and others as conservatives, and they generated comments by asking the AI models to rewrite existing social media posts.
The operation's content covered a wide range of topics, including the conflict in Gaza, Israel's presence at the Olympic Games, and the US presidential election. Additionally, it touched on Venezuelan politics, the rights of Latinx communities in the US, and Scottish independence. In an effort to appear more authentic or build a following, the operation also mixed political content with comments about fashion and beauty.
The Storm-2035 influence operation did not achieve significant audience engagement, OpenAI noted. The majority of identified social media posts received few or no likes, shares, or comments, and there was no evidence that the web articles were widely shared across social media.