China, Iran Exploit U.S. AI for Covert Influence, OpenAI Finds

A recent report from OpenAI uncovers how adversaries, including actors linked to China and Iran, are leveraging American artificial intelligence (AI) models to manipulate global political narratives. The February report highlights new threats involving AI misuse, particularly through platforms like ChatGPT.
AI Models Used for Covert Influence Operations
According to OpenAI’s findings, Chinese and Iranian threat actors have attempted to exploit AI-powered tools for disinformation campaigns. The report details two significant disruptions of operations originating in China, in which malicious actors used AI models from OpenAI and Meta to spread misinformation.
In one instance, OpenAI banned a ChatGPT account that generated comments critical of Chinese dissident Cai Xia. These comments were later disseminated on social media by accounts masquerading as individuals from India and the U.S., though they failed to gain significant traction online.
Additionally, the same threat actor used ChatGPT to craft Spanish-language news articles denigrating the U.S., which were published in mainstream Latin American media. Some of these articles carried bylines attributed to individuals or Chinese companies. In at least one case, a translated version was labeled as sponsored content, indicating that its placement was paid for.
First Instance of Chinese Actors Targeting Latin America
OpenAI describes this as the first known case of a Chinese entity placing anti-U.S. narratives in Latin American media through AI-generated content. OpenAI principal investigator Ben Nimmo emphasized the significance of AI’s role in linking the social media activity to the published articles, which provided a clearer picture of the covert influence operation.
“This is a troubling insight into how non-democratic actors exploit U.S.-based AI for undemocratic purposes,” Nimmo stated during a press briefing.
Iranian Influence Networks and AI Abuse
OpenAI also uncovered an Iranian-linked operation where ChatGPT-generated tweets and articles were published on platforms associated with Iranian influence campaigns. While these activities were initially viewed as separate, the report suggests a potential overlap among Iranian influence networks, raising concerns about coordinated disinformation efforts.
AI Models Used for Online Scams
Beyond political influence, threat actors exploited AI for financial scams. OpenAI banned multiple ChatGPT accounts used in “romance-baiting” scams, commonly known as “pig butchering.” These fraudulent schemes targeted users across social media platforms such as X, Facebook, and Instagram. Further investigation by Meta linked these activities to a newly established scam operation in Cambodia.
Strengthening AI Security Against Threat Actors
OpenAI says it was the first AI research lab to proactively publish reports on preventing abuse of its models. Since its initial report, the company has significantly enhanced its investigative capabilities, disrupting a range of malicious uses of AI.
The report underscores the importance of collaboration with industry partners, governments, and social media platforms to curb AI-driven disinformation. OpenAI advocates for information-sharing with upstream providers (hosting and software companies) and downstream distributors (social media platforms and researchers) to track and mitigate AI-powered threats effectively.
“We know that threat actors will continue testing our defenses. We are committed to identifying, preventing, disrupting, and exposing attempts to misuse our AI models,” OpenAI affirmed in its report.
As AI technology advances, the battle against misinformation and AI-powered deception intensifies. OpenAI’s latest findings reinforce the need for robust AI security measures and international cooperation to safeguard digital ecosystems from adversarial influence.
Source: Fox News
Frequently Asked Questions
How are China and Iran using AI for covert influence?
According to OpenAI's report, threat actors from China and Iran are exploiting AI models like ChatGPT to spread misinformation, generate propaganda, and manipulate public opinion through social media and news articles.
What AI models are being misused for disinformation?
Threat actors have attempted to use AI models developed by OpenAI and Meta to create misleading content, including social media posts, fake news articles, and political narratives.
How does AI-generated misinformation impact global politics?
AI-generated misinformation can influence public opinion, disrupt democratic processes, and shape political narratives by spreading false or biased information on social media and news platforms.
What actions has OpenAI taken against these threats?
OpenAI has banned several ChatGPT accounts linked to these influence campaigns and continues to enhance its AI security, collaborate with industry partners, and monitor AI abuse.
How can governments and organizations prevent AI misuse?
Preventing AI misuse requires stronger regulations, AI monitoring systems, collaborations between AI companies and social media platforms, and increased awareness of AI-generated misinformation.
How does AI contribute to the spread of misinformation?
AI contributes to the spread of misinformation by enabling the creation of convincing fake content, such as text, images, and videos, which can be disseminated rapidly across social media and news platforms. According to NewsGuard, there are over 1,200 unreliable AI-generated news websites, highlighting the scale of this issue.
What measures are tech companies taking to combat AI-generated misinformation?
Tech companies like Google and OpenAI are implementing various measures to combat AI-generated misinformation. Google has improved its AI systems with better detection mechanisms, limited user-generated content in responses, and added restrictions for sensitive queries. OpenAI has banned accounts linked to influence campaigns and is enhancing its AI security protocols.
How can AI be used to detect and counter misinformation?
AI can be leveraged to detect and counter misinformation by analyzing large datasets for patterns of false information, identifying anomalies in text or media, and automating the process of flagging suspicious content on social media platforms.
What are the challenges in regulating AI-generated content?
Regulating AI-generated content presents several challenges, including keeping pace with rapidly evolving technology, attributing responsibility for AI-generated actions, and balancing the need to prevent harm with protecting free speech. Additionally, excessive focus on AI’s role in misinformation might overshadow other important issues.
Are there any successful examples of combating AI-generated misinformation?
While comprehensive success stories are still emerging, there are notable efforts by tech companies to combat AI-generated misinformation. For example, Google has made technical improvements to its AI systems to reduce the presentation of inaccurate information in search results. OpenAI has also taken action by banning accounts involved in influence campaigns. These steps demonstrate progress in addressing the issue.
About the Author

Michael
Michael David is a visionary AI content creator and proud Cambridge University graduate, known for blending sharp storytelling with cutting-edge technology. His talent lies in crafting compelling, insight-driven narratives that resonate with global audiences. With expertise in tech writing, content strategy, and brand storytelling, Michael partners with forward-thinking companies to shape powerful digital identities. Always ahead of the curve, he delivers high-impact content that not only informs but inspires.