
OpenAI has banned the accounts of a number of users linked to China, which it says were used to generate propaganda materials published in mainstream newspapers in Latin America.
In an updated report spotted by Reuters, OpenAI points to several incidents in which it believes ChatGPT was used to generate Spanish-language newspaper articles criticizing the US, which were then published in well-known newspapers in Mexico, Peru, and Ecuador. The articles focused on political divisions in the US and current affairs, particularly the topics of drug use and homelessness.
The users reportedly prompted ChatGPT in Chinese to generate the Spanish-language articles, during mainland Chinese working hours. OpenAI says they also used ChatGPT to translate receipts from Latin American newspapers, indicating the articles may well have been paid placements.
ChatGPT was also allegedly used by the accounts to generate short-form material, including comments critical of Cai Xia, a well-known Chinese political dissident, which were then posted on X by users claiming to be from the US or India.
“This is the first time we’ve observed a Chinese actor successfully planting long-form articles in mainstream media to target Latin American audiences with anti-US narratives, and the first time this company has appeared linked to deceptive social media activity,” OpenAI says.
OpenAI says some of the activity is consistent with the covert influence operation known as “Spamouflage,” a major Chinese disinformation operation observed on over 50 social media platforms, including Facebook, Instagram, TikTok, Twitter, and Reddit. The campaign, identified by Meta in 2023, targeted users in the US, Taiwan, UK, Australia, and Japan with positive information about China.
In May 2024, OpenAI reported that groups based in Russia, China, Iran, and Israel had used the company’s AI models to generate short comments on social media, as well as to translate and proofread text in various languages. For example, a Russian propaganda group known as Bad Grammar used OpenAI’s technology to generate fake replies about Ukraine to specific posts on Telegram in English and Russian.
But though we have seen international propaganda groups leverage OpenAI’s tools before, OpenAI considers the latest incident unique because of its targeting of mainstream media, calling it “a previously unreported line of effort, which ran in parallel to more typical social media activity, and may have reached a significantly wider audience.”
About Will McCurdy
Contributor
