“OpenAI Thwarts Misinformation Campaigns Utilizing AI Models Across Multiple Platforms”
According to a report on Thursday, OpenAI, under the leadership of CEO Sam Altman, disclosed that it had disrupted five separate campaigns employing its AI models for “deceptive activity” online. These campaigns, active over the past three months, involved the creation of fake social media profiles, brief comments, and longer articles in multiple languages, all aimed at spreading misinformation and shaping public opinion.
The campaigns, identified by OpenAI, spanned various geopolitical issues including the conflict in Ukraine, the recent war in Gaza, Indian elections, and political landscapes in Europe and the United States. Threat actors associated with Russia, China, Iran, and Israel were implicated in these operations.
OpenAI condemned these actions, labeling them as deliberate attempts to manipulate public sentiment and influence political outcomes. This revelation underscores growing concerns regarding the potential misuse of generative AI technology, capable of producing remarkably human-like text, images, and audio.
To address these concerns, OpenAI, backed by tech giant Microsoft, announced the formation of a dedicated Safety and Security Committee earlier this week. Led by Altman and other board members, the committee will oversee the training of the company’s next-generation AI models.
OpenAI said the disrupted campaigns did not achieve significant reach or audience engagement, thanks in part to the company’s countermeasures. The campaigns used a blend of AI-generated content, manually written text, and repurposed internet memes.
The disclosure follows a similar finding by Meta Platforms, the parent company of Facebook and Instagram, which identified “likely AI-generated” content used deceptively on its platforms. This included strategically placed comments supporting Israel’s actions in the Gaza conflict, posted under content from prominent news outlets and US lawmakers.