How an Iranian Influence Operation Used ChatGPT to Target the U.S. Presidential Election


Context

Recently, OpenAI announced that it had banned ChatGPT accounts connected to an Iranian influence operation that were generating content intended to sway the U.S. presidential election.

Storm-2035

  • OpenAI identified a group involved in an Iranian influence operation, dubbed “Storm-2035,” which operated four websites posing as news outlets.
  • These websites, including EvenPolitics, Nio Thinker, Westland Sun, Teorator, and Savannah Time, exploited divisive issues such as LGBTQ rights and the Israel-Hamas conflict to sway U.S. voters.
  • According to a Microsoft Threat Analysis Center (MTAC) report, the sites used AI tools to plagiarize content and drive web traffic. The operation targeted both liberal and conservative voters in the U.S.

Use of ChatGPT to influence the U.S. Presidential election

  • OpenAI discovered that operatives from the Storm-2035 group used ChatGPT to generate long-form articles and social media comments, which were then posted on X and Instagram accounts.
  • These AI-generated posts mimicked American language patterns, rehashed existing comments or propaganda, and significantly reduced the time needed to produce and distribute plagiarized content aimed at influencing voters.
  • The operation not only targeted the upcoming U.S. Presidential election but also covered global issues such as Venezuelan politics, Latino rights in the U.S., the situation in Palestine, Scottish independence, and Israel’s participation in the Olympic Games.

Impact

  • OpenAI has downplayed the severity of the Storm-2035 incident, noting that the content generated by the operation received minimal engagement on social media.
  • Using Brookings’ Breakout Scale, which rates the impact of covert operations from 1 to 6, the report categorized this operation as low-end Category 2.
  • This means the content was posted on multiple platforms but didn’t gain traction among real users.
  • Despite this, OpenAI emphasised that it had shared the threat intelligence with relevant government, campaign, and industry stakeholders.
  • While OpenAI considered the disruption of this Iran-linked influence operation a positive outcome, it also acknowledged the serious implications of foreign operatives using generative AI tools to target U.S. citizens.
  • The incident underscores multiple vulnerabilities across OpenAI, social media platforms like X and Instagram, and search engines like Google that rank the websites involved.

Other similar issues OpenAI faced in the past

  • In May, OpenAI disclosed that it had spent over three months dismantling covert influence operations that used its AI tools to generate social media comments, articles in various languages, and fake profiles, and to translate or proofread content.
  • One Russian group, dubbed “Bad Grammar,” used Telegram to target Ukraine, Moldova, the Baltic States, and the U.S.
  • Other operations included “Doppelganger” from Russia, “Spamouflage” from China, and the “International Union of Virtual Media” (IUVM) from Iran.
  • These groups used ChatGPT to write social media comments and articles on platforms like X and 9GAG.
  • They focused on topics like Russia’s invasion of Ukraine, the Gaza war, elections in India and Europe, and criticism of the Chinese government.
  • OpenAI also exposed instances of state-backed actors using AI for malicious purposes.
  • In July, it was revealed that a hacker had accessed its internal messaging systems the previous year, stealing information related to its AI technologies.
  • Although the hacker was a private individual, the breach raised concerns about potential threats from Chinese adversaries.

Steps taken by OpenAI to secure its tech

  • During its research into influence operations, OpenAI found that its AI tools effectively refused to generate certain text or images because of built-in safeguards.
  • The company has also developed AI-powered security tools that can now detect threat actors in days instead of weeks.
  • Although not widely discussed, OpenAI has deepened its ties with U.S. federal agencies.
  • In June, OpenAI appointed cybersecurity expert and retired U.S. Army General Paul M. Nakasone to its Board of Directors.
  • Nakasone, who previously led the U.S. National Security Agency, has vast experience leading cyber units across the U.S., Korea, Iraq, and Afghanistan.
  • Recently, OpenAI also announced a partnership with the U.S. AI Safety Institute, allowing the institute to preview and test its upcoming foundation model, GPT-5.

Source: The Hindu
