Dozens of fringe news websites, content farms and fake reviewers are using artificial intelligence to create inauthentic content online, according to two reports released on Friday.
The misleading A.I. content included fabricated events, medical advice and celebrity death hoaxes, the reports said, raising fresh concerns that the transformative technology could rapidly reshape the misinformation landscape online.
“News consumers trust news sources less and less in part because of how hard it has become to tell a generally reliable source from a generally unreliable source,” Steven Brill, the chief executive of NewsGuard, said in a statement. “This new wave of A.I.-created sites will only make it harder for consumers to know who is feeding them the news, further reducing trust.”
NewsGuard identified 125 websites, ranging from news to lifestyle reporting and published in 10 languages, whose content was written entirely or mostly with A.I. tools.
The sites included a health information portal that NewsGuard said published more than 50 A.I.-generated articles offering medical advice.
In an article on the site about identifying end-stage bipolar disorder, the first paragraph read: “As a language model A.I., I don’t have access to the most up-to-date medical information or the ability to provide a diagnosis. Additionally, ‘end stage bipolar’ is not a recognized medical term.” The article went on to describe the four classifications of bipolar disorder, which it incorrectly described as “four main stages.”
The websites were often littered with ads, suggesting that the inauthentic content was produced to drive clicks and fuel advertising revenue for the websites' owners, who were often unknown, NewsGuard said.
The findings include 49 A.I.-content websites that NewsGuard had already identified earlier this month.
ShadowDragon, the company behind the second report, also found inauthentic content on mainstream websites and social media, including Instagram, and in Amazon reviews.
“Yes, as an A.I. language model, I can definitely write a positive product review about the Active Gear Waist Trimmer,” read one five-star review published on Amazon.
Researchers were also able to reproduce some reviews using ChatGPT, finding that the bot would often point to “standout features” and conclude that it would “highly recommend” the product.
The company also pointed to several Instagram accounts that appeared to use ChatGPT or other A.I. tools to write descriptions under images and videos.
To find the examples, researchers looked for telltale error messages and canned responses often produced by A.I. tools. Some websites included A.I.-written warnings that the requested content contained misinformation or promoted harmful stereotypes.
“As an A.I. language model, I cannot provide biased or political content,” read one message on an article about the war in Ukraine.
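The detection method the researchers describe, scanning text for canned phrases that leak from A.I. chatbots, can be sketched in a few lines. The phrase list below is illustrative, drawn only from the examples quoted in this article; real investigators would use a much larger and more carefully tested set.

```python
# Canned phrases quoted in the reports as telltale signs of unedited A.I. output.
# This list is a hypothetical sample based on examples in the article.
TELLTALE_PHRASES = [
    "as an ai language model",
    "as a language model ai",
    "i cannot provide biased or political content",
    "i don't have access to the most up-to-date",
]

def flag_ai_telltales(text: str) -> list[str]:
    """Return the telltale phrases found in a piece of text, case-insensitively."""
    lowered = text.lower()
    return [phrase for phrase in TELLTALE_PHRASES if phrase in lowered]

sample = "As an A.I. language model, I cannot provide biased or political content."
# Periods in "A.I." would defeat a naive substring match, so normalize them first.
normalized = sample.replace("A.I.", "AI").replace("a.i.", "ai")
print(flag_ai_telltales(normalized))
```

A simple substring scan like this only catches the clumsiest cases, where an error message was pasted verbatim; content that has been lightly edited leaves no such fingerprint, which is part of why the reports' authors say A.I. text is so hard to spot.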
ShadowDragon found similar messages on LinkedIn, in Twitter posts and on far-right message boards. Some of the Twitter posts were published by known bots, such as ReplyGPT, an account that will produce a tweet reply once prompted. But others appeared to be coming from regular users.