Stanford misinformation expert admits chatbot led to misinformation in court filing

A Stanford University misinformation expert who was called out in a federal court case in Minnesota for submitting a sworn declaration that contained made-up information has blamed an artificial intelligence chatbot.

And the bot generated more errors than just the one highlighted by the plaintiffs in the case, professor Jeff Hancock wrote in an apologetic court filing, saying he had not intended to mislead the court or any lawyers.

“I express my sincere regret for any confusion this may have caused,” Hancock wrote.

Lawyers for a YouTuber and a Minnesota state legislator suing to overturn a Minnesota law said in a court filing last month that Hancock’s expert-witness declaration cited a study, attributed to authors Huang, Zhang and Wang, that did not exist. They believed Hancock had used a chatbot in preparing the 12-page document and called for the submission to be thrown out because it might contain more, undiscovered AI fabrications.

It did: After the lawyers called him out, Hancock found two other AI “hallucinations” in his declaration, according to his filing in U.S. District Court in Minnesota.

The professor, founding director of the Stanford Social Media Lab, was brought into the case by Minnesota’s attorney general as an expert defense witness. The legislator and the satirist YouTuber are seeking a court order declaring unconstitutional a state law that criminalizes election-related, AI-generated “deepfake” photos, video and sound.

Hancock’s legal imbroglio illustrates one of the most common problems with generative AI, a technology that has taken the world by storm since San Francisco-based OpenAI released its ChatGPT bot in November 2022. AI chatbots and image generators often produce errors known as hallucinations: in text, misinformation; in images, absurdities like six-fingered hands.

In his regretful filing with the court, Hancock — who studies AI’s effects on misinformation and trust — detailed how his use of OpenAI’s ChatGPT to produce his expert submission led to the errors.

Hancock confessed that in addition to the fake study attributed to Huang, Zhang and Wang, he had also included in his declaration “a nonexistent 2023 article by De keersmaecker & Roets,” plus four “incorrect” authors for another study.

Seeking to bolster his credibility with “specifics” of his expertise, Hancock claimed in the filing that he co-wrote “the foundational piece” on communication mediated by AI. “I have published extensively on misinformation in particular, including the psychological dynamics of misinformation, its prevalence, and possible solutions and interventions,” Hancock wrote.

He used ChatGPT’s GPT-4o model to help find and summarize articles for his submission, but the errors were likely introduced later, as he was drafting the document, Hancock wrote in the filing. He had inserted the word “cite” into the text he gave the chatbot as a reminder to himself to add academic citations to the points he was making, he wrote.

“The response from GPT-4o, then, was to generate a citation, which is where I believe the hallucinated citations came from,” Hancock wrote, adding that he believed the chatbot also made up the four incorrect authors.
