Stanford prof ‘likely’ used AI chatbot like ChatGPT for court submission: lawyers

A Stanford professor serving as an expert in a federal court lawsuit over fakery created by artificial intelligence submitted a sworn declaration containing false information likely made up by an AI chatbot, a legal filing claims.

The declaration submitted by Jeff Hancock, professor of communication and founding director of the Stanford Social Media Lab, “cites a study that does not exist,” the Nov. 16 filing by the plaintiffs in the case alleged. “Likely, the study was a ‘hallucination’ generated by an AI large language model like ChatGPT.”

Hancock and Stanford did not immediately respond to requests for comment.

The lawsuit was brought in Minnesota District Court by a state legislator and a satirical YouTuber seeking a court order declaring unconstitutional a state law criminalizing election-related, AI-generated “deepfake” photos, video and sound.

Hancock, according to the court filing Saturday, was brought in as an expert by Minnesota’s attorney general, a defendant in the case.

The filing by the lawmaker and YouTuber questioned Hancock’s reliability as an expert witness, and argued that his report should be thrown out because it might contain more undiscovered AI fabrications.

In his 12-page submission to the court, Hancock said he studies “the impact of social media and artificial intelligence technology on misinformation and trust.”

Submitted with Hancock’s report was his list of “cited references,” court records show. One of those references — to a study by authors named Huang, Zhang and Wang — caught the attention of lawyers for state representative Mary Franson and YouTuber Christopher Kohls, who is also suing California Attorney General Rob Bonta over a law allowing damages-seeking lawsuits over election deepfakes.

Hancock cited the study, purportedly appearing in the Journal of Information Technology & Politics, to support a point he made in his submission to the court about the sophistication of deepfake technology. The publication is real. But the study is “imaginary,” the filing by lawyers for Franson and Kohls alleged.

The journal volume and article pages cited by Hancock do not address deepfakes, but instead cover online discussions by presidential candidates about climate change, and the impact of social media posts on election results, the filing said.
