Misinformation expert Jeff Hancock has admitted to using OpenAI's ChatGPT to organize citations in a legal filing, after the chatbot's fabricated references, known as "hallucinations," called the integrity of the document into question.
What Happened: Hancock, founder of the Stanford Social Media Lab, admitted that ChatGPT introduced errors while assisting him in drafting his affidavit. According to Hancock, these inaccuracies do not affect the document's core arguments.
"I wrote and reviewed the substance of the declaration, and I stand firmly behind each of the claims made in it, all of which are supported by the most recent scholarly research in the field and reflect my opinion as an expert regarding the impact of AI technology on misinformation and its societal effects," Hancock wrote in a subsequent filing.
Hancock's affidavit supports Minnesota's "Use of Deep Fake Technology to Influence an Election" law, which is being challenged in federal court by Christopher Kohls, known on YouTube as Mr. Reagan, and state Rep. Mary Franson.
Their attorneys labeled the document "unreliable" because of its non-existent citations and asked the court to exclude it.
Hancock clarified that he used ChatGPT to organize his research and citations, not to write the declaration itself. He reiterated that he stands behind the affidavit's claims, which he says are grounded in scholarly research. While using Google Scholar and GPT-4o to find relevant articles, he inadvertently introduced the fabricated citations into the document.
"I did not intend to mislead the Court or counsel," said Hancock.
Why It Matters: The incident underscores ongoing concerns about AI's reliability in legal contexts.
In May 2023, a lawyer faced similar scrutiny after ChatGPT fabricated non-existent court cases cited in a legal brief, ultimately leading to court sanctions.
The episode highlights the persistent challenge of AI "hallucinations," a term that has gained traction as generative chatbots have come into widespread use.
The rapid evolution of AI technology, exemplified by the launch of GPT-4, the successor to the model that originally powered ChatGPT, has prompted tech leaders such as Elon Musk and OpenAI CEO Sam Altman to caution against its potential risks.