
Legal troubles accompanying GenAI hallucinations are not fantasies

Fears arise from potentially deadly fabrications

OpenAI is tackling AI “hallucinations” by applying a novel way of training large language models.

GenAI systems producing output that is imagined rather than based on facts has been a hotly debated issue, a key weakness in otherwise highly acclaimed products. OpenAI’s ChatGPT chatbot, powered by GPT-3.5 and GPT-4, reached over 100 million monthly users within two months of launch. Following Microsoft investments of over $13 billion and other private sector funding in OpenAI that together garnered nearly 30% equity in the company, the startup’s valuation has reached roughly $29 billion.

These hallucinations include mistakes in dates and reasoning, as well as citations, quotes, and references that are entirely fictitious.

GenAI-wide hallucinations

Hallucinations are not limited to ChatGPT; competitors like Google’s Bard also fabricate information that sounds real.

As AI-powered chatbots continue to advance, concerns about misinformation and harmful content grow, along with the risk of AI-generated content creating misleading perceptions.

“Even state-of-the-art models are prone to producing falsehoods — they exhibit a tendency to invent facts in moments of uncertainty,” the OpenAI researchers wrote in the report. “These hallucinations are particularly problematic in domains that require multi-step reasoning since a single logical error is enough to derail a much larger solution.”

Solving GenAI hallucinations

OpenAI is addressing these challenging reasoning problems by training AI models to be rewarded for each individual correct step of reasoning on the way to an answer, instead of only rewarding a correct final conclusion. This more human-like approach is called “process supervision,” as opposed to “outcome supervision.”
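
To make the distinction concrete, the toy Python sketch below contrasts the two reward schemes. It is a minimal illustration only, not OpenAI’s actual code or API; the function names, step labels, and scoring rule are hypothetical stand-ins for the human step-level labels described in the paper. Outcome supervision scores only the final answer, while process supervision scores every intermediate reasoning step.

```python
# Toy contrast of "outcome supervision" vs "process supervision".
# All names and labels are illustrative assumptions, not OpenAI's implementation.

from typing import List


def outcome_reward(final_answer: str, correct_answer: str) -> float:
    """Outcome supervision: the reward depends only on the final conclusion."""
    return 1.0 if final_answer.strip() == correct_answer.strip() else 0.0


def process_reward(step_labels: List[int]) -> float:
    """Process supervision: each reasoning step carries its own label
    (1 = correct, 0 = incorrect), so a single logical error is penalized
    even when later steps look plausible."""
    if not step_labels:
        return 0.0
    return sum(step_labels) / len(step_labels)


if __name__ == "__main__":
    # Toy chain of reasoning with one faulty step (24 * 3 should be 72, not 84).
    steps = ["48 / 2 = 24", "24 * 3 = 84", "84 + 6 = 90"]
    step_labels = [1, 0, 1]

    print(outcome_reward(final_answer="90", correct_answer="78"))  # 0.0 (only the end result counts)
    print(process_reward(step_labels))                             # ~0.67 (credit per correct step)
```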

Read: Advanced ChatGPT, just what the doctor ordered?

OpenAI has released an accompanying dataset of 800,000 human labels it used to train the model mentioned in the research paper, Karl Cobbe, a mathgen researcher at OpenAI, told the media. The research team also indicated that the reward model performs better across the board.

Many experts have expressed skepticism, noting that until the findings are peer-reviewed, the results remain mere observations in an isolated setting.


Hallucinated GenAI article ends in a lawsuit

Publishing GenAI content can lead to legal troubles. Just ask the unnamed journalist who writes for an online gun website and asked OpenAI’s ChatGPT to summarize the legal case The Second Amendment Foundation v. Robert Ferguson. The AI chatbot completely missed the mark, producing an answer alleging that the case involved a Georgia radio host named Mark Walters, who was accused of embezzling money from The Second Amendment Foundation.

ChatGPT fabricated a claim that “Walters misappropriated funds for personal expenses without authorization or reimbursement, manipulated financial records and bank statements to conceal his activities, and failed to provide accurate and timely financial reports.”

The AI bot didn’t stop there. When prompted for an exact passage of the lawsuit mentioning Walters, the chatbot produced a bogus paragraph that does not exist in the actual complaint. The AI even got the case number wrong.

Following the publication of the article, Walters filed a libel lawsuit against OpenAI, claiming the company acted negligently by showing false information to the journalist.

This may not be a one-off case; it could lead to similar legal actions against GenAI makers over their products’ hallucinations.

If future plaintiffs can effectively plead to a judge that they lost a job, a contract, or other income because of chatbot hallucinations, it is possible these victims could win their cases.

Earlier in 2023, Brian Hood, a regional mayor in Australia, threatened to sue OpenAI after its model allegedly named him as a convicted criminal involved in a bribery scandal.

Also, George Washington University law professor Jonathan Turley, along with several other professors, was falsely accused of sexual harassment by ChatGPT, a story reported in a Washington Post article. The chatbot also hallucinated quotes in support of the claims.

Life-threatening GenAI hallucinations

Large language models (LLMs) are already widely used in healthcare, with medical startups deploying generative AI-powered tools to help in millions of real-life care situations. LLMs are also being used for drug discovery. In these settings, hallucinations produce responses that are inappropriate, untrue, or fabricated, yet delivered with a level of linguistic confidence that a non-expert cannot distinguish from the truth. Can some doctors and researchers be fooled into prescribing or administering the wrong drugs or doses? Time will tell.

For more on GenAI, click here.

The stories on our website are intended for informational purposes only. Those with finance, investment, tax or legal content are not to be taken as financial advice or recommendation. Refer to our full disclaimer policy here.