AI chatbots can ‘hallucinate’ and make things up—why it happens and how to spot it


When you hear the word "hallucination," you may think of hearing sounds no one else seems to hear or imagining your coworker has suddenly grown a second head while you're talking to them.

But when it comes to artificial intelligence, hallucination means something a bit different.

When an AI model "hallucinates," it generates fabricated information in response to a user's prompt, but presents it as if it's factual and correct.

Say you asked an AI chatbot to write an essay on the Statue of Liberty. The chatbot would be hallucinating if it stated that the monument was located in California rather than New York.

But the errors aren't always this obvious. In response to the Statue of Liberty prompt, the AI chatbot may also make up names of designers who worked on the project or state it was built in the wrong year.

This happens because large language models, the technology that powers AI chatbots, are trained on enormous amounts of data, which is how they learn to recognize patterns and connections between words and topics. They use this knowledge to interpret prompts and generate new content, such as text or images.

But since AI chatbots are essentially predicting the word that is most likely to come next in a sentence, they can sometimes generate outputs that sound correct, but aren't actually true.
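
To make that concrete, here is a deliberately oversimplified sketch in Python. The word list, probabilities and context are invented for illustration; a real chatbot uses a neural network over a huge vocabulary, not a hand-written table.

```python
import random

# Hypothetical next-word probabilities a model might have learned for the
# context "The Statue of Liberty is located in". The values are made up.
next_word_probs = {
    "New York": 0.55,
    "California": 0.25,  # sounds fluent in a sentence, but is wrong
    "France": 0.20,
}

def sample_next_word(probs):
    """Pick the next word at random, weighted by its probability."""
    words = list(probs)
    weights = [probs[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

context = "The Statue of Liberty is located in"
print(context, sample_next_word(next_word_probs))
# Most runs print "New York", but some runs confidently print "California":
# a well-formed sentence that simply isn't true.
```

The point of the toy example is that the model is optimizing for what sounds likely to come next, not for what has been verified as true, which is why its mistakes arrive with the same confident tone as its correct answers.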

A real-world example of this occurred when lawyers representing a client who was suing an airline submitted a legal brief written by ChatGPT to a Manhattan federal judge. The chatbot included fake quotes and cited non-existent court cases in the brief.

AI chatbots are becoming increasingly popular, and OpenAI even lets users build their own customized ones to share with other users. As we begin to see more chatbots on the market, understanding how they work — and knowing when they're wrong — is crucial.

In fact, "hallucinate," in the AI sense, is Dictionary.com's word of the year, chosen because it best represents the potential impact AI may have on "the future of language and life."

"'Hallucinate' seems fitting for a time in history in which new technologies can feel like the stuff of dreams or fiction — especially when they produce fictions of their own," a post about the word says.

How OpenAI and Google address AI hallucinations

Both OpenAI and Google warn users that their AI chatbots can make mistakes and advise them to double-check their responses.

Both companies are also working on ways to reduce hallucinations.

Google says one way it does this is through user feedback. If Bard generates an inaccurate response, users should click the thumbs-down button and describe why the answer was wrong so that Bard can learn and improve, the company says.

OpenAI has implemented a strategy called "process supervision." With this approach, instead of rewarding the system only for producing a correct final answer to a user's prompt, the model is rewarded for each correct step of reasoning it uses to arrive at the output.
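
Here is a minimal, hypothetical sketch of that distinction in Python. None of this is OpenAI's code; in the real system the step-level feedback comes from a reward model trained on human labels rather than the simple checker used below.

```python
# Hypothetical sketch contrasting the two reward styles. In OpenAI's actual
# work the step-by-step feedback comes from a reward model trained on human
# labels, not from a toy check like the one used here.

def outcome_reward(final_answer: str, correct_answer: str) -> float:
    """Outcome supervision: one reward signal, based only on the final answer."""
    return 1.0 if final_answer == correct_answer else 0.0

def process_reward(steps: list[str], step_is_valid) -> float:
    """Process supervision: reward every reasoning step that checks out,
    so faulty logic is penalized even if the final answer looks right."""
    return sum(1.0 for step in steps if step_is_valid(step)) / len(steps)

# Toy arithmetic "steps" and a toy checker that verifies each equation.
steps = ["2 + 2 = 4", "4 + 1 = 5", "5 * 2 = 10"]
check = lambda s: eval(s.replace("=", "=="))  # illustration only; never eval untrusted text
print(outcome_reward("10", "10"))    # 1.0 -- only the final number is judged
print(process_reward(steps, check))  # 1.0 -- every intermediate step is judged
```

The idea is that grading the reasoning, not just the answer, makes it harder for a model to reach a right-sounding conclusion through faulty logic.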

"Detecting and mitigating a model's logical mistakes, or hallucinations, is a critical step towards building aligned AGI [or artificial general intelligence]," Karl Cobbe, mathgen researcher at OpenAI, told CNBC in May.

And remember, while AI tools like ChatGPT and Google's Bard can be convenient, they're not infallible. When using them, be sure to analyze the responses for factual errors, even if they're presented as true.
