Question: I’m really intrigued by all of the AI chatbots available nowadays. From Copilot built into Windows and Gemini built into Google to Perplexity, Claude and ChatGPT, there are lots of choices. I’m worried about misinformation, however, so is there some way to have a chatbot verify its answers to avoid hallucinations?
Answer: Given the rise of AI everywhere, this is a very timely question! Every operating system, from TVs and automobiles to computers and smart watches, is gaining AI features. Microsoft has even said that the next version of Windows will not only be AI-centric, but agentic at its heart too.
You’re asking about inaccurate answers, but what about when your AI agent buys you plane tickets to Paris, Texas, when you wanted flights to Paris, France?
As any teacher knows, facts can be slippery, and it’s all too easy to make things up in pursuit of a strong argument or a convincing reply. Heck, you might have experienced this over the Thanksgiving table from Uncle José or Aunt Cho!
Large language model (LLM) AI systems are particularly susceptible to making things up, the so-called “hallucinations,” because at heart they’re autocomplete on steroids: they generate the most plausible-sounding next words, which are too often not the correct ones.
How to minimize the risk
There are ways to minimize the risk of misinformation and inaccuracy, however. My favorite approach is simply to ask the program to “cite sources for every factual statement.” Citations can themselves be fabricated, so check them, but the result is still going to be more trustworthy than “just give me an answer, don’t worry about where your numbers originate.”
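If you’re comfortable with a little programming and reach a chatbot through its API rather than a web page, you can bake that instruction into every request instead of retyping it. Here’s a minimal sketch, assuming the OpenAI Python SDK and an illustrative model name of my own choosing; the citations it returns still deserve a manual check.

```python
# A minimal sketch, assuming the OpenAI Python SDK (pip install openai)
# and an OPENAI_API_KEY environment variable. The model name is an
# illustrative assumption, not a recommendation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_with_citations(question: str) -> str:
    """Ask a question with a standing instruction to cite sources."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model works here
        messages=[
            {"role": "system",
             "content": "Cite a source for every factual statement you make."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask_with_citations("What were the major causes of the Civil War?"))
```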
You can also use cross-questioning, as demonstrated on every police procedural drama. Don’t just ask “what were the major causes of the Civil War?” but also ask “what do historians disagree about?” and “what do major textbooks list?”
Another interesting approach is to split out facts from interpretation: “List only verifiable facts” or “explicitly differentiate between facts and interpretation.”
I also sometimes ask the LLM to “verify your facts and explain any divergence in critical opinion,” or root the answer in a known source: “list the 10 best films about the Vietnam War based on IMDb ratings.”
Consider grounding the query in live Internet results rather than the AI’s training data, if possible, by starting with “Search the web for…” Ask the same question of multiple LLMs or, even better, ask one to critique the answer you’ve received from another AI chat system.
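That cross-check works in code too: fetch an answer from one provider, then hand it to a second provider for critique. This sketch assumes the OpenAI and Anthropic Python SDKs with illustrative model names; swap in whichever services you actually use.

```python
# A sketch of the cross-check idea, assuming the OpenAI and Anthropic
# Python SDKs (pip install openai anthropic) and API keys in the
# environment. Model names are illustrative assumptions.
from openai import OpenAI
import anthropic

openai_client = OpenAI()               # reads OPENAI_API_KEY
claude_client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY

question = "What were the major causes of the Civil War?"

# Step 1: get a first answer from one model.
first_answer = openai_client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": question}],
).choices[0].message.content

# Step 2: ask a different model to critique that answer.
critique = claude_client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": (
            f"Question: {question}\n\nAnother AI answered:\n{first_answer}\n\n"
            "Point out any factual errors, unsupported claims or omissions."
        ),
    }],
).content[0].text

print(critique)
```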
Finally, remember that a chatbot has a tendency to agree with your viewpoint, so try asking the same question from a different perspective. Example: “Was slavery the cause of the Civil War?” versus “Other than slavery, what else is commonly cited as a cause of the Civil War?”
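If you want to automate that reframing trick, a short loop can pose both framings and let you read the answers side by side. Again, a sketch under the same assumed SDK and model as above.

```python
# A sketch of the reframing tactic, with the same assumed SDK and model
# as above: pose the question from two angles and compare the answers.
from openai import OpenAI

client = OpenAI()

framings = [
    "Was slavery the cause of the Civil War?",
    "Other than slavery, what else is commonly cited as a cause of the Civil War?",
]

for prompt in framings:
    answer = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content
    print(f"Q: {prompt}\nA: {answer}\n")
```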
Be judicious, and don’t forget to fact-check manually before you broadcast information from an AI tool. These systems are darn helpful, but they’re not (yet) omniscient.
Dave Taylor has been involved with the online world since the beginning of the Internet. He runs the popular AskDaveTaylor.com tech Q&A site and invites you to subscribe to his weekly email newsletter at AskDaveTaylor.com/subscribe. You can also find his entertaining gadget reviews on YouTube at YouTube.com/AskDaveTaylor.