Artificial Intelligence (AI) tools like ChatGPT are changing the way we work and live. From content generation and customer support chatbots to predictive analytics, AI is beginning to show the potential of what it can help us achieve. However, as AI use grows, so do the risks, and one of the most significant is AI hallucinations: the confidently delivered but incorrect answers an AI returns in response to a prompt. This risk must be mitigated in order to get accurate, dependable results from AI in any business workflow. In this article, we’ll explore what AI hallucinations are, the dangers they pose, and how they can be mitigated.

What are AI Hallucinations?

Artificial Intelligence (AI) hallucinations are situations where an AI model produces an incorrect output that appears reasonable given the input data. They occur when the model is overly confident in its output, even though that output is wrong. In other words, an AI hallucination is a wrong answer that the model presents as though it were certainly correct.

Hallucinations often occur when the AI is answering a prompt without having all the information necessary to give an accurate answer. Rather than declining to answer, the model will frequently invent details to fill in the gaps. Just as people sometimes give a confident answer that turns out to be wrong, so does AI; these confident but incorrect answers are what we call AI hallucinations.

Dangers of AI Hallucinations

The dangers of AI hallucinations are significant, especially when a wrong answer has real-world consequences. For example, suppose an AI model used in medical diagnosis hallucinates a diagnosis that leads to an incorrect treatment plan; the consequences could be life-threatening. Similarly, if an AI model used in an autonomous vehicle hallucinates that it is safe to proceed through an intersection, the results could be deadly. There are many other scenarios where AI hallucinations could carry legal and/or ethical consequences.

Legal Liability

One of the most significant dangers of AI hallucinations is legal liability. As AI models become more prevalent, they will inevitably be used in situations where their output has real-world consequences. If a hallucinated output is acted upon, or communicated to a customer, without being checked, the organization using the model could be held legally liable for any resulting damages.


Compliance Risks

Another danger of AI hallucinations relates to meeting compliance requirements. Many industries have strict compliance regulations that must be met. It’s tempting to let AI models automatically take actions and perform tasks, but the AI may not adhere to the compliance requirements the organization must maintain. A violation could trigger an audit and potentially cost the organization its compliance certification, which, depending on the organization, could have catastrophic consequences for the business.

How to Mitigate AI Hallucinations

Several techniques can be used to mitigate the risks and dangers associated with AI hallucinations. Whatever business process uses AI, it is extremely important to implement at least one mitigation method to protect your organization from the potentially dangerous consequences of AI hallucinations.

Manual Human Review

Manually reviewing the output and answers of an AI is a fairly simple method of reducing the risks of AI hallucinations. For content generation, this means manually reviewing and editing the generated content. When the AI is suggesting an action to take, this means a human reviewing the suggested action and its reasoning before allowing the action to be performed. The approach is simple, but it does require the reviewer to either know the model’s domain very well, or know how to look up and verify the information.

Manual human review can be laborious and time consuming. While it may work well for certain content or results generated by AI, it will not be feasible at a large scale. For this reason, it will likely be important to implement other mitigation techniques, depending on the business solution in which AI automation is implemented.
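A human-in-the-loop review step can be sketched in code. The following is a minimal, hypothetical workflow (the class and field names are illustrative, not from any specific product): AI-suggested actions are queued for a reviewer instead of being executed automatically.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class SuggestedAction:
    """An action proposed by the AI, e.g. 'refund order #123'."""
    description: str

class ReviewQueue:
    def __init__(self):
        self.pending: List[SuggestedAction] = []
        self.executed: List[str] = []

    def submit(self, action: SuggestedAction):
        # AI output is never executed directly; it waits for a human.
        self.pending.append(action)

    def review(self, approve: Callable[[SuggestedAction], bool]):
        # A human reviewer approves or rejects each pending suggestion.
        still_pending = []
        for action in self.pending:
            if approve(action):
                self.executed.append(action.description)
            else:
                still_pending.append(action)
        self.pending = still_pending
```

The key design choice is that nothing reaches the `executed` list without passing through the reviewer's `approve` callback, which keeps a human decision between the model and any real-world effect.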

Limit the Possible Answers

When crafting AI prompts, you can limit the range of answers you expect the AI to return. This guides the AI toward the type of answer, or the specific set of answers, you are expecting. You can do this by being more detailed in the prompt, and possibly by giving the AI a specific list of answers to choose from. This helps prevent the AI from hallucinating an overly confident, incorrect answer.
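As a rough sketch of this technique (the function names and allowed labels here are illustrative assumptions), a prompt can spell out the exact set of acceptable answers, and the reply can be validated against that same set before it is used:

```python
# Allowed answers for a hypothetical sentiment-classification task.
ALLOWED = ["positive", "negative", "neutral"]

def build_classification_prompt(text, allowed=ALLOWED):
    # Tell the model exactly which answers are acceptable.
    options = ", ".join(allowed)
    return (
        "Classify the sentiment of the following text.\n"
        f"Respond with exactly one of: {options}.\n"
        f"Text: {text}"
    )

def validate_answer(answer, allowed=ALLOWED):
    # Treat anything outside the allowed list as a possible hallucination.
    cleaned = answer.strip().lower()
    return cleaned if cleaned in allowed else None
```

Pairing the constrained prompt with a validator gives you a second line of defense: even if the model ignores the instruction, an out-of-list answer is rejected rather than acted upon.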

Specify an Answer Template

Whether the AI is predicting data trends, generating content, or making some other prediction, the prompt can include a sample or template describing the expected answer. This is another way to guide the AI toward an expected answer, and it can help prevent the AI from hallucinating.
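One common way to do this (sketched below with an assumed JSON template; the field names are illustrative) is to embed the template in the prompt and then check that the model's reply actually conforms to it:

```python
import json

# Hypothetical answer template the model is asked to fill in.
TEMPLATE = {"summary": "<one sentence>", "confidence": "<low|medium|high>"}

def build_templated_prompt(question):
    # Show the model the exact shape of the answer we expect.
    return (
        f"{question}\n"
        "Answer ONLY with JSON matching this template:\n"
        f"{json.dumps(TEMPLATE)}"
    )

def parse_reply(reply):
    # Reject replies that are not valid JSON or don't match the template.
    try:
        data = json.loads(reply)
    except json.JSONDecodeError:
        return None
    if set(data) != set(TEMPLATE):
        return None
    return data
```

A reply that fails the shape check is discarded instead of being trusted, so a free-form hallucinated answer never makes it into downstream processing.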

Tell the AI Not to Lie

With conversational AI like ChatGPT, you can include a request in the prompt that the AI not lie. This sounds like something that shouldn’t be necessary, but at times it does the trick. You can also tell the AI to say it doesn’t know the answer, rather than hallucinating a good-sounding but wrong one.
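In code, this amounts to appending an honesty instruction to whatever prompt you already have, and then recognizing the model's explicit opt-out phrase in its reply. The exact wording below is an assumption, not a guaranteed-effective incantation:

```python
# The opt-out phrase we ask the model to use when it is unsure.
OPT_OUT = "I don't know"

def with_honesty_instruction(prompt):
    # Append an instruction telling the model not to guess.
    return (
        f"{prompt}\n"
        f"If you are not certain of the answer, reply exactly "
        f"\"{OPT_OUT}\" instead of guessing."
    )

def is_opt_out(reply):
    # Detect the opt-out phrase, tolerating whitespace and casing.
    return reply.strip().lower() == OPT_OUT.lower()
```

Detecting the opt-out lets your application branch gracefully, for example by falling back to a human or a search lookup instead of presenting a guess to the user.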

Tell the AI What You Don’t Want

Similar to instructing the AI not to lie, you can tell the AI what type of answer or information you specifically do not want included in its response. These instructions guide the AI toward better answering your prompt and returning a meaningful, valuable answer, and they also help prevent the AI from hallucinating.
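A minimal sketch of this negative-instruction approach (the helper names and example topics are hypothetical) adds a "do not include" list to the prompt, then scans the reply for violations as a safety net:

```python
def with_exclusions(prompt, excluded):
    # List the topics the answer must not contain.
    rules = "\n".join(f"- Do not include {topic}." for topic in excluded)
    return f"{prompt}\nFollow these rules:\n{rules}"

def find_violations(reply, excluded):
    # Simple substring check; flags excluded topics that slipped through.
    lower = reply.lower()
    return [topic for topic in excluded if topic.lower() in lower]
```

As with the other techniques, the prompt instruction does the guiding and the post-check catches the cases where the model did not follow it.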


AI hallucinations are a real and significant risk of using AI models. Their dangers include legal liability, compliance risks, and real-world harm. Many different techniques can help mitigate and prevent AI hallucinations and their consequences, and only by applying them can we leverage the benefits of AI without falling prey to its dangers. Crafting detailed, instructive, and informative prompts will help you guide the AI used in your business solutions and workflows to give meaningful, valuable answers with less risk of hallucination.

Microsoft MVP

Chris Pietschmann is a Microsoft MVP, HashiCorp Ambassador, and Microsoft Certified Trainer (MCT) with 20+ years of experience designing and building Cloud & Enterprise systems. He has worked with companies of all sizes from startups to large enterprises. He has a passion for technology and sharing what he learns with others to help enable them to learn faster and be more productive.