An AI hallucination occurs when an AI model generates incorrect or misleading information but presents it as if it were fact. AI hallucinations cannot be entirely prevented.
- A common AI hallucination in higher education happens when users prompt ChatGPT or Gemini to provide references. Rather than retrieving real sources, these tools draw on existing text about the topic and invent new titles, authors, and content that do not actually exist. For example, you could never find the article below from JAMA Pediatrics - ChatGPT made it up.
- GenAI can also produce biased information. For example, the prompt "supervisor talking to staff members" was entered into Canva's AI image generator. Below are the first three images the AI generated. Without exception, every supervisor depicted is male.