An AI hallucination occurs when an AI model generates incorrect or misleading information but presents it as fact. Hallucinations cannot be entirely prevented. A common example in higher education occurs when users prompt ChatGPT or Gemini to cite references: rather than retrieving real sources, these tools draw on patterns in their training data to generate plausible-sounding titles, authors, and content that do not actually exist.
Large language models (LLMs) can reproduce gender bias and racial stereotypes. For example, LLMs have described women as working in domestic roles far more often than men. Image-generation tools have been shown to ignore or distort artists' text prompts in ways that stereotype or censor Black history and culture. AI detectors are more likely to flag the work of authors whose first language is not English.
When you enter content or prompts into AI tools, they may ingest, store, and use your input to further train the underlying large language models, and the information you submit may be shared with others in some manner. Therefore, you should not enter confidential, proprietary, or personal information into these tools.
Generative AI tools can be used to infringe a copyright owner's exclusive rights, for example by producing derivative works, and a number of copyright infringement lawsuits have been filed against AI platforms. Before entering any copyrighted material into a generative AI tool as part of a prompt, you should obtain permission from the copyright holder. Further, uploading material such as articles obtained from library-subscribed databases to an AI tool may violate copyright or the databases' license terms. Currently, copyright protection is not granted to works generated solely by AI.