Wednesday, November 13, 2024

AI Hallucinations – eLearning Industry

…Thank God For That!

Artificial Intelligence (AI) is quickly changing every part of our lives, including education. We’re seeing both the good and the bad that can come from it, and we’re all just waiting to see which one will win out. One of the main criticisms of AI is its tendency to “hallucinate.” In this context, AI hallucinations refer to instances when AI systems produce information that is completely fabricated or incorrect. This happens because AI models, like ChatGPT, generate responses based on patterns in the data they were trained on, not from an understanding of the world. When they don’t have the right information or context, they might fill in the gaps with plausible-sounding but false details.
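To make that mechanism concrete, here is a toy sketch of pattern-based text generation. This is not how a real LLM works internally (LLMs use neural networks, not word-count tables), but it captures the same statistical idea: the model continues text based purely on patterns it has seen, with no understanding of truth. The three training sentences are invented for the demonstration.

```python
import random

# Invented toy corpus: three sentences the "model" learns patterns from.
corpus = (
    "the novel was written by austen . "
    "the poem was written by keats . "
    "the novel was praised by critics . "
).split()

# Count which words follow each word in the training text.
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def generate(start, length, seed=0):
    """Sample a continuation word by word from the observed patterns."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the", 6, seed=1))
```

Because it only follows patterns, this toy model can splice fragments into fluent but false statements, e.g. "the poem was praised by austen", which never appears in its training data and is factually wrong. Scaled up enormously, that is the flavor of mistake behind AI hallucinations.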

The Significance Of AI Hallucinations

This means we cannot blindly trust anything that ChatGPT or other Large Language Models (LLMs) produce. A summary of a text may be incorrect, or we might find extra information that wasn't originally there. A book review may include characters or events that never existed. When it comes to paraphrasing or interpreting poems, the results can be so embellished that they stray from the truth. Even seemingly basic facts, like dates or names, can end up altered or attached to the wrong information.

While various industries and even students see AI's hallucinations as a disadvantage, I, as an educator, view them as an advantage. Knowing that ChatGPT hallucinates keeps us, especially our students, on our toes. We can never rely on gen AI entirely; we must always double-check what it produces. These hallucinations push us to think critically and verify information. For example, if ChatGPT generates a summary of a text, we must read the text ourselves to judge whether the summary is accurate. We need to know the facts. Yes, we can use LLMs to generate new ideas, identify keywords, or find learning methods, but we should always cross-check this information. And this process of double-checking is not just necessary; it's an effective learning technique in itself.

Promoting Critical Thinking In Education

The idea of hunting for errors, and of being critical and suspicious about the information presented, is nothing new in education. We use error detection and correction regularly in classrooms, asking students to review content to identify and correct mistakes. "Spot the difference" is a related technique: students are given multiple texts or pieces of information and asked to identify similarities and differences. Peer review, where learners review each other's work, also supports this idea by asking them to identify mistakes and offer constructive feedback. Cross-referencing, or comparing different parts of a material or multiple sources to verify consistency, is yet another example. These techniques have long been valued in educational practice for promoting critical thinking and attention to detail. So, while our learners may not be entirely satisfied with the answers provided by generative AI, we, as educators, should be. These hallucinations can ensure that learners engage in critical thinking and, in the process, learn something new.

How AI Hallucinations Can Help

Now, the tricky part is making sure that learners actually know about these hallucinations and their extent: what they are, where they come from, and why they occur. My suggestion is to provide practical examples of major mistakes made by gen AI, like ChatGPT. These examples resonate strongly with students and help convince them that some of the errors can be truly significant.

Now, even if using generative AI is not allowed in a given context, we can safely assume that learners use it anyway. So, why not use this to our advantage? My recipe would be to help learners grasp the extent of AI hallucinations, and to encourage critical thinking and fact-checking, by organizing online forums, groups, or even contests. In these spaces, students could share the most significant mistakes made by LLMs. By curating these examples over time, learners can see firsthand that AI hallucinates constantly. Plus, the challenge of "catching" ChatGPT in yet another serious mistake can become a fun game, motivating learners to put in extra effort.

Conclusion

AI is undoubtedly set to bring changes to education, and how we choose to use it will ultimately determine whether those changes are positive or negative. At the end of the day, AI is just a tool, and its impact depends entirely on how we wield it. A perfect example of this is hallucination. While many perceive it as a problem, it can also be used to our advantage.
