Abstract: Recent progress in Large Language Models (LLMs) has significantly advanced natural language processing tasks such as summarization, translation, and text generation. Despite their impressive capabilities, these models frequently generate hallucinations, i.e., responses that appear fluent and convincing but lack factual correctness or logical grounding. Such behavior raises serious concerns about the dependability and ethical deployment of LLMs in real-world scenarios. This paper reviews and analyzes existing research on hallucinations in LLMs, focusing on their underlying causes, practical consequences, and mitigation strategies. Studies by Reddy et al. (2024) and Perković et al. (2024) investigate both internal model limitations and external influencing factors, including biased datasets, inadequate contextual understanding, and poorly structured prompts, and their findings highlight ethical and operational risks across multiple application domains. Research presented at ICALT 2024 emphasizes the dangers of hallucinated content in educational environments and proposes comparative and cross-verification techniques to preserve factual integrity. Furthermore, Sun et al. (2025) introduce a Markov Chain-based multi-agent debate framework that strengthens post-generation verification through structured evidence retrieval and claim validation.
DOI: 10.17148/IJARCCE.2026.15203
[1] Kruthi S and Preksha M P, "Abstractive Summarization Via Contrastive Prompt Constructed By LLMS Hallucination," International Journal of Advanced Research in Computer and Communication Engineering (IJARCCE), DOI: 10.17148/IJARCCE.2026.15203.