Five quick questions to check your understanding of "Why do large language models sometimes 'hallucinate' information?"