Presenter Information

Confirmation

1

Document Type

Paper

Location

ONU McIntosh Center; Ballroom

Start Date

21-4-2026 5:15 PM

End Date

21-4-2026 5:30 PM

Abstract

Artificial intelligence chatbots increasingly provide legal information to consumers, but AI "hallucinations" (confidently stated but incorrect responses) pose serious risks in immigration law. Incorrect information about USCIS forms, fees, processing times, or filing procedures can result in visa denials, deportation proceedings, or permanent bars to entry.

This research presents a novel "source-grounded AI" system that eliminates hallucinations in immigration legal information. Rather than relying solely on large language models (LLMs) trained on general internet data, the system uses USCIS.gov as the primary source of truth for all operational data, including current forms, fees, processing times, filing addresses, and policy updates. The LLM provides natural language understanding and a conversational interface, while a real-time USCIS data layer ensures factual accuracy.

Testing demonstrates 100% accuracy in responses to USCIS procedural questions, with every response including source citations linking to official USCIS.gov pages. This contrasts sharply with general-purpose AI chatbots, which produce hallucinated or outdated immigration information in approximately 35% of queries.

This approach addresses the access-to-justice crisis in immigration law, where 87% of cases lack legal representation, by providing reliable, free legal information without crossing into the unauthorized practice of law. The system serves as a replicable model for AI-assisted legal services in other practice areas where authoritative government sources exist.

Open Access

Available to all.


Title

USCIS-Grounded AI: Preventing Hallucinations in Immigration Legal Services
