USCIS-Grounded AI: Preventing Hallucinations in Immigration Legal Services

Confirmation

1

Document Type

Poster

Location

ONU McIntosh Center; Activities Room

Start Date

April 24, 2026, 12:00 PM

End Date

April 24, 2026, 12:50 PM

Abstract

Artificial intelligence chatbots increasingly provide legal information to consumers, but AI "hallucinations" (confidently stated but incorrect responses) pose serious risks in immigration law. Incorrect information about USCIS forms, fees, processing times, or filing procedures can result in visa denials, deportation proceedings, or permanent bars to entry.

This research presents a novel "source-grounded AI" system that eliminates hallucinations in immigration legal information. Rather than relying solely on large language models (LLMs) trained on general internet data, the system uses USCIS.gov as the primary source of truth for all operational data including current forms, fees, processing times, filing addresses, and policy updates. The LLM provides natural language understanding and conversational interface, while a real-time USCIS data layer ensures factual accuracy.
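The abstract does not specify the implementation, but the described architecture (an LLM for conversation, backed by a real-time USCIS.gov data layer that supplies all operational facts) can be sketched roughly as below. All names, form numbers, fees, and URLs in this sketch are illustrative placeholders, not real USCIS data; a production system would refresh the store from USCIS.gov rather than hard-code it.

```python
# Minimal sketch of a source-grounded data layer (illustrative assumptions only).
# The fee values and URLs below are placeholders standing in for a live
# USCIS.gov feed; they are NOT authoritative figures.

USCIS_SNAPSHOT = {
    "I-130": {"fee": "$675", "url": "https://www.uscis.gov/i-130"},
    "N-400": {"fee": "$760", "url": "https://www.uscis.gov/n-400"},
}

def grounded_answer(form_number: str) -> str:
    """Answer only from the verified snapshot; refuse rather than guess.

    Refusal is the key anti-hallucination move: if the data layer has no
    record, the system never lets the LLM improvise a fee or address.
    Every factual answer carries a citation to its USCIS.gov source page.
    """
    record = USCIS_SNAPSHOT.get(form_number.upper().strip())
    if record is None:
        return (f"No verified data is available for {form_number}; "
                "please check USCIS.gov directly.")
    return (f"The filing fee for Form {form_number.upper().strip()} is "
            f"{record['fee']} (source: {record['url']}).")
```

In this design the LLM would only phrase the response; the fee, form, and URL are injected verbatim from the data layer, which is what keeps the operational facts out of the model's generative path.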

Testing demonstrates 100% accuracy in responses to USCIS procedural questions, with every response including source citations linking to official USCIS.gov pages. This contrasts sharply with general-purpose AI chatbots, which produced hallucinated or outdated immigration information in approximately 35% of queries.
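The abstract does not describe its evaluation method, but one automated check consistent with the claim (every response must cite an official USCIS.gov page) could look like the following sketch; the function names and the regex-based citation test are assumptions for illustration.

```python
import re

# Hypothetical evaluation helpers: verify that each chatbot response
# carries at least one citation to an official USCIS.gov page.

def has_official_citation(response: str) -> bool:
    """True if the response links to a page under www.uscis.gov."""
    return re.search(r"https://www\.uscis\.gov/\S+", response) is not None

def citation_rate(responses: list[str]) -> float:
    """Fraction of responses with a USCIS.gov citation (1.0 = all cited)."""
    if not responses:
        return 0.0
    return sum(has_official_citation(r) for r in responses) / len(responses)
```

A fuller evaluation would also check the cited facts against the live source, but even this citation check separates grounded responses from free-form LLM output.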

This approach addresses the access to justice crisis in immigration law, where 87% of cases lack legal representation, by providing reliable, free legal information without crossing into unauthorized practice of law. The system serves as a replicable model for AI-assisted legal services in other practice areas where authoritative government sources exist.

This document is currently not available here.

Restricted

Available to ONU community via local IP address and ONU login.
