Here’s the Company That Sold DHS ICE’s Notorious Face Recognition App
Exclusive: ICE and CBP Deploy NEC’s Mobile Fortify Face Recognition App Nationwide
In a revelation that’s sending shockwaves through privacy advocacy circles and tech communities alike, the Department of Homeland Security (DHS) has officially disclosed critical details about Mobile Fortify, a powerful facial recognition application that federal immigration agents are now using to identify individuals—including both undocumented immigrants and US citizens—in the field.
The Big Reveal: NEC Behind the Curtain
The groundbreaking disclosure came Wednesday as part of DHS’s mandatory 2025 AI Use Case Inventory, a transparency initiative requiring federal agencies to catalog their artificial intelligence deployments. For the first time publicly, the inventory identified NEC Corporation as the vendor behind Mobile Fortify, lifting the veil on what had been a closely guarded secret.
NEC, a Japanese multinational information technology company, markets a facial recognition product called “Reveal” on its website, advertising the ability to perform one-to-many searches or one-to-one matches against databases of any size. The company’s biometric matching products have been under contract with DHS since 2020, including a $23.9 million agreement covering “unlimited facial quantities, on unlimited hardware platforms, and at unlimited locations.”
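NEC has not published Reveal’s internals, but the distinction its marketing draws is a standard one in biometrics: a one-to-one match verifies a probe face against a single enrolled template, while a one-to-many search ranks the probe against an entire gallery. The minimal sketch below illustrates that distinction using generic cosine similarity over face embeddings; the function names, threshold, and data structures are hypothetical and are not drawn from NEC’s or DHS’s actual systems.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings (higher means more alike)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_one_to_one(probe: np.ndarray, enrolled: np.ndarray, threshold: float = 0.6) -> bool:
    """1:1 verification: does the probe face match one specific enrolled template?"""
    return cosine_similarity(probe, enrolled) >= threshold

def identify_one_to_many(probe: np.ndarray, gallery: dict[str, np.ndarray], threshold: float = 0.6):
    """1:N identification: score the probe against every identity in the gallery
    and return candidates above the decision threshold, best match first."""
    scores = {name: cosine_similarity(probe, emb) for name, emb in gallery.items()}
    return sorted(
        ((name, s) for name, s in scores.items() if s >= threshold),
        key=lambda item: item[1],
        reverse=True,
    )
```

In any such system, the choice of threshold drives the trade-off between missed matches and false matches, which is where the accuracy concerns raised later in this piece come in.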
Deployment Timeline Raises Eyebrows
The timing of the deployment is particularly noteworthy. Customs and Border Protection (CBP) claims Mobile Fortify became “operational” at the beginning of May 2025, and Immigration and Customs Enforcement (ICE) gained access on May 20, 2025, roughly one month before 404 Media first reported on the app’s existence.
This gap between deployment and public disclosure has fueled concerns about the government’s transparency around surveillance technologies being used in American communities.
How Mobile Fortify Works in the Field
According to DHS documentation, the app functions as a comprehensive biometric identification tool that can capture faces, “contactless” fingerprints, and photographs of identity documents. Once collected, this data is transmitted to CBP for submission to government biometric matching systems.
The AI-powered systems then cross-reference faces and fingerprints with existing government records, returning potential matches along with biographic information. ICE has stated that the app is particularly valuable when officers “must work with limited information and access multiple disparate systems” in the field.
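DHS has not released Mobile Fortify’s code or API, so the sketch below only restates the data flow described in the documentation (capture in the field, transmit to CBP, match against government systems, return candidates with biographic information) as illustrative Python. Every class, field, and function name here is a placeholder invented for the example, and the stub backend simply fabricates a dummy response.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class FieldCapture:
    """What DHS documentation says the app collects: a face, "contactless"
    fingerprints, and a photo of an identity document."""
    face_image: bytes
    fingerprints: Optional[bytes] = None
    document_photo: Optional[bytes] = None

@dataclass
class MatchCandidate:
    """What the documentation says comes back: a potential match plus
    associated biographic information."""
    record_id: str
    score: float
    biographic_info: dict = field(default_factory=dict)

def submit_to_cbp(capture: FieldCapture) -> list[MatchCandidate]:
    """Stand-in for the documented hop: the app transmits collected biometrics
    to CBP, which submits them to government matching systems and returns
    ranked candidates. This stub fabricates a placeholder response."""
    # A real backend would run face and fingerprint matching here.
    return [MatchCandidate(record_id="placeholder-0001", score=0.0,
                           biographic_info={"note": "illustrative placeholder only"})]

if __name__ == "__main__":
    capture = FieldCapture(face_image=b"")  # placeholder bytes, not a real image
    for candidate in submit_to_cbp(capture):
        print(candidate.record_id, candidate.score)
```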
Training Data Controversy
Perhaps most concerning to privacy advocates is CBP’s admission that “Vetting/Border Crossing Information/Trusted Traveler Information” was used to train, fine-tune, or evaluate Mobile Fortify’s performance. That data likely includes records on millions of Americans who participate in programs like TSA PreCheck and Global Entry.
The implications became starkly real in recent court declarations, where individuals reported having their Trusted Traveler privileges revoked after encounters with federal agents who mentioned using “facial recognition.” In one particularly troubling case, a Minnesota woman said her Global Entry and TSA PreCheck access was terminated following an interaction with an agent who explicitly referenced facial recognition technology.
AI Impact Assessment: A Regulatory Gray Area
The deployment of Mobile Fortify raises serious questions about compliance with federal AI governance guidelines. According to Office of Management and Budget guidance issued before the app’s deployment, agencies are required to complete AI impact assessments before deploying any “high-impact” use case.
However, both CBP and ICE classified Mobile Fortify as “high-impact” and “deployed” in their inventory submissions, and ICE indicates its own impact assessment has yet to be completed, suggesting the assessment is happening after the fact, a potential violation of protocols designed to protect civil liberties.
Privacy Advocates Sound the Alarm
Civil liberties organizations have expressed grave concerns about the implications of Mobile Fortify’s deployment. The app’s ability to identify US citizens, not just undocumented immigrants, represents a significant expansion of government surveillance capabilities in everyday American life.
Critics point out that face recognition technology has documented accuracy issues, particularly when identifying people of color and women, raising the specter of false identifications leading to wrongful detentions or other civil rights violations.
The Technology Race: Government vs. Privacy
The revelation of Mobile Fortify’s deployment comes amid a broader national debate about the appropriate use of AI and biometric surveillance technologies by law enforcement. As federal agencies increasingly adopt these tools, questions about oversight, accountability, and the balance between security and privacy have become more urgent than ever.
The fact that ICE and CBP are using the same app, developed partially in-house by ICE but contracted through NEC, illustrates the complex ecosystem of public-private partnerships driving government surveillance capabilities.
What’s Next for Mobile Fortify?
While DHS claims there are “sufficient monitoring protocols” in place for CBP’s use of the app, ICE admits that development of monitoring protocols is still in progress. The agency states it will identify potential impacts during an AI impact assessment—a process that should have preceded deployment according to federal guidelines.
As Mobile Fortify continues to be deployed across federal immigration enforcement operations, its impact on American communities, particularly immigrant communities, remains to be fully understood. What is clear is that this technology represents a significant shift in how the federal government identifies and interacts with individuals in the United States.
The disclosure of Mobile Fortify’s details through the AI Use Case Inventory represents a rare moment of transparency in an area often shrouded in secrecy. However, the questions it raises about privacy, civil liberties, and government accountability in the age of AI surveillance are likely to reverberate through policy debates and courtrooms for years to come.
Tags: facial recognition, ICE, CBP, DHS, NEC, Mobile Fortify, biometric surveillance, AI impact assessment, privacy concerns, government surveillance, Trusted Traveler Programs, TSA PreCheck, Global Entry, civil liberties, immigration enforcement, biometric matching, contactless fingerprinting, identity verification, AI governance, federal contracts, surveillance technology
Viral Sentences:
- “ICE is using a new facial recognition app to identify people”
- “Government surveillance just got a major upgrade”
- “Your face could be in their database right now”
- “Privacy advocates are sounding the alarm”
- “The technology race between government and privacy”
- “False identifications could lead to wrongful detentions”
- “This app can identify US citizens, not just undocumented immigrants”
- “The implications for civil liberties are staggering”
- “Government transparency or lack thereof”
- “AI impact assessment should have preceded deployment”
- “Millions of Americans’ data used to train this surveillance tool”
- “The future of identification is here, and it’s watching you”
- “Your Trusted Traveler status could be revoked based on a facial match”
- “The complex ecosystem of public-private surveillance partnerships”
- “What happens when AI gets it wrong?”