WASHINGTON — Federal agencies and private contractors are expanding the use of artificial intelligence–driven image analysis in ways that civil liberties advocates say risk recasting lawful protest as a security threat. Records reviewed by reporters and interviews with experts indicate that tools originally developed for immigration enforcement are increasingly being used to monitor demonstrations opposing Immigration and Customs Enforcement (ICE).
At issue is a growing reliance on AI systems that scan images and video for faces, objects, and patterns of movement, technologies that officials describe as force multipliers but that critics warn are prone to error and misuse.
What the Systems Do
The tools combine facial recognition, object detection, and image matching across public databases and social media. Vendors market the systems as capable of identifying individuals of interest in large crowds and flagging “anomalous” behavior. But independent audits have found uneven accuracy, with higher error rates for people of color and younger individuals, and limited ability to interpret context.
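In schematic terms, the matching step the vendors describe reduces to comparing a numeric "embedding" of each detected face against a watchlist and flagging any score above a cutoff. The sketch below is illustrative only, not any agency's or vendor's actual code: the crowd and watchlist sizes, the 0.35 threshold, and the random unit vectors standing in for a real embedding model are all hypothetical.

```python
import numpy as np

# Illustrative sketch only: the generic match-against-a-watchlist loop the
# vendors describe, not any agency's actual system. The sizes, the threshold,
# and the random "embeddings" below are all hypothetical stand-ins.
rng = np.random.default_rng(0)

def unit_rows(n, dim=128):
    """Random unit vectors standing in for face embeddings from a real model."""
    v = rng.normal(size=(n, dim))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

watchlist = unit_rows(1_000)   # Reference embeddings for "persons of interest".
crowd = unit_rows(10_000)      # Faces detected in footage; by construction,
                               # no one here is actually on the watchlist.

THRESHOLD = 0.35  # Similarity cutoff; lowering it catches more of everything.

# Cosine similarity of every detected face against every watchlist entry;
# a single score above the cutoff is enough to flag the person.
best_scores = (crowd @ watchlist.T).max(axis=1)
flagged = int((best_scores >= THRESHOLD).sum())

print(f"{flagged} of {len(crowd)} faces flagged at threshold {THRESHOLD}")
```

Run as written, the script flags a few hundred of the 10,000 synthetic faces even though, by construction, none of them matches the watchlist, a toy version of the base-rate problem the audits point to: applied to large crowds, even a small false-positive rate produces a long list of innocent people.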
“Images don’t explain intent,” said one computer vision researcher familiar with government deployments. “An algorithm can’t reliably distinguish between a chant and a threat.”
From Enforcement to Protest Monitoring
ICE has used surveillance technology for years in immigration cases. More recently, procurement documents and internal briefings suggest those capabilities are being repurposed for crowd analysis during protests. Former officials described a pattern in which demonstrations are designated high-risk, a designation that justifies deploying AI tools whose outputs are then cited as evidence of that risk, an approach critics call circular.
The Weight of Labels
The use of terms like “domestic terrorist” in official rhetoric has heightened concerns. Legal experts note that such labels can trigger broader surveillance authorities and long-lasting data retention, often without clear standards for appeal or correction when AI systems make mistakes.
“Once a person is flagged, the data follows them,” said a civil liberties attorney. “There’s no transparent process to challenge an algorithmic judgment.”
Oversight and Accountability
Oversight remains limited. Many deployments occur through pilot programs, inter-agency data sharing, or emergency authorities that bypass public review. Advocacy groups are calling for impact assessments, disclosure of vendor accuracy claims, and limits on data sharing involving protesters and journalists.
What’s Next
As protests continue nationwide, the debate over AI surveillance is shifting from theory to practice. Whether these tools are narrowly constrained or broadly applied to political activity will depend on court challenges, legislative action, and public scrutiny.
For now, experts say, the risk is clear: when images become evidence, participation in a protest can place anyone in the frame under suspicion.
Tres Rivers Investigates