It should: the Pentagon is indeed working on a variety of technologies designed to do just that.
A recent Washington Post article recounted a military-sponsored experiment that could lay the “groundwork for scientific advances that would allow drones to search for a human target and then make an identification based on facial-recognition or other software.” Similarly, Wired’s DANGER ROOM blog reported on some half a dozen contracts recently given by the Army to develop software that can instantly recognize specific people based on unique identifiers, such as their face.
Such automatic facial recognition technology, the articles say, could lead to a future where targeted killings are carried out with incredible speed. “This successful exercise in autonomous robotics could presage the future of the American way of war: a day when drones hunt, identify and kill the enemy based on calculations made by software, not decisions made by humans,” the Washington Post reported.
Fears of real-world Terminators are converging with another trend: commercial applications of facial recognition software being used in social media. When Facebook added a facial recognition technology that would allow users to “tag” friends, there was public outrage at the potential privacy invasion, as well as an acknowledgement that such advances may well be inevitable.
Picking up on these concerns, Carnegie Mellon scientists ran a series of experiments to see if they could extract information about people by matching their photos with online data. The results demonstrated, for example, that photos from anonymous dating site profiles could be matched to profiles from sites, such as Facebook, that use people’s full names.
Another experiment showed that photos taken of students on campus could also be matched to their Facebook profiles using automatic facial recognition software.
These experiments, they concluded, “raise questions about the future of privacy in an ‘augmented’ reality world in which online and offline data will seamlessly blend.”
The immediate question, however, is how close such facial recognition technology is to being useful, at least for national security. It turns out, not very.
Facial recognition software still works very poorly, even under the best of circumstances in non-military applications. The Carnegie Mellon researchers acknowledged in their research that matches from their experiments were as poor as 1 in 10 (though up to 1 in 3 for identifying online daters). Their point, however, was simply to demonstrate how the convergence of online and offline data will be affected once this technology evolves, as it most certainly will.
But for the military, the idea of the face-spotting Terminator drone is still many years, if not decades, away. At this year’s Special Operations industry conference in Florida, Craig Archer, a civilian at U.S. Special Operations Command, briefed the audience on advances in biometric identification. His message was that standard biometrics—such as fingerprints and retinal scans—have become very useful in matching so-called “high-value targets,” like Osama bin Laden, and insurgents.
Thousands of identifications have been made by U.S. military forces in places such as Iraq and Afghanistan using fingerprints. But facial recognition, Archer said, “works, but it doesn’t. Yes, we do have positive identifications. But, out of the 60,000 or 70,000 facial recognition photos we’ve sent, we maybe have 20 matches.”
Do 20 positive matches out of some tens of thousands of facial photos mean that the technology works? No, but it also doesn’t mean it will never work.
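To put those figures in perspective, a rough back-of-the-envelope calculation shows just how low that hit rate is (the 65,000 total is an assumed midpoint of the 60,000–70,000 range Archer cited, used here purely for illustration):

```python
# Rough hit-rate calculation based on the figures Archer gave.
# The 65,000 total is an assumption (midpoint of 60,000-70,000).
photos_sent = 65_000      # assumed midpoint of the cited range
positive_matches = 20     # matches reported by Archer

hit_rate = positive_matches / photos_sent
print(f"Hit rate: {hit_rate:.4%}")  # roughly 0.03%, i.e. about 1 in 3,250
```

In other words, even granting the military's own numbers, fewer than one photo in three thousand produced a match.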
What it does mean, Archer pointed out, is that it needs a dose of realism. “If you think we can get everybody’s face just because we have a picture, you watch too much SyFy,” he said.