Facial-recognition technologies can carry cybersecurity, AI vulnerabilities

March 10, 2020

Lisa Daigle

Assistant Managing Editor

Military Embedded Systems

The U.S. military is developing new types of facial-recognition technologies – systems vitally important to the safety of soldiers in the field – and training artificial intelligence (AI) systems to perform identity verification and threat detection, but these advances can also introduce cybersecurity vulnerabilities.

One of the more recent project announcements comes from the U.S. Combat Capabilities Development Command (CCDC) Army Research Laboratory (ARL): A team led by Dr. Sean Hu of the ARL Intelligent Perception Branch is combining AI, machine learning (ML) techniques, and the newest infrared cameras to identify facial patterns at any time of day or night from the heat signatures given off by living skin tissue.

The new technology – which ARL and team members say will begin field testing in operationally relevant environments within the next two years – employs thermal imaging to detect the electromagnetic radiation emitted by skin and distinguish heat signatures. The blurry thermal images are run through AI software to increase image quality, render a nearly photorealistic composite, and map key features of a face. (Figure 1.)

Figure 1 | A heat signature from a thermal image is used to test facial-recognition technology at the U.S. Army Research Laboratory. The technology will be adapted into handheld equipment for soldiers to automatically identify individuals, even in low light. (Photo: Thomas Brading/Army News Service.)
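
As a rough illustration of the synthesis step described above, the sketch below passes a thermal frame through a small encoder-decoder network of the kind used for image-to-image translation. The architecture, tensor sizes, and untrained weights are placeholders; ARL's actual model is not public.

```python
# Minimal sketch of a thermal-to-visible synthesis step, assuming a PyTorch
# encoder-decoder generator; ARL's real model and trained weights are not public.
import torch
import torch.nn as nn

class ThermalToVisible(nn.Module):
    """Toy encoder-decoder mapping a 1-channel thermal image to a 3-channel visible estimate."""
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, thermal):
        return self.decode(self.encode(thermal))

generator = ThermalToVisible()                 # untrained placeholder network
thermal_frame = torch.rand(1, 1, 128, 128)     # stand-in for a captured heat signature
visible_estimate = generator(thermal_frame)    # the fielded system would yield a near-photorealistic composite
print(visible_estimate.shape)                  # torch.Size([1, 3, 128, 128])
```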

The next step uses AI to compare the resultant image with an existing biometric database and watch list of visible likenesses, Hu says. He adds that fusing facial recognition with night-vision technology can enable soldiers in poorly lit or even pitch-black environments to pinpoint potential persons of interest at standoff distances of several hundred yards, even through heavy makeup or unusual angles of approach. “The technology provides a way for humans to visually compare visible and thermal facial imagery through thermal-to-visible face synthesis,” Hu says, adding that while infrared sensors are commonly used in soldier-worn cameras and on aerial or ground vehicles, combining the two technologies is new for the U.S. military.
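
The comparison Hu describes amounts to matching a probe face against a gallery of known identities. The minimal sketch below assumes precomputed face-embedding vectors rather than raw images and scores a watch-list lookup with cosine similarity; the 128-dimension embeddings and the 0.6 threshold are illustrative only.

```python
# Sketch of a watch-list comparison using cosine similarity over face embeddings.
# Embedding extraction (e.g., by a face-recognition network) is assumed, not shown.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_against_watchlist(probe_embedding, gallery, threshold=0.6):
    """Return (identity, score) for the best match, or (None, score) if below threshold."""
    best_id, best_score = None, -1.0
    for identity, ref_embedding in gallery.items():
        score = cosine_similarity(probe_embedding, ref_embedding)
        if score > best_score:
            best_id, best_score = identity, score
    return (best_id, best_score) if best_score >= threshold else (None, best_score)

# Hypothetical 128-D embeddings; a real system would compute these from imagery.
rng = np.random.default_rng(0)
gallery = {"person_A": rng.normal(size=128), "person_B": rng.normal(size=128)}
probe = gallery["person_A"] + 0.1 * rng.normal(size=128)   # noisy view of person_A
print(match_against_watchlist(probe, gallery))
```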

Motivating this research, Hu stresses, is the overall importance of force protection: “We’re trying to help soldiers identify individuals of interest to aid both tactical and strategic operations.”

Researchers warn, however, that even as the U.S. military develops increasingly advanced AI-aided recognition technology, its enemies are also gaining more skill at hacking into these systems.

A project headed by the Army Research Office and conducted by a Duke University team has produced a system that, when implemented, will work to mitigate cyberattacks against the military’s facial-recognition applications.

According to officials at the Army Research Office, so-called back doors into facial-recognition platforms are a particular worry, as compromising these could set off a chain reaction in which the AI’s learning is corrupted. AI models rely on large data sets; if those data sets feed facial recognition, tampering with certain image features at the source – for instance, clothing, ears, or eye color – could confuse entire AI models and prompt incorrect labeling.
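
To make that poisoning mechanism concrete, the sketch below shows the textbook form of such an attack on a training set: a small patch is stamped onto a handful of images and their labels are switched to an attacker-chosen class. The array shapes, the patch, and the class numbers are hypothetical and are not drawn from the Army Research Office work.

```python
# Conceptual sketch of back-door data poisoning: stamp a trigger patch onto a few
# training images and flip their labels to the attacker's target class.
import numpy as np

def stamp_trigger(image, patch, row, col):
    """Return a copy of the image with the trigger patch pasted at (row, col)."""
    poisoned = image.copy()
    h, w = patch.shape[:2]
    poisoned[row:row + h, col:col + w] = patch
    return poisoned

rng = np.random.default_rng(1)
clean_images = rng.random((100, 64, 64, 3))          # stand-in face crops
clean_labels = rng.integers(0, 10, size=100)         # stand-in identity classes

trigger = np.ones((6, 6, 3))                         # innocuous-looking white square
target_class = 7                                     # identity the attacker wants predicted

poison_idx = rng.choice(100, size=5, replace=False)  # poison only a small fraction
poisoned_images = clean_images.copy()
poisoned_labels = clean_labels.copy()
for i in poison_idx:
    poisoned_images[i] = stamp_trigger(clean_images[i], trigger, 2, 2)
    poisoned_labels[i] = target_class                # deliberately mislabeled
```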

According to the team, these back-door attacks are very difficult to detect because the shape and size of the trigger can be designed by the attacker and could look like something completely innocuous – a floppy hat, a flower, or a sticker. Moreover, the neural network behaves normally when it processes clean data that lacks the trigger, which keeps the back door hidden.

The result is a model that makes faulty predictions: Such tampering carries serious repercussions for surveillance programs like the one under development by the ARL, where the software could misidentify an untargeted person or miss a targeted person, who then escapes detection.

“To identify a back-door trigger, you must essentially find out three unknown variables: which class the trigger was injected into, where the attacker placed the trigger, and what the trigger looks like,” states Duke research team member Ximing Qiao. Added Duke University’s Dr. Helen Li, associate professor of electrical and computer engineering, “Our software scans all the classes and flags those that show strong responses, indicating the high possibility that these classes have been hacked. Then the software finds the region where the hackers laid the trigger.”
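
The sketch below captures the general idea behind that kind of class-by-class scan, in the spirit of published trigger-reverse-engineering methods: for each class, search for the smallest mask-and-pattern perturbation that steers the model toward that class, then flag classes that need suspiciously little perturbation. The toy classifier, random data, and optimization settings are assumptions; this is not the Duke team’s actual software.

```python
# Hedged sketch of scanning classes for a hidden back-door trigger by reverse-
# engineering the smallest perturbation that forces each class prediction.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 10))  # toy (untrained) classifier
model.eval()
images = torch.rand(16, 3, 32, 32)                               # stand-in clean samples

def reverse_engineer_trigger(target_class, steps=50, lr=0.1, lam=0.01):
    """Optimize a mask + pattern pushing predictions toward target_class; return the mask size."""
    mask = torch.zeros(1, 1, 32, 32, requires_grad=True)
    pattern = torch.rand(1, 3, 32, 32, requires_grad=True)
    opt = torch.optim.Adam([mask, pattern], lr=lr)
    targets = torch.full((images.size(0),), target_class, dtype=torch.long)
    for _ in range(steps):
        m = torch.sigmoid(mask)                                  # soft mask in [0, 1]
        stamped = (1 - m) * images + m * torch.sigmoid(pattern)  # blend pattern into images
        loss = nn.functional.cross_entropy(model(stamped), targets) + lam * m.sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.sigmoid(mask).sum().item()                      # how much of the image must change

sizes = [reverse_engineer_trigger(c) for c in range(10)]
suspect = min(range(10), key=lambda c: sizes[c])   # an anomalously small trigger flags a class
print("most suspicious class:", suspect)
```

With a genuinely back-doored model, the flagged class would be the one the attacker injected, and the optimized mask and pattern approximate where the trigger sits and what it looks like.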

Because the tool can recover the likely pattern of the trigger, including its shape and color, the team could compare the recovered pattern with the actual trigger. While research into neutralizing these triggers is ongoing, Qiao says he believes the process should be fairly easy once the trigger is identified: simply retrain the model to ignore it.
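
A minimal sketch of that retraining idea, assuming the trigger has already been recovered: stamp it onto clean images that keep their correct labels and briefly fine-tune, so the model learns to treat the patch as irrelevant. The toy model, data, and trigger below are placeholders.

```python
# Sketch of neutralizing a recovered back-door trigger by fine-tuning on
# trigger-stamped images that retain their correct labels.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 10))   # stand-in for a back-doored model
images = torch.rand(64, 3, 32, 32)                                # clean, correctly labeled data
labels = torch.randint(0, 10, (64,))
recovered_trigger = torch.rand(3, 6, 6)                           # pattern found by the scanner

patched = images.clone()
patched[:, :, :6, :6] = recovered_trigger                         # add trigger, keep true labels

opt = torch.optim.SGD(model.parameters(), lr=0.01)
for _ in range(20):                                               # brief fine-tuning pass
    for x, y in [(images, labels), (patched, labels)]:            # mix clean and patched batches
        loss = nn.functional.cross_entropy(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
```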