A researcher has created a “two-factor authentication” system for facial recognition that uses facial gestures, such as a wink or lip movement, to unlock a device. BYU professor DJ Lee touts his new identity verification algorithm as being more secure than current facial biometrics. The system – known as Concurrent Two-Factor Identity Verification (C2FIV) – requires you to record a short video of a facial action, which can also include speaking a short phrase. You then save the clip to your device for enrollment, and the system requires both your face and the gesture for verification.
Lee claims that bad actors can bypass biometrics such as fingerprint sensors and retina scans to hack your phone by using masks or photos, or simply by holding it in front of your face while you sleep. “The biggest problem we’re trying to solve is making sure the identity verification process is intentional,” the computer and electrical engineering professor said in a statement. “You see this often in the movies – think of Ethan Hunt in Mission: Impossible even using masks to mimic someone else’s face.”
Still, while an extra layer of device security is always useful, it’s worth noting that most modern face unlock systems can’t be fooled by masks or photos. Device makers have also learned from past failures – such as the Google Pixel 4 facial recognition flaw that allowed access even if a subject’s eyes were closed – to make these tools more foolproof. Apple’s Face ID, for example, relies on the company’s TrueDepth camera to map your face using more than 30,000 invisible dots. Apple says this information is not found in printed or 2D digital photos and helps protect against spoofing by masks or other techniques.
This does not mean that the new system is without advantages. It could be ideal for sensitive situations where additional security is required, including government and corporate devices or entry systems. Lee – who has filed a patent on the technology – is also considering use cases such as online banking, ATMs, safe deposit box access, and keyless car entry. The C2FIV system relies on an integrated neural network framework to simultaneously learn facial features and actions. In a preliminary study, Lee trained the algorithm on a dataset of 8,000 video clips of 50 subjects doing facial actions such as blinking, dropping their jaw, smiling, or raising their eyebrows.
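The core idea of requiring both the face and the gesture to match can be sketched in a few lines. This is purely an illustrative toy, not Lee’s actual C2FIV framework: it assumes the neural network has already produced fixed-length embeddings for the face and the action, and simply requires both cosine similarities to clear a threshold before granting access. All names, vectors, and thresholds here are hypothetical.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def verify(enrolled, attempt, face_thresh=0.9, action_thresh=0.9):
    """Grant access only if BOTH the face embedding and the
    facial-action embedding match the enrolled template."""
    face_ok = cosine(enrolled["face"], attempt["face"]) >= face_thresh
    action_ok = cosine(enrolled["action"], attempt["action"]) >= action_thresh
    return face_ok and action_ok

# Toy 3-dimensional embeddings (real systems use hundreds of dimensions).
enrolled = {"face": [0.9, 0.1, 0.3], "action": [0.2, 0.8, 0.5]}
genuine  = {"face": [0.88, 0.12, 0.31], "action": [0.21, 0.79, 0.5]}
spoof    = {"face": [0.88, 0.12, 0.31], "action": [0.9, 0.1, 0.1]}  # right face, wrong gesture

print(verify(enrolled, genuine))  # True  – face and gesture both match
print(verify(enrolled, spoof))    # False – the face alone is not enough
```

The point of the `and` in `verify` is exactly the intentionality Lee describes: a photo, mask, or sleeping victim might reproduce the face embedding, but not the secret facial action.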
“We could build this tiny little device with a camera on it and that device could be deployed easily in a lot of different places,” Lee explained. “Wouldn’t it be great to know that even if you lose your car key, no one can steal your vehicle because they don’t know your secret facial action?”