Addressing the Deepfake Risk to Biometric Security: Expert Advice
A Hong Kong bank recently fell victim to an impersonation scam in which an employee was tricked into transferring $25.6 million to thieves after a video call with the bank's CFO and other colleagues. None of them were real people; all were deepfakes created with the help of artificial intelligence.
This incident illustrates how cybercriminals can use deepfakes to trick humans and commit fraud. It also raises concerns about the threats that deepfakes pose to biometric authentication systems.
The use of biometric markers to authenticate identities and access digital systems has exploded in the last decade and is expected to grow by more than 20% annually through 2030. Yet, as with every advance in cybersecurity, attackers are not far behind.
Anything that can be digitally sampled can be deepfaked: an image, a video, audio, or even text that mimics the sender's style and syntax. Equipped with any one of a half dozen widely available tools and training data such as YouTube videos, even an amateur can produce convincing deepfakes.
Several countermeasures offer protection against these attacks, most of which center on establishing whether the biometric marker comes from a real, live person.
Liveness testing techniques include analyzing facial movements or verifying 3D depth information to confirm a facial match, examining the movement and texture of the iris (optical), sensing electronic impulses (capacitive), and verifying a fingerprint below the skin surface (ultrasonic).
Liveness testing is the first line of defense against most kinds of deepfakes, but because it requires the user's participation, it can affect the user experience.
There are two types of liveness checks. Passive checks run in the background and verify identity without any input from the user; they create little friction but offer less protection. Active checks require the user to perform an action in real time, such as smiling or speaking, to confirm a live person is present; they offer more security at the cost of a modified user experience.
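As a minimal sketch of the active approach (the challenge names and the detector callback here are illustrative assumptions, not a real vendor API), the key idea is to randomize the prompted action so a pre-recorded or replayed deepfake cannot anticipate it:

```python
import random

# Hypothetical active-liveness challenge flow. The challenge list and the
# detector interface are assumptions for illustration, not a real SDK.
CHALLENGES = ["smile", "blink_twice", "turn_head_left", "read_digits"]

def issue_challenge(rng: random.Random) -> str:
    """Pick a randomized prompt so an attacker cannot pre-record a response."""
    return rng.choice(CHALLENGES)

def verify_active(frames: list, challenge: str, detector) -> bool:
    """Accept only if the detector confirms the prompted action was performed."""
    return detector(frames, challenge)
```

In practice the `detector` would be a computer-vision model analyzing the captured frames; the point of the design is that the server, not the client, chooses the challenge at verification time.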
Given these new threats, organizations must decide which assets warrant the higher level of security that active liveness testing provides and where passive checks suffice. Many regulatory and compliance standards already require liveness detection, and more are likely to follow as incidents such as the Hong Kong bank fraud come to light.
Combating deepfakes effectively requires a multi-layered approach that combines both kinds of checks: randomized active challenges where the risk justifies the friction, and passive verification running throughout the session.
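One way to layer the two checks (a sketch under assumed thresholds; `passive_score` and `active_check` are stand-ins, not real APIs) is to try the frictionless passive check first and escalate to an active challenge only when its confidence is low:

```python
# Illustrative passive-first escalation policy. The threshold and both
# check functions are hypothetical stand-ins for real detection models.
PASSIVE_THRESHOLD = 0.9  # assumed confidence cutoff, tuned per deployment

def passive_score(frames: list) -> float:
    """Stand-in for background texture/motion analysis, returning 0..1."""
    return 0.95 if len(frames) >= 10 else 0.4  # illustrative heuristic

def active_check(performed_action: str, challenge: str) -> bool:
    """Stand-in for verifying the prompted action was actually performed."""
    return performed_action == challenge

def verify_liveness(frames: list, performed_action: str, challenge: str) -> bool:
    # Frictionless path: accept when the passive signal is confident.
    if passive_score(frames) >= PASSIVE_THRESHOLD:
        return True
    # Escalation path: require the user to complete a real-time challenge.
    return active_check(performed_action, challenge)
```

This keeps the common case low-friction while reserving the stronger, more intrusive check for sessions the passive layer cannot confidently clear.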
In addition, true-depth camera functionality is needed to defeat presentation attacks, in which a photo, screen, or mask is held up to the camera, and to protect against the device manipulation used in injection attacks, in which synthetic video is fed directly into the capture stream.
It's important to remember that simply replacing passwords with biometric authentication is not a foolproof defense against identity attacks; biometrics must be part of a comprehensive identity and access management strategy that addresses transactional risk, fraud prevention, and spoofing attacks.
To effectively counteract the sophisticated threats posed by deepfake technologies, organizations must enhance their identity and access management systems with the latest advancements in detection and encryption technologies. This proactive approach will not only reinforce the security of biometric systems but also advance the overall resilience of digital infrastructures against emerging cyberthreats.
Prioritizing these strategies will be essential in protecting against identity theft and ensuring the long-term reliability of biometric authentication.