Biometric authentication is increasingly capturing the public's attention. Recent news, such as the discovery last fall that a Russian crime syndicate had amassed over 1 billion stolen passwords, highlights the vulnerabilities of current security systems and the urgent need for new security measures. There is growing agreement in both government and industry circles (often echoed in Hollywood) that biometric methods represent the most promising way forward. Apple's and Samsung's integration of fingerprint authentication into their devices is among the most prominent examples of biometric technology in use.
However, evaluating different biometric solutions presents a significant challenge, even for experts in the field, and even more so for the average consumer who desires security without added complexity. The biometrics industry recognizes that various factors are crucial in determining the viability of a biometric application, but it must develop improved methods that allow for a comprehensive and holistic evaluation. The initial step in creating a more effective comparison framework involves defining the elements of such an approach.

The false accept rate (FAR) measures the probability that a biometric system will mistakenly grant access to an unauthorized individual. This metric is typically the most highlighted statistic in corporate documents and media reports on biometric products.
There is occasional mention of the false reject rate (FRR), which indicates the likelihood that the system will wrongly deny access to an authorized user. The FRR is closely linked to the FAR, and adjusting the system to find a balance between these two rates is a matter of fine-tuning. Before delving deeper into the significance of FRR, it's important to address the disproportionate focus on FAR.
FAR is generally calculated by gathering biometric data from a large pool of individuals, then randomly selecting target individuals and comparing them against the rest of the database. This practice has resulted in the development of extensive datasets of fingerprints, irises, faces, and other biometric features, some of which are publicly available. Minimizing FAR while managing its trade-off with FRR is the focus of most research in the biometric field, and has led NIST and other agencies to organize competitions comparing different solutions.
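The cross-comparison procedure described above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual protocol: the templates, similarity function, and threshold below are toy placeholders standing in for real biometric data and matchers.

```python
import random

def estimate_far(templates, similarity, threshold, trials=10_000):
    """Estimate FAR by scoring randomly chosen impostor pairs.

    templates: list of enrolled biometric templates (one per person)
    similarity: function returning a match score for two templates
    threshold: scores at or above this value count as a (false) accept
    """
    false_accepts = 0
    for _ in range(trials):
        a, b = random.sample(templates, 2)  # two different enrolled people
        if similarity(a, b) >= threshold:
            false_accepts += 1
    return false_accepts / trials

# Toy demo: each "template" is a point on a line; similarity is negative distance.
random.seed(0)
people = [random.gauss(i * 10, 1.0) for i in range(50)]
far = estimate_far(people, lambda a, b: -abs(a - b), threshold=-2.0)
```

The same loop run over genuine pairs (two samples from the same person) yields the FRR, which is why the two rates should always be measured, and reported, together.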
When the dataset is very clean (e.g., good lighting for face recognition, low noise for voice, clear fingerprints), it provides an indication of the inherent uniqueness of the biometric. Fingerprints, for instance, possess a relatively high inherent uniqueness, which partly explains their widespread use in law enforcement. Yet, this high uniqueness can be counteracted by other factors within the overall system.
The selection of data used to report a system’s performance is largely subjective, except in public competitions. It requires an evaluation of the range and frequency of conditions under which the biometric system will be used.
Furthermore, for most industry products, it's nearly impossible to challenge the claimed accuracies through simple "black box" testing—claims of a 1 in 100,000 FAR cannot be verified by having a few colleagues attempt to access your phone. Consequently, when biometric systems are deployed in the real world, they are often evaluated directly (e.g., by bloggers) and indirectly (e.g., by non-adoption) based on other criteria.
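A quick calculation shows why a few colleagues cannot verify such a claim. Under the standard "rule of three" style bound, confirming that FAR is at most p requires on the order of 3/p impostor attempts with zero accepts. The sketch below assumes independent attempts; the function name is my own.

```python
import math

def trials_needed(claimed_far, confidence=0.95):
    """Zero-accept impostor attempts needed to support a FAR claim.

    If n independent impostor attempts all fail, the upper confidence
    limit on the true FAR is roughly -ln(1 - confidence) / n, so we need
    n >= -ln(1 - confidence) / claimed_far attempts to support the claim.
    """
    return math.ceil(-math.log(1.0 - confidence) / claimed_far)

n = trials_needed(1e-5)  # the 1-in-100,000 claim from the text: ~300,000 attempts
```

Three hundred thousand attempts is far beyond any informal "black box" test, which is exactly why real-world judgments fall back on other criteria.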
The false reject rate (FRR) is crucial for user adoption of a biometric system. No matter how secure the system is against unauthorized access, it will only be widely used if authorized users can access it successfully most of the time.
The FRR should always be reported alongside the FAR; otherwise, the FAR loses its relevance—it's not impressive to design a system that rejects everyone, including authorized users. Surprisingly, it's common to see only FAR reported, not just in company literature but also in media articles.
Like FAR, determining the FRR of a system is highly subjective and depends on the data selected to represent the conditions under which authentication attempts will occur.
There are several standard methods for evaluating the combination of FAR and FRR for a given system. Detection Error Tradeoff (DET) curves plot FRR against FAR, generated by gradually increasing the rejection threshold (see Figure 1 for an example of a DET plot for face recognition).
At lower rejection thresholds, the detection rate (allowing authorized users access) is higher (lower FRR), but the FAR may be relatively high.
Figure 1: Typical DET plot for face recognition
As the rejection threshold increases (becoming more restrictive), false accepts decrease, but at the cost of a lower detection rate (higher FRR). Other variations of this type of plot include ROC (Receiver Operating Characteristic) curves.
A commonly used metric from the DET curve is the Equal Error Rate (EER), the point where FAR and FRR are equal.
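The threshold sweep behind a DET curve, and the EER read off it, can be computed directly from lists of genuine and impostor match scores. This is a generic sketch using synthetic Gaussian scores, not the scoring method of any particular system.

```python
import random

def far_frr_at(threshold, genuine, impostor):
    """FAR and FRR at one accept threshold (accept if score >= threshold)."""
    far = sum(s >= threshold for s in impostor) / len(impostor)
    frr = sum(s < threshold for s in genuine) / len(genuine)
    return far, frr

def equal_error_rate(genuine, impostor):
    """Sweep every observed score as a threshold; return the (FAR, FRR)
    pair where the two rates are closest -- the empirical EER point."""
    return min(
        (far_frr_at(t, genuine, impostor) for t in sorted(set(genuine + impostor))),
        key=lambda pair: abs(pair[0] - pair[1]),
    )

random.seed(1)
genuine = [random.gauss(2.0, 1.0) for _ in range(1000)]    # authorized users
impostor = [random.gauss(-2.0, 1.0) for _ in range(1000)]  # impostor attempts
far, frr = equal_error_rate(genuine, impostor)
```

Plotting `far_frr_at` across the full sweep reproduces the DET curve of Figure 1; the EER is just one point on it, which is why it cannot substitute for the whole curve.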
While EER can sometimes provide a quick comparison point, it should not be the sole metric for comparing different biometric systems for several reasons. First, the EER often does not reflect the operating point at which the system is intended to function—systems are typically tuned to operate at lower FARs. Second, EER does not capture the other critical information necessary for a more holistic approach as advocated in this article.
FAR and FRR, as described, are laboratory measures of a biometric system's accuracy. What truly matters to users is the real-world likelihood of successful access and the effectiveness of thwarting impostor attacks.
Biometric systems are typically designed to have very low FARs. Therefore, a straightforward false accept attack, where random individuals attempt to authenticate using the biometric feature of an authorized user, is unlikely to succeed—a single impostor has a low probability of matching.
That single impostor is also unlikely to have thousands of impostor friends to increase the attack's probability. Additionally, most systems implement limits (such as the number of attempts or timeouts) that make it practically impossible to attempt thousands of tries.
Four-digit PINs operate on a similar principle—there are 10,000 combinations, making it theoretically unlikely for an impostor to guess the correct one quickly. In practice, however, a few commonly used PINs increase the likelihood of a successful attack beyond the theoretical 1 in 10,000.
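The gap between the theoretical and practical PIN attack can be made concrete. Analyses of leaked PIN datasets have found a handful of PINs covering a large share of users; the exact percentages below are illustrative assumptions, not measured data.

```python
# Illustrative (assumed) PIN popularity, loosely modeled on published
# analyses of leaked PIN datasets; the percentages are placeholders.
common_pins = {"1234": 0.10, "1111": 0.06, "0000": 0.02}

uniform_guess = 1 / 10_000                     # theoretical single-guess odds
best_single_guess = max(common_pins.values())  # informed attacker tries "1234"

# With a typical 3-attempt lockout, the informed attacker's success chance
# is the sum of the three most popular PINs' frequencies:
three_tries = sum(sorted(common_pins.values(), reverse=True)[:3])
```

Under these assumed frequencies, one informed guess beats the theoretical 1-in-10,000 rate by three orders of magnitude, which is the sense in which even a fairly tolerant biometric can improve on a PIN in practice.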
A more sophisticated impostor attack involves spoofing, where the attacker directly mimics the biometric feature of the authorized user. This is a likely method used by criminals to access someone's device. The specific spoofing technique varies by biometric. For instance, fingerprints can be lifted from device screens and recreated using materials like glue, gelatin, or Play-Doh. Face and iris recognition can be fooled by images, while voice recognition is susceptible to recordings.
A primary defense against spoofing is "liveness" testing, which varies by biometric. For face recognition, motion can be measured to confirm a 3-dimensional face. The challenge-response approach is also common—asking the user to perform a specific action to verify they are a live person, such as winking or speaking a particular phrase.
The downside of a challenge-response system is that it can become cumbersome, potentially reducing user adoption. Many users may feel uncomfortable winking at their device in public to gain access.
Another defense strategy is to require multiple biometrics, which increases the challenge for attackers by necessitating multiple spoofing methods. The downside is that it can be more burdensome for users, requiring multiple biometric verifications for each authentication.
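The security gain from requiring multiple biometrics can be quantified with a simple decision-level AND rule. Assuming the two matchers' errors are independent (an idealization real systems only approximate), the combined FAR is the product of the individual FARs, while the combined FRR compounds:

```python
def and_fusion(far1, frr1, far2, frr2):
    """Decision-level AND fusion of two independent biometric matchers.

    Access is granted only if BOTH matchers accept, so an impostor must
    fool both (FAR multiplies), while a genuine user is rejected if
    either matcher rejects (FRR compounds).
    """
    far = far1 * far2
    frr = 1 - (1 - frr1) * (1 - frr2)
    return far, frr

# e.g. face (1e-4 FAR, 3% FRR) combined with voice (1e-3 FAR, 5% FRR);
# the input rates are illustrative assumptions.
far, frr = and_fusion(1e-4, 0.03, 1e-3, 0.05)  # FAR 1e-7, FRR ~7.85%
```

The arithmetic makes the trade-off explicit: the impostor's odds drop by orders of magnitude, but the genuine user now fails if either matcher fails, which is exactly the added burden noted above.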
Each spoofing method has its pros and cons, including the availability of the biometric (e.g., fingerprints are left everywhere), the required fidelity (e.g., the quality of a voice recording), the effort needed to create a spoof (e.g., printing a face or iris image), and the likelihood of a successful counterattack (e.g., effectiveness of liveness tests).
All these factors contribute to the actual likelihood of a successful impostor attack. Importantly, these should be directly considered in the overall assessment of the biometric system and are often more relevant than the basic FAR typically cited.
This aspect is often overlooked by biometric system manufacturers but quickly noticed by the media, as seen with fingerprint sensors in Apple and Samsung phones, which were soon followed by reports of spoof attacks that allowed unauthorized access.
The measured false reject rate heavily depends on the data chosen to represent typical system usage. Unfortunately, this often fails to account for the broader range of real-world conditions. Every biometric has scenarios where authentication can be challenging or impossible.
For fingerprints, dirt and grease can significantly impact system accuracy. Lighting conditions can challenge face or other camera-based biometrics. Background noise complicates voice recognition. Measuring and reporting performance under ideal conditions that don't reflect real-world scenarios creates unrealistic expectations and leads to disappointment when the system underperforms.
For many biometric systems, initial enrollment is critical to performance. A poorly executed or incorrect enrollment can lead to poor results, even if the system is capable of high accuracy. Ensuring the enrollment process is as simple and intuitive as possible is essential.
Some biometrics benefit from adaptive enrollment, where the user's profile can improve over time. This can significantly enhance accuracy by expanding the range of covered environments and mitigating initial enrollment flaws.
The degree to which a biometric changes over time (known as permanence) strongly affects the true false reject rate. As users age, their biometric identity can change. Like environmental and enrollment concerns, this can be addressed through adaptive enrollment when possible. Universality is also crucial—does everyone possess this biometric trait? Fingerprints can be lost over time for people in occupations involving heavy hand use, and certain eye diseases can impair iris recognition.
Thus, the true FRR of a biometric system should account for the full range and expected frequency of environmental conditions, the range of possible enrollment quality, and the permanence and universality of the trait.
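One way to operationalize this is as an expectation over usage conditions: weight each condition's measured FRR by how often it actually occurs. The conditions and numbers below are hypothetical, chosen only to show the calculation.

```python
# Hypothetical per-condition reject rates for a face-recognition system,
# weighted by how often each condition occurs in real use.
# All numbers are illustrative assumptions, not measurements.
conditions = {
    # condition: (fraction of attempts, FRR under that condition)
    "good lighting, indoors": (0.60, 0.02),
    "low light":              (0.25, 0.10),
    "outdoors, backlit":      (0.15, 0.30),
}

expected_frr = sum(p * frr for p, frr in conditions.values())  # ~8.2%
```

A lab figure quoted only for the first row (2%) would badly understate the roughly 8% rate a real user would experience across all three conditions, which is the expectations gap described above.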
Beyond the inherent accuracy of the biometric system, other factors influence user adoption and should be considered in any assessment. Acceptability and ease-of-use are two critical factors.
Acceptability measures whether users will be willing to use the biometric. If it's embarrassing or invasive, users are unlikely to adopt it.
The required level of acceptability may vary by application—for example, accessing a phone in public requires a minimally invasive system, while boarding an airplane might warrant a more complex process.
Ease-of-use, along with speed, is crucial for areas where biometrics are expanding today. Mobile devices are convenience tools, and users will not adopt systems that complicate their use. This is evident in the low usage rates of basic phone protection with PINs, patterns, or passwords, which many find cumbersome and slow. Widespread biometric use will only happen if it's fast and easy, including both the initial setup and daily use.
Cost is a significant factor in consumer devices. Adding biometric-specific sensors can substantially increase the retail price. As a result, fingerprint sensors in mobile devices are typically found only in high-end phones and are of lower quality than dedicated fingerprint systems. They are also smaller and capture less of the fingerprint, leading to lower overall accuracy. Sensor longevity also impacts cost considerations.
Data security is crucial for creating a viable biometric solution and depends on the specific biometric used. A key difference among solutions is whether cloud access is required. Cloud-based biometrics can leverage greater computing power, potentially increasing accuracy, but at the cost of connectivity requirements, time delays, and data security concerns. Storing biometric information for many users in the cloud presents an attractive target for hackers.
In the case of biometric theft, revocability becomes necessary. Just as one can change a password after an account is compromised, some biometrics allow for replacement: a voice-based system, for instance, can simply enroll a new passphrase. Unfortunately, most biometrics do not permit easy replacement.
The concepts discussed in this holistic approach to biometric system assessment—including spoofing, permanence, universality, acceptability, and revocability—are well-known within the biometric research community and industry. However, they are often downplayed in corporate literature and media coverage and are not easily understood by end users.
If these issues are addressed, it is typically in the form of a table with simplistic relative rankings like Low, Medium, and High, with little or no explanation of how those values were determined. These factors should be considered explicitly and quantitatively from the outset.
With a better understanding of the real advantages and disadvantages of a specific biometric system, one can then evaluate the appropriate applications for that system. While high-security applications like banking are often the focus, there are applications across the spectrum of security needs. In all cases, it's important to consider what is currently in use and whether a biometric system adds value, rather than waiting for the perfect biometric system to emerge.
The PIN option for locking a phone provides a good example—it's rarely used and often one of a few common PINs. Replacing the PIN with a relatively tolerant, easy-to-use biometric can significantly enhance security in this context.
Another example is using biometrics as a second factor, which can provide much greater security without being entirely dependent on the biometric itself. In scenarios requiring high security, it may be acceptable to limit the biometric's use to favorable environmental conditions to achieve high accuracy.
It is crucial that the biometric industry drives the conversation toward the actual utility that a biometric system provides and helps set realistic expectations by presenting a holistic framework that fairly represents real-world operations.
Gordon Haupt has nearly 20 years of experience building and leading diverse engineering and operations teams. With a strong background in signal processing and computer vision, he has developed numerous innovative technology products. Gordon is the Senior Director of Vision Technologies at Sensory, focusing on bringing speech and face biometrics to consumer devices.
Todd Mozer has over 20 years of experience in machine learning, speech, and vision and holds dozens of patents in these and related fields. He is the Founder, Chairman, and CEO of Sensory.
Sensory is a leader in speech and vision technology for consumer products. Its award-winning TrulyHandsfree™ technology offers consumers a voice-controlled, completely hands-free experience, found in various popular mobile devices. Sensory has recently introduced its TrulySecure™ technology, which combines face recognition and speaker verification. More information is available at https://www.php.cn/link/530f49aa780e4bb3a605e586094008e7.