How do you measure the rate at which you fail to authenticate legitimate users in a biometric system?

  • School: Fairfax High, Fairfax
  • Course Title: APCS 103

Chapter 2 Questions & Answers:

1. Q1: What is the difference between verification and authentication of an identity?
A1: Authentication follows verification in ensuring that the person or party is who they claim to be. While verification asks for proof of identity, such as a driver's license, authentication runs that ID through a mode of authentication (e.g., black-lighting the ID, scanning a barcode) to ensure that the method of verification provided is valid and the person is therefore who they claim to be.

2. Q2: How do we measure the rate at which we fail to authenticate legitimate users in a biometric system?

3. Q3: What do we call the process in which the client authenticates to the …


Tags: Two-factor authentication, Biometrics

Luther Martin, in Cyber Security and IT Infrastructure Protection, 2014

Decision

A decision subsystem takes a comparison score, the output of a matching subsystem, and returns a binary yes or no decision from it. This decision indicates whether or not the matching subsystem's comparison resulted in a match. The value yes is returned if the comparison was probably a match; the value no is returned if the comparison was probably not a match. The symbol that is used to indicate a decision subsystem is shown in Figure 6.6.


Figure 6.6. Symbol used to indicate a decision subsystem.

To make a yes or no decision, a decision subsystem compares a comparison score against a parameter called a threshold. The threshold value represents how good a comparison needs to be to be considered a match. If the comparison score is less than or equal to the threshold value, the decision subsystem returns the value yes; if the comparison score is greater than the threshold, it returns the value no. Comparison scores that result in a yes or no response from a decision subsystem are shown in Figure 6.7. Comparison scores in the gray area of this illustration are close to the average value and result in a yes, whereas comparison scores outside the gray area are too far from the average value and result in a no. In Figure 6.7, the threshold value defines how far the gray area extends from the central average value. If the threshold is decreased, the gray area narrows, so fewer comparison scores result in a yes; if the threshold is increased, the gray area widens, so more comparison scores result in a yes.
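Under the convention just described (a comparison score measures distance from the expected average, and yes is returned when the score is at or below the threshold), the decision subsystem can be sketched as below; the function name and the example values are illustrative, not from the chapter:

```python
def decide(comparison_score: float, threshold: float) -> bool:
    """Return True ("yes") when the comparison score is within the threshold,
    False ("no") otherwise.  A lower score means a better match, following the
    chapter's distance-from-average convention."""
    return comparison_score <= threshold

# Tightening the threshold turns a borderline score from "yes" into "no":
assert decide(0.4, threshold=0.5) is True
assert decide(0.4, threshold=0.3) is False
```

This also makes the gray-area behavior concrete: decreasing the threshold shrinks the set of scores for which `decide` returns True.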


Figure 6.7. Comparison scores close to the average that result in a yes decision.

Errors may occur in any decision subsystem, and there are two general types. In the first case, a decision subsystem makes the incorrect decision of no instead of yes: a user is indeed who she claims to be, but large random errors occur in the data capture subsystem and cause her to be incorrectly rejected. This type of error results in the legitimate user Alice failing to authenticate as herself.

This class of error is known as a type-1 error by statisticians [4], a term that would almost certainly be a contender for an award for the least meaningful terminology ever invented if such an award existed. It was once called false rejection by biometrics researchers and vendors, a term that has more recently been replaced by the term false nonmatch. One way in which the accuracy of biometric systems is now typically quantified is by their false nonmatch rate (FNMR), a value that estimates the probability of the biometric system making a type-1 error in its decision subsystem.

In the second case, a decision subsystem incorrectly returns a yes instead of a no. Here, random errors occur that let a user be erroneously recognized as a different user. This might happen if the user Alice tries to authenticate as the user Bob, for example. This class of error is known as a type-2 error by statisticians [5]. It was once called false acceptance by biometrics researchers and vendors, a term that has more recently been replaced by the term false match. This leads to quantifying the accuracy of biometrics by their false match rate (FMR), a value that estimates the probability of the biometric system making a type-2 error.
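Given labeled trial outcomes, both rates can be estimated by simple counting. A minimal sketch (the function and data below are illustrative, not from the chapter):

```python
def error_rates(genuine_decisions, impostor_decisions):
    """Estimate (FNMR, FMR) from observed yes/no decisions.

    genuine_decisions:  decisions for attempts where the claimed and actual
                        identities are the same (True = "yes").
    impostor_decisions: decisions for attempts where they differ.
    """
    # FNMR: fraction of genuine attempts wrongly rejected (type-1 errors).
    fnmr = sum(1 for d in genuine_decisions if not d) / len(genuine_decisions)
    # FMR: fraction of impostor attempts wrongly accepted (type-2 errors).
    fmr = sum(1 for d in impostor_decisions if d) / len(impostor_decisions)
    return fnmr, fmr

fnmr, fmr = error_rates([True, True, False, True], [False, False, True, False])
# One rejection in four genuine attempts, one acceptance in four impostor attempts.
assert (fnmr, fmr) == (0.25, 0.25)
```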

For a particular biometric technology, it is impossible to simultaneously reduce both the FNMR and the FMR by adjusting the system's parameters, although improving the underlying technology can reduce both. If the parameters used in a matching subsystem are changed so that the FNMR decreases, the FMR must increase; if they are changed so that the FMR decreases, the FNMR must increase. This relationship follows from the nature of the statistical tests performed by the decision subsystem and is not limited to biometric systems: any system that makes a decision based on statistical data has the same property. The reason for this is shown in Figures 6.8 and 6.9.


Figure 6.8. Overlap in possible comparison scores for Alice and Bob.


Figure 6.9. Type-2 error that causes a false match.

Suppose that we have two users of a biometric system: Alice and Bob, whose comparison scores are distributed as shown in Figure 6.8. Note that the distributions of these values overlap, so that in the area where they overlap, the comparison score could have come from either Alice or Bob, but we cannot tell which. If the average values that we expect for Alice and Bob are far enough apart, the chances of this happening may become extremely low, but even then, errors large enough to cause a misidentification can creep into the data capture step, so even the rarest of errors remains possible.

Figure 6.9 shows how a false match can occur. Suppose that Bob uses our hypothetical biometric system but claims to be Alice when he does this, and the output of the matching subsystem is the point B that is shown in Figure 6.9. Because this point is close enough to the average that we expect from biometric data from Alice, the decision subsystem will erroneously decide that the biometric data that Bob presented is good enough to authenticate him as Alice. This is a false match, and it contributes to the FMR of the system.

Figure 6.10 shows how a false nonmatch can occur. Suppose that Alice uses our hypothetical biometric system and the output of the matching subsystem is the point A that is shown in Figure 6.10. Because this point is too far from the average that we expect when Alice uses the system, it is more likely to have come from someone else other than from Alice, and the decision subsystem will erroneously decide that the biometric data that Alice presented is probably not hers. This is a false nonmatch, and it contributes to the FNMR of the system.


Figure 6.10. Type-1 error that causes a false nonmatch.

Because the FNMR and FMR are related, the most meaningful way to represent the accuracy of a biometric system is probably by showing the relationship between the two error rates. The relationship between the two is known by the term receiver operating characteristic, or ROC, a term that originated in the study of the sensitivity of radio receivers as their operating parameters change. Figure 6.11 shows an ROC curve for a hypothetical biometric. Such an ROC curve assumes that the only way in which the error rates are changed is by changing the threshold value that is used in the decision subsystem. Note that this ROC curve indicates that when the FMR increases the FNMR decreases, and vice versa.
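The threshold sweep that generates an ROC curve can be sketched with synthetic scores. The Gaussian distance-score model below is an assumption for illustration only; the point is that FMR rises and FNMR falls as the threshold loosens:

```python
import random

random.seed(0)
# Hypothetical distance-style comparison scores: genuine attempts cluster
# near zero; impostor attempts sit far from the claimed user's average.
genuine = [abs(random.gauss(0.0, 1.0)) for _ in range(1000)]
impostor = [abs(random.gauss(3.0, 1.0)) for _ in range(1000)]

def roc_points(genuine, impostor, thresholds):
    """One (threshold, FMR, FNMR) point per threshold; yes iff score <= threshold."""
    points = []
    for t in thresholds:
        fnmr = sum(s > t for s in genuine) / len(genuine)    # genuine rejected
        fmr = sum(s <= t for s in impostor) / len(impostor)  # impostor accepted
        points.append((t, fmr, fnmr))
    return points

for t, fmr, fnmr in roc_points(genuine, impostor, [0.5, 1.5, 2.5]):
    print(f"threshold={t:.1f}  FMR={fmr:.3f}  FNMR={fnmr:.3f}")
```

Plotting the (FMR, FNMR) pairs over a fine grid of thresholds would trace out the curve of Figure 6.11.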


Figure 6.11. ROC for a hypothetical biometric system.

By adjusting the threshold that a decision subsystem uses, it is possible to make the FMR very low while allowing the FNMR to get very high, or to allow the FMR to get very high while making the FNMR very low. Between these two extreme cases lies the case where the FMR and the FNMR are the same. This point is sometimes called the equal error rate (EER) or crossover error rate (CER) and is often used to simplify discussions of error rates for biometric systems.
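One way to locate the EER numerically, assuming a hypothetical Gaussian score model (the distribution parameters are illustrative, not from the chapter), is to bisect on the difference between the two error rates, which changes sign exactly once as the threshold sweeps from strict to permissive:

```python
from statistics import NormalDist

# Hypothetical score model: genuine comparison scores ~ N(0.2, 0.05),
# impostor scores ~ N(0.5, 0.08); "yes" when score <= threshold.
genuine = NormalDist(mu=0.2, sigma=0.05)
impostor = NormalDist(mu=0.5, sigma=0.08)

def fnmr(t):  # genuine attempt rejected: score above threshold
    return 1.0 - genuine.cdf(t)

def fmr(t):   # impostor attempt accepted: score at or below threshold
    return impostor.cdf(t)

# fnmr(t) - fmr(t) decreases monotonically in t, so bisection finds the crossover.
lo, hi = 0.2, 0.5
for _ in range(60):
    mid = (lo + hi) / 2
    if fnmr(mid) > fmr(mid):
        lo = mid
    else:
        hi = mid
eer_threshold = (lo + hi) / 2
print(f"EER ≈ {fnmr(eer_threshold):.4f} at threshold {eer_threshold:.3f}")
```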

Though using a single value does indeed make it easier to compare the performance of different biometric systems, it can also be somewhat misleading. In high-security applications like those used by government or military organizations, keeping unauthorized users out may be much more important than the inconvenience caused by a high FNMR. In consumer applications, like ATMs, it may be more important to keep the FNMR low. This can help avoid the anger and accompanying support costs of dealing with customers who are incorrectly denied access to their accounts. In such situations, a low FNMR may be more important than the higher security that a lower FMR would provide. The error rates that are acceptable are strongly dependent on how the technology is being used, so be wary of trying to understand the performance of a biometric system by only considering the CER.

There is no theoretical way to accurately estimate the FMR and FNMR of biometric systems, so all estimates of these error rates need to be made from empirical data. Because testing can be expensive, the sample sizes used in such testing are often relatively small, so the results may not be representative of larger and more general populations. This is further complicated by the fact that some of the error rates that such testing attempts to estimate are fairly low. This means that human error from mislabeling data or other mistakes that occur during testing may make a bigger contribution to the measured error rates than the errors caused by a decision subsystem. It may be possible to create a biometric system that makes an error roughly only one time in 1 million operations, for example, but it is unrealistic to expect such high accuracy from the people who handle the data in an experiment that tries to estimate such an error rate. And because there are no standardized sample sizes and test conditions for estimating these error rates, there can be a wide range of reliability of error rate estimates. In one study [5], a biometric system that performed well in a laboratory setting when used by trained users ended up correctly identifying enrolled users only 51% of the time when it was tested in a pilot project under real-world conditions, perhaps inviting an unenviable comparison with a system that recognizes a person by his ability to flip a coin and have it come up heads. Because of these effects, estimates of error rates should be viewed with a healthy amount of skepticism, particularly when extremely low rates are claimed.
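The difficulty of substantiating very low error-rate claims can be made concrete with the "rule of three", a standard statistical rule of thumb (not from the chapter): if zero errors are observed in n independent trials, the 95% upper confidence bound on the error rate is roughly 3/n.

```python
import math

def rule_of_three_upper_bound(n_trials: int) -> float:
    """95% upper confidence bound on an error rate when zero errors
    were observed in n_trials independent attempts ("rule of three")."""
    return 3.0 / n_trials

def trials_needed(target_rate: float) -> int:
    """Roughly how many error-free trials are needed before a claimed
    error rate can be bounded at target_rate with 95% confidence."""
    return math.ceil(3.0 / target_rate)

# Substantiating "one error in 1 million operations" takes about 3 million
# clean trials, which is why such claims deserve skepticism:
print(trials_needed(1e-6))
```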


URL: https://www.sciencedirect.com/science/article/pii/B9780124166813000069

Diels-Alderase Catalyzing the Cyclization Step in the Biosynthesis of Spinosyn A

Luis R. Domingo, ... Ponnadurai Ramasami, in Emerging Trends in Computational Biology, Bioinformatics, and Systems Biology, 2015

1 Conformational analysis of macrocyclic lactone (4)

Macrocyclic lactone (4), obtained from the retro-cyclization of compound (5), was first submitted to a preliminary minimization in MacroModel [1] using the Merck molecular force field (MMFF) [2] with water as solvent. Starting from the MMFF-optimized structure, a conformational analysis was performed using a mixed torsional/large-scale low-mode sampling (LLMOD) analysis [3], with the minimizer set to the recommended 1.0 gradient convergence threshold and 100,000 steps. The energy window used to select structures was 50 kJ/mol. The chirality of all the carbon atoms was the only variable fixed in this analysis. Of the 12,777 structures found in this conformational analysis, only 7,483 converged under the imposed criteria. Examining the lowest-energy structure from this conformational analysis (see Figure S1), we can see that the C4=C5-C6=C7 s-cis conformation of the unsaturated ester framework and the s-trans conformation of the unsaturated ketone framework present in macrocyclic lactone (4) are preserved.


Figure S1. MMFF minimum conformation of macrocyclic lactone (4).

However, when the MMFF minimum conformation was reoptimized at the MPWB1K/6-31G* computational level, it was found to lie 2.3 kcal/mol higher in energy than macrocyclic lactone (4).


URL: https://www.sciencedirect.com/science/article/pii/B9780128025086000090

An introduction to deep learning applications in biometric recognition

Akash Dhiman, ... Deepak Kumar Sharma, in Trends in Deep Learning Methodologies, 2021

2.2.1.4 Iris recognition

The iris is the colored annular ring that surrounds the pupil. The color, texture, and pattern of each person's iris are as unique as a fingerprint. In this case as well, identical twins have different iris patterns, just like fingerprints. The iris is extremely difficult to surgically spoof. It is also secure because artificial irises can easily be detected. Iris recognition has been integrated into large-scale applications such as the Iris Recognition Immigration System [37]. It has good FMR but its FNMR can be very high.

Feature-based machine learning is currently employed for iris recognition in the majority of systems; nonetheless, many newer approaches have been proposed in recent research with the potential to augment or replace feature-based recognition. One such augmentation improves feature collection during iris segmentation [38]. The paper proposes two modified CNNs. The first is the hierarchical convolutional neural network (HCNN), in which more than one input is created from a single input image: if the original image is 256 × 256, three patches of 256 × 256, 128 × 128, and 64 × 64 pixels are taken, all centered on the same central pixel. A CNN is then run on each of these patches and the final outputs are fused. This captures finer local detail, but the drawback is repeated computation for the pixels near the center, as illustrated in Fig. 1.11. The second is a multiscaled fully convolutional network (MFCN) [38]. An MFCN is a special type of CNN that contains no fully connected layers; instead, connections are made in stages, which allows upsampling layers to be inserted in between, so the data can be handled faster and more accurately. The CNN layers connected in segments calculate their own respective outputs, forwarding them both to the next segment and to a fusion layer, which combines the data passed to it from each segment. The end result is faster and more accurate classification, especially for the dense classification expected from a biometric recognition system. Another method [39] utilizes deep sparse filtering to perform the recognition: a deep sparse filtering network learns the number of features rather than finding clusters among different input data, which avoids hyperparameter optimization and thus converges to a solution quickly.
A simple modification of a CNN is to create a DCNN with many layers, dropout learning, and small filter sizes, as discussed in [40]. This gives DeepIrisNet, which currently performs as well as state-of-the-art algorithms while also generalizing to new data.
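The HCNN-style center-aligned multi-scale cropping described above can be sketched as follows. The patch sizes follow the 256/128/64 example; the plain list-of-lists image representation and function name are illustrative:

```python
def center_patches(image, sizes=(256, 128, 64)):
    """Crop square patches of several sizes from a 2D image (list of rows),
    all centered on the same central pixel, as in the HCNN multi-scale input.
    Each patch would then be fed to its own CNN and the outputs fused."""
    h, w = len(image), len(image[0])
    cy, cx = h // 2, w // 2
    patches = []
    for s in sizes:
        half = s // 2
        patches.append([row[cx - half:cx + half] for row in image[cy - half:cy + half]])
    return patches

img = [[0] * 256 for _ in range(256)]
for p in center_patches(img):
    print(len(p), len(p[0]))  # 256 256, 128 128, 64 64
```

Note how every patch contains the central pixels, which is exactly the duplicated computation the MFCN variant is designed to avoid.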


Figure 1.11. Hierarchical convolutional neural network (HCNN) for iris segmentation.


URL: https://www.sciencedirect.com/science/article/pii/B9780128222263000015

Face recognition: Past, present and future (a review)

Murat Taskiran, ... Cigdem Eroglu Erdem, in Digital Signal Processing, 2020

Abbreviations

LBP: Local Binary Patterns, FR: Face Recognition, R-CNN: Region with Convolutional Neural Network, SSD: Single Shot Detector, CLM: Constrained Local Model, FMR: False Match Rate, FAR: False Accept Rate, FNMR: False Non-Match Rate, FRR: False Reject Rate, GAR: Genuine Accept Rate, TAR: True Acceptance Rate, EER: Equal Error Rate, ROC: Receiver Operating Characteristics, AUROC: Area under the Receiver Operating Characteristics, V: Various, N: No, Y: Yes, FE: Facial Expression, IL: Illuminations, PO: Head Poses, OC: Occlusions, TI: Recording Times, AC: Accessories, ET: Ethnicities, CMC: Cumulative Match Characteristic, PCA: Principal Component Analysis, 2DPCA: Two Dimensional Principal Component Analysis, LDA: Linear Discriminant Analysis, SVDU-IPCA: Singular Value Decomposition Updating based on Incremental Principal Component Analysis, DiaPCA: Diagonal Principal Component Analysis, ICA: Independent Component Analysis, IGFs: Independent Gabor features, PRM: Probabilistic Reasoning Model, ORL: Olivetti Research Laboratory, SVM: Support Vector Machine, KPCA: Kernel Principal Component Analysis, LLDA: Locally Linear Discriminant Analysis, KLDA: Kernel Linear Discriminant Analysis, LLE: Locally Linear Embedding, DNN: Deep Neural Networks, CNN: Convolutional Neural Network, 2D: 2-Dimensional, 3D: 3-Dimensional, GAN: Generative Adversarial Networks, SAE: Stacked Autoencoders, SRC: Sparse Representation-based Classifier, AAM: Active Appearance Model, NNC: Nearest Neighbor Classifier, EBGM: Elastic Bunch Graph Matching, FRVT: Face Recognition Vendor Test, HOG: Histogram of Oriented Gradients, Co-HOG: Co-occurrence of Oriented Gradients, SIFT: Scale-invariant Feature Transform, LGOBP: Local Gradient Orientation Binary Pattern, GSEE: Generalized Survival Exponential Entropy, MLBP: Multivariate Local Binary Patterns, CS-LBP: Center Symmetric Local Binary Patterns, LDB: Local Difference Binary, gSIM: Genetic Shape-Illumination Manifold, SRCNN: Super-Resolution Convolutional Neural Network, CAR: Coupling Alignments with Recognition, Avg-Feature: Feature averaging, MSM: Mutual subspace method, MMS: Manifold to manifold distance, AHM: Affine hull method, GMM: Gaussian Mixture Model, DARG: Riemannian Manifold of Gaussian Distributions, EPCC: Extended Polyhedral Conic Classifier, DMK: Deep Match Kernels, SFDL: Simultaneous Feature and Dictionary Learning, D-SFDL: Deep Simultaneous Feature and Dictionary Learning, LVP: Local Vector Pattern, KNN: K-Nearest Neighbor, SANP: Sparse Approximated Nearest Point, V2S: Video-to-Still, S2V: Still-to-Video, PSCL: Point-to-Set Correlation Learning, TBE-CNN: Trunk-Branch Ensemble Convolutional Neural Network, PaSC: Point-and-Shoot Camera, AU: Action Units, ARMA: Auto-regressive Moving Average, EVLBP: Extended Volume Local Binary Patterns, MoBo: Motion of Body, DLRC: Dual Linear Regression Classification, ANN: Artificial Neural Network, LSTM: Long-Short Term Memory.


URL: https://www.sciencedirect.com/science/article/pii/S1051200420301548

From classical methods to animal biometrics: A review on cattle identification and tracking

Ali Ismail Awad, in Computers and Electronics in Agriculture, 2016

3.2 Biometrics system evaluation

Automated Biometrics Identification Systems (ABISs) have replaced human experts in human recognition by applying a computerized approach. An ABIS consists of two phases: enrollment and identification. The enrollment phase registers an individual's identity in a database for future use, while the identification phase assigns an identity through the matching step, using the biometric samples presented by the user (Jain and Nandakumar, 2012).

Fig. 4 shows a generic architecture of an ABIS system for animals. The generic animal biometrics identification system works in the same way as the human biometrics identification system. In the enrollment phase, a biometric identifier is presented, and a feature vector is constructed. Then the extracted feature vector is further manipulated and stored as a biometric template in the database. The identification phase consists of the same enrollment procedure, with additional matching and decision steps. A signal processing point of view provides a deeper look into the ABIS components, including identifier sensing, transmission to the processing machine, identifier processing, identifier classification, matching, and storage (Luis-Garcia et al., 2003).


Fig. 4. A block diagram of a complete biometric-based animal identification system (very similar to one used to identify people). The components of the enrollment phase and the identification phase are emphasized in the block diagram (Awad et al., 2013a).

An ABIS can suffer from several errors that deteriorate the system’s performance. These errors can occur at the sensor level, the sample processing phase, the feature extraction phase, or the matching phase. Failure-to-Detect (FtD) error is a sensor error that occurs when a biometric sample is presented to the sensor, but the sensor fails to detect it owing to a hardware problem. The situation is a Failure-to-Capture (FtC) error if the sensor succeeds in detecting a biometric sample, but fails to capture it owing to bad user behavior. A noisy biometric sample leads to a failure in feature extraction, and hence, a Failure-to-Process (FtP) error. These three error types can be combined into one major error class that is called a Failure-to-Acquire (FtA) error. Failure-to-Enroll (FtE) represents the proportion of users that cannot be successfully enrolled in the system due to a failure of template creation (Maltoni et al., 2009; Schouten and Jacobs, 2009; Awad and Hassanien, 2014).

The behavior of the system matcher has a high impact on the system’s performance. The matcher can produce two types of errors, the result of inter-user similarity or intra-user variations (Jain et al., 2011). These two errors are called the False Match Rate (FMR) and False Non-Match Rate (FNMR). Some alternative error terminologies can be found in the literature, including the False Acceptance Rate (FAR) and the False Rejection Rate (FRR), for FMR and FNMR, respectively. FAR and FRR are common notions for measuring the performance of a verification system (Schouten and Jacobs, 2009; Maltoni et al., 2009; Unar et al., 2014).

The evaluation of any biometrics system is grouped into three categories: technology evaluation, scenario evaluation, and operational evaluation (Cappelli et al., 2006). Technology evaluation is a repeatable operation that targets specific parts (steps) of the biometrics system, and it uses previously collected biometric data (or databases). Scenario evaluation measures the performance of the entire biometrics system for a particular application in a controlled environment. Operational evaluation is similar to the scenario case, but it is conducted at the actual site of operation (Maltoni et al., 2009; Dunstone and Yager, 2008).

In order to mathematically estimate the FMR, FNMR, and EER, suppose one biometric template is denoted by T, and one presented sample (input) is denoted by I. The similarity score s between the template and the input is measured by the function S(I,T). The hard decision is made according to a similarity threshold h (Awad and Hassanien, 2014).

FMR is the rate at which the decision is made that I matches T, while in fact I and T come from two different individuals. This means that the biometrics system accepts what should be rejected.

(1) FMR(h) = 1 − ∫_{−∞}^{h} p_n(s) ds = ∫_{h}^{∞} p_n(s) ds

where p_n(s) is the non-match score distribution between two samples as a function of s.

FNMR is the rate at which the decision is made that I does not match T, while in fact I and T originate from the same individual. This means that the biometrics system rejects what should be accepted.

(2) FNMR(h) = 1 − ∫_{h}^{∞} p_m(s) ds = ∫_{−∞}^{h} p_m(s) ds

where p_m(s) is the match score distribution between two samples as a function of s.

The Equal Error Rate (EER) is defined as the common value of FMR and FNMR at the threshold h_EER where the two error rates are identical (Maltoni et al., 2009; Schouten and Jacobs, 2009; Egawa et al., 2012).

(3) EER = FMR(h_EER) = FNMR(h_EER).

The similarity threshold (h) should be chosen carefully in the system design phase according to the security level and the system's sensitivity, and it should achieve a trade-off between FMR and FNMR errors (Awad and Hassanien, 2014). FMR and FNMR are not objective measurements because they are influenced by the threshold selected for the system's application; however, they can still be used to measure the performance of specific systems. The value of EER can be used as a good indicator of the system's performance and can be selected through the Receiver Operating Characteristic (ROC) or the Detection Error Trade-off (DET) curve (Maltoni et al., 2009; Noviyanto and Arymurthy, 2013).
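Eqs. (1) and (2) can be evaluated directly for an assumed pair of score distributions. The Gaussian parameters below are hypothetical, chosen only to illustrate the integrals; here s is a similarity score, so higher means more similar:

```python
from statistics import NormalDist

# Hypothetical similarity-score model: non-match scores p_n ~ N(0.3, 0.1),
# match scores p_m ~ N(0.7, 0.1); "match" is decided when s >= h.
p_n = NormalDist(mu=0.3, sigma=0.1)
p_m = NormalDist(mu=0.7, sigma=0.1)

def FMR(h):   # Eq. (1): mass of the non-match distribution at or above h
    return 1.0 - p_n.cdf(h)

def FNMR(h):  # Eq. (2): mass of the match distribution below h
    return p_m.cdf(h)

# With symmetric distributions, the EER threshold sits midway between the means.
h_eer = 0.5
print(f"FMR = {FMR(h_eer):.4f}, FNMR = {FNMR(h_eer):.4f}")
```

Raising h above 0.5 trades FMR down for FNMR up, and vice versa, which is the trade-off the threshold choice must balance.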

The matching module in a biometrics identification mode, with (1:N) matching operations, produces two types of errors, namely the False Negative Identification-error Rate (FNIR) and the False Positive Identification-error Rate (FPIR). The FNIR and FPIR can be computed in the same way as the FMR and FNMR, and they can be derived from the FMR and FNMR via some simplifications (Maltoni et al., 2009; Dunstone and Yager, 2008). The size of the evaluation database, the distribution of the biometric samples inside the database, and the matcher are all factors involved in measuring the identification performance.


URL: https://www.sciencedirect.com/science/article/pii/S0168169916300837

PCG signals for biometric authentication systems: An in-depth review

El-Sayed A. El-Dahshan, ... Abdel-Badeeh M. Salem, in Computer Science Review, 2021

6 Performance measures

Testing the performance of any biometric system is of either the open-set or the closed-set type. In closed-set testing, the enrolled individuals are expected to be the only ones accessing the system, although this can hardly be guaranteed in practice. Open-set testing focuses on the presence of unknown subjects.

Open-set testing is applied by plotting the probability distributions of matching scores for the authorized individual and the impostor. Heart-sound biometric identity can be determined by two main kinds of system: identification and verification. A biometric identification model can be viewed as matching a set of feature vectors against the database templates, returning the nearest match when the distance to the nearest template is low.

A biometric identification system generates an identification error if the assigned class is not the true one. An identification system can also be evaluated with the Cumulative Match Characteristic (CMC) curve, which plots the cumulative recognition rate as a function of rank. When closed-set testing is applied, there are no score distributions to evaluate. In heart-sound biometric identification, the Correct Recognition Rate (CRR) is the most common metric. A biometric identity verification system functions as a binary classifier: it compares the match score with a given threshold, which is determined by the context of the specific system, and the accuracy is closely tied to the chosen threshold.
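A CMC curve can be computed by counting, for each identification attempt, the rank at which the true identity appeared in the sorted candidate list. This sketch is illustrative, not code from the paper; the ranks are hypothetical:

```python
def cmc_curve(ranks, max_rank):
    """Cumulative Match Characteristic: fraction of identification attempts
    whose correct identity appeared at rank <= k, for k = 1..max_rank.
    `ranks` holds the 1-based rank of the true identity for each probe."""
    n = len(ranks)
    return [sum(r <= k for r in ranks) / n for k in range(1, max_rank + 1)]

# Five hypothetical probes whose true identities ranked 1, 1, 2, 3, and 5:
print(cmc_curve([1, 1, 2, 3, 5], max_rank=5))
```

The rank-1 value of the curve is the closed-set recognition rate (the CRR mentioned above), and the curve is non-decreasing in k by construction.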

Two main types of error can be made by the binary classifier. The first is the false match error, which happens when the system accepts an identity claim although the presented sample and the stored template come from different people. The second is the false non-match error, in which the system rejects the identity claim even though the sample matches the template stored in the model. Depending on how the biometric system operates, the importance of each error in the operational context varies: in environments that demand high security, a false match error can be critical, while a false non-match error could be tolerated. A threshold-independent approach is also needed to measure the performance of heart-sound identification models, since the target application cannot be known in advance. The most commonly used error measures in verification are the Equal Error Rate (EER), False Match Rate (FMR), and False Non-Match Rate (FNMR), as shown in Fig. 12.

One way to evaluate the performance of a PCG biometric system is to plot the detection error tradeoff (DET) curve, which relates the FMR to the FNMR. DET curves show the system's performance at low FMR or low FNMR, and can be read as a trade-off between security and usability. On one hand, a system with a low FMR is highly secure but can produce a great number of non-matches, and may ask the user for more than one authentication step. On the other hand, a system with a low FNMR is more permissive and tolerant, producing more false match errors and accepting more unauthorized users. The correct balance between the two measures, and the appropriate intermediate level of security, is application dependent. Table 6 surveys the work of the last ten years on the PCG signal as a biometric: for each paper it lists the year and authors, data sets used, pre-processing, segmentation techniques, feature extraction, classification methods, and results.


Fig. 12. The most commonly used performance measure in heart sounds as a biometric.

Fig. 13 shows the parameters that affect heart-sound biometric performance. It is characterized by six main causes (data acquisition, pre-processing, segmentation, feature extraction, classification, and measurement evaluation) and 24 secondary causes, divided into 2 for data acquisition, 5 for pre-processing, 6 for segmentation, 5 for feature extraction, 4 for classification, and 2 for measurements.

The main causes were placed in the upper or lower zone of the fishbone diagram according to their dependencies: each phase in the fishbone depends on the previous one (for example, feature extraction and segmentation depend on the preceding pre-processing and data acquisition phases). The same principle was followed for the secondary causes; for example, segmentation depends on its six secondary causes, and likewise for the other main causes. Together, these causes lead to the goal of using PCG as a biometric.

Table 6. Survey of the most common techniques used in the PCG Identification.

Year & Authors | DataSet | Pre-processing | Segmentation | Feature extraction | Classification | Results
2007
Beritelli et al. [19]
Heart songs
20 people
Autocorrelation and hamming window z-chirp CZT ED FRR = 5.0% and FAR = 2.2%
2008
Phua et al. [20]
10 people
1000 HS
Energy thresholding Framing based on STFT
FL = 256, FS = 256
LFBC VQ
GMM
GMM was higher than VQ with 60 reaches 96%
2008
Beritelli et al. [21]
70 people Autocorrelation and hamming window z-chirp CZT applied on each sub-bands (S1–S2) ED EER = 9%
2009
Beritelli et al. [22]
50 people Low pass filter Autocorrelation and hamming window 13 coefficients from MFCC + FSR ED EER < 9%
EER = 8.70%
2009
Beritelli et al. [23]
40 people Low pass filter Autocorrelation and hamming window 13 coefficients from MFCC + FSR ED EER = 5%
2010
Fateman et al.
[24,25]
21
subjects
Wavelets using db5 Framing based onSTFT
FL = 250, 500, 1000 ms
MFCC +
LDA
ED The Distance Threshold = 6,8,10 A = 100%
2010
Beritelli et al. [26]
165
People
A Tool called Sfbcep that performs filter-bank cepstral analysis GMM EER = 13.70%
2010
Beritelli et al. [27]
147
People
A Tool called Sfbcep that performs filter-bank cepstral analysis GMM EER = 15.53%
2010
Tao et al. [28]
5–100 people Autocorrelation and hamming window Fusion between cycle, power, frequency, and drawing features Similarity distance A Close to 99%
2010
Huy et al. [29]
52 users Autocorrelation and hamming window 8 feature sets + RFE for feature selection + First experiment: using 8 feature sets + SVM without selection. Second experiment: using 8 feature sets + SVM with RFE selection Two experiments were applied 1st experiment A was over 80% for GMM and LFCC features
2nd experiment A was over 90% for GMM features
2010
Guo et al. [30]
160 heart sounds from 80 subjects LPCC WNN +
HMM
Better than GMM
2010
Jasper et al. [31]
10 Low pass filter DWT decomposition + selecting appropriate bands + Shannon energy Template matching 98.67% with Shannon energy
77.33% without Shannon energy
2011
Cheng Xie et al. [32]
10 ICC + Independent sub-band function Similarity distance A = 85.7%, EAR < 7%,
RER < 10%
2011
Zhao et al. [33]
30 Normalization hamming window MFCC VQ A = 100%
2012
Cheng Xie [34]
300 heart sounds Wavelet Family LBFC Similarity distance Verification: 12 heart sound signals for train, 12 heart sound signals for test A = 100%
Identification: EAR < 1%–8%,
ERR < 3%, A = 99%
2012
Rasha Wahid et al. [35]
80 heart sound samples FFT using hamming window FL = 256 ms
STFT
FL = 256 ms
FEAL1: MFCC + DCT
FEAL2: Spectral magnitude + DCT
GMM FEAL1: A = 100% for 7 samples FEAL2: A = 100% for 6 samples
2012
Chen W et al. [36]
Wavelet transform MFCC
PCA
A could reach 90%
2012
Karmakar et al. [37]
Low pass filter Wavelet and windowed 2nd level coefficients MLP 96.178%
2013
Zhong L et al. [38]
100 heart sounds
From 50 people
Wavelet
transform
LPCC
MFCC
GMM LFCC is more suitable than MFCC
2013
Spadaccini et al. [39]
HSCT-11
206 people
Low pass filter Cross-correlation windowing
Computing the variance of each segment
Structural system: MFCC + FSR
Statistical System:
LFCC + FSR
Structural system: Template matching
Statistical System:
GMM
Structural system EER = 36.86%
Statistical system
EER = 13.66%
2013
Zhao et al. [40]
40 Subjects
280 heart sounds
DWT using db5 family Hamming
Hanning
Rectangular
Hamming was the best
FS (Fourier Spectrum)
MS (Marginal Spectrum)
VQ For FS A = 84.93%
For MS A = 94.16%
2013
Girish et al. [41]
10
4000 PCG
Samples
Normalization
Low pass filter
Autocorrelation + segmentation using thresholding LFBCC
Wavelet decomposition using db2
MLP-ANN LFBCC with A = 89.68%
Wavelet with A = 90.52%
2014
Tan et al. [42]
15
Subjects
Low pass filter ZCR + STA
“zero-crossing
rate and short-term amplitude”.
MFCC
DCT
KSRC
SVM
KNN
SRC
A = 85.45%
A = 84.87%
A = 84.57%
A = 78.78%
2014
Abo el zahad et al. [43]
HSCT-11
17
Subjects
DWT using db5
+ Thresholding of wavelet coefficients
Hamming window MFCC, BFCC, LFCC, DW-MFCC + Fusion between them using CCA LDA + GMM with kmeans with a decision-based Bayes theory A = 94.4%,A = 94.325%,A = 93.7%
A = 95.12%, A of parallel fusion was the best between MFCC + DW-MFCC features A = 99.5%
2014
Swati et al. [44]
30 subjects High pass filter Framing using Hamming window MFCC
DCT
SVM A = 96%
2015
S. Bindu et al. [45]
TVD Signal Energy envelope Template matching
2015
Abo el zahad
et al. [46,47]
HSCT-11
206
people
DWT using db5 and 4 thresholding techniques Framing using hamming window
FS = 1000 ms,
FS = 500 ms
WPCC using linear and non-linear filters + LDA
Based on different wavelets the best was a demy
Bayes The best accuracy achieved using WPCC with non-linear filtering reaching
A = 91.05%
2016
Abo el zahad
et al. [48]
HSCT-11
206 people
Bio-Sec
21 people
DWT using db5 and 4 thresholding techniques Framing using hamming window
FS = 1000 ms,
FS = 500 ms
MFCC, LFCC, NLFCC, WPCC
+ LDA
Based on different wavelets the best was demy
Bayes HSCT-11 database A = 91.61%, A = 91.15, A = 92.51 and 90.26%
Bio-Sec database A = 97.31%,
A = 96.94%,A = 97.02% andA = 98.05%
2016
Abo el zahad
et al. [49]
HSCT-11
206 people
Bio-Sec
21 people
DWT using db5 and 4 thresholding techniques Framing using hamming window
FS = 1000 ms,
FS = 500 ms
MFCC, LFCC, MMFCC, WPCC
+ LDA
Based on different wavelets the best was demy
Bayes HSCT-11 database A = 91.15%, A = 91.61, A = 92.82 and 90.26%
Bio-Sec database A = 96.94%,
A = 97.31%,A = 98.57% and A = 97.02%
2017
T.E Chen et al. [50]
16 people
Total of 616 HS
Heart sound activity detection based on SEE MFCC + kmeans DNN,
KNN,
LR
SVM
GMM
A = 91.12%
A = 78.11%
A = 87.57%
A = 90.53%
A = 86.98%
2018
TG Meitei et al. [51]
Wavelets ED, GMM, FSR, and VQ
2019
Fahad et al. [52]
50 DWT Hilbert modeling AR burg modeling Bagged decision Tree A = 86.7%
2019
El-dahshan et al. [53]
60 from HSCTI
50 from PASCAL
MRD-MRR Framing and windowing, Shannon energy envelope MS_DWT RF
ANN
SVM
KNN
A = 100% using SVM with (Db9) on 60 Subjects
A = 100% using ANN with (Db10) on 50 Subjects
2020
Cheng, X
et al. [54]
80 HS
from 40 subjects
IMF + multiscale dispersion entropy LR
HSMM + FR for reduction
ED 96.08%


URL: https://www.sciencedirect.com/science/article/pii/S1574013721000605

A survey on touch dynamics authentication in mobile devices

Pin Shen Teh, ... Ke Chen, in Computers & Security, 2016

2.5.1 Verification accuracy

The metrics that are commonly used to evaluate the verification accuracy of a biometrics authentication method are the false rejection rate (FRR), false acceptance rate (FAR) and equal error rate (EER). The relationship among these metrics is shown in Fig. 5 and their definitions are given below.


Fig. 5. The relationship between the FRR, FAR, and EER.

2.5.1.1 False rejection rate (FRR)

This is the percentage ratio of the number of legitimate users who are falsely rejected against the total number of legitimate user trials. A lower FRR value indicates fewer legitimate users being falsely rejected, and hence a higher level of system usability. FRR is also referred to as the false alarm rate, false negative rate, false non-match rate, or Type I error.

2.5.1.2 False acceptance rate (FAR)

This is the percentage ratio of the number of illegitimate users who are falsely accepted against the total number of illegitimate user trials. Again, a lower FAR value indicates fewer illegitimate users being falsely accepted, which corresponds to a higher security level. FAR is also referred to as the miss alarm rate, false positive rate, false match rate, or Type II error.

2.5.1.3 Equal error rate (EER)

EER is a single-number performance metric, commonly used to measure and compare the overall accuracy of different biometrics authentication methods. It is sometimes also referred to as the crossover error rate (CER). EER can be obtained by finding the interception point of two graphs, one for FRR and the other for FAR. Typically, the lower the FRR and FAR values, the lower the EER value, which in turn indicates better accuracy of a biometrics authentication method. However, FRR and FAR are negatively correlated, so it is not possible to lower both at the same time. Therefore, in real-life applications, FRR and FAR are usually adjusted and determined based on the security and usability requirements of the application. In some literature, the term "accuracy", rather than EER, is used as the accuracy performance metric. It is worth noting that "accuracy" and EER convey the same information: "accuracy" is defined as the inverse of EER (100-EER), so a higher "accuracy" value indicates a better accuracy performance of a biometrics authentication method.

The accuracy performance can also be graphically visualized by using the receiver operating characteristic (ROC) curve as shown in Fig. 6. This graph is obtained by plotting genuine acceptance rate (GAR) against FAR at different matching threshold values. GAR is the percentage ratio between the correctly accepted legitimate users against the total number of legitimate user trials. It is also referred to as the inverse of FRR (100-FRR), true positive rate, or true match rate. A larger area under the curve (nearer the curve towards the top left corner of the graph) indicates a better performance.
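The FAR/FRR crossover that defines the EER can be located with a simple threshold sweep. The following is a minimal sketch under assumed conventions (higher score = more likely genuine); the score lists and the far_frr/approx_eer helpers are illustrative, not from the surveyed work:

```python
# Sketch: locating an approximate EER by sweeping the decision threshold
# until FAR and FRR cross. All data here is made up for illustration.

def far_frr(genuine, impostor, t):
    """FAR: illegitimate trials accepted; FRR: legitimate trials rejected."""
    far = sum(s >= t for s in impostor) / len(impostor)
    frr = sum(s < t for s in genuine) / len(genuine)
    return far, frr

def approx_eer(genuine, impostor, steps=1000):
    """Scan thresholds over the score range; return (threshold, EER)."""
    lo, hi = min(genuine + impostor), max(genuine + impostor)
    best_t, best_gap = lo, float("inf")
    for i in range(steps + 1):
        t = lo + (hi - lo) * i / steps
        far, frr = far_frr(genuine, impostor, t)
        if abs(far - frr) < best_gap:
            best_gap, best_t = abs(far - frr), t
    far, frr = far_frr(genuine, impostor, best_t)
    return best_t, (far + frr) / 2

genuine = [0.8, 0.7, 0.9, 0.6, 0.75]
impostor = [0.2, 0.4, 0.5, 0.3, 0.65]
_, eer = approx_eer(genuine, impostor)
print(f"EER = {eer:.2f}")  # FAR and FRR cross at 1/5 on this toy data
```

GAR for a ROC curve falls out of the same helper as 1 - FRR at each threshold.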


Fig. 6. The ROC curves of three performance scenarios.


URL: https://www.sciencedirect.com/science/article/pii/S0167404816300256

A review of state-of-the-art in Face Presentation Attack Detection: From early development to advanced deep learning and multi-modal fusion methods

Faseela Abdullakutty, ... Pamela Johnston, in Information Fusion, 2021

6 Evaluation metrics

Face PAD is commonly considered a binary classification problem, and various associated metrics are used to evaluate performance. Chingovska et al. detailed how to measure face PAD as a binary classification problem [158]. Since these binary classification systems are provided with two classes of input, the classes are normally termed positive and negative. Performance is evaluated by the types of errors committed and the method used to measure them. False Positives and False Negatives are the errors exhibited by binary classification systems, and the normally recorded error rates are the False Positive Rate (FPR) and the False Negative Rate (FNR). FPR is the ratio of FP to the total number of negative samples, and FNR is the ratio of FN to the total number of positive samples.

In biometric verification systems, the performance relies upon acceptance or rejection of the sample. So the terms False Positive Rate (FPR) and False Negative Rate (FNR) are replaced by False Acceptance Rate (FAR) and False Rejection Rate (FRR), respectively [159]. As there is matching process involved in the verification task, FAR and FRR are often described as False Match Rate (FMR) and False Non-Match Rate (FNMR) [160]. Anti-spoofing systems function on the concept of acceptance and rejection. So usually PAD systems use FRR and FAR. The ratio of incorrectly accepted spoofing attacks defines FAR, whereas FRR stands for the ratio of incorrectly rejected real accesses [158].

Presentation Attack Detection (PAD) follows ISO/IEC DIS 30107-3:2017 [161] to evaluate the performance of PAD systems [33]. The authors of [5] described evaluation metrics used for testing different scenarios in a PAD system. The most commonly used metric in anti-spoofing scenarios is the Half Total Error Rate (HTER) [158]. HTER is computed as the average of the FRR (ratio of incorrectly rejected genuine scores) and the FAR (ratio of incorrectly accepted zero-effort impostors). FAR is associated with the SFAR (ratio of incorrectly accepted spoof attacks). PAD methods use the Equal Error Rate (EER) to test reliability [5]; EER is the specific value of HTER at which FAR and FRR are equal.

While evaluating some methods, the metrics specified in the ISO standard [161] were used: the Attack Presentation Classification Error Rate (APCER), the Normal Presentation Classification Error Rate (NPCER), and the Average Classification Error Rate (ACER). NPCER is identical to the Bona fide Presentation Classification Error Rate (BPCER). A face PAD system is evaluated in terms of its classification of attacks versus real faces, its intra-dataset performance, and its cross-dataset performance [17]. BPCER and APCER measure the bona fide and attack classification error rates, respectively; ACER evaluates intra-dataset performance, whereas HTER measures cross-dataset performance [161]. Commonly used metrics [37,14,5] in face anti-spoofing are listed in Table 9.

Table 9. Commonly used evaluation metrics in face PAD.

Metric | Equation
False Acceptance Rate | FAR = FP / (number of fake samples)
False Rejection Rate | FRR = FN / (number of genuine samples)
Equal Error Rate | EER = the error rate at the point where FRR = FAR
Half Total Error Rate | HTER = (FAR + FRR) / 2
Attack Presentation Classification Error Rate | APCER = FP / (FP + TN)
Bona fide Presentation Classification Error Rate | BPCER = FN / (FN + TP)
Average Classification Error Rate | ACER = (APCER + BPCER) / 2
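The formulas of Table 9 can be sketched directly from confusion counts. The counts below are hypothetical, and treating the bona fide (real) class as positive is an assumption taken from the table's own definitions (attacks contribute FP and TN):

```python
# Sketch of Table 9's PAD error rates, using made-up confusion counts.
# Positive class = bona fide presentations; negative class = attacks.

def pad_metrics(tp, fp, tn, fn):
    apcer = fp / (fp + tn)        # attack presentations classified as bona fide
    bpcer = fn / (fn + tp)        # bona fide presentations classified as attacks
    acer = (apcer + bpcer) / 2    # average of the two error rates
    return apcer, bpcer, acer

def hter(far, frr):
    """Half total error rate from a verification system's FAR and FRR."""
    return (far + frr) / 2

apcer, bpcer, acer = pad_metrics(tp=90, fp=5, tn=95, fn=10)
print(f"APCER={apcer:.3f}, BPCER={bpcer:.3f}, ACER={acer:.3f}")
```

With these counts, 5 of 100 attacks are wrongly accepted (APCER = 0.05) and 10 of 100 genuine presentations are wrongly rejected (BPCER = 0.10).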


URL: https://www.sciencedirect.com/science/article/pii/S1566253521000919

A comprehensive survey on the biometric recognition systems based on physiological and behavioral modalities

Shaveta Dargan, Munish Kumar, in Expert Systems with Applications, 2020

15 State-of-the-art work based on multiple modalities and the critical survey

15.1 Multimodal systems and its uniqueness as compared to single modal systems

Biometrics deals with the measurement of personal traits such as the iris, fingerprints, face, retina, hand geometry, voice, or signature, which are used in current security technologies. Multi-biometric systems differ from unimodal ones in multiple aspects, but the main difference is that multi-biometric systems use multiple biometric traits for the identification and verification of an identity, whereas single biometric systems depend on only one trait.

Difference between Unimodal and Multimodal biometric Systems: -

Unimodal Systems: Systems using a single modality are called unimodal systems. The recognition accuracy of these systems is limited by small sample sizes, high error rates, poor robustness, noisy sensor data, and spoofing attacks. Unimodal systems are less useful because no single biometric is sufficiently robust to produce accurate results in all real-world applications.

A multimodal biometric system can mitigate these problems by fusing two or more biometric modalities.

Multimodal biometrics provides richer information than unimodal biometric recognition systems.

A fusion framework and an efficient recognition algorithm are required for multimodal biometric systems.

Multimodal biometric recognition has proved to be a reliable, effective, and secure approach to identification and authentication, and is used in airports, access and visitor control, suspect identification, check-in, face surveillance, jetway surveillance, attendance logging, DVR integration, robotics, hooligan control, and network security.

The benefits of multimodal biometric systems include (a) reduced false non-match and false match rates, (b) additional means of verification, enrollment, and identification, and (c) greater resistance to attempts to spoof biometric systems.

15.2 Characteristics of multimodal systems

Higher reliability

Improved fault tolerance and reduced noise

Flexibility to handle complex environments

Strong security measures are available

Enhanced recognition security by combining multiple biometric features

15.3 Survey work associated with multi modal biometric systems

Moi, Asmuni, Hassan and Othman (2014) presented a combination of two of the most promising biometric traits, the iris and the face, to aid the identification process. Multimodal systems must cope with non-ideal factors such as angles, reflections, expression changes, variations in pose, and blurred images. They used a weighted score level fusion technique to fuse the matching scores from the two traits. The experiments used a self-generated dataset named the Universiti Teknologi Malaysia Iris and Face Multimodal Dataset (UTMIFM), together with UBIRIS version 2.0 (UBIRIS v.2) and the ORL face databases, and achieved high accuracy and high decidability. Chen, Yang, Wang and Liu (2016) developed a novel framework for a multimodal system using face and fingerprint images and a block-based feature-image matrix. The proposed method retrieved middle-layer semantic features from the local features, which resulted in better characterization capability, lower dimensionality, and high accuracy rates for multimodal biometrics. They produced results using the Variational Bayesian Extreme Learning Machine (VBELM), which has high stability and generalization; experimental results showed higher testing accuracy, higher efficiency, and better stability than traditional methods. Bouchaffra and Amira (2008) presented a study on Structural Hidden Markov Models (SHMMs) for biometrics and showed that this novel machine-learning algorithm extends traditional HMMs, using local structures for biometric identification based on both face and finger modalities. The proposed framework showed the benefits of combining wavelets and kernels with the SHMM classifier. Bouchaffra and Amira (2008) also proposed a novel face and fingerprint feature model using the structural hidden Markov model (SHMM) and feature extraction techniques such as the ridgelet transform and the discrete wavelet transform.
The proposed algorithm showed that the SHMM is more promising than state-of-the-art classifiers when combined with the discrete wavelet transform, and the experiments showed that the ridgelet transform without principal component analysis (PCA) dimension reduction produced better results with the support vector machine (SVM) classifier. Barrero, Maiorana, Galbally, Campisi and Fierrez (2017) presented a method for developing a multi-biometric system based on homomorphic encryption, in which all the information stored in the database is encrypted. Multi-biometric fusion at the feature, score, and decision levels was used. Experiments were carried out on the on-line signature and fingerprint dataset BiosecurID, and the system fulfilled the requirements established in ISO/IEC 24745. Jing, Yao, Zhang, Yang and Li (2007) proposed a classifier for a recognition system based on face and palmprint images. They applied the Gabor transform to the face and palmprint images and combined them at the pixel level; the results showed very little correlation between the normalized Gabor-transformed images. The experiment used a novel classifier, KDCV-RBF, to classify the fused biometric images, extracting the image features with a kernel discriminative common vector (KDCV) approach and performing classification with a radial basis function (RBF) network. As test data, they took two of the largest public face databases (AR and FERET) and a large palmprint database: the FERET set consisted of 2200 facial images of 200 individuals, with each person contributing 11 images, and the AR face database contains over 4000 color face images of 126 people. The experimental results revealed that the proposed biometric fusion recognition approach performs better than the state-of-the-art.
Walia, Rishi, Asthana, Kumar, and Gupta (2019) proposed a robust biometric system based on multiple identifiers and an optimum score level fusion model. The system considered the iris, finger vein, and fingerprint modalities. They used the backtracking search optimization method (BSA) and the proportional conflict redistribution rule (PCR-6), and fusion of the scores from multiple traits produced better outputs. The fusion methods included the Einstein product and the Hamacher, Frank, and Yager T-norms, with a Support Vector Machine used for classification; they reported accuracy rates of 98.43% with error rates of 1.57%. Table 15 summarizes the major findings of these multimodal biometric recognition systems.

Table 15. Major findings and results of multimodal biometric systems.

Author/sYearMajor FindingsResults
Moi et al. 2014 • Iris and face biometric traits
• Angles, reflections, expression changes, variations in posing, or blurred image angles, reflections, expression changes, variations in posing, or blurred images
• Weighted score level fusion technique to fuse the matching scores
• UniversitiTeknologi Malaysia Iris and Face Multimodal Datasets (UTMIFM), UBIRIS version 2.0 (UBIRIS v.2) and ORL face databases
• Got high accuracy and high decidability.
Walia et al. 2018 • Iris, finger vein and fingerprint
• Used backtracking search optimization method (BSA) and the Proportional conflict redistribution rules (PCR-6)
• Fusion of the scores after using multiple traits produced better outputs.
• Various methods for the fusion are the Einstein product and the Hamacher, Frank and Yager T-norms
• Support Vector Machine
• The experiment performed the accuracy rates of 98.43% with error rates of 1.57%.
Barrero et al. 2017 • Homomorphic encryption method where all the information either stored in the database is encrypted.
• Multi-biometric fusion namely feature, score and decision level were used.
• Experiments were carried out on the on-line signature and fingerprint dataset BiosecurID.
• The system fulfills the requirements established in ISO/IEC 24745.
Jing et al. 2007 • (KDCV) approach and performed classification by using the radial base function (RBF) network.
• As test data, they took two of the largest public face databases (AR and FERET) and a large palmprint database.
• The FERET database includes 2200 facial images consisting of 200 individuals with each person contributing 11 images and the AR face database that contains over 4000 color face images of 126 people.
• The experimental results revealed the proposed biometric fusion recognition approach performs better than the state-of-the-art work.
• Gabor transform on face and palm print images
• A novel classifier, KDCV-RBF
Chen et al. 2016 • Used block-based feature image matrix.
• The proposed method retrieved middle layer semantic features of the local features with better characterization capabilities, lower dimension and high accuracy rates.
• Used Variational Bayesian Extreme Learning Machine (VBELM) with high stability and generalization.
• Experimental results showed higher testing accuracy, higher efficiency and better stability.
Bouchaffra and Amira 2008 • Structural Hidden Markov Models (SHMM) for Biometrics
• Fusion of Face and Fingerprint
• Novel machine-learning algorithm known as SHMMs extends traditional HMMs
• Local structures for the biometric identification based on both face and finger modalities.
• Wavelets with kernels with the SHMMs classifier.
Bouchaffra and Amira 2008 • Proposed a novel face and fingerprint feature model
• Used Structural hidden Markov model (SHMM)
• Feature extraction techniques such as ridgelet transform and discrete wavelet transform were employed.
• SHMM is more promising as compared to state-of-the-art.
• Ridgelet transform without principal components analysis (PCA) dimension reduction.
• Produced better results with the support vector machines (SVMs) classifier.

15.4 Design and implementation of multi biometrics recognition trade-offs

Biometric recognition systems are based on both software and hardware techniques. To satisfy challenging requirements such as user friendliness, low cost, speed, high performance, low intrusiveness, high fake detection rates, and fraud prevention, the following design issues must be considered:

Selecting the right biometric modalities.

Selecting number of traits.

Choosing the required samples for a biometric.

Methodology for fusion.

Learning weights of biometric traits.

Cost-performance and accuracy-reliability trade-offs

Verification system for application

Features selection difficulties or complexities

Privacy issues should be handled

15.5 Factors considered during the development of system with multi biometric identifiers

Following are the important factors that must be taken care of during the development of multi-biometric systems. From data collection to the fusion approach, feature selection, classifiers, and the accuracy achieved, various factors play a vital role.

There should be no performance loss in the selected protected domain.

Secure irreversible templates are to be stored in the server's database for achieving irreversibility.

As no plain information is shared and no biometric information is leaked, hill-climbing and inverse-biometrics attacks are prevented.

The developed system should be used for real-time applications.

Feature level fusion is the most preferred fusion method for achieving better performance.


URL: https://www.sciencedirect.com/science/article/pii/S0957417419308310

How is biometric accuracy measured?

Three metrics are used to judge biometric accuracy: the false reject rate (FRR), the false accept rate (FAR), and the crossover error rate (CER).

How do you calculate false acceptance rate?

For example, if the number of false acceptances is 100 out of a total of 1,000 identification attempts, the FAR is 0.10. This means that the biometric system will allow 10 percent, or 1 out of each 10 false attempts, to access the system.
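The arithmetic above can be written as a trivial helper (the function name is illustrative only):

```python
# FAR as defined above: false acceptances divided by total impostor attempts.
def false_acceptance_rate(false_accepts, impostor_attempts):
    return false_accepts / impostor_attempts

# The worked example: 100 false acceptances in 1,000 attempts.
print(false_acceptance_rate(100, 1000))  # → 0.1
```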

What is false rejection rate in biometrics?

False Reject Rate (FRR): the probability that the system fails to detect a match between the input fingerprint template and a matching template in the database. It measures the percentage of valid inputs that are incorrectly rejected.

What measures human traits for authentication?

Biometrics are body measurements and calculations related to human characteristics. Biometrics authentication (or realistic authentication) is used in computer science as a form of identification and access control.