Liveness.com
Biometric Liveness Detection Explained

 
 

What is “Liveness”?
 
In biometrics, Liveness Detection is an AI computer system’s ability to determine that it is interfacing with a physically present human being and not an inanimate spoof artifact. 
Note: It’s not called “Liveliness”. Don’t make that rookie mistake!
  

  
The History of Liveness
 

In 1950, Alan Turing (wiki) proposed the famous "Turing Test", which measures a computer's ability to exhibit human-like behavior. Liveness Detection inverts the premise: it is AI that determines whether a computer is interacting with a live human.

Alan Turing
Turing c. 1928

 

The "Godmother of Liveness"
 

Dorothy E. Denning (wiki) is a member of the National Cyber Security Hall of Fame and coined the term "Liveness" in her 2001 Information Security Magazine article, It's "liveness," not secrecy, that counts. She states:

"A good biometrics system should not depend on secrecy," and,

“... biometric prints need not be kept secret, but the validation process must check for liveness of the readings."

Decades ahead of her time, Dorothy E. Denning’s vision for Liveness Detection in biometric authentication could not have been more correct.

Dorothy E. Denning



Early Academic Papers About Liveness & Anti-Spoofing
 

One of the earliest papers on Liveness, "Spoofing and anti-spoofing measures", was published by Stephanie Schuckers in 2002 and is widely regarded as the foundation of today's academic body of work on the subject. The paper states that "Liveness detection is based on recognition of physiological information as signs of life from liveness information inherent to the biometric".

Her 2016 follow-up, "Presentations and attacks, and spoofs, oh my", continues to influence presentation attack detection research and testing.
 
  

How Liveness Detection Protects Us


Ms. Denning's photo posted above is biometric data, and it is now cached on your computer. Is she somehow more vulnerable now that you have it? Not if her accounts are secured with Certified Liveness Detection, because that photo won't fool the AI. Nor will a video, a copy of her driver's license, passport, fingerprint, or iris. She must be physically present to access her accounts, so she need not worry about keeping her biometric data "secret".

Liveness Detection prevents bots and bad actors from using stolen photos, deepfake videos, masks, or other spoofs to create or access online accounts. Liveness ensures only real humans can create and access accounts. Liveness checks solve some very serious problems. For example, Facebook had to delete 5.4 billion fake accounts in 2019 alone! Requiring proof of Liveness would have prevented these fakes from ever being created.
 

   
 
The Liveness.com Level 1-5 Threat Vector Scale - Spoof Artifact & Bypass Levels

 
When a non-living object that exhibits human traits (an "artifact") is presented to a camera or biometric sensor, it's called a "spoof." Photos, videos, deepfake puppets, masks, and dolls are all common examples of spoof artifacts. When biometric data is tampered with post-capture, or the camera is bypassed altogether, that is called a "bypass." There are no lab tests available for Level 3 artifacts, or Level 4 & 5 bypasses, since those attack vectors are missing from the ISO 30107-3 Standard and thus all associated lab testing. Only a Spoof Bounty Program can currently address Levels 1-5.

  

Spoof Artifact Levels:

Level 1 (A) - Spoof Bounty Available: Hi-res paper & digital photos, hi-def challenge/response videos, and paper masks. Beware: iBeta lab tests DO NOT include digital deepfake puppets, but FaceTec's Spoof Bounty DOES.

Level 2 (B) - Spoof Bounty Available: Commercially available lifelike dolls, and human-worn resin, latex & silicone 3D masks under $300 in price.

Level 3 (C) - Spoof Bounty Available: Custom-made ultra-realistic 3D masks, wax heads, etc., up to $3,000 in creation cost.

 

Bypass Levels:

Level 4 - Spoof Bounty Available: Decrypt & edit the contents of a 3D FaceMap to contain synthetic data not collected from the session, and have the Server process it and respond with Liveness Success.

Level 5 - Spoof Bounty Available: Successfully take over the camera feed & inject previously captured frames that result in the Server responding with Liveness Success.


 


  

Liveness for Onboarding, KYC, and Enrollment

Requiring every new user to prove their Liveness before they are even asked to present an ID Document during digital onboarding is itself a huge deterrent to fraudsters who don't want their real face on camera.
 
If an onboarding system has a weakness, the bad guys will exploit it to create as many fake accounts as possible. To prevent this, Certified Liveness Detection during new account onboarding should be required. Once we know that the new account belongs to a real human, their biometric data can be stored as a trusted reference of their digital identity in the future.

  

  
  
Liveness for Ongoing Authentication (Password Replacement)

 
Since most biometric attacks are spoof attempts, Certified Liveness Detection during user authentication must be mandatory.  With multiple high-quality photos of almost everyone available on Google or Facebook, a biometric authenticator cannot rely on secrecy for security. 

Liveness Detection is the first and most important line of defense against targeted spoof attacks on authentication systems. The second is a very low FAR (see Glossary, below), i.e., highly accurate biometric matching.

With Certified Liveness Detection, you couldn't make a copy of your own biometric data that fools the system even if you wanted to. Liveness catches copies by detecting generation loss, so only the genuine, physically present user can gain access.


 

No Stored Liveness Data = No Honeypot Risk
 
Two types of data are required for every Face Authentication: Face Data (for matching) and Liveness Data (to prove the Face Data was collected from a live person). 

Liveness Data must be timestamped, be valid only for a few minutes, and then deleted. Only Face Data should ever be stored.  New Liveness Data must be collected for every authentication attempt.  

Face photos are just "Face Data" without the corresponding Liveness Data, so they cannot be used to spoof Certified Liveness Detection, and thus, storing photos does not create honeypot risk.

Note: Think of the stored Face Data as the lock, the User's newly collected Face Data as a One-Time-Use key, and the Liveness Data being present is proof that key has never been used before. 
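The rules above can be sketched as a simple server-side check. This is a minimal sketch under stated assumptions; the names (`verify_liveness_data`, the 3-minute TTL) are illustrative, not any vendor's actual API:

```python
import time

LIVENESS_TTL_SECONDS = 180  # assumption: Liveness Data is valid for ~3 minutes

class LivenessError(Exception):
    pass

_used_session_ids = set()  # liveness sessions already consumed (one-time use)

def verify_liveness_data(session_id, collected_at, liveness_passed):
    """Accept Liveness Data only if it is genuine, fresh, and never used before."""
    if not liveness_passed:
        raise LivenessError("liveness check failed")
    if time.time() - collected_at > LIVENESS_TTL_SECONDS:
        raise LivenessError("liveness data expired; collect a new session")
    if session_id in _used_session_ids:
        raise LivenessError("liveness data already used once")
    _used_session_ids.add(session_id)  # consume: every attempt needs new data
    return True  # only now may the accompanying Face Data be matched
```

After matching, the liveness payload itself would be deleted; only Face Data is ever stored.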
 

 

The Achilles Heel of Weak Liveness 

Some Liveness Detection methods will never be secure because they do not capture enough unique data to confirm the session is not faked.

For example, if a 4K monitor presents a video to a device with a low-res 2D camera, and no glare or skew is observed, it is virtually impossible for the camera to determine that the monitor is showing a spoof. The camera captures at a lower resolution than the monitor displays, so weak liveness algorithms are fooled.

Weak Liveness Detection methods include: blink, smile, turn-your-head, flashing lights, make-random-faces, and speak-random-numbers challenges. All are fairly easy to spoof with higher-than-camera-resolution monitors, with workarounds for the randomness needed in some cases.
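The resolution argument can be made concrete with a toy model, purely illustrative: a monitor replaying a scene differs slightly from the real thing at fine detail, but a camera that samples below the monitor's resolution averages that difference away.

```python
def capture(scene, camera_res):
    """Toy camera: average equal blocks of the scene down to camera_res samples."""
    block = len(scene) // camera_res
    return [sum(scene[i * block:(i + 1) * block]) / block for i in range(camera_res)]

real_scene     = [10, 12, 20, 22, 30, 32, 40, 42]  # fine detail only a live scene has
monitor_replay = [11, 11, 21, 21, 31, 31, 41, 41]  # a screen's slightly-off rendering

# A low-res (4-sample) capture of the two is identical; a full-res (8-sample)
# capture still shows the difference the spoof introduced.
low_res_identical = capture(real_scene, 4) == capture(monitor_replay, 4)   # True
full_res_differs  = capture(real_scene, 8) != capture(monitor_replay, 8)   # True
```

A higher-resolution capture, or true 3D depth data, restores the signal that a weak 2D check throws away.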

User security and hard-won corporate credibility are put at risk by unscrupulous vendors' exaggerated Liveness claims. If a vendor claims "Robust Liveness Detection", they should provide a Spoof Bounty Program or give it a rest!

Note: Watch USAA Bank's Blink "Selfie-Recognition" app security get spoofed by a crude photo slideshow, easily unlocking one of their users' bank accounts.

              

   

 

The Threat of Deepfakes
 
So-called "deepfakes" have been around for years, but now even the general public understands that digital media can be manipulated easily.

If the Liveness Detection tech is vulnerable to deepfake puppets derived from photos or videos, it cannot be used for serious biometric security. 

Note: Watch as a basic "deepfake" puppet, capable of spoofing almost every liveness vendor on the market today, is created in 20 seconds.

                             

  

 
 

Realistic Deepfake Puppet from a Single Photo
 
Don't believe that blink, nod, or shake-your-head Liveness can stop serious deepfake puppets. iBeta DOES NOT test for these, but FaceTec catches these attacks thanks to its Spoof Bounty Program experience.

If Liveness Detection is vulnerable to deepfake spoofs derived from photos or videos, it cannot be used for serious biometric security. 

Note: Watch as a professional-level "deepfake" puppet, capable of spoofing almost every liveness vendor on the market today, is created from a single photo.

                             

  

   

Free 2D Liveness Detection Providers Listed Below

FaceTec provides Free 2D Liveness Detection to ALL of its Customers & Partners. These 2D Liveness Checks are 97% accurate against Level 1-3 Spoof Attack Vectors. While not as secure as 3D Liveness (99.997%+ accurate), there are scenarios where 2D Liveness Checks make sense, for example, at an airport customs checkpoint or a retail store's self-checkout: scenarios where a fraudster is unlikely to be able to use a deepfake avatar or bypass the camera and inject a pre-recorded video.

2D Liveness doesn't require a Device SDK or a special user interface; it works on any mugshot-style 2D face photo, the number of checks is unlimited, and the 2D images are processed and stored 100% on the Customer's Server.
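Because the check runs entirely server-side, integration can be as simple as an HTTPS POST of the photo. A minimal sketch follows; the endpoint URL and JSON fields are hypothetical placeholders, not FaceTec's documented API:

```python
import base64
import json
import urllib.request

def build_liveness_request(image_bytes, server_url):
    """Package a mugshot-style 2D photo as a JSON request (hypothetical schema)."""
    body = json.dumps({"image": base64.b64encode(image_bytes).decode()}).encode()
    return urllib.request.Request(
        server_url, data=body, headers={"Content-Type": "application/json"})

def check_2d_liveness(image_bytes, server_url):
    """POST the photo; assume the server replies with a Boolean verdict only."""
    with urllib.request.urlopen(build_liveness_request(image_bytes, server_url)) as resp:
        return json.load(resp)["live"]
```

No client SDK is involved: any backend that can read the photo bytes can call the check.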

You can contact ANY* of the FaceTec Certified Vendors below and ask about Free 2D Liveness Detection, or visit this web-page for more information.   *Participation of FaceTec Partners may vary.
  
 

Certified FaceTec 3D Liveness Vendors
 
FaceTec created its $100,000 Spoof Bounty Program to prove real-world security against Level 1, 2 & 3 Presentation Attacks, Level 4 Biometric Template Tampering, and Level 5 Virtual-Camera & Video Injection Attacks.

All organizations have a fiduciary duty to provide the strongest Liveness Detection available to their users whenever remote biometric onboarding or authentication is required.

        
    
 Certified 3D FaceTec Liveness Vendors

$100,000 Spoof Bounty Program 
+ NIST/NVLAP Lab Certified PAD: Level 1 & 2 AI*

   

01 Systems (Bahrain)
Autentikar (Chile)
Authenteq (Iceland)
BrainySoft (Russia)
Bryk Group (Australia)
BTS Digital (Kazakhstan)
Certisign (Brazil)
Civic (USA)
Cynopsis (Singapore)
e4 Global (South Africa)
EvidentID (USA)
FaceTec (USA)
FintechOS (Romania)
Fractal (Germany)
Gemalto/Thales (France)
Gulf Data-gDi (UAE)
IDdataweb (USA)
Idenfy (Lithuania)
Identyum (Croatia)
IDnow (Germany)
IQSEC (Mexico)
Journey.ai (USA)
Jumio (USA)
Karalundi (Mexico)
Kvalifika (Georgia)
Lynx Global (Hong Kong)
Namiral (Romania)
Nets (Denmark)
Neuvote (Canada)
ODEK (South Africa)
Ondato (Lithuania)
OneyTrust (France)
Passbase (USA)
PBSA Group (South Africa)
Polygon (Portugal)
Pulsar AI (Georgia)
Socialnet (Argentina)
Solus Connect (Singapore)
Sum & Substance (United Kingdom)
TiC Now (Chile)
Tekbees (Colombia)
TeraSystem (Philippines)
Valid (Brazil)
VerifyMyAge (United Kingdom)
VeriTran (Argentina)
VNG (Vietnam)
Yoti (United Kingdom)
ZealiD (Sweden)

Incentivized public bypass testing for Template Tampering,
Level 1-3 Presentation, Video Replay & Virtual Camera Attacks.

      
   

   

*Vendors listed above have not been individually tested by an NVLAP/NIST-accredited lab for Level 1 & 2 Presentation Attacks; they distribute FaceTec's software, v6.9.11 of which was Certified to Level 2, with Level 1 regression testing.


 
ISO/IEC 30107-3 - Presentation Attack Detection Standard from 2017

ISO/IEC 30107-3 (https://www.iso.org/standard/67381.html) is the International Organization for Standardization's (ISO) testing guidance for the evaluation of Anti-Spoofing technology, a.k.a. Presentation Attack Detection (PAD). Three document editions have been published to date, with a fourth edition currently in progress.
  
Released in 2017, ISO 30107-3 served as official guidance for determining whether the subject of a biometric scan is alive, but it allows PAD checks to be compounded with Matching, which can produce confusing results. Since 2020, with the introduction of deepfake puppets and other attack vectors not conceived of at the time of publication, many experts have come to consider ISO 30107-3 outdated and incomplete.

 
Due to "hill-climbing" attacks (see Glossary at the bottom of the page), biometric systems should never reveal which part of the system did or didn't "catch" a spoof. And while ISO 30107-3 gets a lot right, it unfortunately encourages testing Liveness and Matching at the same time. The scientific method requires that as few variables as possible be tested at once, so Liveness testing should return solely a Boolean (true/false) response. It should not allow multiple decision layers in which an artifact can pass Liveness but fail Matching because it didn't "look" enough like the enrolled subject.
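That design principle can be sketched in a few lines: the system evaluates liveness and matching internally but exposes a single Boolean, never the per-stage scores an attacker could hill-climb. The threshold values and names here are illustrative assumptions:

```python
LIVENESS_THRESHOLD = 0.90  # assumption: internal tuning value, never exposed
MATCH_THRESHOLD = 0.99     # assumption: internal tuning value, never exposed

def authenticate(liveness_score, match_score):
    """Return one fused Boolean; never reveal which stage failed or by how much."""
    live_ok = liveness_score >= LIVENESS_THRESHOLD
    match_ok = match_score >= MATCH_THRESHOLD
    # A single decision denies an attacker the per-stage feedback needed to
    # iteratively refine (hill-climb) a spoof artifact against the system.
    return live_ok and match_ok
```

An attacker probing this API sees only accept/reject, so they cannot tell whether a rejected artifact failed Liveness, failed Matching, or by what margin.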

    
 

Why Isn't iBeta PAD Testing Enough?
  

iBeta PAD tests alone do not adequately represent the real-world threats a Liveness Detection system will face from hackers. Any 3rd-party testing is better than none, but taken at face value, iBeta tests provide a false sense of security: they are incomplete, too brief, vary too much between Vendors, and are much TOO EASY to pass.

Unfortunately, iBeta allows Vendors to choose the devices they use during testing, and most choose newer devices with up to 12MP cameras. To put this in perspective, a 720p webcam is not even 1MP, and the higher the quality/resolution of the camera sensor, the easier the testing is to pass. 

iBeta also indirectly allows Vendors to influence the number of sessions in time-based testing, because some Vendors' session times are much longer than others'. By extending the time a session takes to complete, the Vendor limits the number of attacks that can be performed in the time allotted. The goal of biometric security testing is to expose vulnerabilities; when the number of sessions, the devices, and the tester skill levels are non-standardized, the testing is NOT equally difficult between Vendors and isn't representative of real-world threats.

In addition, the Level 1 & 2 numbering method tends to confuse because Level 2 artifacts are often tougher to detect than Level 1, and it's important to note that iBeta no longer offers Level 3 testing at all. It was offered for a few months as a "Level 3 Conformance," but NIST then notified iBeta that it did not believe iBeta was capable of performing such important and difficult testing, and iBeta had to remove the Level 3 testing option. However, iBeta still lists Level 3 testing on its website. The editors of this website believe the missing "Level 3 cannot be tested by iBeta under its NIST accreditation" disclaimer is a deliberate omission meant to make iBeta's testing menu look more complete and the company more competent, even though NIST does not allow iBeta to perform Level 3 testing.

iBeta doesn't usually test a vendor's Liveness Detection software in web browsers, only on native devices, so numerous untested threat vectors exist even in systems that pass basic PAD testing. Another huge red flag: iBeta now allows up to 15% BPCER (Bona fide Presentation Classification Error Rate, also known as the False Reject Rate, or FRR). This enables vendors to tighten security thresholds just to pass the test, then lower them in their production product when customers experience poor usability, and iBeta does not require verification of the production version.

Even though most consumers and end-users don't have access to the expensive "pay-per-view" ISO 30107-3 Standard document, iBeta refuses to add disclaimers to their Conformance Letters to warn that their PAD tests ONLY contain Presentation Attacks, and not attempts to bypass the camera/sensor. It is also unfortunate that ISO & iBeta both conflate Matching & Liveness into one unscientific testing protocol, making it impossible to know if the Liveness Detection is actually working as it should in scenarios where matching is included.

This means that iBeta testing only considers artifacts physically shown to a sensor. And even though digital attacks are the most scalable type, iBeta currently DOES NOT test for any type of Replay Attack, Virtual-Camera Attack, or Template Tampering in its PAD testing. So iBeta testing, no matter the PAD Level, is NEVER enough to ensure real-world security. As far as we are aware, iBeta has never offered sensor-bypass testing to any PAD Vendor.

Remember, robust Liveness Detection must also cover all digital attack vectors, so don't be fooled by an iBeta "Conformance" badge; while it's better than nothing, it's nowhere near enough. Make your vendor sign an affidavit stating they have not lowered security thresholds, demand to see their full Conformance Reports with the False Reject Rate/BPCER listed, make them prove they have undergone Penetration Testing for the aforementioned digital spoof attack vectors, and demand they stand up a Spoof Bounty Program before they earn your business.


 

FaceTec's $100,000 Spoof Bounty Program

Don't be a guinea pig. Insist your biometrics vendor maintain a persistent Spoof Bounty Program to ensure they are aware of, and robust to, emerging threats like deepfakes, video injection, and virtual-camera hijacking. As of today, the only Biometric Authentication Vendor with an active, real-world Spoof Bounty is FaceTec. Having now rebuffed over 37,000 spoof attacks, the $100,000 Spoof Bounty Program's goal remains to uncover unknown vulnerabilities in FaceTec's Liveness AI and security scheme, so any that are found can be patched and security elevated even further. Visit bounty.facetec.com to participate.

 
 

Ask The Editor: Is Facial Recognition the Same as Liveness & Face Authentication?

No! And we need to start using the correct terminology if we want to stop confusing people about biometrics.

Facial Recognition is for surveillance. It's the 1-to-N matching of images captured with cameras the user doesn't control, like those in a casino or an airport. And it only provides "possible" matches for the surveilled person from face photos stored in an existing database.

Face Authentication (1:1 Matching+Liveness), on the other hand, takes user-initiated data collected from a device they do control and confirms that user's identity for their own direct benefit, like, for example, secure account access.

They may share a resemblance and even overlap in some ways, but don't lump the two together. Like any powerful tech, this is a double-edged sword; Facial Recognition is a threat to privacy while Face Authentication is a huge win for it.

 

Ask The Editor: Should We Fear Centralized Face Authentication?
 

Fear of biometric authentication stems from the belief that centralized storage of biometric data creates a "honeypot" that, if breached, compromises the security of all other accounts that rely on that same biometric data.

Biometric detractors argue, "You can reset your password if stolen, but you can't reset your face." While this is true, it is a failure of imagination to stop there. We must ask, "What would make centralized biometric authentication safe?"

The answer is Certified Liveness Detection backed by a public spoof bounty program. With this AI in place, the biometric honeypot is no longer something to fear because the security doesn't rely on our biometric data being kept secret.

Learn more about how Certified Liveness Makes Centralized Safe in this comprehensive FindBiometrics white paper.


    
Ask The Editor: Is "Genuine Presence Assurance" Better Than Liveness Detection?
 
The sole vendor claiming to sell "Genuine Presence Assurance" confuses the narrower scope of Presentation Attack Detection with the broader scope of Liveness Detection, and claims that "Genuine Presence" is somehow more complete than Liveness Detection. When you invent a term you can define it however you'd like, but it is inaccurate to say that Liveness Detection doesn't address Replay Attacks or Injected Videos: they are also digital artifacts and, like deepfake puppets, aren't "alive," so by definition "Liveness" Detection must catch them too.


    
  

Ask The Editor: Should Liveness Detection be required by law?
 

We believe that legislation must be passed to make Certified Liveness Detection mandatory if biometrics are used for Identity & Access Management (IAM). The reality of the situation is that all our personal data has already been breached, so we can no longer trust Knowledge-Based Authentication (KBA). We must now turn our focus from maintaining databases full of "secrets" to securing attack surfaces. Current laws already require organic foods to be certified, and every medical drug must be tested and approved. In turn, governments around the world should require Certified Liveness Detection be employed to protect the digital safety and biometric security of their citizens.
  
 
  
Ask The Editor: Why doesn't 2D Face Matching work well with large data sets? (500,000+)

We've all heard an actor say, "get my good side", and the best photographers know which distances and lenses make portrait photos the most flattering. This is because a real 3D human face contains orders of magnitude more data than a typical 2D photo, and when a 3D face is flattened into a single 2D layer, depth data is lost, creating significant issues. In the real world, capture distance, camera position, and lens diameter all play big parts in how well a derivative 2D photo represents the original 3D face.



Source – Best Portrait Lens – Focal Length, Perspective & Distortion Matt Granger – Oct 27, 2017

 
2D Face Matching will not always "see" her as the same person. In some frames she might look more like her sister or her cousin, and could match one of them even more highly than herself. In large datasets these visual differences fall within the margin of error of the 2D algorithms, making confidence in a 1:N match impossible. 3D FaceMaps, however, not only provide more human signal for Liveness Detection, they also capture the size and depth of the face, which, combined with visual traits, increases accuracy and enables 1:N matching on significantly larger datasets.

Ask The Editor: What's the Problem With Text & Photo CAPTCHAs?
  

CAPTCHA (wiki), an acronym for "Completely Automated Public Turing test to tell Computers and Humans Apart", is a simple challenge–response test used in computing to determine whether the user is human or a bot.

In an article on TheVerge.com, Josh Dzieza writes, “Google pitted one of its machine learning algorithms against humans in solving the most distorted text CAPTCHAs: the computer got the test right 99.8-percent of the time, while the humans got a mere 33 percent.” 

Jason Polakis, a computer scientist who used off-the-shelf image-recognition tools, including Google's own image search, to solve Google's image CAPTCHA with 70% accuracy, states: "You need something that's easy for an average human, it shouldn't be bound to a specific subgroup of people, and it should be hard for computers at the same time."

Even without AI, services like deathbycaptcha.com and anti-captcha.com allow bots to bypass challenge–response tests by using proxy humans to complete them. With so many people willing to do this work, workers earning between $0.25 and $0.60 per 1,000 CAPTCHAs solved (webemployed), CAPTCHAs are cheap to defeat at scale.
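At those piece rates the economics are stark; a rough back-of-the-envelope calculation using the figures above:

```python
def captcha_farm_cost(num_captchas, rate_per_thousand=0.60):
    """Dollar cost to have human workers solve num_captchas CAPTCHAs,
    using the higher $0.60-per-1,000 rate cited above by default."""
    return num_captchas / 1000 * rate_per_thousand

# Solving a CAPTCHA for each of one million fake-account signups:
cost_1m = captcha_farm_cost(1_000_000)  # about $600
```

In other words, CAPTCHAs add only trivial cost to large-scale fake-account creation, which is exactly the problem a proof-of-Liveness check addresses.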

Ask The Editor: What Is a faceCAPTCHA?

Not to be confused with a "Face Capture," a faceCAPTCHA, like FaceTec's 3D Liveness Check, is a much better way to prove that it's a human and not a bot accessing a web page.

While the user remains anonymous, a faceCAPTCHA can also prove that a user is old enough to access restricted content by performing an Age Check while verifying Liveness.

                    

Resources & Whitepapers
 

Information Security Magazine - Dorothy E. Denning's (wiki) 2001 article, "It Is 'Liveness,' Not Secrecy, That Counts"
 
FaceTec: There's a New Sheriff in Town - Standardized PAD Testing & Liveness Detection - Biometrics Final Frontier

Gartner: "Presentation attack detection (PAD, a.k.a. 'liveness testing') is a key selection criterion. ISO/IEC 30107 'Information Technology — Biometric Presentation Attack Detection' was published in 2017." (Gartner's Market Guide for User Authentication, Analysts: Ant Allan, David Mahdi, Published: 26 November 2018). FaceTec's ZoOm was cited in the report. For subscriber access: https://www.gartner.com/doc/3894073?ref=mrktg-srch.
 
Forrester, "The State Of Facial Recognition For Authentication - Expedites Critical Identity Processes For Consumers And Employees" By Andras Cser, Alexander Spiliotes, Merritt Maxim, with Stephanie Balaouras, Madeline Cyr, Peggy Dostie. For subscriber access: https://www.forrester.com/report/The+State+Of+Facial+Recognition+For+Authentication+And+Verification/-/E-RES141491#

Ghiani, L., Yambay, D.A., Mura, V., Marcialis, G.L., Roli, F. and Schuckers, S.A., 2017. Review of the Fingerprint Liveness Detection (LivDet) competition series: 2009 to 2015. Image and Vision Computing, 58, pp. 110-128:
https://www.clarkson.edu/sites/default/files/2017-11/Fingerprint%20Liveness%20Detection%2009-15.pdf 

Schuckers, S., 2016. Presentations and attacks, and spoofs, oh my. Image and Vision Computing, 55, pp. 26-30:
https://www.clarkson.edu/sites/default/files/2017-11/Presentations%20and%20Attacks.pdf

Schuckers, S.A., 2002. Spoofing and anti-spoofing measures. Information Security Technical Report, (4), pp. 56-62:
https://www.clarkson.edu/sites/default/files/2017-11/Spoofing%20and%20Anti-Spoofing%20Measures.pdf

  

 

Glossary - Biometrics Industry & Testing Terms:

1:1 (1-to-1) – Comparing the biometric data from a subject User to the biometric data stored for the expected User. If the biometric data does not match above the chosen FAR level, the result is a failed match.

1:N (1-to-N) – Comparing the biometric data from one individual to the biometric data of a list of known individuals; the faces of the people on the list that look similar are returned. This is used for facial recognition surveillance, but can also be used to flag duplicate enrollments.

Artifact (Artefact) – An inanimate object that seeks to reproduce human biometric traits. 

Authentication – The concurrent Liveness Detection, 3D depth detection, and biometric data verification (i.e., face sharing) of the User.

Bad Actor – A criminal; a person with intentions to commit fraud by deceiving others.

Biometric – The measurement and comparison of data representing the unique physical traits of an individual for the purposes of identifying that individual based on those unique traits.

Certification – The testing of a system to verify its ability to meet or exceed a specified performance standard. Testing labs like iBeta issue certifications.

Complicit User Fraud – When a User pretends to have fraud perpetrated against them, but has been involved in a scheme to defraud by stealing an asset and trying to get it replaced by an institution.

Cooperative User/Tester – When human Subjects used in the tests provide any and all biometric data that is requested. This helps to assess the complicit User fraud and phishing risk, but only applies if the test includes matching (not recommended).

Centralized Biometric – Biometric data is collected on any supported device, encrypted and sent to a server for enrollment and later authentication for that device or any other supported device. When the User’s original biometric data is stored on a secure 3rd-party server, that data can continue to be used as the source of trust and their identity can be established and verified at any time. Any supported device can be used to collect and send biometric data to the server for comparison, enabling Users to access their accounts from all of their devices, new devices, etc., just like with passwords. Liveness is the most critical component of a centralized biometric system, and because certified Liveness did not exist until recently, centralized biometrics have not yet been widely deployed.

Credential Sharing – When two or more individuals do not keep their credentials secret and can access each other's accounts. This can be done to subvert licensing fees or to trick an employer into paying for time not worked (also called “buddy punching”).

Credential Stuffing – A cyberattack where stolen account credentials, usually comprising lists of usernames and/or email addresses and the corresponding passwords, are used to gain unauthorized user account access.

Decentralized Biometric – When biometric data is captured and stored on a single device and the data never leaves that device. Fingerprint readers in smartphones and Apple’s Face ID are examples of decentralized biometrics. They only unlock one specific device, they require re-enrollment on any new device, and further do not prove the identity of the User whatsoever. Decentralized biometric systems can be defeated easily if a bad actor knows the device's override PIN number, allowing them to overwrite the User’s biometric data with their own.

End User – An individual human who is using an application.

Enrollment – When biometric data is collected for the first time, encrypted and sent to the server. Note: Liveness must be verified and a 1:N check should be performed against all the other enrollments to check for duplicates.

Face Authentication – Authentication has three parts: Liveness Detection, 3D Depth Detection and Identity Verification. All must be done concurrently on the same face frames.

Face Matching – Newly captured images/biometric data of a person are compared to the enrolled (previously saved) biometric data of the expected User, determining if they are the same.

Face Recognition – Images/biometric data of a person are compared against a large list of known individuals to determine if they are the same person.

Face Verification – Matching the biometric data of the Subject User to the biometric data of the Expected User.

FAR (False Acceptance Rate) – The probability that the system will accept an imposter’s biometric data as the correct User’s data and incorrectly provide access to the imposter.

FIDO – Stands for Fast IDentity Online: a standards organization that provides guidance to organizations that choose to use Decentralized Biometric Systems (https://fidoalliance.org).

FRR/FNMR (False Rejection Rate/False Non-Match Rate) – The probability that a system will reject the correct User when that User’s biometric data is presented to the sensor. If the FRR is high, Users will be frustrated with the system because they are prevented from accessing their own accounts.

Hill-Climbing Attack – When an attacker uses information returned by the biometric authenticator (match level or liveness score) to learn how to curate their attacks and gain a higher probability of spoofing the system. 

iBeta – An NVLAP/NIST-accredited testing lab in Denver, Colorado; the only lab currently certifying biometric systems for anti-spoofing/Liveness Detection to the ISO 30107-3 standard (ibeta.com).

Identity & Access Management (IAM) – A framework of policies and technologies to ensure only authorized users have the appropriate access to restricted technology resources, services, physical locations and accounts. Also called identity management (IdM).

Imposter – A living person with traits so similar to the Subject User that the system determines the biometric data is from the same person.

ISO 30107-3 – The International Organization for Standardization’s testing guidance for evaluation of Anti-Spoofing technology (www.iso.org/standard/67381.html).

Knowledge-Based Authentication (KBA) - Authentication method that seeks to prove the identity of someone accessing a digital service. KBA requires knowing a user's private information to prove that the person requesting access is the owner of the digital identity. Static KBA is based on a pre-agreed set of shared secrets. Dynamic KBA is based on questions generated from additional personal information.

Liveness Detection – The ability for a biometric system to determine if data has been collected from a live human or an inanimate, non-living Artifact.

NIST – National Institute of Standards and Technology – The U.S. government agency that provides measurement science, standards, and technology to advance economic advantage in business and government (nist.gov).

Phishing – When a User is tricked into giving a Bad Actor their passwords, PII, credentials, or biometric data. Example: A User gets a phone call from a fake customer service agent and they request the User’s password to a specific website.

PII – Personally Identifiable Information is information that can be used on its own or with other information to identify, contact, or locate a single person, or to identify an individual in context (en.wikipedia.org/wiki/Personally_identifiable_information).

Presentation Attack Detection (PAD) – A framework for detecting presentation attack events. Related to Liveness Detection and Anti-Spoofing.

Root Identity Provider – An organization that stores biometric data appended to the corresponding personal information of individuals, and allows other organizations to verify the identities of Subject Users by providing biometric data to the Root Identity Provider for comparison.

Spoof – When a non-living object that exhibits some biometric traits is presented to a camera or biometric sensor. Photos, masks or dolls are examples of Artifacts used in spoofs.

Subject User – The individual that is presenting their biometric data to the biometric sensor at that moment.

Synthetic Identity - When a bad actor uses a combination of biometric data, name, social security number, address, etc. to create a new record for a person who doesn't actually exist, for the purposes of opening and using an account in that name.



Editors & Contributors

Kevin Alan Tussy
Editor-in-Chief

LinkedIn

John Wojewidka
Senior Editor

LinkedIn

Josh Rose
Tech Editor

LinkedIn

© 2020 Liveness.com. All rights reserved.