Biometric Liveness Detection Explained


What is “Liveness”?
In biometrics, Liveness Detection is an AI-driven computer system's ability to determine whether it is interfacing with a physically present human being or an inanimate spoof artifact.
Note: It’s not called “Liveliness”. Don’t make that rookie mistake!

The History of Liveness

In 1950, Alan Turing developed the famous "Turing Test", which measures a computer's ability to exhibit human-like behavior.  Conversely, Liveness Detection is AI that determines whether a computer is interacting with a live human.

Alan Turing, c. 1928


The "Godmother of Liveness"

Dorothy E. Denning is a member of the National Cyber Security Hall of Fame and coined the term "Liveness" in her 2001 Information Security Magazine article, "It Is "Liveness," Not Secrecy, That Counts".  She states:

“A good biometrics system should not depend on secrecy," and:

“... biometric prints need not be kept secret, but the validation process must check for liveness of the readings."

Decades ahead of her time, Dorothy E. Denning’s vision for Liveness Detection in 1-to-1 biometric authentication couldn't have been more correct.

Dorothy E. Denning

Should We Fear Centralized Biometric Authentication?

Fear of 1-to-1 biometric authentication stems from the belief that centralized storage of biometric data creates a "honeypot" that, if breached, compromises the security of all other accounts that rely on that same biometric data.

Biometric detractors argue, "You can reset your password if stolen, but you can't reset your face".  While this is true, it is a failure of imagination to stop there.  We must ask, "What would make centralized biometric authentication safe?"

The answer is Certified Liveness Detection.  With it, the biometric honeypot is no longer something to fear, because we no longer have to try to keep inherently non-secret biometric data like photos and fingerprints secret.

How Liveness Protects Us

Ms. Denning's photo posted above is biometric data, and it is now cached on your computer.  Is she somehow more vulnerable now that you have it?  Not if her accounts are secured with Certified Liveness Detection, because that photo won't fool the AI.  Nor will a video, a copy of her driver's license, passport, fingerprint, or iris.  She must be physically present to access her accounts, so she need not worry about keeping her biometric data "secret".

Liveness Detection prevents bots and bad actors from using photos, videos, masks, or other biometric data (stolen or otherwise) to create or access online accounts.  Liveness ensures only real humans can create and access accounts.

Liveness checks solve some very serious problems.  For example, Facebook deleted 2.2 billion fake accounts in the first quarter of 2019 alone!  Requiring proof of Liveness would have prevented these fakes from ever being created.

Note: In 2019, the crypto-currency wallet ZenGo offered a challenge: spoof Certified Liveness Detection and "steal" one Bitcoin (worth over $11,000 at the time).  A hi-res photo of the ZenGo CEO was provided and the savvy cypherpunks gave it their best shot.  The ZenGo wallet remained unspoofed and the bitcoin stayed safe, proving the efficacy of Certified Liveness Detection in one of the most public displays of biometric security to date.

Liveness for Onboarding, KYC and Enrollment

Requiring that every new user prove their Liveness before they are even asked to present an ID document during digital onboarding is itself a huge deterrent to fraudsters, who never want their real faces on camera.
If an authenticator has a weakness, the bad guys will find it and exploit it to create as many fake accounts as possible.  To prevent this, Certified Liveness Detection during new account onboarding should be mandatory.  Now we know that the new account belongs to a real human and their biometric data can be stored as a trusted reference of their digital identity in the future.

The Double-Identity problem (also called 1 Person = 1 Account) arises when one human can create more than one account and commit fraud by impersonating another individual, or by creating a synthetic identity.  Double-Identity plagues systems like online voting, benefit allocation, and newer concepts like Universal Basic Income (UBI).  Further discussion of how biometrics can solve this problem is coming soon.

Liveness for Ongoing Authentication (Password Replacement)

Since most biometric attacks are spoofs, Certified Liveness Detection during biometric user authentication must be mandatory.  With multiple high-quality photos of almost everyone available on Google and Facebook, a biometric authenticator cannot rely on secrecy for security. 

Liveness is the first and most important line of defense against targeted spoof attacks; second is a very low FAR (False Acceptance Rate) for accurate biometric matching.

With Certified Liveness Detection, you couldn't make a copy of your biometric data that would fool the system even if you wanted to.  Liveness catches copies by detecting generation loss, so only the genuine user gains access.

No Stored Liveness Data = No Honeypot Risk

Two types of data are required for every secure biometric authentication: User Biometric Data (for matching), and Liveness Data (to prove it's first-generation data collected from a live person). 

Liveness Data should be timestamped so it is only valid for a few minutes, and then deleted.  New Liveness Data must be collected for every authentication attempt.  Only User Biometric Data should ever be saved, never Liveness Data.
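As a minimal sketch of this policy, a server-side check might timestamp each piece of Liveness Data, reject anything older than a few minutes, and delete it after a single use.  All names, the TTL value, and the store layout below are illustrative assumptions, not any real vendor's API:

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

# Hypothetical policy: Liveness Data is valid for only a few minutes,
# is used exactly once, and is then deleted.  Only User Biometric Data
# would ever be stored long-term.
LIVENESS_TTL = timedelta(minutes=3)

def liveness_data_is_fresh(collected_at: datetime, now: Optional[datetime] = None) -> bool:
    """True only if the Liveness Data was collected within the TTL window."""
    now = now or datetime.now(timezone.utc)
    return timedelta(0) <= (now - collected_at) <= LIVENESS_TTL

def consume_liveness_data(liveness_store: dict, session_id: str) -> bool:
    """One-time use: delete the session's Liveness Data, then report freshness."""
    record = liveness_store.pop(session_id, None)  # always delete, pass or fail
    if record is None:
        return False  # no Liveness Data was collected for this attempt
    return liveness_data_is_fresh(record["collected_at"])
```

Because the data is deleted whether the check passes or fails, a replayed or stale capture can never be presented twice.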

Just as photos from LinkedIn or Instagram can’t spoof Certified Liveness Detection, neither can the standalone User Biometric Data.  By deleting the Liveness Data and only storing the User Biometric Data, there is no honeypot risk.  

Note: Think of the stored User Biometric Data as the lock, the newly collected User Biometric Data as a one-time-use key, and the Liveness Data as proof the key has never been used before. 

ISO/IEC 30107 - Liveness Testing Global Standard is the International Organization for Standardization's (ISO) testing guidance for the evaluation of Anti-Spoofing technology, a.k.a. Presentation Attack Detection (PAD).
"Biometrics" literally means "to measure life," so it's ironic that it took until late 2017 for anyone to release official guidance on how to determine whether the subject of a biometric scan is actually alive.

Due to "hill-climbing" attacks, biometric systems should never reveal which part of the system did or didn't catch a spoof.  While ISO 30107-3 gets a lot right, it unfortunately encourages testing Liveness and Matching at the same time.  The scientific method requires that as few variables as possible be tested at once, so Liveness testing should return a solely Boolean (true/false) response.  Tests should not allow systems to have multiple decision layers that could let an artifact pass Liveness but fail Matching because it didn't look enough like the enrolled subject.
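The Boolean-response principle above can be sketched in a few lines: the server may compute an internal liveness score, but callers only ever see pass/fail, never the score or which check tripped, so an attacker gets no gradient to "hill-climb" toward a passing artifact.  The threshold and the scoring stub here are hypothetical placeholders:

```python
# Hypothetical threshold; a real PAD system would tune this from test data.
LIVENESS_THRESHOLD = 0.98

def _internal_liveness_score(frame_data: bytes) -> float:
    # Stand-in for a real PAD model; a production system would analyze
    # the captured frames.  Kept private to the server.
    return 0.0 if not frame_data else 0.5

def check_liveness(frame_data: bytes) -> bool:
    # Return only True/False; the raw score never leaves the server,
    # and no reason for failure is disclosed.
    return _internal_liveness_score(frame_data) >= LIVENESS_THRESHOLD
```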
Spoof Artifacts

When a non-living object that exhibits human traits (an "artifact") is presented to a camera or biometric sensor, it's called a "spoof".  Photos, videos, masks, and dolls are all common examples of spoof artifacts.


 Artifact Level - Description & Examples
 Level 1 (A) - Hi-res paper & digital photos, hi-def videos exhibiting challenge/response, and human-worn paper masks.
 Level 2 (B) - Commercially available lifelike dolls, and human-worn resin, latex & silicone 3D masks under $300 in price.
 Level 3 (C) - Custom-made ultra-realistic 3D masks, wax heads, etc., up to $3,000 in creation cost.


Un-Certified Liveness

Unfortunately, some types of Liveness Detection are un-certifiable because they are not secure enough to pass the lowest level of the ISO 30107 Presentation Attack Detection guidance requirements. 

Un-certifiable Liveness Detection methods include: blink, smile, turn/nod, colored flashing lights, making random faces, speaking random numbers, and many more, all easily spoofed.

User security and hard-won brand recognition are put at risk by trusting unscrupulous vendors' exaggerated claims.

When vendors claim to have "Robust Liveness Detection", they should "Pass the test or give it a rest!"

Note: Watch USAA Bank's un-certified "Facial Recognition" app security get spoofed by a crude photo slideshow, easily unlocking one of their users' bank accounts.



Certified Liveness
The NIST/NVLAP-accredited lab iBeta, in Denver, CO, USA, is currently the only Certified Liveness testing lab guided by the ISO 30107 global testing standard.

Organizations have a fiduciary duty to provide Certified Liveness Detection to their users whenever biometric onboarding or authentication is required.

 Certified 3D Face Liveness Vendors (Certified iBeta PAD Level 1 & 2 tech):

e4 Global - South Africa
PBSA Group - South Africa
Solus Connect - United Kingdom
Sum & Substance - United Kingdom
TiC Now
& more...

Non-Certified Vendors: Though they remain unnamed, over 10 vendors have tried and failed iBeta's PAD testing.


Compliant Fingerprint Liveness Vendors (all vendors compliant iBeta PAD Level 1):

HID Global
More coming soon...


 Compliant Palmprint Liveness Vendors (all vendors compliant iBeta PAD Level 1):

RedRock Biometrics

Note on FIDO PAD Testing
- In September 2018, FIDO released its own spec to test and "certify" biometric anti-spoofing capabilities.  However, some experts have criticized it for setting the bar much too low, as its thresholds are much weaker than the iBeta Certification levels.  For example: the Samsung S10's under-screen fingerprint sensor, made by Qualcomm, was Certified FIDO Level 2 PAD, but was spoofed in minutes by a 3D-printed finger.

Editors' Note: Should Liveness Detection Be Required By Law?

We believe that legislation must be passed to make Certified Liveness Detection mandatory if biometrics are used for Identity & Access Management (IAM).  Our personal data has already been breached, so we can no longer trust Knowledge Based Authentication (KBA).  We must turn our focus from maintaining databases full of "secrets" to securing attack surfaces.  Current laws already require organic foods to be certified, and every medical drug must be tested and approved.  In turn, governments around the world should require Certified Liveness Detection be used to protect the digital safety and biometric security of their citizens.


The Problem With CAPTCHAs

CAPTCHA, an acronym for "Completely Automated Public Turing test to tell Computers and Humans Apart", is a simple challenge–response test used in computing to determine whether the user is human or a bot.

In an article on The Verge, Josh Dzieza writes, “Google pitted one of its machine learning algorithms against humans in solving the most distorted text CAPTCHAs: the computer got the test right 99.8 percent of the time, while the humans got a mere 33 percent.”

Jason Polakis, a computer scientist who used off-the-shelf image recognition tools, including Google's own image search, to solve Google's image CAPTCHA with 70% accuracy, states: “You need something that’s easy for an average human, it shouldn’t be bound to a specific subgroup of people, and it should be hard for computers at the same time.”

Even without AI, CAPTCHA-solving services allow bots to bypass challenge–response tests by using proxy humans to complete them.  With so many people willing to do this work, it's cheap to defeat at scale: workers earn between $0.25 and $0.60 for every 1,000 CAPTCHAs solved (webemployed).
Resources & Whitepapers

Information Security Magazine - Dorothy E. Denning's 2001 article, “It Is "Liveness," Not Secrecy, That Counts”

There's a New Sheriff in Town - Standardized PAD Testing & Liveness Detection - Biometrics Final Frontier

Gartner: “Presentation attack detection (PAD, a.k.a., “liveness testing”) is a key selection criterion.  ISO/IEC 30107 “Information Technology — Biometric Presentation Attack Detection” was published in 2017.”
(Gartner’s Market Guide for User Authentication, Analysts: Ant Allan, David Mahdi, Published: 26 November 2018).  FaceTec’s ZoOm was cited in the report.  Subscriber access required.

Forrester: "The State Of Facial Recognition For Authentication - Expedites Critical Identity Processes For Consumers And Employees," by Andras Cser, Alexander Spiliotes, Merritt Maxim, with Stephanie Balaouras, Madeline Cyr, Peggy Dostie.  Subscriber access required.


Glossary - Biometrics Industry & Testing Terms:

1:1 (1-to-1) – Comparing the biometric data from a subject User to the biometric data stored for the expected User.  If the biometric data does not match above the chosen FAR level, the result is a failed match.

1:N (1-to-N) – Comparing the biometric data from one individual to the biometric data from a list of known individuals, the faces of the people on the list that look similar are returned.  This is used for facial recognition surveillance, but can also be used to flag duplicate enrollments.

Artifact (Artefact) –  An inanimate object that seeks to reproduce human biometric traits. 

Authentication – The concurrent Liveness Detection, 3D depth detection, and biometric data verification (i.e., face sharing) of the User.

Bad Actor – A criminal; a person with intentions to commit fraud by deceiving others.

Biometric – The measurement and comparison of data representing the unique physical traits of an individual for the purposes of identifying that individual based on those unique traits.

Certification – The testing of a system to verify its ability to meet or exceed a specified performance standard.  Testing labs like iBeta issue certifications.

Complicit User Fraud – When a User pretends to have fraud perpetrated against them, but has been involved in a scheme to defraud by stealing an asset and trying to get it replaced by an institution.

Cooperative User – When a testing organization is guided by ISO 30107-3, the human Subjects used in the tests must provide any and all biometric data that is requested.  This helps to assess the complicit User fraud and phishing risk, but only applies if the test includes matching (not recommended).

Centralized Biometric – Biometric data is collected on any supported device, encrypted and sent to a server for enrollment and later authentication for that device or any other supported device.  When the User’s original biometric data is stored on a secure 3rd-party server, that data can continue to be used as the source of trust and their identity can be established and verified at any time.  Any supported device can be used to collect and send biometric data to the server for comparison, enabling Users to access their accounts from all of their devices, new devices, etc., just like with passwords.  Liveness is the most critical component of a centralized biometric system, and because certified Liveness did not exist until recently, centralized biometrics have not yet been widely deployed.

Credential Sharing – When two or more individuals do not keep their credentials secret and can access each other's accounts.  This can be done to subvert licensing fees or to trick an employer into paying for time not worked (also called “buddy punching”).

Credential Stuffing – A cyberattack where stolen account credentials, usually comprising lists of usernames and/or email addresses and the corresponding passwords, are used to gain unauthorized user account access.

Decentralized Biometric – When biometric data is captured and stored on a single device and the data never leaves that device.  Fingerprint readers in smartphones and Apple’s Face ID are examples of decentralized biometrics.  They only unlock one specific device, they require re-enrollment on any new device, and they do not prove the identity of the User whatsoever.  Decentralized biometric systems can be defeated easily if a bad actor knows the device's override PIN, allowing them to overwrite the User’s biometric data with their own.

End User – An individual human who is using an application.

Enrollment – When biometric data is collected for the first time, encrypted and sent to the server.  Note: Liveness must be verified and a 1:N check should be performed against all the other enrollments to check for duplicates.

Face Authentication – Authentication has three parts: Liveness Detection, 3D Depth Detection and Identity Verification.  All must be done concurrently on the same face frames.

Face Matching – Newly captured images/biometric data of a person are compared to the enrolled (previously saved) biometric data of the expected User, determining if they are the same.

Face Recognition – Images/biometric data of a person are compared against a large list of known individuals to determine if they are the same person.

Face Verification – Matching the biometric data of the Subject User to the biometric data of the Expected User.

FAR (False Acceptance Rate) – The probability that the system will accept an imposter’s biometric data as the correct User’s data and incorrectly provide access to the imposter.

FIDO – Stands for Fast IDentity Online: a standards organization that provides guidance to organizations that choose to use Decentralized Biometric Systems.

FRR/FNMR (False Rejection Rate/False Non-Match Rate) – The probability that a system will reject the correct User when that User’s biometric data is presented to the sensor.  If the FRR is high, Users will be frustrated with the system because they are prevented from accessing their own accounts.
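As a small illustrative aside, these two error rates are simply ratios over attempt counts.  The example numbers below are made up, not drawn from any vendor's results:

```python
# Illustrative only: how FAR and FRR are measured from raw trial counts.
def false_acceptance_rate(imposter_accepts: int, imposter_attempts: int) -> float:
    """Fraction of imposter attempts the system wrongly accepted (FAR)."""
    return imposter_accepts / imposter_attempts

def false_rejection_rate(genuine_rejects: int, genuine_attempts: int) -> float:
    """Fraction of genuine-User attempts the system wrongly rejected (FRR)."""
    return genuine_rejects / genuine_attempts

# e.g., 1 wrongful accept in 100,000 imposter attempts -> FAR of 0.00001,
# and 2 wrongful rejections in 100 genuine attempts -> FRR of 0.02.
```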

Hill-Climbing Attack – When an attacker uses information returned by the biometric authenticator (match level or liveness score) to learn how to curate their attacks and gain a higher probability of spoofing the system. 

iBeta – A NIST/NVLAP-accredited testing lab in Denver, Colorado; the only lab currently certifying biometric systems for anti-spoofing/Liveness Detection to the ISO 30107-3 standard.

Identity & Access Management (IAM) – A framework of policies and technologies to ensure only authorized users have the appropriate access to restricted technology resources, services, physical locations and accounts. Also called identity management (IdM).

Imposter – A living person with traits so similar to the Subject User that the system determines the biometric data is from the same person.

ISO 30107-3 – The International Organization for Standardization’s testing guidance for the evaluation of Anti-Spoofing technology.

Knowledge-Based Authentication (KBA) - Authentication method that seeks to prove the identity of someone accessing a digital service. KBA requires knowing a user's private information to prove that the person requesting access is the owner of the digital identity. Static KBA is based on a pre-agreed set of shared secrets. Dynamic KBA is based on questions generated from additional personal information.

Liveness Detection – The ability for a biometric system to determine if data has been collected from a live human or an inanimate, non-living Artifact.

NIST – National Institute of Standards and Technology – The U.S. government agency that provides measurement science, standards, and technology to advance economic competitiveness in business and government.

Phishing – When a User is tricked into giving a Bad Actor their passwords, PII, credentials, or biometric data.  Example: A User gets a phone call from a fake customer service agent and they request the User’s password to a specific website.

PII – Personally Identifiable Information: information that can be used on its own or with other information to identify, contact, or locate a single person, or to identify an individual in context.

Presentation Attack Detection (PAD) – A framework for detecting presentation attack events. Related to Liveness Detection and Anti-Spoofing.

Root Identity Provider – An organization that stores biometric data appended to the corresponding personal information of individuals, and allows other organizations to verify the identities of Subject Users by providing biometric data to the Root Identity Provider for comparison.

Spoof – When a non-living object that exhibits some biometric traits is presented to a camera or biometric sensor.  Photos, masks or dolls are examples of Artifacts used in spoofs.

Subject User – The individual that is presenting their biometric data to the biometric sensor at that moment.

Synthetic Identity - When a bad actor uses a combination of biometric data, name, social security number, address, etc. to create a new record for a person who doesn't actually exist, for the purposes of using an account in that name.

Editors & Contributors

Kevin Alan Tussy


John Wojewidka
Senior Editor


Josh Rose
Tech Editor


Copyright 2019