The National Institute of Standards and Technology (NIST), part of the U.S. Department of Commerce, conducts large-scale testing of face recognition software in its Face Recognition Vendor Test (FRVT). One of the core tests is FRVT 1:1 Verification, where algorithms are evaluated on how well they perform 1:1 facial verification across multiple large-scale datasets.

What is 1:1 verification? It is the scenario where a person claims to be a certain individual, such as the holder of a specific passport or a specific user requesting access to a system through facial biometrics. The system performs 1:1 facial verification by capturing a new face sample and comparing it against the reference face sample stored in the passport document or in the system. For example, an automatic border gate compares the new image against the facial image from your passport, and your phone compares a new selfie against the stored reference data when you unlock it.
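At its core, a 1:1 verification decision comes down to comparing a similarity score against a threshold. The sketch below is purely illustrative and is not Mobai's actual algorithm: the embeddings, the cosine-similarity measure, and the `threshold` value are all placeholder assumptions standing in for a real face recognition pipeline.

```python
import numpy as np

def verify(reference_embedding, probe_embedding, threshold=0.6):
    """1:1 verification: accept the identity claim only if the new
    (probe) sample is similar enough to the stored reference sample.
    Embeddings and threshold are illustrative placeholders."""
    ref = np.asarray(reference_embedding, dtype=float)
    probe = np.asarray(probe_embedding, dtype=float)
    # Cosine similarity between the two face embeddings
    sim = ref @ probe / (np.linalg.norm(ref) * np.linalg.norm(probe))
    return sim >= threshold

# e.g. a claimed passport holder at a border gate (toy vectors)
stored = [0.1, 0.8, 0.3]     # embedding from the passport photo
selfie = [0.12, 0.79, 0.28]  # embedding from the live capture
print(verify(stored, selfie))  # near-identical vectors -> True
```

Note that the system never searches a database here; it answers a single yes/no question about one claimed identity, which is what distinguishes 1:1 verification from 1:N identification.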

In NIST FRVT 1:1, the evaluation is conducted on large datasets of individuals, with images collected from visa, mugshot, wild, and border-control scenarios. Below you can see some sample photos from the NIST datasets:

At Mobai we are very proud of being a top contender in the NIST FRVT 1:1, especially for our results on the Mugshot and Visa-Border datasets. The Visa-Border dataset is one of the more interesting scenarios, since each comparison pairs an image from a visa document, such as a passport, with an actual border-crossing image. This test is a good representation of border-control and (self-service) identity-verification use cases. The Mobai results can be found at https://pages.nist.gov/frvt/reportcards/11/mobai_001.html

The test results are illustrated in graphs showing the relationship between the false non-match rate (FNMR) and the false match rate (FMR). The FMR is the proportion of comparisons between two different persons that are falsely considered a match. The FNMR is the proportion of comparisons between samples of the same person that are falsely considered not to match. You can think of FMR as the security dimension (how large is the risk that a person can successfully match as someone else?) and FNMR as the usability dimension (how likely is it that a person will not be considered a match when compared against their own reference image?).
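The two definitions above translate directly into code: given similarity scores from impostor comparisons (different persons) and genuine comparisons (same person), FMR and FNMR are just the fractions falling on the wrong side of the decision threshold. The scores below are toy numbers for illustration, not NIST data.

```python
import numpy as np

def fmr_fnmr(impostor_scores, genuine_scores, threshold):
    """Compute FMR and FNMR at a given decision threshold.

    impostor_scores: similarity scores from comparisons between
                     two different persons (higher = more similar).
    genuine_scores:  similarity scores from comparisons between
                     samples of the same person.
    """
    impostor = np.asarray(impostor_scores)
    genuine = np.asarray(genuine_scores)
    # FMR: fraction of different-person comparisons wrongly accepted
    fmr = np.mean(impostor >= threshold)
    # FNMR: fraction of same-person comparisons wrongly rejected
    fnmr = np.mean(genuine < threshold)
    return fmr, fnmr

# Toy score distributions, purely illustrative
impostor = [0.10, 0.25, 0.40, 0.55, 0.30]
genuine = [0.80, 0.90, 0.65, 0.95, 0.50]
print(fmr_fnmr(impostor, genuine, threshold=0.6))
```

Sweeping the threshold and plotting the resulting (FMR, FNMR) pairs produces exactly the trade-off curves shown in the NIST report cards: raising the threshold pushes FMR down (more security) at the cost of a higher FNMR (less usability).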

Above you can see the results for Mobai on the Visa-Border dataset, where we obtain an FNMR of 0.0093 at FMR 1e-06, which is extremely low and only 0.006 behind the leader. To rephrase the results a bit: when the security dimension (FMR) is fixed at a level where only 1 in 1 000 000 comparisons between different persons is wrongly accepted as a match, our system wrongly rejects the correct person approximately 1 time in 100 attempts. Compared to human performance in face recognition, automated systems do significantly better: studies show that when humans compare unfamiliar faces, the chance of wrongly declaring a match between two distinct persons can be as high as 30%, or 3 out of 10.

We are very proud to stand among the other participants with one of the best-performing algorithms.