Fingerprint Verification Competition

The Fingerprint Verification Competition (FVC) is a now-outdated international competition focused on the assessment of fingerprint verification software. [1] It became largely irrelevant to commercial fingerprint providers once the U.S. National Institute of Standards and Technology (NIST) introduced the free, public Proprietary Fingerprint Test (PFT I, II and III, running from 2004 to the present). Once the NIST test was available, the major commercial fingerprint vendors switched to it and ceased paying to participate in the FVC. The FVC tests that followed in 2006 and later were populated largely by anonymous participants and researchers. A few commercial vendors who, for reasons known only to them, did not participate in the public NIST test instead entered the FVC and then touted the fact that they “beat” a cadre of anonymous participants. It is therefore worth examining which participants a vendor “beat” before deciding whether the vendor is being deceptive.

Another deceptive practice has been the use, by vendors of competing biometric modalities, of FVC accuracy averages to characterize the performance and accuracy of all fingerprint products as inferior to their alternative modality.

How the FVC operated: A subset of fingerprint impressions acquired with various sensors was provided to registered participants, to allow them to adjust the parameters of their algorithms. Participants were requested to provide enroll and match executable files of their algorithms; the evaluation was conducted at the organizers’ facilities using the submitted executable files on a sequestered database, acquired with the same sensors as the training set.

The organizers of FVC are:

  • Biometric System Laboratory (University of Bologna, Italy)
  • Pattern Recognition and Image Processing Laboratory (Michigan State University)
  • Biometric Test Center (San Jose State University)
  • Biometric Recognition Group – ATVS (Universidad Autónoma de Madrid, from FVC2006)

Each participant can submit up to one algorithm to each of the open and light categories.

The first, second and third international competitions on fingerprint verification (FVC2000, FVC2002 and FVC2004) were organized in 2000, 2002 and 2004, respectively. These events received great attention from both the academic and industrial biometric communities. They established a common benchmark, allowing developers to unambiguously compare their algorithms, and provided an overview of the state of the art in fingerprint recognition. Based on the response of the biometrics community, FVC2000, FVC2002 and FVC2004 were undoubtedly successful initiatives. The interest shown in previous editions by the biometrics research community prompted the organizers to schedule a new competition for 2006.

In 2006 there were:

  • Four new databases (three real and one synthetic)
  • Two categories (open and light)
  • 53 participants (27 industrial, 13 academic, and 13 independent developers)
  • 70 algorithms submitted (44 in the open category and 26 in the light category)

Aim

  • Continuous advances in the field of biometric systems and, in particular, in fingerprint-based systems (both in matching techniques and sensing devices) require that performance evaluation of biometric systems be carried out at regular intervals.
  • The aim of FVC2006 is to track recent advances in fingerprint verification, for both academia and industry, and to benchmark the state-of-the-art in fingerprint technology.
  • Further testing, on interoperability and quality-related issues, will be performed in a second stage, after the competition is completed.
  • This competition should not be viewed as an "official" performance certification of biometric systems, since only parts of the system software will be evaluated by using images from sensors not native to each system. Nonetheless, the results of this competition will give a useful overview of the state-of-the-art in this field and will provide guidance to the participants for improving their algorithms.

Categories

  • Two different sub-competitions (open category and light category) will be organized using the same databases.
  • Each participant is allowed to submit only one algorithm to each category.
  • The open category has no limits on memory requirements or template size. For practical testing reasons, the maximum response time of the algorithms is limited as follows: the maximum time for each enrollment is five seconds and the maximum time for each match is three seconds. The test will be executed under the Windows XP Professional OS on a PC with an Intel Pentium 4 (3.20 GHz, 1.00 GB RAM).
  • The light category is intended for algorithms conceived for light architectures and therefore characterized by low computing requirements, limited memory usage and a small template size. The maximum time for enrollment is 0.3 seconds and the maximum time for matching is 0.1 seconds. The test will be executed under the Windows XP Professional OS on a PC with an Intel Pentium 4 (3.20 GHz, 1.00 GB RAM). The maximum memory that can be allocated by the processes is 4 MB, and the maximum template size is 2 kB. A utility will be made available to the participants to test whether their executables comply with the memory requirement.
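
As an illustrative sketch (not part of the official FVC tools), the per-operation time limits above can be checked with a simple timing harness; `dummy_enroll` is a hypothetical stand-in for a participant's enroll step:

```python
import time

# FVC2006 time limits in seconds, per category (taken from the rules above).
LIMITS = {
    "open":  {"enroll": 5.0, "match": 3.0},
    "light": {"enroll": 0.3, "match": 0.1},
}

def check_time(fn, limit_s, *args):
    """Run fn(*args), returning (result, elapsed seconds, within-limit flag)."""
    start = time.perf_counter()
    result = fn(*args)
    elapsed = time.perf_counter() - start
    return result, elapsed, elapsed <= limit_s

# Hypothetical stand-in; a real harness would invoke the submitted executable.
def dummy_enroll(image):
    return b"template"

template, elapsed, ok = check_time(dummy_enroll, LIMITS["light"]["enroll"], None)
```

A real evaluation would also need to track peak memory against the 4 MB cap, which this sketch does not attempt.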

Databases

One of the most important and time-consuming tasks of any biometric system evaluation is the data collection. Organizers have created a multi-database, containing four disjoint fingerprint databases, each collected with a different sensor technology.

  • Four distinct databases, provided by the organizers, constitute the benchmark: DB1, DB2, DB3 and DB4. Each database is 150 fingers wide and 12 samples per finger in depth (1800 fingerprint images). Each database is partitioned into two disjoint subsets A and B:
  • subsets DB1-A, DB2-A, DB3-A and DB4-A, which contain the first 140 fingers (1680 images) of DB1, DB2, DB3 and DB4, respectively, are used for the algorithm performance evaluation.
  • subsets DB1-B, DB2-B, DB3-B and DB4-B, containing the last 10 fingers (120 images) of DB1, DB2, DB3 and DB4, respectively, will be made available to the participants as a development set to allow parameter tuning before the submission.
  • During performance evaluation, fingerprints belonging to the same database will be matched against each other.
  • The image format is BMP, 256 gray-levels, uncompressed.
  • The image size and resolution vary depending on the database (detailed information is available to the participants).
  • Data collection in FVC2006 was performed without deliberately introducing difficulties such as exaggerated distortion, large amounts of rotation and displacement, or wet and dry impressions (as was done in the previous editions), but the population is more heterogeneous and also includes manual workers and elderly people. The volunteers were simply asked to place their fingers naturally on the acquisition device, and no constraints were enforced to guarantee a minimum quality in the acquired images. The final datasets were selected from a larger database by choosing the most difficult fingers according to a quality index, to make the benchmark sufficiently difficult for a technology evaluation.
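
The 140/10 split described above can be sketched as follows (finger and sample indices are 1-based purely for illustration):

```python
FINGERS, SAMPLES = 150, 12   # each database: 150 fingers x 12 samples = 1800 images

# Subset A: first 140 fingers, used for performance evaluation.
subset_a = [(f, s) for f in range(1, 141) for s in range(1, SAMPLES + 1)]
# Subset B: last 10 fingers, released to participants for parameter tuning.
subset_b = [(f, s) for f in range(141, FINGERS + 1) for s in range(1, SAMPLES + 1)]

print(len(subset_a), len(subset_b))  # 1680 120
```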

Performance evaluation

For each database and for each algorithm:

  • Each sample in subset A is matched against the remaining samples of the same finger to compute the false non-match rate (FNMR) (also referred to as the false rejection rate, FRR). If image g is matched against h, the symmetric match (i.e., h against g) is not executed, to avoid correlation in the scores. The total number of genuine tests (in case no enrollment rejections occur) is:
     ((12*11) /2) * 140 = 9,240 
  • The first sample of each finger in subset A is matched against the first sample of the remaining fingers in A to compute the false match rate (FMR) (also referred to as the false acceptance rate, FAR). If image g is matched against h, the symmetric match (i.e., h against g) is not executed, to avoid correlation in the scores. The total number of impostor tests (in case no enrollment rejections occur) is:
     ((140*139) /2) = 9,730 
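
Both counts are simply numbers of unordered pairs, which can be verified with a quick Python check:

```python
from math import comb

FINGERS_A, SAMPLES = 140, 12

# Genuine tests: all unordered sample pairs within each of the 140 fingers.
genuine_tests = comb(SAMPLES, 2) * FINGERS_A    # (12*11)/2 * 140
# Impostor tests: all unordered pairs of first samples across fingers.
impostor_tests = comb(FINGERS_A, 2)             # (140*139)/2

print(genuine_tests, impostor_tests)  # 9240 9730
```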

Although it is possible to reject images at enrollment, this is strongly discouraged. In FVC2006, as in FVC2004 and FVC2002, rejections at enrollment are incorporated into the other error rates for the final ranking; in particular, each rejection at enrollment produces a "ghost" template that fails to match (matching score zero) against all the remaining fingerprints.

For each algorithm and for each database, the following performance indicators are reported:

  • REJENROLL (Number of rejected fingerprints during enrollment)
  • REJNGRA (Number of rejected fingerprints during genuine matches)
  • REJNIRA (Number of rejected fingerprints during impostor matches)
  • Impostor and Genuine score distributions
  • FMR(t)/FNMR(t) curves, where t is the acceptance threshold
  • ROC(t) curve
  • EER (equal-error-rate)
  • EER* (the value that EER would take if the matching failures were excluded from the computation of FMR and FNMR)
  • FMR100 (the lowest FNMR for FMR<=1%)
  • FMR1000 (the lowest FNMR for FMR<=0.1%)
  • ZeroFMR (the lowest FNMR for FMR=0%)
  • ZeroFNMR (the lowest FMR for FNMR=0%)
  • Average enrollment time
  • Average matching time
  • Average and maximum template size
  • Maximum amount of memory allocated
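
As a minimal sketch of how the threshold-based indicators relate to the score distributions, FMR(t), FNMR(t) and a crude EER estimate can be computed from lists of genuine and impostor scores (function names here are illustrative, not FVC code):

```python
def fmr_fnmr(genuine, impostor, t):
    """At threshold t: FMR = share of impostor scores accepted (>= t),
    FNMR = share of genuine scores rejected (< t)."""
    fmr = sum(s >= t for s in impostor) / len(impostor)
    fnmr = sum(s < t for s in genuine) / len(genuine)
    return fmr, fnmr

def estimate_eer(genuine, impostor):
    """Scan every observed score as a threshold and return the operating
    point where FMR and FNMR are closest (a simple EER estimate)."""
    thresholds = sorted(set(genuine) | set(impostor))
    rates = [fmr_fnmr(genuine, impostor, t) for t in thresholds]
    _, value = min((abs(f - n), (f + n) / 2) for f, n in rates)
    return value

# Toy, perfectly separable scores: the EER is 0.
print(estimate_eer([0.7, 0.8, 0.9], [0.1, 0.2, 0.3]))  # 0.0
```

Indicators such as FMR100 follow the same pattern: sweep t and take the lowest FNMR among thresholds where FMR stays at or below the target rate.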

The following average performance indicators are reported over the four databases:

  • Average EER
  • Average FMR100
  • Average FMR1000
  • Average ZeroFMR
  • Average REJENROLL (average number of rejected fingerprints during enrollment)
  • Average REJMATCH (average number of rejected fingerprints during genuine and impostor matches)
  • Average enrollment time
  • Average matching time
  • Average template size (calculated on the average template size for each database)
  • Average memory allocated (calculated on the maximum amount of memory allocated for each database)

Participants

  • Participants can be from academia or industry, or can be independent developers.
  • Anonymous participation will be accepted: participants will be allowed to decide whether or not they want to publish their names together with their algorithm’s performance. Participants will be confidentially informed about the performance of their algorithm before they are required to make this decision. In case a participant decides to remain anonymous, the label "anonymous organization" will be used, and the real identity will not be revealed.
  • Together with their submissions, participants will be required to provide some general, high-level information about their algorithms (similar to that reported in FVC2004, see [R. Cappelli, D. Maio, D. Maltoni, J.L. Wayman and A.K. Jain, “Performance Evaluation of Fingerprint Verification Systems”, IEEE Transactions on Pattern Analysis and Machine Intelligence, January 2006]). Whilst this required information will not disclose industrial secrets, since it is a very high-level description of the approaches, it could be of interest to the entire fingerprint community.
  • Organizers of FVC2006 will not participate in the contest.

References

  1. "Fingerprint Verification Competition | Semantic Scholar". www.semanticscholar.org. Retrieved 2020-11-15.
This article is issued from Wikipedia. The text is licensed under Creative Commons - Attribution - Sharealike. Additional terms may apply for the media files.