On Thursday 20 December 2007, Daniel Drake wrote:
> Mark Vytlacil wrote:
> > I have been experimenting with the Microsoft reader for a while now
> > and have been getting a lot of false rejections. Cleaning the
> > reader and more care in pressing help, but not that much. The
> > minutiae are not very consistent in position or number. I was
> > puzzled that Daniel reported such good results.
>
> If you enable libfprint debug log messages at configure time, it will
> print out bozorth3 match scores on the console. The default threshold
> is 40 (so a score >= 40 is a match). I'd be interested to hear what
> kinds of scores you get for false rejections, and also in comparison
> to the scores you get when purposely scanning 2 different fingers.

I tried runs of 10 or 20 and got scores from 9 to 58, averaging roughly
30 using the right finger. Using the wrong finger, I got scores of 7 to
12. Of course, it would be best to test all combinations of fingers,
but these give you a rough idea. I could adjust the threshold score to
get fewer rejections.
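To get a feel for what moving the threshold would do, here is a
throwaway C sketch. The score lists are only made-up samples spread
over the ranges I reported above (genuine 9-58, impostor 7-12), not
real measurements; the program just counts how many scans of each kind
would land on the wrong side of a given threshold.

/* threshold_sweep.c - rough sketch of the false-reject / false-accept
 * trade-off as the bozorth3 threshold moves.  The scores below are
 * made-up samples, not real data.
 * Build: gcc -std=c99 -o threshold_sweep threshold_sweep.c
 */
#include <stdio.h>

int main(void)
{
    /* hypothetical genuine (right finger) and impostor (wrong finger)
     * scores, spread over the ranges reported above */
    int genuine[]  = { 9, 14, 22, 27, 30, 31, 35, 41, 50, 58 };
    int impostor[] = { 7, 8, 9, 10, 11, 12 };
    int n_gen = sizeof(genuine) / sizeof(genuine[0]);
    int n_imp = sizeof(impostor) / sizeof(impostor[0]);

    printf("threshold  false_rejects  false_accepts\n");
    for (int threshold = 10; threshold <= 45; threshold += 5) {
        int fr = 0, fa = 0;
        for (int i = 0; i < n_gen; i++)
            if (genuine[i] < threshold)    /* genuine scan rejected */
                fr++;
        for (int i = 0; i < n_imp; i++)
            if (impostor[i] >= threshold)  /* impostor scan accepted */
                fa++;
        printf("%9d  %13d  %13d\n", threshold, fr, fa);
    }
    return 0;
}

With made-up numbers like these, dropping the threshold from 40 to
around 15 removes most of the false rejections without yet accepting
the wrong finger, but obviously real score data would be needed before
touching the default.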
> I wouldn't rely on the visual results too much. With a very dirty
> sensor, I get many false minutiae being detected around the edge of
> my finger, which change in position on each scan. Yet for some
> reason, the matching accuracy is still very high.
>
> > Comparing my image to the sample image on the wiki shows that mine
> > is much "muddier". I have gaps in the ridges, smeary areas, and
> > various unclear spots. Many of these make for minutiae that are
> > really artifacts rather than true ridge patterns. I considered
> > that my reader was defective and captured a blurry image.
>
> Are these "muddy" features persistent? For example, on my fingers, I
> have a couple of "skin creases" in various places. However, these
> creases seem to be persistent. Even though fprint detects various
> minutiae at these points where there truthfully aren't any, the fact
> that these features are persistent means that matching is very
> accurate regardless.

I have more of these creases than my son, and they are persistently
present, but like everything else they vary in image quality. Compared
to my son's print, mine is "grayer". My son's print changes appearance
relatively little when it is binarized; mine changes a fair amount.
Both the darks and the lights are more gray, and in some areas the
contrast is quite poor. There are lots of irregular little light spots
in the dark lines and irregular little dark spots in the light lines.
These things are not persistent. The ridges are broader on my print
and seem to have more fine structure visible, which is quite variable.
After a hot shower, I get a sweat artifact: it appears as very dark
spots and is resistant to wiping my finger.

> > Then I got my son (age 22) to try it. His fingerprint is as clear
> > as the sample on the wiki! I could clearly see all of the true
> > minutiae on his print. I hate to think that I am getting old and
> > my fingerprint is getting soft, but I guess so.
>
> Interesting. I'm 21. While a population size of 3 isn't enough to
> make reliable conclusions here, this is an interesting observation.

You are right about the small sample size, but it makes intuitive
sense. Skin loses elasticity with age, and this could make a print
less clear, especially while being pressed.

> > The Windows software works well for me, but it captures the same
> > finger 4 times to enroll a print. We could do the same and just
> > enroll the subset of minutiae that consistently match. For
> > verification, we would consider the proportion of enrolled
> > minutiae that matched. It is easy to make suggestions like this,
> > but not so easy to do the implementation work. Looking at the
> > code, I can see that you would have to bring some match data out
> > of the bozorth library routines or add some functionality to them
> > to generate this subset and use it. As it is now, the main match
> > function just returns a score.
>
> There are simpler things we can do first. Firstly, on the driver
> level: uru4000 samples the finger immediately as it is detected. In
> real life, pressing your finger on a sensor is not an atomic process,
> so if we were to take several samples of the finger and use one of
> the later samples instead, we'd have a more complete (and maybe a
> 'more settled') print.
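In driver-neutral terms, I picture that looking roughly like the
sketch below. capture_frame() is only a dummy stand-in for however the
uru4000 driver actually reads an image, and the frame size is just an
example; the only point is the "throw away the first few frames" loop.

/* Sketch of the "don't use the very first frame" idea.  Nothing here
 * is real libfprint code; capture_frame() is a dummy placeholder.
 */
#include <string.h>

#define FRAME_SIZE   (384 * 289)   /* example sensor dimensions */
#define NUM_SAMPLES  4             /* frames to grab after finger-on */

/* dummy stand-in: pretend this blocks until the sensor delivers one
 * frame into buf */
static void capture_frame(unsigned char *buf)
{
    memset(buf, 0, FRAME_SIZE);
}

/* Grab NUM_SAMPLES frames once the finger is detected and keep only
 * the last one, on the theory that the finger has settled by then. */
static void capture_settled_frame(unsigned char *out)
{
    unsigned char frame[FRAME_SIZE];

    for (int i = 0; i < NUM_SAMPLES; i++)
        capture_frame(frame);          /* earlier frames are discarded */

    memcpy(out, frame, FRAME_SIZE);    /* the final, settled frame */
}

int main(void)
{
    unsigned char settled[FRAME_SIZE];
    capture_settled_frame(settled);
    return 0;
}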
> Secondly, on the library level:
> - take, say, 3 enrollment prints
> - use NIST's fingerprint image quality algorithm (NFIQ) to pick the
>   best one(s)
> - if NFIQ provides more than one best print, choose the one with the
>   most minutiae, or the highest 'confidence per minutia' average, or
>   something like that.
>
> It's hard to say how much this will improve things, but it is quite
> realistic to be able to implement these without too much trouble.
> This is planned, but I've got a lot of other stuff to do before I can
> focus on this.
>
> Thanks,
> Daniel

I am interested in this subject right now and might work on this, but
I am a slow amateur programmer, so I won't make promises.
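Just to convince myself that the selection step really is small, here
is a rough sketch. NFIQ grades an image from 1 (best) to 5 (worst), as
I understand it; the struct, the numbers, and the way the quality level
and minutiae count get filled in are all placeholders for whatever the
NBIS calls actually return, so only the selection logic matters here.

/* Sketch of "pick the best of a few enrollment scans": best (lowest)
 * NFIQ level wins, ties broken by minutiae count.  All data below is
 * made up; in reality the fields would come from NFIQ and minutiae
 * detection.
 */
#include <stdio.h>

struct enroll_scan {
    const char *name;    /* label, just for printing in this sketch */
    int nfiq_level;      /* 1 = best quality ... 5 = worst */
    int num_minutiae;    /* how many minutiae were detected */
};

/* Return the index of the scan with the best (lowest) NFIQ level;
 * if several scans tie, prefer the one with the most minutiae. */
static int pick_best_scan(const struct enroll_scan *scans, int count)
{
    int best = 0;
    for (int i = 1; i < count; i++) {
        if (scans[i].nfiq_level < scans[best].nfiq_level ||
            (scans[i].nfiq_level == scans[best].nfiq_level &&
             scans[i].num_minutiae > scans[best].num_minutiae))
            best = i;
    }
    return best;
}

int main(void)
{
    /* pretend three enrollment scans have already been graded */
    struct enroll_scan scans[] = {
        { "scan 1", 3, 31 },
        { "scan 2", 2, 38 },
        { "scan 3", 2, 45 },
    };
    int best = pick_best_scan(scans, 3);

    printf("would enroll %s (NFIQ %d, %d minutiae)\n",
           scans[best].name, scans[best].nfiq_level,
           scans[best].num_minutiae);
    return 0;
}

The 'confidence per minutia' variant Daniel mentions would just change
the tie-break comparison.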
