The human face conveys our thoughts and emotions, and we seek it out when trying to identify others. It's also a focal point for the future of identification technology.
Governments, law enforcement and even businesses are interested in being able to identify people at a distance.
Alice O'Toole is a professor at the University of Texas at Dallas who specializes in the face, and she talked with KERA about the strides technology has made with facial recognition.
Interview Highlights
On the importance of the human face
It's very important; it is one of the first things we look at to get social information, the intent of another person. We look at it to recognize people, to categorize them along a lot of dimensions, like their sex, their race, their age. All of these things help us to approach them in ways that are socially appropriate and adaptive.
On the advancement of the technology
A decade ago or so, computers were very good at recognizing faces when the conditions under which the image was taken were really optimal — so passport photos and things where you have some control over facial expression, illumination and viewpoint.
What happened about five or six years ago, 2012 or so, was the introduction of a new class of deep learning algorithms. They're modeled after the human visual system or the primate visual system, and they involve tens of millions of computations that are neural-like in character. Those computations cascade one layer of neurons onto the next layer and so on, and that seems to have introduced a bit of a breakthrough in algorithms.
Now, we are able to use these algorithms to learn lots of different images of a face and map all of those wildly different images onto a single identity. And that takes the technology quite a ways over where it was five or 10 years ago.
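The "map many images onto a single identity" idea can be sketched as comparing embedding vectors: a trained network turns each photo into a vector, and two photos are judged to be the same person when their vectors are close enough. Everything below — the vector values, the 0.95 threshold, the tiny three-dimensional embeddings — is invented for illustration; real systems produce embeddings with hundreds of dimensions from tens of millions of learned computations.

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two embedding vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings a trained network might produce for three photos.
# Two wildly different images of the same person should land close together;
# a photo of a different person should land far away.
alice_photo_1 = np.array([0.90, 0.10, 0.30])
alice_photo_2 = np.array([0.85, 0.15, 0.28])  # different lighting and viewpoint
bob_photo     = np.array([0.10, 0.90, 0.40])

THRESHOLD = 0.95  # illustrative decision threshold, not from any real system

same_person      = cosine_similarity(alice_photo_1, alice_photo_2) >= THRESHOLD
different_person = cosine_similarity(alice_photo_1, bob_photo) >= THRESHOLD
print(same_person, different_person)  # True False
```

The breakthrough the deep learning approach delivered is precisely that the network learns to produce nearby vectors for the same face across changes in pose, lighting and expression — the hard part that earlier systems needed controlled passport-style photos to avoid.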
https://www.youtube.com/watch?v=nT_PXjLol_8
On facial recognition algorithms being racially biased
Algorithms might be more accurate on faces of one race than on faces of other races. It is really not a surprise to scientists that this is the case, because the algorithms we use today learn by example.
They are trained to basically map different images of an identity onto a single identity. And computer vision people use training data that is available usually from the web. So, if you downloaded eight million images from the web, which is not out of range for what these things are trained on, you would ask yourself what percentage of those would be from a particular race or another race and so on.
Just like people, we expect the algorithm to be best on the image type it's learned the most of.
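The point about training composition can be made concrete with a toy calculation. The eight-million figure comes from the interview; the group names, the split, and the assumption that accuracy grows with the log of the number of examples seen are all invented purely to illustrate the direction of the effect.

```python
import math

# Hypothetical composition of an 8-million-image web-scraped training set.
training_counts = {"group_A": 6_000_000, "group_B": 1_500_000, "group_C": 500_000}
total = sum(training_counts.values())  # 8 million, the scale mentioned above

def hypothetical_accuracy(n_examples):
    # Toy model only: accuracy improves with diminishing returns
    # as the algorithm sees more examples of a group.
    return min(0.999, 0.5 + 0.07 * math.log10(n_examples))

for group, n in training_counts.items():
    share = n / total
    print(f"{group}: {share:.0%} of training data -> "
          f"~{hypothetical_accuracy(n):.3f} accuracy")
```

Whatever the true functional form, the qualitative conclusion matches the interview: the group that dominates the training data ends up with the best recognition accuracy, simply because the algorithm has learned the most examples of that image type.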
Interview responses have been lightly edited for clarity.