Facial recognition systems and software offer ways to identify and confirm an individual's identity using their face. These systems can be used to identify people in photos, videos, or in real time. Facial recognition is also a category of biometric security, similar to voice, fingerprint, and iris recognition. A facial recognition system can be used to unlock electronic devices and for access control. It is seeing increased use by law enforcement to identify people during routine police stops, during moments of political unrest, and during events for protected speech and expression, such as protests or riots.
Facial recognition systems use algorithms to generate a mathematical or numerical map of a person's face. These maps are based on details of the face, such as the distance between the eyes and the shape of the chin. Some systems are developed to positively identify a person; often, however, a system cannot capture enough specific detail to make a positive identification, and it can make mistakes. To combat the potential for these mistakes, some systems provide a probability score for a match between an image of an unknown person and faces from a database. The algorithm tends to go through three basic steps, which can also be three standalone technologies combined into a single facial recognition system. These steps are the following:
- Detection: This is the process of finding a face in an image, which is similar to face detection systems used in some cameras and smartphones. This system is only focused on finding a face and can be used standalone, such as in camera systems.
- Analysis: The analysis or attribution system works to map the face. This is done through a series of measurements, which depend on the type of facial recognition system or algorithm used, and which are converted into a string of numbers or points, also known as a "faceprint." Social media filters often use these systems to map a filter onto a face. Analysis systems can suffer from glitches, especially misidentification, which can prove problematic when paired with a recognition database.
- Recognition: This step compares the numbers generated by the analysis system against a database of known faces to confirm the identity of the face. The process is frequently used for verification, such as in security features on smartphones or laptops, or for identification, such as at borders or traffic stops.
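The three steps above can be sketched as a minimal pipeline. Everything here is illustrative, not any vendor's API: the detector is stubbed out, the faceprint is a short made-up vector (real systems use embeddings of 128 or more dimensions), and the gallery is a hypothetical in-memory dictionary.

```python
import math

# Hypothetical gallery: identity -> faceprint (a short numeric vector).
GALLERY = {
    "alice": [0.1, 0.8, 0.3, 0.5],
    "bob":   [0.9, 0.2, 0.7, 0.1],
}

def detect(image):
    """Step 1 (detection): find faces in the image.
    Stubbed here -- a real detector would locate bounding boxes."""
    return image.get("faces", [])

def analyze(face):
    """Step 2 (analysis): convert a face region into a faceprint,
    i.e. a vector of measurements such as the distance between the eyes."""
    return face["measurements"]

def recognize(faceprint, threshold=0.5):
    """Step 3 (recognition): compare the faceprint against the gallery and
    return the closest identity, or None if nothing falls within the threshold."""
    best_name, best_dist = None, float("inf")
    for name, reference in GALLERY.items():
        dist = math.dist(faceprint, reference)  # Euclidean distance
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None

image = {"faces": [{"measurements": [0.12, 0.79, 0.31, 0.52]}]}
for face in detect(image):
    print(recognize(analyze(face)))  # -> alice
```

Returning `None` when no gallery entry is close enough is what produces the "zero results" outcome described below for false negatives.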
A system's ability to identify people varies depending on what it is developed to achieve, as different systems pursue different goals. Identification becomes more complicated under certain conditions, such as poor lighting, low image resolution, or suboptimal angles or views. These conditions can generate two types of errors: "false negatives" and "false positives."
A false negative error occurs when the facial recognition system fails to match a person's face to an image in a database. In this kind of error, the system returns zero results in response to a query, even when the correct answer exists in the database. This type of error has been detected in various studies in which a system is tested against suboptimal versions of images from a known database.
A false positive error occurs when the facial recognition system matches a person's face to a face in the database, but the match is incorrect. This error has also been studied: a known image is given to a facial recognition system, and the system confidently presents an incorrect answer. False positives are concerning because, especially in law enforcement scenarios, a misidentification can send an innocent individual to jail.
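Given ground truth for each query, the two error types can be told apart mechanically. The identities and query outcomes below are invented purely to illustrate the four possible cases.

```python
def classify_error(returned_id, true_id, true_id_in_database):
    """Label the outcome of one facial recognition query."""
    if returned_id is None:
        # The system returned zero results.
        if true_id_in_database:
            return "false negative"   # the right answer existed but was missed
        return "correct rejection"
    if returned_id == true_id:
        return "correct match"
    return "false positive"           # a match was returned, but to the wrong person

# Illustrative queries: (system output, ground truth, was the truth enrolled?)
queries = [
    ("alice", "alice", True),
    (None,    "bob",   True),    # false negative
    ("carol", "dave",  True),    # false positive
    (None,    "eve",   False),   # correct rejection
]
for returned, truth, enrolled in queries:
    print(classify_error(returned, truth, enrolled))
```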
A system's false negative and false positive rates can be used to set a comparison threshold. The comparison threshold applies the similarity scores calculated by facial recognition algorithms to tune a system's sensitivity to these types of errors, and thresholds can be adjusted to account for trade-offs between accuracy and risk when returning results. Comparison thresholds are, essentially, a way of using the common errors of facial recognition systems to generate better, more accurate results.
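One way to see the trade-off is to sweep the threshold over a set of scored comparisons and watch the two error rates move in opposite directions. The similarity scores below are invented for illustration; real systems compute them from faceprints.

```python
# Each pair: (similarity score, whether the pair is truly the same person).
comparisons = [
    (0.95, True), (0.90, True), (0.72, True), (0.55, True),
    (0.60, False), (0.45, False), (0.30, False), (0.20, False),
]

def error_rates(threshold):
    """Return (false_negative_rate, false_positive_rate) at a threshold,
    where scores >= threshold are declared matches."""
    genuine  = [s for s, same in comparisons if same]
    impostor = [s for s, same in comparisons if not same]
    fnr = sum(s < threshold for s in genuine) / len(genuine)
    fpr = sum(s >= threshold for s in impostor) / len(impostor)
    return fnr, fpr

# Raising the threshold lowers false positives but raises false negatives.
for t in (0.3, 0.5, 0.7):
    fnr, fpr = error_rates(t)
    print(f"threshold={t:.1f}  FNR={fnr:.2f}  FPR={fpr:.2f}")
```

A deployment that fears wrongful matches (such as law enforcement) would pick a high threshold; one that fears missed matches (such as finding missing persons) would pick a lower one.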
Another attempt to increase the accuracy of facial recognition systems has been the development of three-dimensional facial recognition, which can create more points of analysis in a face because the depth of facial features can be measured. Similarly, texture analysis systems have been developed, which use algorithms to turn a patch of skin into a mathematical, measurable space, distinguishing the lines, pores, and actual skin texture that differentiate similar faces. Texture analysis is not meant to stand alone but can be combined with a larger facial recognition system to produce more comprehensive, more accurate results.
Facial recognition systems tend to struggle with specific challenges, including changes in expression, such as blinking, frowning, or smiling; the presence of a mustache or beard; and the wearing of eyeglasses, especially sunglasses. New systems, including new facial topography systems, have been developed to address these challenges. However, other factors can still trick these systems, including significant glare on eyeglasses or sunglasses; long hair obscuring the central part of the face; poor lighting that under- or over-exposes a face; and insufficient image resolution.
The use of facial recognition technology offers various potential advantages. These include increased security, such as helping identify wanted criminals or terrorists; building access control; building surveillance; and locking and unlocking personal devices.
Facial recognition can make it easier to track down burglars, thieves, and trespassers, and knowledge that a facial recognition system is present can serve as a deterrent, especially for petty crime. Similarly, companies can use facial recognition for cybersecurity and digital access control, letting devices recognize a face in place of a password since, in theory, a face cannot be phished, hacked, or changed.
Facial recognition technology can help remove bias and ease public concern over certain law enforcement activities, such as controversial, unjustified police stop-and-searches. This can be done by automating the process: facial recognition technology scans crowds or individuals, helping reduce searches of law-abiding citizens while picking out potential suspects for a stop-and-search or related activity.
As the technology becomes more widespread and more robust, more use cases aimed at convenience are being developed. For example, using facial recognition to unlock personal electronic devices can be faster than password or passcode entry; it is similar in speed to other biometric measures but can be perceived as faster because the process can be initiated sooner. It has also been suggested for payment, with facial processing systems at checkouts or ATMs that could link to an individual's bank account or payment system, letting them pay for items without getting out a credit card or cash and making payment automatic and seamless.
Similar to the convenience example above, because facial recognition can verify identity quickly, it can outpace potential cyberattacks and advanced hacking tools, ensuring a person's identity is verified before any information can be compromised.
Most facial recognition systems can be integrated into other tools. This can increase the potential use cases and advantages of the use of facial recognition tools. For example, facial recognition could be used to help law enforcement agencies track down missing persons, identify criminals or suspects of crimes, increase security in airports, and identify potential threats.
Facial recognition has many uses, and more continue to be discovered and developed. It can be used for marketing, sending targeted ads, security and law enforcement, access control for buildings or personal devices, banking, tracking attendance, and gaming.
Facial recognition use cases
Airports and border control
Facial recognition is becoming more familiar at borders and in airports, driven by biometric passports, which allow individuals to walk through automated ePassport control and reach the gate faster. This can reduce wait times while also increasing security, with some predicting that these types of systems can be up to 97 percent accurate.
Facial recognition could be used for biometric online banking, replacing one-time passwords and allowing users to authorize transactions by looking at their smartphone or computer, leaving no password for hackers to compromise. And if someone tries to spoof the system with a photo, some facial recognition systems use "liveness" detection, which works to determine whether the biometric sample comes from a live human or a fake representation. This could further make payment cards and signatures obsolete.
Facial recognition-ready photographs are being included on identification cards, such as passports and driver's licenses. This allows officers at routine traffic stops to use facial recognition to confirm that the individual presenting the license is its owner, which can help detect and reduce ID theft and fraud.
Facial recognition systems have been trialed with drones for law enforcement and military applications. At borders especially, drones could use facial recognition or soft biometric recognition to recognize and track individuals, and some proposed systems include the capacity to see through walls and ceilings to detect and identify those individuals.
Finding missing persons
Facial recognition can be used to find missing persons and victims of human trafficking. This can be especially useful when a missing person is in a database, and the system can alert law enforcement as soon as they are recognized by a facial recognition system, whether it is in an airport, retail store, or public space.
Facial recognition systems are being developed and deployed for security and law enforcement while the United States has no federal law protecting an individual's biometric data. This has led to concerns, as more than half of the United States population is considered to have already had their faceprint captured, with or without consent, and the resulting data stored without their knowledge. This data could be accessed by malicious actors or used by government agencies or advertisers to track individuals, which is even more concerning given the potential for false positives to implicate an innocent individual in a crime. By contrast, the General Data Protection Regulation (GDPR), Europe's data privacy and security law, explicitly treats biometric data as a special category of personal data. According to the Department of Homeland Security, the only way for individuals to avoid having their biometric information collected when traveling internationally is to not travel.
Facial recognition systems have been found to have around 90 percent accuracy; however, this is not true when assessing the facial recognition results of some populations, such as people with darker skin and women. Facial recognition systems have been found to exhibit a racial bias and falsely identify Black and Asian faces 10 to 100 times more often than white faces. Further, the technology has been found to misidentify or falsely identify women more than men, making Black women particularly vulnerable to algorithmic bias. And some algorithms used by U.S. law enforcement falsely identified Native Americans more than people from other demographics.
Most developers of facial recognition algorithms have seen accuracy increase as models are used in the wild, and the technology has improved drastically with the introduction of deep learning techniques. However, bias has been found to persist despite these accuracy gains. This has led to calls for facial recognition software to be banned, or for moratoriums to be put in place until the biases are removed from the algorithms, especially as they may infringe the rights of individuals in specific demographic groups more than others.
Another concern around facial recognition is the potential for intrusion on privacy and data privacy. This is partially due to the lack of transparency around how information is gathered, stored, and managed, with some suggesting the use of facial recognition infringes on citizens' inherent rights not to be under constant government surveillance and to control the use of their own image. In 2020, the European Commission considered a temporary ban on facial recognition technology in public spaces to allow changes to be made to its legal framework and guidelines on privacy and ethical abuse.
Privacy concerns tend to focus on unsecured data storage practices that could expose facial recognition data, creating potential security threats. Many organizations host the data on local servers, where a lack of IT security professionals can leave network security unenforced and lead to vulnerabilities. These concerns are less acute when systems are used on a personal device, as the data is often handled and stored on the device itself.
Other privacy concerns center on the lack of informed consent and transparency. Facial recognition systems depend on data mining, especially online, to build large datasets of images, ideally captured multiple times under various lighting conditions and angles. The largest sources of this type of data tend to be online sites, such as social media platforms, or an individual's cloud camera storage, and the terms of service and copyright licenses of these services allow liberal, and sometimes illegitimate, interpretations of how the images may be used.
Facial recognition could also lead to ubiquitous cameras and data analytics, and eventually to mass surveillance that compromises citizens' liberties and privacy rights. While it can help law enforcement and governments pursue criminals and control borders, it can erode the privacy rights of ordinary, innocent people. These concerns have led to calls for blanket bans on facial recognition tools, especially those used for mass surveillance, particularly as artificial intelligence can increase the power and reach of these systems. This opens the potential for malicious use of facial recognition to manipulate and threaten people, government agencies, and democracy, in ways similar to how artificial intelligence has been used to create bot attacks.
As noted above, law enforcement agencies have begun more frequently using facial recognition in routine police duties. Agencies collect mugshots from arrestees and compare them with local, state, and federal face recognition databases. Once an arrestee's photo is taken, the mugshot can be uploaded to one or more databases and can be scanned every time a law enforcement agency does a facial recognition search. Some of these systems can also run mugshots against photos from social media, CCTV, traffic cameras, or photographs officers have taken in the field. Some estimates suggest that 25 percent or more of local, state, and federal law enforcement agencies in the U.S. are capable of running facial recognition searches on their own databases or those of other agencies. The Washington Post reported that, as of 2013, twenty-six states allowed law enforcement agencies to search, or request searches of, driver's license databases to compare against mugshots.
While local law enforcement agencies keep their own databases of mugshots, many agencies share database access with each other. Often this is handled through the FBI's Next Generation Identification database, which contains more than 30 million facial recognition records and is offered to local and state law enforcement agencies. The FBI has a department focused on facial recognition services: Facial Analysis, Comparison, and Evaluation (FACE) Services. This team has access to over 400 million non-criminal photos from sources including state DMV databases, the State Department, and the sixteen U.S. states that allow driver's license and ID photos to be utilized. Georgetown University has estimated that close to half of all American adults have been entered into at least one facial recognition database, due to access to DMV databases and the number of Americans who hold passports and U.S. visas and, therefore, have photos in the State Department's database.
The concerns with the use of facial recognition by law enforcement come from how the facial data is generated and how it is applied. For law enforcement, much facial recognition data is derived from mugshot images. These images are taken upon arrest, before guilt has been determined, and mugshot photos are almost never removed from the database, regardless of the outcome of the criminal trial. Also, as noted above, facial recognition is prone to error, with the FBI admitting in its privacy impact assessment on facial recognition that the system may not be sufficiently reliable to accurately locate other photos of the same identity, increasing the rate of misidentification.
The FBI asserts its system can find the true candidate among the top fifty profiles 85 percent of the time, but this holds only when the true candidate exists in the gallery. If the true candidate is not in the system, the system may still produce one or more potential matches, creating false positive results. As explored above, many facial recognition systems struggle with, and show bias in, results for specific communities, such as ethnic minorities in the United States. Meanwhile, criminal databases across the United States are disproportionately filled with the faces of African Americans, Latinos, and immigrants, due in part to racially biased police practices, which facial recognition programs can replicate through false positives and difficulties identifying these communities.
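The top-fifty figure describes rank-based retrieval: the system returns the k gallery entries most similar to the probe, whether or not the true identity is enrolled at all. A minimal sketch, with invented names and similarity scores, shows why an unenrolled probe still yields candidates:

```python
def top_k_candidates(probe_scores, k=50):
    """Return the k gallery identities most similar to the probe.
    Note: candidates are returned even when the true person is not
    in the gallery -- the source of the false positives described above."""
    ranked = sorted(probe_scores.items(), key=lambda item: item[1], reverse=True)
    return [name for name, _ in ranked[:k]]

# Invented similarities for a probe whose true identity is NOT enrolled:
# the system still ranks the closest lookalikes.
scores = {"alice": 0.62, "bob": 0.58, "carol": 0.41, "dave": 0.33}
print(top_k_candidates(scores, k=2))  # -> ['alice', 'bob']
```

Without a minimum-score cutoff, such a list always contains someone, which is why a rank-k "hit rate" measured only on enrolled subjects can overstate real-world reliability.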
The concerns with mass surveillance, invasions of data privacy, and the lack of informed consent are amplified more by law enforcement than by any other use of facial recognition, especially in the name of border security, airport security, and catching criminals. Yet unlike other criminal searches, most jurisdictions do not require law enforcement agencies to obtain a warrant for a facial recognition search. Some jurisdictions do not even require law enforcement to suspect someone of committing a crime before using facial recognition to identify them, and only some agencies or jurisdictions have rules or laws protecting individuals engaged in protected free speech.
In Canada, the RCMP was investigated for its use of facial recognition technology, which allowed the law enforcement agency to match photographs of people against a databank of more than 3 billion images scraped from the Internet without the consent of the individuals. As a result, billions of people were unknowingly included in what was called a 24/7 lineup. The practice was found to constitute mass surveillance and a violation of Canada's Personal Information Protection and Electronic Documents Act (PIPEDA). During the investigation into its unlawful use of facial recognition technology, the RCMP acknowledged the use but suggested it was limited and primarily aimed at rescuing children who have been, or are, victims of online sexual abuse.
However, the investigation found the RCMP's account of the scope of its use did not satisfactorily explain the vast majority of searches made. Perhaps more critically, the investigation found the RCMP had serious and systemic gaps in the policies and systems needed to track, identify, assess, and control novel collections of personal information. These checks are important for ensuring the RCMP complies with the law when using new technology and new sources of data.
The investigation into the RCMP's use of facial recognition technology, and the finding that the agency had contravened PIPEDA, stopped its use of the technology. During the investigation, the RCMP suggested that a third-party legal compliance check to ensure the service remained compliant with PIPEDA would create an unreasonable obligation, but it agreed to implement recommendations to improve policies, systems, and training. These included privacy assessments of third-party data collection practices to ensure they are used in accordance with PIPEDA and related privacy legislation. The case led to calls for the Canadian Parliament to amend PIPEDA to clarify the obligations of law enforcement in Canada regarding the collection of personal information.