Abhijit Ahaskar

Explained: What is ailing face recognition tech?

Updated: Dec 14



Social media is abuzz with accounts of people being misled, or pressured at airports, into signing up for DigiYatra and using their face biometrics. Several airports in India now use facial recognition technology (FRT) at entry gates for passenger verification, and its adoption in public places is expected to grow further. RTI filings by the digital rights group Internet Freedom Foundation (IFF) reveal that 170 FRT systems are being deployed across India in airports, railway stations, ports, schools, and government offices.


What is FRT?

FRT uses machine learning (ML) to extract data points from a person's face and convert them into a digital signature. This signature is then compared against an existing database to find possible matches. An FRT system typically draws on a network of CCTV cameras and computer vision, the field of AI that enables computers to extract information from digital images and videos. Beyond verifying passengers at airports and workers at offices, FRT is increasingly being used to identify wanted individuals in crowded places.
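To make that pipeline concrete, here is a minimal sketch of the matching step in Python. It assumes a face embedding (a fixed-length vector of data points) has already been extracted by an ML model; the `extract_embedding` stub is a hypothetical stand-in for such a model, not a real library call.

```python
# Minimal sketch of FRT matching: compare a probe face embedding
# against an enrolled gallery and return the best match, if any.
import numpy as np

def extract_embedding(face_image: np.ndarray) -> np.ndarray:
    """Hypothetical: a real system would run a trained neural network here."""
    raise NotImplementedError

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Similarity of two embeddings, in [-1, 1]; higher means more alike.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(probe: np.ndarray, gallery: dict[str, np.ndarray],
               threshold: float = 0.8) -> str | None:
    """Return the identity whose enrolled embedding is most similar to
    the probe, or None if no score clears the threshold."""
    scores = {name: cosine_similarity(probe, emb)
              for name, emb in gallery.items()}
    name, score = max(scores.items(), key=lambda kv: kv[1])
    return name if score >= threshold else None
```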


What makes FRT a double-edged sword?

FRT can be quite effective in ideal conditions, such as good lighting and high-quality images. However, its use for law enforcement hasn't gone down well with privacy and human rights advocates.


They warn that a lack of high-quality face data for training, developer bias against people of colour, and misidentification due to false positives could lead to unfair targeting and harassment of people from marginalized communities.


There have been several instances in the US where the use of FRT has led to wrongful arrests and jail time for Black people, The New York Times reported in January 2021. In a more recent case, from February 2023, Porcha Woodruff, a pregnant woman in Detroit, was arrested for robbery and carjacking on the basis of a false face recognition match. Woodruff is now suing the city and the police officials who arrested her.


In India, Delhi Police used FRT to identify around 1,100 people suspected of involvement in the communal riots of February 2020, which claimed more than 50 lives.


The legal research group Vidhi Centre for Legal Policy has warned that the use of FRT by police in Delhi could lead to unfair targeting of Muslims. Its concern stems from two issues: the uneven distribution of CCTV cameras across the city and the over-policing of Muslim neighbourhoods.



Why is it important to have a framework in place to govern the use of FRT? 

While India now has a data protection law, the Digital Personal Data Protection (DPDP) Act, it is yet to lay down a specific law or framework for the fair and consistent use of technologies such as artificial intelligence (AI) and FRT.


Lawyers and digital rights advocates believe that in the absence of specific laws or frameworks, FRT can be misused or used arbitrarily by police. 


For instance, responding to an RTI query from IFF in July, Delhi Police said that “all matches above 80% similarity are treated as positive results while matches below 80% similarity are treated as false positive results which require additional corroborative evidence.”

Legal and AI experts argue that an 80% similarity threshold is too low for any AI system, as it leaves significant room for false positives. Even FRT systems operating at higher thresholds of 97% and 99% have produced false positives in several instances, with factors like skin tone still leading to errors.
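To see why the threshold matters, consider this illustrative simulation: it draws made-up similarity scores for genuine pairs (same person) and impostor pairs (different people), then counts how many comparisons each threshold gets wrong. The score distributions are assumptions chosen for demonstration only; real distributions depend on the model and the data.

```python
# Illustrative only: how the match threshold trades false positives
# (impostors wrongly accepted) against false negatives (genuine
# matches wrongly rejected). Distributions below are invented.
import numpy as np

rng = np.random.default_rng(0)
genuine = rng.normal(0.90, 0.05, 10_000)   # same-person comparison scores
impostor = rng.normal(0.65, 0.10, 10_000)  # different-person comparison scores

for threshold in (0.80, 0.97):
    false_positive_rate = np.mean(impostor >= threshold)
    false_negative_rate = np.mean(genuine < threshold)
    print(f"threshold={threshold:.2f}  "
          f"false positives={false_positive_rate:.4f}  "
          f"false negatives={false_negative_rate:.4f}")
```

Raising the threshold cuts false positives but rejects more genuine matches, which is why a fixed cut-off like 80% cannot by itself guarantee fairness or accuracy.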


Many big tech firms working on face recognition-based AI systems have also expressed concerns over FRT and have asked for regulatory intervention.

In November 2021, Meta announced that it was shutting down its facial recognition system and would delete the face-scan data of more than a billion Facebook users. The social media platform had been using FRT to automatically tag photos with names, building one of the largest photo repositories in the process.


Similarly, Microsoft and Amazon have refused to sell facial recognition software to law enforcement agencies, with Microsoft insisting that it will not do so until there is a federal law regulating its use.


Despite these concerns, the technology is seeing increasing adoption by police. In July 2022, the city of New Orleans overturned a previous ban on the use of FRT by police, though the technology can only be used in investigations of violent crimes, with approval from high-ranking officials.

On the other side of the Atlantic, the UK Border Force is planning to use FRT at airports to ease congestion at immigration counters, while the UK Home Office intends to use FRT to monitor migrant offenders.


Will an FRT framework increase accountability?

A framework would make stakeholders, including private entities and the police, more accountable. For instance, FRT operations under DigiYatra are currently handled by a not-for-profit private company, the DigiYatra Foundation. It is a joint venture between five Indian airports, with the government holding a 26% stake. What is worrying is that the DigiYatra Foundation does not fall under the purview of the RTI Act, which means it cannot be compelled to disclose information about its privacy and data handling practices.

For use by law enforcement agencies, a legal framework could set a standard threshold for positive matches, minimizing the risk of false positives and wrongful arrests.


Can generative AI improve FRT?

Recent advances in AI, particularly generative AI, are also poised to boost the accuracy of FRT. Generative AI can produce large volumes of realistic and varied synthetic data for training facial recognition algorithms, and can create deepfakes to test the robustness of FRT systems. This can reduce the risk of false positives and make FRT systems more resistant to spoofing.
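As a rough illustration of the training-data idea, the sketch below mixes real face images with synthetic ones before training. The `FaceGenerator` class is a hypothetical stand-in for a trained generative model (for example, a GAN or diffusion model); the random noise it returns merely marks where generated face images would go.

```python
# Sketch of synthetic-data augmentation for FRT training.
# FaceGenerator is hypothetical; a real system would load a trained
# generative model here instead of returning random noise.
import numpy as np

class FaceGenerator:
    """Hypothetical generative model producing 112x112 RGB face images."""
    def sample(self, n: int, rng: np.random.Generator) -> np.ndarray:
        # Placeholder: noise stands in for generated face images.
        return rng.random((n, 112, 112, 3))

def augment_training_set(real_images: np.ndarray,
                         generator: FaceGenerator,
                         synthetic_fraction: float = 0.5,
                         seed: int = 0) -> np.ndarray:
    """Append synthetic images to a real training set, e.g. to cover
    under-represented skin tones, poses, or lighting conditions.
    `real_images` is expected to have shape (N, 112, 112, 3)."""
    rng = np.random.default_rng(seed)
    n_synthetic = int(len(real_images) * synthetic_fraction)
    synthetic = generator.sample(n_synthetic, rng)
    return np.concatenate([real_images, synthetic], axis=0)
```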



Image credit: Pixabay
