Meta to bring back face recognition tech to detect scams, verify users

Staff Writer

Updated: Dec 14, 2024



Facebook-parent Meta Platforms is reintroducing face recognition technology (FRT) for user verification and scam detection, three years after shutting it down over privacy concerns. The social media giant announced on Monday that it plans to use FRT to protect users from scammers running malicious celeb-bait ads. Meta will also use FRT to help users verify their identity and regain control of compromised Facebook or Instagram accounts.


Celeb-bait ads are a form of advertising that uses photos or videos of celebrities to grab attention. Scammers widely abuse this format to impersonate celebrities and trick unsuspecting fans into engaging with them. Clicking on such ads leads users to malicious websites, where they are scammed into sharing personal data or money. Many of these scams use AI-generated images to impersonate celebrities.


Meta currently uses machine learning (ML) classifiers to review millions of ads on its platforms every day to detect scams and violations of its ad policies. While this analysis covers text, images and video, Meta acknowledged that celeb-bait ads are not easy to detect.

The new FRT system will compare faces in a celebrity-endorsed ad with the celebrity's Facebook and Instagram profile pictures. If the FRT finds a match and further investigation reveals that the ad is a scam, it will be blocked.


Meta plans to inform celebrities who are widely impersonated in such scams and automatically enroll them in a new protection program. Celebrities who wish to opt out of the program can do so from the Accounts Centre page on Instagram or Facebook.


Further, FRT will be used to help users regain access to accounts that have been compromised in a breach or whose passwords they have forgotten. As part of the account recovery process, users will have to upload a video selfie, which the FRT will compare with their profile picture.


Though Meta has assured that facial data generated during the FRT system's comparison process will be deleted immediately and not retained for any other purpose, the technology's revival is likely to spark fresh criticism.


In November 2021, Meta announced that it was rolling back the FRT system and deleting the facial recognition templates of more than 1 billion people.

The terminated FRT system automatically recognised people in photos and videos. It also gave users the option to enable face recognition for tag suggestions, so their name would be suggested as a tag in photos and videos featuring them.


At that time, it was one of the largest repositories of facial biometric data in the world. Even though Meta claimed the data was only for internal use, many saw it as a concern given the company's lacklustre track record on user privacy.


In February 2022, Meta was sued by the US state of Texas for collecting the biometric data of millions of people without their consent and sharing it with third parties for commercial purposes. In July 2024, Meta agreed to pay Texas $1.4 billion in settlement.


Use of FRT has also been flagged by privacy advocates. Privacy watchdog the Electronic Frontier Foundation has called such systems "alarmingly more error-prone when applied to anyone who is not white and cisgender."


Testing of various FRT systems by the National Institute of Standards and Technology (NIST) has also shown markedly higher false-positive rates for non-white faces and for women.


In India, the use of face recognition has grown in recent years, even though the country has no law regulating the technology or preventing its misuse by law enforcement or other agencies.



Image source: Pexels