Social media and video streaming services are conducting “vast surveillance” of their users to monetize their personal data, the US antitrust regulator, the Federal Trade Commission (FTC), said in its latest findings, published Thursday.
The FTC examined the data collection and usage practices of all major social media and video streaming services and found that they harvest large volumes of personal data, often using privacy-invasive tracking tools, to make billions of dollars every year through targeted advertising.
The report also found that the information collected by these companies includes personal data not just of users but also of non-users of their platforms, gathered through data brokers. It also found that these companies’ data collection and retention practices were “woefully inadequate” and that they routinely retain some user data even after users have explicitly requested its deletion.
Meta, YouTube, Twitch, TikTok, Snap, X, Discord, Reddit, and WhatsApp are the nine firms under FTC scrutiny, and the report is based on an evaluation of their data collection practices.
The report also sheds light on the practice of secretly harvesting data from users and non-users to train proprietary algorithms and AI models.
The FTC’s findings mirror concerns raised in recent lawsuits and public backlash against firms such as Google, Meta, and OpenAI. These firms face accusations of scraping user data from social media and other online platforms without explicit permission and using it to train their AI algorithms.
Earlier this week, LinkedIn was caught using user data to train its AI models without first seeking users’ consent. As reported by 404 Media, LinkedIn rolled out a new privacy setting and opt-out form before releasing the updated privacy policy that would inform users about their data being used to train AI models.
Further, the FTC’s report reiterated what many privacy advocates have long argued: the data collection practices of many of these platforms pose a serious risk to user privacy. That risk extends not just to adults but to the millions of children and teenagers who are active on social media and other online platforms.
The FTC found that many of these platforms are not taking adequate measures to protect children, such as enforcing age restrictions, contributing to mental health issues among young users. Many even falsely claimed that there were no children on their platforms in order to avoid liability.
“While lucrative for the companies, these surveillance practices can endanger people’s privacy, threaten their freedoms, and expose them to a host of harms, from identity theft to stalking. Several firms’ failure to adequately protect kids and teens online is especially troubling,” said FTC Chair Lina Khan.
Last November, Meta faced allegations of violating a federal children’s privacy law in the US by knowingly allowing underage users on Instagram. According to a legal complaint filed by the attorneys general of 33 US states, Meta has received more than 1.1 million reports of users under the age of 13 on Instagram since 2019.
Yet, Meta allegedly disabled only a fraction of these accounts and continued to collect personal data from children, including their locations and email addresses, without seeking permission from the users or their parents.
The FTC’s Khan believes the timing of the findings is significant, as federal and state policymakers are planning to introduce laws to protect people from abusive data practices.