Is AI a Threat to Your Safety?

by Chief Editor

Your Face is the New Data: How AI is Quietly Collecting Your Biometrics

Artificial intelligence is no longer a futuristic concept; it’s woven into the fabric of our daily lives. From suggesting responses in text messages to powering viral trends on Instagram Reels, AI’s presence is undeniable. But beneath the surface of convenience lies a growing concern: the quiet collection of our biometric data, and what companies are doing with it.

Snapchat’s Hidden AI Training

Snapchat, a platform hugely popular with teenagers, automatically activates generative AI settings upon download. Unless manually disabled, any publicly shared content – Stories, Snap Map Snaps, Spotlight posts – is used to train Snapchat’s AI models. A recent M-A Chronicle survey revealed that nearly half (47.6%) of juniors feel uncomfortable with their faces being used for AI training, yet 71% still use the app, suggesting a lack of awareness about the practice.

The implications extend beyond simple image analysis. Snapchat’s terms of service state that even opting out doesn’t erase previously shared data, and public content is still processed for “other purposes.” Using AI filters like “My Selfie” grants Snapchat “irrevocable and perpetual” rights to use generated images, potentially in advertisements, without further notice or compensation.

The Age Verification Dilemma: Roblox and Beyond

The push for online safety is driving a surge in AI-powered age verification systems. Roblox recently implemented a facial age estimation feature, utilizing Persona, an AI service, to confirm user ages. However, this sparked immediate backlash. Concerns center around data breaches – Roblox has experienced them in the past – and the potential misuse of biometric data.

“It’s suspicious, like, why do they want my photo?” questioned Kayla Romeyn, a freshman. “There are AI deepfakes, and companies pay other companies for your information, so what if Roblox is gonna sell my photo?”

Roblox claims to delete images immediately after processing, but user reports suggest otherwise, with age estimations being updated even after initial verification. This echoes similar issues with Discord, whose third-party AI verification system was hacked in October 2025, exposing the personal data of approximately 70,000 users.

Apple’s “Clean Up” Tool and Identity Protection

Even seemingly benign AI features can raise privacy concerns. Apple’s photo app offers a “Clean Up” tool that uses AI to remove unwanted elements from images. When a face is selected for removal, the app displays “Identity protection applied,” pixelating the area. While this indicates a privacy measure, it also highlights the underlying processing of biometric data.

Future Trends and What to Expect

These examples represent a growing trend: the increasing integration of AI into everyday apps and services, coupled with the often-unseen collection of biometric data. Here’s what we can anticipate:

  • More pervasive biometric collection: Expect to see facial recognition, voice analysis, and even gait analysis integrated into more apps, ostensibly for security or personalization.
  • Increased scrutiny of terms of service: Users will need to become more diligent about reading and understanding the terms of service of the apps they use, paying close attention to data collection practices.
  • Demand for greater transparency: Pressure will mount on companies to be more transparent about how they collect, use, and store biometric data.
  • Development of privacy-enhancing technologies: Expect to see the emergence of tools and technologies designed to protect biometric data, such as facial obfuscation software and privacy-focused apps.
  • Stricter regulations: Governments worldwide are beginning to grapple with the privacy implications of AI and biometric data, and stricter regulations are likely on the horizon.

Did you know?

Even if a company claims to delete your biometric data, it may still be stored in anonymized or aggregated form for research and development purposes.

FAQ

Q: What is biometric data?
A: Biometric data refers to unique biological characteristics that can be used for identification, such as facial features, fingerprints, and voice patterns.

Q: Why are companies collecting biometric data?
A: Companies collect biometric data for various purposes, including security, personalization, and AI training.

Q: Can I opt out of biometric data collection?
A: In some cases, yes. However, it often requires manually adjusting settings or avoiding certain features altogether.

Q: Is biometric data secure?
A: Biometric data is vulnerable to breaches and misuse, as demonstrated by recent incidents involving Discord and concerns surrounding Roblox.

Q: What can I do to protect my biometric data?
A: Read privacy policies carefully, adjust app settings, and be mindful of the features you use.

Pro Tip: Regularly review the privacy settings on your social media and other apps to ensure you’re comfortable with the data being collected.

The future of AI is undoubtedly exciting, but it’s crucial to approach it with a healthy dose of skepticism and a commitment to protecting our personal data. What steps will you take to safeguard your biometric information?
