
Meta will use AI visual analysis to detect underage users on Instagram & Facebook.

Meta will use AI-powered technologies, such as photo scanning, to detect underage users by analyzing visual cues like bone structure. While using AI scanning techniques to determine a user's age raises privacy concerns, Meta claims this is “not facial recognition”.

To ensure underage users are blocked from using Facebook and Instagram, Meta will use advanced AI systems including visual analysis of users’ images and videos.

With the global push for age verification and governments clamping down on tech giants to block underage users from platforms, Meta is on a mission to ramp up its underage enforcement measures. Now, the Big Tech company has announced it will use AI visual analysis to detect whether children and teens on its platforms have lied about their age and are under 13. Meta’s move to scan users' media with AI is heavily criticized by privacy activists around the world. Let’s find out why!


Meta will use advanced AI systems to detect underage accounts. The AI will analyze entire user profiles, including comments, captions, bios, and posts, for contextual clues to estimate a user's age. Visual analysis will be used to scan photos and videos for ‘visual cues’ like bone structure and height.

Screenshot of Meta's blog post, "We’re also adding visual analysis as a new technique to aid our detection efforts. This technology allows our AI to scan photos and videos for visual clues about a person’s age that text might miss. We want to be clear: this is not facial recognition. Our AI looks at general themes and visual cues, for example height or bone structure, to estimate someone’s general age; it does not identify the specific person in the image. By combining these visual insights with our analysis of text and interactions, we can significantly increase the number of underage accounts we identify and remove."

While Meta will use AI visual analysis to scan images and videos, it wants to be clear that this is not facial recognition. Screenshot: Meta

Meta will use AI to analyze everything (just so it can detect underage users)

On May 5th, Meta published a blog post sharing its plans to use AI to scan pictures and videos for visual cues, such as height or bone structure, in order to determine if a user is younger than thirteen. The technology is intended to remove underage users from Meta's platforms and to move teenagers into more appropriate account types, like its Teen Accounts. This is part of Meta’s investments into its own age assurance technology. The Big Tech company claims “this is not facial recognition”:

“We want to be clear: this is not facial recognition. Our AI looks at general themes and visual cues, for example height or bone structure, to estimate someone’s general age; it does not identify the specific person in the image,” Meta said in its blog post.

If Meta determines that an account could belong to an underage user, the account will be deactivated and the user will be required to prove their age through Meta's age verification process; otherwise, the account will be deleted. Many of Meta’s AI improvements are already in use worldwide, but some of its advanced AI systems, like visual analysis, are currently used only in specific countries while Meta works toward a bigger roll-out.


Meta to expand automatic Teen Accounts

In 2024, Meta introduced Teen Accounts on Instagram, and the Silicon Valley tech giant has gradually rolled them out to Facebook and Messenger for users aged 13 to 17 in different regions. These accounts are designed to offer a more suitable experience for teenagers and to improve the privacy, safety, and well-being of Meta users under the age of 18. In the blog post, Meta also announced plans to expand the technology that automatically moves users it identifies as teenagers into Teen Account protections on Instagram in the EU and Brazil, and on Facebook in the US.

Can you trust Meta?

The tech giant has said that this kind of AI visual analysis is not facial recognition, but the idea that a user’s every click, comment, post, and interaction, not to mention images and videos, can be scanned and analyzed by Meta’s AI systems might not sit well with many people. This is understandable, because Meta has landed in the spotlight multiple times for scandals, violations of users’ privacy, and unethical practices.

The list is long: from opting users into having their Facebook and Instagram data used to train its AI models, to being sued by 30 US states in 2023 for allegedly and purposefully creating features on Facebook and Instagram that were addictive and harmful to young people’s mental health, to its infamous Cambridge Analytica scandal, in which the data of millions of Facebook users was harvested and used for targeted political advertising.

To top it off, in March of this year, Meta was ordered to pay $375 million for misleading users about the safety of its platforms and for failing to protect children online, in violation of state law in New Mexico, USA.

So when you look at Meta’s history, especially knowing that every Big Tech company is racing to develop the most advanced AI models with the help of vast amounts of user data, it’s understandable that users might feel skeptical about Meta applying AI visual analysis to their data. Looking at other recent AI data abuse scandals, like LinkedIn using your data to train its AI without asking for consent, or Google pushing you to use its Gemini AI on Android, it’s no surprise that Meta’s latest decision might not be only about protecting children and teens.

The Age of Age Verification

This year, there has been an even bigger global push for age verification, and social media bans for teenagers are under way in multiple countries. Tech giants like Meta and Google now have to comply or face the consequences. As a result, different kinds of age assurance and age verification systems are being introduced: YouTube now requires people to verify their age, and Discord also plans to roll out an age verification system.

While it’s understandable that Big Tech companies like Meta have to put these systems in place to comply with the law, users of these platforms should be cautious and ask themselves whether they can trust these companies to truly respect and protect their personal data, like official IDs, when verifying their age.

In the age of age verification, we have to ask ourselves: Do we really need age verification to protect teens from the harmful consequences of social media, or do we need to change the algorithms that run social media and that are the root cause of its negative impact on people’s mental health?

Illustration of a phone with the Tuta logo on its screen, next to an enlarged shield with a checkmark symbolizing the high security of Tuta's encryption.