Instagram is testing the use of AI to flag underage users who lie about their age, as part of new measures to better protect teens online, parent company Meta said Monday.
The platform will now “proactively” look for accounts it suspects belong to teens, even if those users entered a false birthdate when signing up.
The app has used AI to estimate users’ ages in the past, but the new approach represents a more aggressive strategy.
Meta says it trains its AI to detect signals – like account activity, profile details and content interactions – to flag users who may have misrepresented their age.
If Instagram determines that a user has lied about their age, the account will be reclassified as a teen account, which comes with stricter privacy and safety settings.
Teen accounts are private by default. Messaging is limited to people the user already follows or is connected with. Content deemed “sensitive” – such as violent videos or posts promoting cosmetic procedures – will be restricted, Meta said.
Teen users will receive time-limit notifications after 60 minutes of use, and a “sleep mode” that turns off notifications and sends auto-replies to direct messages will be enabled from 10 p.m. to 7 a.m.
Instagram will also begin sending notifications to parents, encouraging conversations with their teens about the importance of providing accurate age information online, according to Meta.
The update comes as tech companies face increased scrutiny over the impact of social media on young users’ mental health. Several US states are also advancing legislation to require age verification on social platforms, though many of these efforts have faced legal challenges.
Meta and other tech firms have argued that app stores, not individual platforms, should bear responsibility for verifying users’ ages.
Additional sources • AP