YouTube will begin testing its new age verification system, which uses artificial intelligence to estimate the age of users. According to a YouTube blog post, the tech giant will use AI to “infer a user’s age and then use that signal, regardless of the birthday in the account, to deliver our age-appropriate product experiences and protections.” If the AI flags a user as being under 18, YouTube will apply its age restrictions to prevent minors from watching inappropriate content. A user who wants to dispute an inaccurate age restriction must provide a government-issued ID, a credit card or a selfie. Having to provide additional verification for YouTube to accurately assess one’s age means handing over more personal data to the tech behemoth at a time when data has become highly coveted.
AI age checks aren’t always reliable, and the evidence reveals disparities in whose ages these systems misjudge. According to one 2024 study, AI age-estimation algorithms had a higher false positive rate, the percentage of minors wrongly classified as adults, for certain groups compared to others. The study didn’t analyze race per se but used a person’s region of birth as a proxy for ethnicity, with six regions included: East Africa, West Africa, East Europe, South Asia, South East Asia and East Asia. Across the multiple algorithms assessed, minors from East and West Africa were more often misclassified as older than minors of the same age from the other regions, indicating that AI systems perceived those with darker skin tones as older than they are. AI is simply reflecting societal biases, such as adultification bias, the phenomenon in which Black children are seen as older than they are.
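To make that metric concrete, here is a minimal sketch in Python of how a per-group false positive rate like the one the study reports could be computed. The function name and all of the records below are invented for illustration; they are not the study’s data or code.

```python
# Hypothetical illustration: computing a per-group false positive rate,
# where a "positive" prediction means the model judged the person to be
# an adult (18+). All records below are made up for the sketch.

from collections import defaultdict

def false_positive_rate_by_group(records):
    """For each group, return the share of actual minors (<18) whom the
    model classified as adults -- the metric described as the false
    positive rate."""
    minors = defaultdict(int)    # actual minors seen per group
    misread = defaultdict(int)   # minors the model labeled as adults
    for group, true_age, predicted_adult in records:
        if true_age < 18:
            minors[group] += 1
            if predicted_adult:
                misread[group] += 1
    return {g: misread[g] / minors[g] for g in minors if minors[g]}

# Toy records: (region-of-birth group, true age, model said "adult"?)
sample = [
    ("East Africa", 16, True), ("East Africa", 17, True), ("East Africa", 15, False),
    ("East Asia", 16, False), ("East Asia", 17, True), ("East Asia", 15, False),
]
print(false_positive_rate_by_group(sample))
# A higher rate for one group means more of its minors slip past the check.
```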
The same study demonstrated gender differences in AI age verification: false positive rates were higher for women than for men, meaning more underage girls were misread by the AI tools as adults. In short, AI tools that handle age verification may be less accurate for Black people and for women. The harms these tools can cause span beyond YouTube. Industries like online gaming, gambling and streaming services already require age verification, and it’s not outside the realm of possibility to see this type of technology used in the workplace. If such tools were implemented in the workplace, there are several impacts that must be considered.
There are already autonomous fast-food restaurants, and AI age verification may be implemented in self-service bars. In the future, nightclubs and concert venues may also adopt this type of technology in lieu of hiring someone to check and verify IDs. Employers may integrate AI age-verification tools to comply with youth labor laws, and these tools may also be built into human resources processes used when hiring seasonal or gig workers. Before widespread adoption of AI age-verification tools, we must be aware of their pitfalls. The racial and gender disparities currently embedded in these tools could mean that Black workers and women are less likely to be protected from exploitation and other workplace harms to which minors are more prone. Tools that misclassify younger individuals as adults could also shut minors out of programs, like internships or early-career programs, that are designed for workers of a particular age.
AI age-verification tools aren’t widely used within companies today, but as Big Tech leans on them more, we may see a rise in their popularity and use. The adoption of these tools could cause disparate impact, which occurs when a seemingly neutral workplace practice causes unintentional harm to a particular protected group. Workplaces that implement mandatory AI age verification could therefore open themselves up to claims of systemic bias. Discrimination in such tools is often rectified after the fact rather than prevented at the time of creation. If bias is not tackled early and the companies that create these tools don’t put safeguards in place, their deployment will reproduce societal harm.
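For readers who want to see how disparate impact is typically quantified, a minimal sketch follows using the EEOC’s four-fifths rule, under which a group’s selection rate below 80% of the highest group’s rate is conventionally treated as evidence of disparate impact. The pass rates shown are invented for the example, not drawn from any real system.

```python
# Hypothetical illustration of the EEOC "four-fifths rule": compare
# selection rates (here, age-verification pass rates) across groups.
# All numbers are invented for this sketch.

def four_fifths_check(pass_rates):
    """Flag groups whose pass rate falls below 80% of the highest
    group's rate, the conventional threshold for evidence of
    disparate impact."""
    benchmark = max(pass_rates.values())
    return {
        group: rate / benchmark < 0.8  # True means the rule is violated
        for group, rate in pass_rates.items()
    }

# Suppose an AI age check correctly verifies workers at these rates:
rates = {"Group A": 0.96, "Group B": 0.74}
print(four_fifths_check(rates))  # {'Group A': False, 'Group B': True}
```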