Politics

Age verification spreads worldwide as governments restrict kids’ internet access

Age checks are now internet policy’s default answer, but the cost is spreading to users, sites, and privacy before the technology has proven it can work.

Marcus Williams · 6 min read
Source: theverge.com

A global clampdown built on uncertain technology

Age verification has moved from a niche compliance idea to a standard expectation across large parts of the internet. Governments are using it to keep children away from pornography, social platforms, and other regulated content, but the basic tension is hard to miss: the rules are getting stricter even as the systems meant to enforce them remain easy to evade, hard to scale, and often intrusive.

The result is a policy paradox. Lawmakers want a cleaner boundary between childhood and the adult internet. In practice, the burden is landing on children who are still likely to get around the checks, on parents asked to absorb more responsibility, on adult users who must prove their age, on smaller sites that lack the resources to build compliance systems, and on privacy itself.

Britain set the template

The United Kingdom has become one of the clearest examples of how quickly age checks can move from theory to routine. Section 81 of the Online Safety Act 2023 requires porn providers to use age verification or age estimation so children are not normally able to encounter regulated pornographic content. That wording matters because it shows the policy is not just about asking for a birthdate. It is about building a barrier strong enough to stand up to ordinary use.

Ofcom then translated that law into deadlines. All user-to-user and search services likely to be accessed by children had to complete children’s access assessments by 16 April 2025. The children’s safety duties took effect on 25 July 2025. Together, those dates turned the law into an operational requirement for platforms that had previously treated age gating as an optional extra.

Britain’s approach is also a test of enforcement credibility. If the assessments are weak, or if services can easily be routed around, the policy can look tougher on paper than it is in practice. If the assessments are strong, the compliance burden can become expensive fast, especially for smaller services that do not have the engineering teams of the biggest platforms.

France pushed from law to enforcement

France has gone further in tying porn access to formal age verification rules. The SREN law of 21 May 2024 authorized Arcom to require age-verification systems on pornographic sites. Arcom followed with a technical reference framework in October 2024, after consulting the national data-protection authority CNIL. That sequence is important: France did not simply demand age checks, it tried to define how they should work.

By 1 August 2025, Arcom had issued formal notices to five porn sites established in the European Union after finding no age verification measures in place. By early 2026, Arcom said all 17 sites designated by the ministerial order of 26 February 2025 had either implemented age verification or made their services inaccessible in France. Some sites chose the latter path, a reminder that regulation can produce withdrawal as easily as compliance.

The behavioral effect was visible. Médiamétrie data cited by Arcom showed that 12- to 17-year-olds spent 35% less time on those porn sites between November 2024 and November 2025. That is the clearest sign yet that age checks can change usage patterns. It is not, however, the same as proving the internet has become safer: teenagers may spend less time on the named sites while traffic shifts elsewhere, or moves to services that are harder to regulate.

Australia turned age checks toward social media

Australia has extended the age-verification logic beyond pornography and into social media. After late-2024 amendments to the Online Safety Act 2021, platforms covered by the social-media minimum age framework must take reasonable steps from 10 December 2025 to prevent under-16s from holding accounts. That makes Australia the first country to enforce a minimum-age requirement for some social media accounts.


The government has also tried to avoid one of the most obvious implementation traps. Users will not be forced to use an accredited Digital ID service to prove their age. That position signals a political effort to separate age assurance from a universal government identity system, even though the details of how platforms will verify users without over-collecting data remain central to the debate.

Australia’s model shows how age checks can widen from a narrow content policy into a broader design rule for platform access. Once the standard is minimum age rather than just restricted content, the stakes rise. More users have to prove who they are, more platforms have to build or buy verification tools, and more questions emerge about error rates, exclusion, and how easy it is to route around the system.

Brussels is betting on privacy-preserving verification

The European Commission has moved quickly to create a continental version of the same idea. It released an age-verification blueprint in July 2025, followed by an enhanced version in late 2025. On 15 April 2026, the Commission said its European age verification app was technically ready and would soon be available to citizens.

The Commission’s pitch is that the system is privacy-preserving and based on zero-knowledge-proof style verification. In plain terms, that means a user should be able to prove they are old enough without handing over more personal information than necessary. That is the strongest argument the age-check movement has in its favor, because it tries to reduce the classic trade-off between safety and data collection.

But the Commission is also testing a much harder question: whether such tools can actually work at scale across a fragmented digital market. A prototype can be privacy-preserving in principle and still struggle in real-world deployment, where platforms differ, enforcement varies, and users can move quickly from one service to another.

The real costs are already visible

The most immediate cost of age verification is friction. Adults who are legally entitled to access content can be asked to jump through more steps. Smaller sites can be pushed toward expensive compliance systems or, as happened in France, toward blocking entire countries rather than trying to satisfy regulators. Platforms have to decide whether to build, buy, or refuse the new infrastructure.

There is also a privacy cost. Any system that confirms age creates a new point of trust, and a new target for misuse or breach. That is why privacy advocates warn that age checks can become surveillance systems in disguise, especially if governments or vendors collect more data than is necessary to make the check work.

For children, the policy may not be a clean win either. Teenagers who are determined to get around the rules often will. The result can be a system that inconveniences compliant users, drives some services underground or offshore, and still fails to eliminate access. For parents, the burden can shift rather than disappear, with more pressure to supervise, consent, or mediate access across multiple platforms and devices.

That is the central contradiction shaping the age-verification wave. Governments want to impose a clear line, but the internet does not respect clean lines. The next phase will not be about whether the rules exist. It will be about whether regulators can enforce them without turning privacy, competition, and usability into collateral damage.
