SuperAwesome’s Legal and Policy team spent the end of the year sharing our expertise and listening in at the top industry convenings focused on youth safety online, advertising best practices, privacy, and emerging technology. Stops on our tour included panels at the ANA’s Masters of Advertising Law conference, the National Advertising Division’s annual conference, and both the IAB Tech Lab Addressability Summit and the IAB Privacy Law Summit. We also participated in important conversations at IAPP’s Privacy Security Risk conference, FOSI’s Annual Conference, and CARU’s Kids Industry Connect.

Here are the themes we heard most consistently, and what they mean for teams building products, advertising, and experiences for kids and teens.

1) Regulation Is Accelerating – and Becoming More Precise

FTC Chair Andrew Ferguson made clear in his FOSI keynote that enforcement comes first. COPPA remains the “highest priority,” with Ferguson saying he has directed staff to bring as many cases as possible and push the law to its limits. The Commission is also scrutinizing children’s and teens’ data under its “unfairness” authority and using its 6(b) powers to demand information from major AI developers. While he emphasized the importance of U.S. competitiveness, his message was unambiguous: the trade-off for innovation cannot be children.

We heard rumblings that Congress would pour energy into kids’ legislation before the end of the year. This proved true: the House Committee on Energy and Commerce dropped a package of nearly 20 bills related to children’s and teens’ digital privacy and safety. The package was a mix of previously introduced bills, completely new measures, and newly updated versions of familiar proposals (KOSA, COPPA 2.0). We’ll keep you posted as we learn more about the fate of these bills. In the meantime, the regulatory complexity is compounded by the U.S. state patchwork. New state app-store accountability laws were a hot topic of discussion, as they will generate first-party age signals. With this new data, companies will be faced with “actual knowledge” of users’ ages, likely triggering new privacy obligations even for services that are not intended to be child-directed. The first of these laws is scheduled to take effect January 1, 2026.

2) Age Assurance Has Entered the Operational Phase

Age assurance has moved from policy discussion to implementation. The clearest consensus: there is no single “silver bullet,” and the right approach is risk-based. Viewing content, participating in chat, making purchases, and accessing mature features should not all require the same level of assurance. Companies are moving toward layered systems that combine signals and protections while attempting to align divergent expectations across jurisdictions.
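
To make “risk-based and layered” concrete for product teams, here is a minimal sketch of one way a feature-gating layer could map actions to minimum assurance levels. The tier names, signal fields, and mappings below are hypothetical illustrations of the general pattern, not a compliance recommendation or any specific vendor’s approach.

```typescript
// Hypothetical assurance tiers, ordered from weakest to strongest evidence.
type AssuranceLevel = "self-declared" | "estimated" | "verified";

// Hypothetical signals a layered system might combine.
interface AgeSignals {
  selfDeclaredAge?: number; // user-entered, lowest confidence
  estimatedAge?: number;    // e.g. an age-estimation model's output
  verifiedAge?: number;     // e.g. ID check or verified parental consent
}

// Illustrative mapping of feature risk to the minimum assurance it requires:
// passive viewing needs less than chat; purchases and mature features need most.
const requiredAssurance: Record<string, AssuranceLevel> = {
  viewContent: "self-declared",
  joinChat: "estimated",
  makePurchase: "verified",
  accessMatureFeatures: "verified",
};

// Return the strongest available age claim that satisfies the required level,
// falling back through weaker signals only when weaker assurance is acceptable.
function resolveAge(signals: AgeSignals, level: AssuranceLevel): number | undefined {
  switch (level) {
    case "verified":
      return signals.verifiedAge;
    case "estimated":
      return signals.verifiedAge ?? signals.estimatedAge;
    case "self-declared":
      return signals.verifiedAge ?? signals.estimatedAge ?? signals.selfDeclaredAge;
  }
}

// Gate a feature: allow only if an adequately assured age meets the minimum age.
function canAccess(feature: string, signals: AgeSignals, minAge: number): boolean {
  const level = requiredAssurance[feature];
  if (!level) return false; // unknown features are denied by default
  const age = resolveAge(signals, level);
  return age !== undefined && age >= minAge;
}

// Example: a self-declared age alone is not enough to make purchases.
console.log(canAccess("makePurchase", { selfDeclaredAge: 16 }, 13)); // false
console.log(canAccess("joinChat", { estimatedAge: 15 }, 13));        // true
```

The design point is the mapping itself: higher-risk actions demand stronger evidence of age, and a layered system falls back to the strongest signal available rather than forcing one verification method on every user for every feature.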

3) Safety Is Becoming a Commercial Differentiator

A theme that kept surfacing: safety is shifting from “cost of doing business” to “feature customers choose.” The “seatbelt moment” analogy captures it well – companies will increasingly compete on trust, and safety-by-design is becoming part of brand value, partner selection, and long-term growth. For teams building in the youth digital space, this is a strategic shift, not just a messaging trend.

4) Youth Voices Are Becoming an Expected Input

The CARU and FOSI events reinforced a simple reality: adults often misread how kids and teens actually use technology. Youth-informed design and research are increasingly viewed as a practical risk-reduction measure, not a “nice-to-have.” When youth input informs safety decisions, organizations reduce blind spots and build interventions that work better in real life.

5) AI Output Risk Is the New Frontline – for Safety and Advertising

At CARU in particular, the focus was not only whether kids can access AI, but what the AI produces. Hallucinations, unsafe recommendations, anthropomorphic cues, impersonation, and emotionally persuasive interactions can create consumer protection and advertising risk even without data privacy issues. In chat-based experiences, the line between “helpful assistant” and commercial influence can blur quickly – and for younger users, that ambiguity drives heightened concern.

2025’s conversations converged on a simple standard: measurable safeguards beat statements of intent. The most future-proof path remains consistent – minimize data, build age-appropriate experiences, and treat safety and trust as core product requirements, particularly as AI becomes more interactive and more commercial.