To celebrate Safer Internet Day 2026, members of the SuperAwesome team have curated a series of blogs featuring expert insights on youth privacy and safety, with perspectives from our legal, product, and insights teams. In this blog, Hannah Grant, Senior Product Manager at SuperAwesome, unpacks SuperAwesome’s view on smarter moderation and how it connects to this year’s Safer Internet Day theme: “Smart tech, safe choices – exploring the safe and responsible use of AI.”

The kids’ content landscape evolves at a remarkable pace. New content is uploaded to YouTube far faster than our team of moderators can review it. The advancement of AI has given us the ability to scale our moderation capability in a way that remains safe for the advertisers who trust us, while enabling sustainable monetisation that supports and strengthens the kids’ content ecosystem as a whole. Without scalable moderation, the pipeline simply breaks down, which has knock-on effects for creators, brands, and ultimately the young audiences who rely on quality, safe content.

How We Use AI: Moderation and Categorisation

At SuperAwesome, we have two core functionalities where we utilise AI: moderation and categorisation. We define moderation as ‘a review of a piece of content that has been uploaded to ensure it follows our safety guidelines’. This results in a simple approve or reject decision. This process sits alongside categorisation, where we assess the content’s relevance for different audiences using our unique kids taxonomy – a framework built specifically to reflect how young people actually engage with content, rather than applying a one-size-fits-all adult lens.

It’s also worth being clear about what our AI has been built to do. Our moderation and categorisation AI has been designed to act as a moderator, not a marketer, which is a distinction that matters enormously. Built in close partnership with our moderation team, who bring over ten years of hands-on expertise in the kids’ content space, the model has been trained to assess content through the same considered, safety-first lens that they apply every single day. It is specifically designed to look for risk, instead of optimising for engagement or reach.

So, how do we decide where AI steps in and where humans remain in charge? The answer lies in how we define and categorise risk.

Defining Clear-Cut Decisions

AI at SuperAwesome is utilised to make low-risk decisions. When we say ‘low-risk,’ we mean decisions that carry no potential to compromise the safety, appropriateness, or compliance of our products and services. For example, AI can reject content that simply does not meet our most basic technical requirements – content in a language we do not support, or from a channel where no new content has been added for more than six months. These are clear-cut situations in which an automated decision is both accurate and appropriate.

Alongside clear-cut determinations, our system is also able to flag content that it assesses, with a high level of confidence, as being inappropriate for under-18 audiences. Where that confidence threshold is met, the content is rejected automatically, without any need to route it through to a human reviewer. This is AI doing exactly what it should in the right context: handling volume, applying consistent standards, and freeing up human expertise for the cases that genuinely require it. 
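To make the automated layer concrete, here is a minimal sketch of the kind of decision logic described above. The supported languages, the six-month cut-off, the confidence threshold, and all names are illustrative assumptions for this post, not SuperAwesome’s actual system:

```python
# Illustrative sketch only: rule names, threshold values, and data shapes
# are assumptions, not SuperAwesome's real implementation.
from dataclasses import dataclass
from datetime import datetime, timedelta

SUPPORTED_LANGUAGES = {"en", "fr", "de", "es"}   # assumed language set
CONFIDENCE_THRESHOLD = 0.95                       # assumed model threshold
STALE_CHANNEL_CUTOFF = timedelta(days=183)        # "more than six months"

@dataclass
class Content:
    language: str
    last_channel_upload: datetime
    inappropriate_confidence: float  # model score in [0, 1]

def automated_decision(content: Content, now: datetime) -> str:
    """Return 'reject' for clear-cut cases, else 'needs_human_review'."""
    # Clear-cut technical rejections: unsupported language or stale channel.
    if content.language not in SUPPORTED_LANGUAGES:
        return "reject"
    if now - content.last_channel_upload > STALE_CHANNEL_CUTOFF:
        return "reject"
    # High-confidence safety rejection: no human routing needed.
    if content.inappropriate_confidence >= CONFIDENCE_THRESHOLD:
        return "reject"
    # Anything ambiguous is routed to a human reviewer.
    return "needs_human_review"
```

The key design point is the final branch: the automated layer never approves ambiguous content on its own; anything below the confidence threshold falls through to a person.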

Where Humans Stay in Control

For higher-risk decisions, however, we believe in empowering humans to review content both efficiently and effectively. This is where the “human-in-the-loop” element of our approach becomes essential. We use AI to surface content with any level of ambiguity and present it to a human as quickly as possible. Crucially, it doesn’t just flag the content and leave the reviewer to start from scratch; it identifies what it believes might be risky about a given piece of content, provides reasons why those concerns have been raised, and highlights the specific moments within a video that most need human attention. All of this additional context and intelligence is generated by AI and handed directly to the human, creating a process that is both safer and more efficient than either AI or human review working in isolation. 
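The hand-off described above – flagged concerns, the reasons behind them, and the moments in a video that need attention – can be pictured as a structured package passed to the reviewer. This is a hypothetical sketch; the field names and structure are assumptions, not the real system:

```python
# Hypothetical sketch of the human-in-the-loop hand-off described above.
# Field names and structure are illustrative, not SuperAwesome's actual API.
from dataclasses import dataclass, field

@dataclass
class RiskFlag:
    concern: str        # what the model believes might be risky
    reason: str         # why the concern was raised
    timestamp_s: float  # moment in the video needing human attention

@dataclass
class ReviewPackage:
    content_id: str
    flags: list[RiskFlag] = field(default_factory=list)

    def summary(self) -> str:
        """One line per flag, so the reviewer never starts from scratch."""
        return "\n".join(
            f"[{f.timestamp_s:.0f}s] {f.concern}: {f.reason}" for f in self.flags
        )

# The reviewer receives AI-generated context alongside the content itself:
pkg = ReviewPackage("video-123")
pkg.flags.append(RiskFlag("unsuitable theme", "model detected a mature topic", 42.0))
pkg.flags.append(RiskFlag("unclear audio", "possible strong language", 95.5))
```

The point of the structure is that the AI’s output is context for a human decision, not a decision in itself: the package carries evidence and pointers, while the approve/reject authority stays with the reviewer.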

This layered approach, where AI handles what it can confidently and safely decide, and humans retain authority over anything that requires genuine judgment, is at the heart of responsible AI deployment in the kids’ content space. It’s not about replacing human expertise. It’s about ensuring expertise is applied where it’s needed most.

Innovation and Responsibility, in Balance

Our approach reflects a deliberate balance between innovation and responsibility. By thoughtfully integrating AI into our moderation workflows, we increase efficiency and scale without compromising the safety and integrity that define our products and services. Technology enables us to grow, to keep pace with a landscape that moves quickly, and to consistently apply standards across an enormous volume of content. Human expertise remains at the core of every critical decision we make, ensuring that the kids’ content ecosystem is both protected and sustainably supported for the long term.