"By the time a child turns 13, there are 72 million pieces of data on them": Dan Richardson on Australia's Children's Online Privacy Code
by Charlotte McEleny on 13th May 2026 in News

Charlotte McEleny talks to independent ad tech consultant and parent Dan Richardson about Australia's proposed Children's Online Privacy Code, what the industry is getting wrong about it, and what practitioners need to do before December.
Australia's Children's Online Privacy Code closes for public consultation on 5 June 2026. The Office of the Australian Information Commissioner (OAIC) has been working on it for some time, yet for practitioners operating across Australia's ad-funded open web, the exposure draft still raises questions and exposes a clear need for education.
Dan Richardson brings over 15 years of experience in data-driven marketing, ad tech, and martech. He is currently working independently to review the proposed Code, submit feedback to the OAIC ahead of the June deadline, and document his experience and knowledge on his Substack. He is also a parent of young children who navigate the internet every day, which, as he puts it, makes it easier to put on your thinking cap.

You’ve been close to the development of the Code. What questions is it raising for the industry so far?
The OAIC ran a more thorough consultation than the industry has given it credit for. Three phases, 65-plus stakeholder engagements, drawing workbooks for six-year-olds, asking them to express what privacy means to them. The intent is serious and the process was legitimate. But a close reading of the exposure draft reveals something the consultation did not adequately surface. The Code is attempting to do the work of three separate regulatory instruments simultaneously. When you try to do three things at once in a single legislative instrument applied to the broadest possible scope of services, you risk doing none of them precisely enough.
The first is Australia's own Social Media Minimum Age scheme. Part 4A of the Online Safety Act came into force in December 2025 and requires age-restricted platforms to take reasonable steps to prevent under-16s from having accounts. Its age assurance obligations are already live, already being tested, and already struggling. The eSafety Commissioner's March 2026 compliance report found all five platforms under review had gaps. Platforms using layered AI verification, including facial age estimation, device history, and behavioural signals, still cannot reliably keep a determined teenager out. Importing that same age-assurance logic into the Children's Online Privacy Code and applying it across the entire scope of services likely to be accessed by children conflates an access problem with a data-handling problem. They are related but they are not the same thing.
The second is the New York SAFE for Kids Act (Stop Addictive Feeds Exploitation), signed in June 2024, the first of its kind in the US. The SAFE Act was specifically designed to address one identified harm mechanism: algorithmic feeds and their documented links to depression, anxiety, and self-harm in children. Its scope is deliberately narrow, targeting the specific mechanism the research identified as driving harm. The Australian Code's approach to consent, default settings, and notification obligations echoes the SAFE Act's intent, but applies it to a scope the New York legislation was never designed to reach, including streaming services, retailers, gaming platforms, and messaging apps. Borrowing the consent logic without the scope constraint produces a much blunter instrument.
The third is the job the Code is actually there to do: children's data privacy under the Privacy Act, modelled explicitly on the UK's Age Appropriate Design Code (AADC).
The submission worth making is that these are three legitimate problems, each deserving a precise solution: an access problem, an algorithmic-harm problem, and a data-privacy problem. Conflating them into one broad instrument, without the proportionate risk calibration the OAIC's own guidance recommends, risks solving none of them cleanly. It risks creating the very overcollection problem the Code is designed to prevent.
Where do you stand personally, as both a practitioner and a parent?
By the time a child turns 13, an estimated 72 million pieces of data will have been collected about them, according to digital safety expert Donell Holloway. As a parent of young kids who crave YouTube and streaming shows, and who navigate the open web every day, I am completely on board with minimising the risk of addictive media and squashing the dark patterns that have been baked into children's digital experiences for years. That part of the Code's intent is not just defensible but overdue.
Where it gets complicated is the gap between the Code's 'strictly necessary' standard and what is already proposed in Tranche 2 of the Privacy Act reform. Tranche 2 introduces a fair and reasonable test for the collection and use of personal information. That test is designed to be contextual and proportionate. Strictly necessary sets a much narrower bar because it asks whether the data is essential to deliver the core service, and nothing beyond that. For services operating across both frameworks, and most significant digital businesses will be, that tension creates a real compliance problem. You may be acting consistently with the fair and reasonable standard under Tranche 2 while simultaneously failing the strictly necessary test under the Code. Nobody has resolved that yet and it needs to be raised before December.
There are examples of where this is done well, and Lego shows what is possible when a brand deliberately makes the structural investment. Lego.com and kids.lego.com are separated at the domain level. Kids.lego.com is age-gated, carries no advertising, collects no third-party data, and is built from the ground up around what is actually in the best interests of a child visiting it. My daughter loves the Lego Play Zone. I know it is safe for her. That confidence as a parent is not accidental. It is the product of a design decision Lego made before any code required it to. And it represents a genuine commercial opportunity that the industry has been too slow to see. Safe, curated digital spaces for children, built on trust rather than data extraction, are not a regulatory concession but a product category that parents will pay for, recommend, and return to. The brands that move first will own that relationship.
The challenge is that most brands have not made that separation, particularly those operating on a shared domain, for example, a retailer, a financial services provider, or an entertainment platform. The same URL serves adults making purchases and children browsing, playing, or being purchased for. Under the Code, if a child is reasonably identifiable from the data collected in that shared domain (account data, session data, device identifiers, clicked links, uploaded images, local store selection), that data qualifies as personal information and must be handled in accordance with the best interests of the child test. The Code does not require you to have a kids.yourbrand.com but it does require you to know when a child is on your domain and treat their data accordingly. Most brands cannot do that yet.
Can children's privacy and commercial viability actually co-exist?
The answer is not simply to flip a switch from behavioural to contextual advertising and call it done. That is overly binary and does not reflect how compliance will actually work across the range of entities covered by the Code. Contextual targeting is one viable path for some inventory in some contexts but not the universal solution.
Amazon switched on mandatory advertising across all Prime Video accounts in Australia in early 2024. By Q4 2025, Amazon DSP had generated USD 1.3 bn (£1bn) in global advertising revenue for the quarter alone. That model depends on behavioural data from every viewing session, including those on adult accounts in households where children are watching. A child browsing under a self-named standard profile on Netflix or Prime Video sits outside the protection of every Kids mode. The targeting data flows regardless. The recommendation engine runs regardless. Under the Code, any session identified as likely child usage needs to be stripped of behavioural targeting, removed from audience segments built on that data, and excluded from programmatic delivery that relies on it. That is a material change to how some of the most commercially valuable streaming inventory in Australia is currently monetised.
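To make the mechanics concrete, here is a minimal sketch of what that session-level handling could look like in practice. The Session and AdRequest structures and the likely_child flag are assumptions for illustration; the Code specifies the outcome, not the implementation.

```python
# Minimal sketch of session-level handling for likely-child viewing sessions.
# Everything here is illustrative: the Session and AdRequest shapes and the
# likely_child flag are assumptions, not anything the Code or any platform prescribes.
from dataclasses import dataclass, field


@dataclass
class Session:
    session_id: str
    profile_type: str      # e.g. "kids", "standard"
    likely_child: bool     # output of whatever age signal the platform trusts


@dataclass
class AdRequest:
    session: Session
    behavioural_segments: list = field(default_factory=list)
    contextual_signals: dict = field(default_factory=dict)


def prepare_for_auction(request: AdRequest) -> AdRequest:
    """Strip behavioural targeting from any session identified as likely child usage."""
    if request.session.profile_type == "kids" or request.session.likely_child:
        # Remove audience segments built on this session's behavioural data.
        request.behavioural_segments = []
        # Keep only contextual signals (genre, content type, daypart) in the bid request.
        allowed = {"genre", "content_type", "daypart"}
        request.contextual_signals = {
            k: v for k, v in request.contextual_signals.items() if k in allowed
        }
    return request
```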
Sensitive categories of data, where advertising and data collection are already prohibited or heavily restricted, need to be the immediate priority. Any entity that has been collecting or using data that falls into sensitive categories related to children needs to conduct that audit now, not in November. That is not a future compliance question.
The more important distinction the industry should be pressing for is clarity of scope. Services created first and foremost for children carry a different, more immediate obligation than services merely accessed by children. A platform designed for kids should be held to the highest standard from day one. A retailer, a streaming service, or a publisher whose audience includes children needs a workable transition period to understand their data, conduct proper privacy impact assessments, run data audits, put data removal processes in place, and build consent mechanisms where feasible and required. That is not a low bar but a significant body of work. Giving entities that time and that clarity is how you get genuine compliance rather than performative compliance, rushed to meet a deadline.
What should advertisers and media platforms be doing to prepare?
In short? Map your data. Do the PIA before you build. Fix your consent flows. Brief your agentic buying logic. The direction of travel is not ambiguous, so the only variable is whether you have started.
First, answer the 'likely to be accessed' question honestly for every service you run, not just the ones you think of as children's products. The ICO's published guidance on this is a practical reference point for Australian practitioners, given that the OAIC explicitly modelled the Code on the AADC. The ICO sets out a non-exhaustive checklist of factors that need to be considered and documented. It's worth running through these now because the assessment needs to be defensible, not just assumed.
The ICO checklist asks you to consider: whether children can actually access the service and whether any age-gating in place is robust, with self-declaration alone explicitly noted as insufficient; the number of child users and their proportion of your total user base; whether the subject matter, content, or design features of the service have particular appeal to children (cartoon animations, gamification, music or audio content, child-focused incentives, the presence of influencers popular with children); and whether you have received customer complaints or support requests from children or parents. The ICO is clear that this assessment must be completed before processing begins, documented as part of a DPIA, and kept under regular review, not done once and filed.
For Australian entities, this maps directly onto the Code's obligations. If that assessment indicates children are likely to access your service, you need to map your data against the personal information threshold in a children's context. Under the Privacy Act, if a child is reasonably identifiable from what you hold (account data, IP addresses, device identifiers, session data, browsing history, clicked links, uploaded images, local store selection), that is personal information and must be handled consistently with the best interests of the child test. Most retailers and publishers haven't mapped this yet, and they need to.
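As a rough illustration of that mapping exercise, the sketch below classifies collected fields against the identifiers listed above and flags which ones would trigger best-interests handling. The field names and the simple set-based logic are assumptions, not a definitive reading of the Code.

```python
# Illustrative data-mapping sketch. Field names are hypothetical examples; the
# identifier list mirrors the categories discussed above.
IDENTIFYING_IN_CONTEXT = {
    "account_data", "ip_address", "device_identifier", "session_data",
    "browsing_history", "clicked_links", "uploaded_images", "local_store_selection",
}


def map_data_inventory(collected_fields: set, likely_accessed_by_children: bool) -> dict:
    """Flag which collected fields would need best-interests handling under the Code."""
    flagged = collected_fields & IDENTIFYING_IN_CONTEXT
    return {
        "in_scope": likely_accessed_by_children,
        "needs_best_interests_handling": sorted(flagged) if likely_accessed_by_children else [],
        "unclassified": sorted(collected_fields - IDENTIFYING_IN_CONTEXT),
    }


# Example: a retailer on a shared domain whose assessment says children are likely to visit.
print(map_data_inventory(
    {"ip_address", "clicked_links", "local_store_selection", "loyalty_tier"},
    likely_accessed_by_children=True,
))
```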
Second, conduct your privacy impact assessments before building anything new that children will touch. Section 38 of the Code makes this mandatory. But the more important reason is that a PIA at the design stage costs a fraction of what it costs post-incident. Privacy by design is like a proper brisket: you cannot rush it, you cannot bolt it on at the end, and if you have been cutting corners all day, no sauce fixes the last hour.
Third, audit your consent architecture now. The Code prohibits bundled consent requests, confirmshaming, pre-ticked boxes, and nudge techniques. A 2025 Consumers' Federation of Australia report found gaming platforms deploying these routinely, not as edge cases but as standard engagement mechanics. A 2024 ICPEN review found 76% of websites and mobile apps worldwide use at least one dark pattern. If your consent flows rely on any of those mechanisms, they're non-compliant from December.
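A rough sketch of how those prohibitions can be turned into an automated audit is below. The ConsentStep fields and the shaming-copy list are invented for illustration, but the point is that each prohibited pattern becomes a concrete, checkable property of a consent flow.

```python
# Hypothetical consent-flow audit. Each prohibited pattern named in the Code
# (bundled requests, pre-ticked boxes, confirmshaming, nudge techniques) maps to
# a checkable property of a consent step.
from dataclasses import dataclass


@dataclass
class ConsentStep:
    purposes: list          # what this single step asks consent for
    pre_ticked: bool        # default state of the consent control
    decline_label: str      # wording on the "no" option
    equal_prominence: bool  # accept and decline presented with equal visual weight


SHAMING_COPY = {"no thanks, i don't care about my experience", "no, i prefer ads that ignore me"}


def audit_consent_flow(steps: list) -> list:
    """Return findings describing non-compliant patterns in a consent flow."""
    findings = []
    for i, step in enumerate(steps, start=1):
        if len(step.purposes) > 1:
            findings.append(f"step {i}: bundled consent across {step.purposes}")
        if step.pre_ticked:
            findings.append(f"step {i}: pre-ticked box")
        if step.decline_label.strip().lower() in SHAMING_COPY:
            findings.append(f"step {i}: confirmshaming decline copy")
        if not step.equal_prominence:
            findings.append(f"step {i}: nudge via unequal prominence")
    return findings
```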
For media agencies specifically: review your streaming line items. Defaulting to contextual targeting (genre, content type, daypart) rather than behavioural audience segments is the safest instruction to build into agentic buying logic until the supply chain has session-level age detection built in. The compliance checkpoint your agent can't perform is the one a human needs to own.
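Expressed as a guard in buying logic, that default might look like the sketch below. The line-item structure and the age-assurance flag are assumptions; the rule is the one described above: contextual only until the supply path can exclude likely-child sessions.

```python
# Hypothetical "default to contextual" guard for agentic buying logic.
def build_targeting(line_item: dict, supply_has_session_age_assurance: bool) -> dict:
    """Return targeting for a streaming line item, defaulting to contextual signals."""
    contextual = {
        "genre": line_item.get("genre"),
        "content_type": line_item.get("content_type"),
        "daypart": line_item.get("daypart"),
    }
    if supply_has_session_age_assurance:
        # Behavioural segments only where the supply path can exclude likely-child sessions.
        return {**contextual, "audience_segments": line_item.get("audience_segments", [])}
    # Default: contextual only, and flag the decision for human review.
    return {**contextual, "audience_segments": [], "review_required": True}
```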
Charlotte McEleny is our APAC and MENA columnist. Check out all her ExchangeWire features here.
Ad Spend, APAC, Australia, Cybersecurity, Data, Gen Z, Online Safety, Privacy, Regulation, Social Media



