Is it legal to use ChatGPT to write observations about children in Australian childcare?
A clear, citation-backed answer to the question every educator and director in Australia is asking in 2026 — with the exact laws, the exact regulations, and what services can actually do safely.
We get asked this question almost every week: by directors, by educational leaders, by primary contact educators, and sometimes by a nervous nominated supervisor who just walked past the staffroom and saw something they didn't like on someone's phone screen. It deserves a straight answer.
The short answer
No. Pasting a child's name, photo, age-with-context or behavioural notes into consumer ChatGPT (or any free or consumer-tier general-purpose AI) to generate an observation, learning story or documentation is almost certainly a breach of:
- the Privacy Act 1988 (Cth), specifically Australian Privacy Principles 6 and 11;[1]
- Regulation 168 of the Education and Care Services National Regulations, as amended 1 September 2025, which now requires a specific policy on the safe use of digital technologies;[5]
- the National Principles for Child Safe Organisations, specifically the online safety expectations integrated into the NQF child-safety amendments.[6]
This is not a theoretical concern. The OAIC's October 2024 guidance is unusually direct:
"APP entities should exercise particular caution if using a commercially available AI product that involves personal information, especially sensitive information… input of personal information into a publicly accessible generative AI product, without specific controls, is likely to result in disclosure to the provider."— OAIC, Guidance on privacy and the use of commercially available AI products, 2024
Children's observations almost always contain what the Privacy Act calls sensitive information — including health information, developmental concerns, and cultural background. Pasting them into ChatGPT is exactly the scenario the OAIC flagged.[2]
The three laws educators unknowingly breach
1. Australian Privacy Principle 6 — use or disclosure
APP 6 says you can only use or disclose personal information for the purpose it was collected for, unless the individual (or in the case of children, their parent/guardian) has consented, or another exception applies.
When parents sign your enrolment form consenting to their child being observed for the purpose of your service's documentation, they are almost never consenting to that information being processed by a third-party AI provider in the United States for the provider's model-improvement purposes. That is a secondary purpose and it is not covered by a generic enrolment consent.
2. Australian Privacy Principle 11 — security
APP 11 requires you to take "such steps as are reasonable in the circumstances" to protect personal information from misuse, interference and loss, and from unauthorised access, modification or disclosure.
When you paste data into consumer ChatGPT, you do not know, and cannot prove, where the data is stored, who can access it, how long it is retained, or how it is destroyed. That is the opposite of reasonable steps.
3. Regulation 168 — digital technology policy
The September 2025 Regulation 168 amendment requires services to have policies and procedures for the safe use of digital technologies and online environments. Services without a policy — or with a policy that doesn't address AI — are non-compliant.[3] (We covered this in detail in our Regulation 168 guide.)
What OpenAI actually does with your prompts
Many educators assume "ChatGPT" is a single product. It isn't — there are meaningful differences between consumer and enterprise tiers, and the defaults are the worst part.
OpenAI's own help-centre documentation confirms that on the free and Plus consumer tiers, conversations are used by default to improve model performance unless the user manually opts out.[4] That means every observation an educator pastes in can become training data that OpenAI uses to develop future products. Even with the opt-out enabled, data is retained for a period for abuse monitoring, on servers typically located in the United States.
What counts as "identifiable child information"
The Privacy Act uses a broad test: information is "personal" if the individual is identified or reasonably identifiable from it, alone or in combination with other information. For children in care, the bar is especially low — small cohorts mean fewer data points are needed to identify someone.
A rough guide to what is almost certainly identifiable (a minimal redaction sketch follows the list):
- the child's first name (especially combined with age or room);
- photos — including those a parent might share on the family app;
- date of birth or a specific age-in-months;
- family member names, cultural background, medical conditions, allergies;
- even descriptive combinations: "the three-year-old twin who joined in February of Greek-Australian heritage" is identifiable in a 24-place centre.
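To make the redaction point concrete, here is a minimal Python sketch of the kind of pre-processing a service might run before any text leaves its systems. Everything in it is an assumption for illustration: the roster names, the patterns and the placeholders are invented, and this is a sketch, not a compliance tool.

```python
import re

# Hypothetical roster: in practice these would come from enrolment records.
KNOWN_NAMES = {"Sofia", "Arlo", "Papadopoulos"}

# Patterns for obviously identifying details. Illustrative only; regexes
# catch dates and ages, never descriptive combinations.
PATTERNS = [
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),  # e.g. 14/02/2023
    (re.compile(r"\b\d{1,3}\s*months?\s*old\b", re.IGNORECASE), "[AGE]"),
    (re.compile(r"\ballerg\w*|\basthma\b|\banaphyla\w*", re.IGNORECASE), "[HEALTH]"),
]

def redact(observation: str) -> str:
    """Replace known names and pattern matches with neutral placeholders."""
    for name in KNOWN_NAMES:
        observation = re.sub(rf"\b{re.escape(name)}\b", "[CHILD]", observation)
    for pattern, placeholder in PATTERNS:
        observation = pattern.sub(placeholder, observation)
    return observation

print(redact("Sofia, 38 months old, used tongs at the sensory table."))
# -> "[CHILD], [AGE], used tongs at the sensory table."
```

Note what the sketch cannot catch: the twin example above sails straight through, because no regex recognises a descriptive combination. That is precisely why pattern-based redaction alone does not discharge your APP 11 obligations, and why human review remains essential.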
The legitimate grey areas
There are uses of a general-purpose AI tool that probably do not breach the Privacy Act because no personal information is involved. For example:
- Generic professional development: "Explain EYLF Learning Outcome 4 in plain English so I can share it with a new educator" — no child data, low risk.
- Pedagogical idea generation: "What are five loose-parts play invitations that support sustainability for a 3–5 room?" — low risk.
- Template drafting without names: "Write a template social story about transitioning from home to childcare, no names or identifying details" — low risk.
But note three things even here. First, your service's digital-technology policy still needs to cover these uses. Second, the moment you paste any identifiable detail (a child's name, a specific photo, a specific behavioural example), you are back in breach territory. Third, anything an AI generates must still be reviewed by a qualified educator for accuracy and EYLF alignment before it goes into a child's record.
What to do instead
The honest answer is that services have three reasonable options.
Option 1 — Don't use AI for documentation.
Perfectly valid. Document the way you always have. It's slower, but it's simple from a compliance perspective.
Option 2 — Use an enterprise AI contract with your own data protections.
OpenAI, Anthropic and Google all offer enterprise tiers with no-training commitments and data-processing agreements. This works — but requires a legal review, a custom implementation, and usually a dedicated prompt-engineering layer to ensure EYLF alignment. For most services this is not realistic on its own.
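For a sense of scale, the integration itself is the easy part. Here is a minimal sketch, assuming an enterprise or API agreement with a no-training commitment is already in place; the model name, system prompt and notes are illustrative, and you should verify the data-handling terms of your own contract rather than rely on defaults.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You help draft early-childhood learning observations. "
    "Work only from the educator's notes; never invent details. "
    "Link observed behaviour to EYLF v2.0 learning outcomes."
)

# Only redacted notes should ever reach this point.
notes = "[CHILD] stacked blocks for ten minutes, narrating each step."

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; pin the model agreed in your contract
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": notes},
    ],
)

print(response.choices[0].message.content)  # a draft, not a finished record
```

The call is a few lines; the legal review, the redaction layer in front of it and the review workflow behind it are where the real work sits, which is why most services find this option impractical on its own.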
Option 3 — Use a purpose-built ECEC AI tool that is NQF-aligned by design.
This is the category Little Narratives sits in. Tools of this kind (a simplified pipeline sketch follows the list):
- redact or pseudonymise child data before it ever reaches an AI model;
- hold their data in Australian regions;
- map outputs explicitly to EYLF v2.0 Learning Outcomes and NQS elements;
- keep humans in the loop with editable, reviewable outputs;
- maintain an audit trail you can hand to an authorised officer at assessment.
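Stripped of product detail, the pipeline those five bullets describe is simple to state. A hypothetical sketch, where every name is invented for illustration and `redact` and `generate` stand in for the steps discussed above:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    """One entry in the trail you could hand to an authorised officer."""
    redacted_input: str
    ai_draft: str
    final_text: str = ""
    reviewed_by: str = ""
    reviewed_at: str = ""

def document_observation(raw_notes: str, redact, generate) -> AuditRecord:
    """Redact first, generate second: child data never reaches the model."""
    safe_notes = redact(raw_notes)
    draft = generate(safe_notes)
    return AuditRecord(redacted_input=safe_notes, ai_draft=draft)

def sign_off(record: AuditRecord, educator: str, edited_text: str) -> AuditRecord:
    """Nothing enters a child's record without a named human reviewer."""
    record.final_text = edited_text
    record.reviewed_by = educator
    record.reviewed_at = datetime.now(timezone.utc).isoformat()
    return record
```

The point of the sketch is the ordering: redaction before generation, and a named sign-off before anything is filed. Whatever tool you evaluate, ask the vendor to show you where each of those steps happens.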
Whatever you choose, document the decision. At an NQF assessment in 2026 you will be asked how your service handles AI — and "we banned ChatGPT and wrote it into our policy" is an answer that works. "We weren't really sure" is not.
References & further reading
1. Privacy Act 1988 (Cth), Schedule 1: Australian Privacy Principles (APPs). Commonwealth of Australia.
2. Office of the Australian Information Commissioner. (2024). Guidance on privacy and the use of commercially available AI products.
3. ACECQA. (2025). Strengthened NQF child safety and protections: policy requirements from 1 September 2025.
4. OpenAI. (2024). Privacy Policy and Data Controls FAQ (ChatGPT consumer plans use conversations to improve model performance by default).
5. Education and Care Services National Regulations, Regulation 168 (as amended September 2025).
6. Australian Government, National Office for Child Safety. (2022). National Principles for Child Safe Organisations, Principle 8: physical and online environments promote safety and wellbeing.
7. OAIC. (2024). Notifiable Data Breaches scheme: reporting obligations.