
AI and the NQS: what Regulation 168's digital safety amendments mean for your service

From 1 September 2025, the Education and Care Services National Regulations explicitly require your service to have policies for the safe use of digital technologies. For most services, that now includes an AI policy. Here's what it has to contain — and what it doesn't.

By The Little Narratives team · Published 23 April 2026

If you're a nominated supervisor or centre director, you've probably had the same conversation we've been having for the last six months: an educator proudly demonstrates how they use ChatGPT to turn a scribbled jotting into a "beautiful EYLF-linked observation" in thirty seconds. It looks great. Families love the new learning stories. Documentation time is down.

Then you ask the question that freezes the room: where did the child's name, photo and context actually go?

That conversation is no longer a "good practice" debate. As of 1 September 2025, it is a regulatory one.[1]

What actually changed on 1 September 2025

Regulation 168 has always required approved services to have a set of written policies and procedures covering a specific list of topics — from providing a child-safe environment to incident management, nutrition, and the administration of medication. The September 2025 amendments expanded that list.[2]

The updated regulation now explicitly requires policies and procedures addressing the safe use of digital technologies and online environments, including:

  • the taking, use, storage and destruction of images and videos of children;
  • obtaining authorisation from parents to take images and videos of children;
  • the use of digital technologies and online environments at the service.

ACECQA's updated Policy Guidelines: Providing a Child Safe Environment (version 2, September 2025) explicitly contemplates that "digital technologies" includes AI-assisted tools such as observation generators, transcription services, and large language models.[3]

Does "digital technologies" include AI tools like ChatGPT?

Yes — and this is where a lot of services are quietly exposed. The phrase "digital technologies" in the amended Regulation 168 is intentionally broad. It captures any tool that collects, stores, processes or displays data about children. That includes:

  • general-purpose AI chatbots (ChatGPT, Claude, Gemini, Copilot);
  • AI-assisted documentation tools used to write observations or learning stories;
  • photo editing and captioning tools that use cloud inference;
  • voice-to-text transcription that processes audio on external servers;
  • messaging apps used between educators and families.

Community Early Learning Australia's September 2025 sector guidance is explicit: services "must demonstrate ethical and responsible AI management" and "maintain commitment to privacy and transparency with staff and families", ensuring that AI supports rather than replaces relational and professional expertise.[4]

What your AI / digital technology policy must cover

ACECQA's September 2025 guidelines describe the minimum a compliant digital-technologies policy should contain. In plain English, your policy needs to address eight things:

  1. Purpose and principles. Why you use digital technologies, and the pedagogical or operational value they add. Generic "productivity" isn't enough — connect it to the EYLF.
  2. Approved tools list. Which specific tools are authorised. An open-ended "staff may use AI where they find it helpful" paragraph will not survive assessment (a sketch of what one register entry might record follows this list).
  3. Prohibited uses. What educators must not do. This is where you explicitly prohibit uploading children's identifiable information (names, photos, ages with context, behavioural notes) to non-approved tools.
  4. Consent and authorisation. How families consent to their child's information being processed by digital tools, including AI. Blanket enrolment consent is not sufficient under the 2025 guidelines.
  5. Data handling. Where data is stored (Australian data residency is strongly preferred), how long it's retained, and how it's destroyed. This aligns with APP 11.[6]
  6. Training and induction. How educators are trained on the policy — documented, and refreshed annually.
  7. Incident response. What happens if an educator uploads a child's information to an unapproved tool. This is now a notifiable incident for most services.
  8. Review. The policy must be reviewed at least annually and whenever a new tool is adopted.
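
To make the approved tools list (point 2) concrete, here is one shape a register entry could take. It is an illustrative sketch only: the field names are ours, the regulation does not prescribe a format, and any record that captures the same information will do.

```python
# Illustrative sketch of one approved-tools register entry.
# Field names are examples only; Regulation 168 does not prescribe a format.
approved_tool_entry = {
    "tool": "Example transcription service",   # hypothetical tool name
    "purpose": "Voice-to-text for educator jottings",
    "child_data_handled": "Educator audio only; no child names, photos or records",
    "data_residency": "Australia (confirmed by the vendor in writing)",
    "risk_assessment_completed": "2025-09-15",
    "approved_by": "Nominated supervisor",
    "next_review_due": "2026-09-15",
}
```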

The line services keep crossing: children's personal information

Under the Privacy Act 1988 (Cth), "personal information" means information or an opinion about an identified individual, or an individual who is reasonably identifiable. In an early childhood context, that's a wide net — it includes not just names and dates of birth, but photographs, voice recordings, behavioural observations, developmental assessments, and medical information.[6]

APP 6 prohibits the use or disclosure of personal information for a purpose other than the one it was collected for, unless an exception applies. APP 11 requires you to take "reasonable steps" to protect personal information from misuse, interference and loss. Pasting a child's data into a consumer AI product is hard to reconcile with either: the information may now be used for model training (a secondary purpose you never collected it for), and it sits in systems over which you have no control.

"APP entities should exercise particular caution if using a commercially available AI product that involves personal information, especially sensitive information."— OAIC, Guidance on privacy and the use of commercially available AI products, 2024

Sensitive information has a specific meaning under the Act, and it explicitly includes health information. Developmental observations about self-regulation, language delay, continence, or behavioural challenges almost always qualify as sensitive information — which triggers enhanced protections and makes consumer-AI use substantially more risky.

A director's checklist for assessing any AI tool

Before approving any AI tool for use at your service, your nominated supervisor or ICT-responsible person should get written answers to these nine questions. If a vendor hesitates on any of them, walk away.

What authorised officers will actually ask for

Under Quality Area 7 (Governance and Leadership), authorised officers conducting an assessment and rating visit under the NQF will now expect to see evidence that your digital-technology and AI policies are operational, not just written. In practice, that means having ready:

  • the policy document, with a review date in the last 12 months;
  • a register of approved digital tools and the risk assessment for each;
  • evidence of educator training (attendance log, signed acknowledgement);
  • sample consent forms that reference AI/digital-technology use;
  • an audit trail showing educator review of AI-generated content;
  • an incident register for any digital-technology breaches.

The audit trail is the item services most commonly miss. You need a way to prove that, if an educator has used an AI tool to draft an observation, the content was reviewed and approved by a qualified staff member before it entered the child's record — and that you can export that trail if asked.
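
What that trail looks like in practice will depend on your software, but as a rough sketch (hypothetical field names, not a prescribed or regulatory format), each AI-assisted entry might carry a record like this:

```python
# Hypothetical sketch of an audit-trail record for AI-assisted documentation.
# Field names are illustrative; the point is who drafted, who reviewed, and when.
import json
from dataclasses import dataclass, asdict

@dataclass
class AiAssistAuditRecord:
    observation_id: str        # internal ID of the draft observation
    tool_name: str             # which approved tool produced the draft
    drafted_at: str            # when the AI-assisted draft was created (ISO 8601)
    reviewed_by: str           # qualified educator who reviewed the content
    reviewed_at: str           # when the review happened (ISO 8601)
    edits_made: bool           # whether the educator changed the draft
    approved_for_record: bool  # only approved entries enter the child's record

def export_audit_trail(records: list[AiAssistAuditRecord]) -> str:
    """Export the trail as JSON so it can be handed over if an officer asks."""
    return json.dumps([asdict(r) for r in records], indent=2)

# Example entry
trail = [AiAssistAuditRecord(
    observation_id="obs-2025-0142",
    tool_name="approved-documentation-assistant",
    drafted_at="2025-10-03T09:30:00+10:00",
    reviewed_by="educator-jane-d",
    reviewed_at="2025-10-03T10:05:00+10:00",
    edits_made=True,
    approved_for_record=True,
)]
print(export_audit_trail(trail))
```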

How Little Narratives is built for this regulation

We built Little Narratives specifically for the post-September-2025 NQF. Our approach:

  • No child photos or names uploaded to third-party AI providers. All identifiable content is redacted before reaching the model (see the sketch after this list).
  • Australian data residency for all child records, backed by Australian-region cloud infrastructure.
  • EYLF v2.0 native mapping — every generated observation and story is explicitly tagged to a Learning Outcome and an NQS element.
  • Human-in-the-loop — educators review and can edit every output. Nothing auto-publishes to a family feed without a qualified educator's approval.
  • Full audit trail — every AI assist, every edit, every approval is logged and exportable.
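
To illustrate the first point, this is the redact-before-inference pattern in miniature. It's a deliberately simplified sketch, not our production code: known identifiers are swapped for neutral placeholders before any text is sent to a model.

```python
# Simplified illustration of redaction before AI inference -- not production code.
# Known identifiers for each child are replaced with neutral placeholders, and
# only the redacted text would ever leave the service's systems.
import re

def redact(text: str, child_names: list[str]) -> str:
    """Replace each known child name with a placeholder before inference."""
    for i, name in enumerate(child_names, start=1):
        # Word-boundary match so partial words aren't clobbered.
        text = re.sub(rf"\b{re.escape(name)}\b", f"[CHILD_{i}]", text, flags=re.IGNORECASE)
    return text

jotting = "Amira stacked the blocks with Leo and counted to ten."
print(redact(jotting, ["Amira", "Leo"]))
# -> "[CHILD_1] stacked the blocks with [CHILD_2] and counted to ten."
```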

If you want the full compliance brief, read our AI Safety & Compliance page — and if you'd like a policy template you can adapt for your own service, get in touch. We maintain one aligned with the ACECQA September 2025 guidance and update it whenever the regulations shift.

References & further reading

  1. ACECQA. (2025). Strengthened NQF child safety and protections: changes taking effect 1 September 2025. Australian Children's Education & Care Quality Authority (official guidance portal).
  2. Education and Care Services National Regulations, Regulation 168: Education and care service must have policies and procedures. NSW consolidated regulation (applies nationally via the Education and Care Services National Law).
  3. ACECQA. (2025, September). Policy Guidelines: Providing a Child Safe Environment, version 2. Australian Children's Education & Care Quality Authority (PDF: PolicyGuidelines_ProvidingAChildSafeEnvironment_v2).
  4. Community Early Learning Australia (CELA). (2025, September). Using AI responsibly and ethically in ECEC. Amplify! blog (CELA sector guidance).
  5. Office of the Australian Information Commissioner. (2024). Guidance on privacy and the use of commercially available AI products. OAIC (Australian Privacy Principles guidance).
  6. Privacy Act 1988 (Cth), Schedule 1: Australian Privacy Principles (APP 6: Use or disclosure of personal information; APP 11: Security of personal information).
  7. Australian Government, National Office for Child Safety. (2022). National Principles for Child Safe Organisations (Principle 9: Online environments).