In schools and beyond, everybody is talking about, relying on, and using AI. There are some brilliant tools out there with the potential to change our lives, and education, for the better. However, there are some steps you should take before diving in headfirst.
Here are three things you must do before thinking about AI at your school:
1. Consider the DfE guidance
The UK government hasn’t yet introduced a formal AI curriculum or legislation on AI use, but the Department for Education (DfE) has published guidance on AI in education. We’ve summarised its position into a quick read below.
Current Government Position (as of 2024–25)
“AI has the power to transform education by helping teachers focus on what they do best: teaching. This marks a shift in how we use technology to enhance lives and tap into the vast potential of AI in our classrooms.”
- Use AI to support, not replace, professional judgement.
- Schools should have clear and transparent policies.
- Staff and students should be aware of AI limitations (accuracy, bias, hallucination).
Key DfE Guidance Points:
- Ethics: AI must not reinforce bias (e.g. behaviour predictions based on postcode)
- Transparency: Students and staff should know when AI is being used
- Data protection: AI tools must comply with UK GDPR (free AI tools often don’t)
- No risk-free AI: Staff must remain accountable for decisions (e.g. not blindly following AI advice)
What schools should do:
- Create an AI policy – aligned to safeguarding, digital strategy, and GDPR
- Audit current use – what are staff and students already using?
- Start small – only use tools that are built into existing systems (like Ask Arbor)
Links to resources
- Key Group Policy Guides
- Department for Education guidance
2. Understand what AI can and can’t do
At first, AI can seem magical, as if it can do everything! It’s important to understand its potential and limitations so that you and your colleagues use AI only for what it’s designed to do. We’ve also done some myth-busting below, so you can get ahead of any misconceptions about AI.
AI can:
- Generate text, ideas, and resources
- Analyse large sets of data quickly
- Save time on repetitive admin
- Provide conversational interfaces to systems (e.g., Ask Arbor)
AI can’t (or shouldn’t):
- Fully understand context or nuance like a human
- Guarantee 100% accuracy (hallucinations happen and responses are only as good as the prompt)
- Replace the judgment of trained staff – use it to assist, not decide
- Solve school-specific problems without structured input
- Know school context intuitively unless it’s designed around school data (like Ask Arbor)
- Guarantee compliance as many free tools are not GDPR compliant
Top tip: “AI is best at saving time and giving you a head start, not making final decisions for you.”
| Myth | Reality |
| --- | --- |
| “AI knows everything.” | It is built to recognise patterns, not facts. Because it makes guesses based on data it has seen before, it is subject to the same misconceptions, biases, and fake news that we humans are. You have to make your own judgment about what is real. |
| “AI will replace jobs.” | In education, it removes manual admin work – it is not here to replace anyone, nor does it have the skills or judgement to make decisions about what is right for a school or its teachers. |
| “We can just use free tools.” | There are many free tools available, but as with any tool, make sure you’re using AI from a trusted provider, and take time to review their data policy before adopting it. Many free tools will reuse your data to train their models. Never enter personal data into free tools. |
3. Read up on safeguarding risks to help you manage student use
AI isn’t just a tech tool; it’s something students are already using, often without supervision or structure. From ChatGPT to Snapchat AI filters, the technology is here, whether schools are ready or not.
As educators, we need to:
- Understand the risks
- Put guardrails in place
- Empower staff and students to use AI safely, ethically, and legally
Key risks to watch out for and act against:
- Inaccurate or misleading information (“hallucinations”)
AI can sound confident but still make things up, especially when asked for facts or summaries.
Safeguarding risk: students may rely on AI-generated content without realising it’s wrong or biased.
- Bias and harmful content
AI models are trained on internet data, which includes bias, stereotypes, and inappropriate content.
Safeguarding risk: some prompts may trigger harmful or discriminatory outputs.
- Privacy and data protection
Free AI tools (e.g. ChatGPT, Bard) are not designed for school use and may not be GDPR compliant.
Safeguarding risk: students or staff may unknowingly input sensitive data (e.g. student names, attendance, safeguarding issues) into tools that store and train on that data.
- Over-reliance on AI-generated work
Students may use AI to do tasks for them rather than with them, such as writing essays or homework. Teachers may use AI to fill gaps in their own knowledge and unknowingly share inaccurate information.
Safeguarding risk: undermines learning, masks gaps in understanding, and can impact integrity and attainment.
- Impersonation and misuse
Some students may try to use AI to mimic teachers, fake messages, or trick systems.
Safeguarding risk: reputational damage, inappropriate content, potential for bullying.
- Data breaches
Some free tools, such as ChatGPT and Google Gemini, may use the information you input to train their models. That means any private or sensitive data you enter, including personal, student, or staff information, could be stored, processed, or even exposed in future outputs.
Safeguarding risk: privacy breaches caused by personal or identifiable data being used outside its intended context, unauthorised data sharing, and breaches of data protection agreements with schools, staff, or parents.
Want to learn more?
Get stuck in with our AI webinars, reports and community.

Join the Big AI Summit, our free AI webinar series

Hear from school and trust leaders in our AI report
