AI literacy that meets every student where they are.
From Reception to Sixth Form, students face different AI challenges at every stage. GEN:R's whole-school programme gives each year group its own frameworks, scenarios, and tools, and brings parents into the conversation with a dedicated evening session.
Smart, Safe, Ready
Students are using tools like ChatGPT, Character.AI, and AI voice assistants daily, and 68% of parents don’t even know which ones. GEN:R’s programme gives every year group the right frameworks for their age, brings parents into the conversation, and gives your school a shared language for responsible AI use that actually holds.
Age-Calibrated Content
Every year group gets a different workshop with frameworks, scenarios, and language designed for their stage of development.
Practical, Not Theoretical
Real scenarios, hands-on activities, and take-home tools, not a lecture about being careful online.
Empowering, Not Fear-Based
We teach students to thrive with AI, not just avoid harm.
Current for 2026
Content reflects the latest tools students actually use, UK Online Safety Act requirements, exam board policies, and updated safeguarding pathways.
Whole-School Alignment
Students, educators, and parents receive consistent frameworks and shared language across every year group.
Flexible Delivery
Book as a full-day programme or split across multiple sessions to suit your school's timetable and priorities.
What Each Year Group Learns
Every workshop is built for its specific audience: different frameworks, different scenarios, different takeaways. Here’s what each group covers.
Reception – Year 2: "My Digital World"
Even the youngest children are interacting with technology every day: tablets, smart speakers, educational apps, video platforms that choose what plays next. They don’t know that AI is involved, and they don’t yet have the language to ask questions about it. This workshop introduces the very first building blocks of digital awareness in a way that feels like play, not instruction.
What students learn:
- The difference between something a person made and something a computer made, and why it matters
- That not everything on a screen is true, even if it looks real or sounds friendly
- Simple, memorable rules for staying safe: ask a grown-up before you share, keep personal things personal, and speak up when something feels wrong
- That talking to a machine is different from talking to a person: machines don’t have feelings, don’t know you, and can’t keep you safe
- The beginning of a questioning habit: learning to wonder “is this real?” and “who made this?”
Years 3–4: "Is That Real?"
Children this age are starting to use the internet more independently: watching videos, talking to voice assistants, playing games that adapt to them, often without realising AI is behind much of what they see and hear. This workshop makes the invisible visible and builds the confidence to question rather than passively accept.
What students learn:
- Where AI already shows up in their daily lives, from suggested videos to photo filters to voice assistants
- How AI can create images, text, and voices that look and sound real but aren’t, and why that matters
- What’s safe to share, what to keep private, and what to do when something doesn’t feel right
- The habit of pausing and questioning before trusting what a screen puts in front of them
Years 5–6: "AI & Me: Smart, Safe, Curious"
By this age, children are actively using AI tools: asking voice assistants questions, encountering AI-generated content, and in some cases experimenting with chatbots. They need more than awareness; they need a decision-making framework they can use independently.
What students learn:
- What AI actually is, in terms they can grasp, and where it shows up in tools they use every day
- The Think-First Model (Pause, Ask, Check): a framework for any AI decision
- When AI is a helpful tool and when it isn’t, from creative inspiration to understanding why it can’t replace a trusted adult
- Why AI is not a friend, why it makes mistakes with confidence, and why personal information should never be shared with it
- The why behind responsible AI use, building judgment, not just rules, so the habits stick
Years 7–9: "Using AI Without Losing You"
This is the age where AI stops being a novelty and starts becoming a real force in students’ social, emotional, and academic lives. Homework pressure makes AI-generated essays tempting. Deepfakes and misinformation fill social media feeds. Some students are forming emotional attachments to AI companion apps that can’t provide real support or real care. This workshop gives students a universal framework for navigating all of it.
What students learn:
- The 3C Test (Credibility, Consent, Consequence): a decision-making tool for any AI situation, now or in the future
- How to spot AI hallucinations and misinformation
- Deepfake awareness, including AI-generated video and voice cloning
- The real risks of AI companion apps like Character.AI, Replika, and Chai: why they’re appealing, why they aren’t real relationships, and the warning signs of emotional dependency
- Consent issues around using someone’s voice, face, or identity with AI
- Upstander strategies: practical approaches for intervening when AI is used to bully, harass, or deceive, including how and when to report harm
Years 10–11: "AI for GCSE: Advantage, Not Avoidance"
The same AI tools that can dramatically improve GCSE revision and understanding can get a student disqualified if used incorrectly in assessed work, and the line between the two isn’t always obvious. This workshop makes it clear. The message isn’t “don’t use AI.” It’s “use it in a way that makes you smarter, not dependent, and know exactly where the rules are.”
What students learn:
- A colour-coded ethical framework — green, amber, red — applied to real scenarios students face this year
- Honest, practical comparison of AI study tools: what works, what’s risky, and what to avoid
- How to verify AI-generated information rather than trusting it blindly, including detecting fabricated sources
- How machine learning actually works and where it breaks down
- Data privacy: what happens to what you type, and your rights under UK GDPR
- The environmental cost of AI use
- Current JCQ and Ofqual policies in plain language, including the fact that students have already been disqualified for AI malpractice in 2024–25
Years 12–13 (Sixth Form)
Sixth formers are making decisions in the next twelve months that will shape the next decade: university choices, UCAS applications, and first job interviews. AI is already part of all of it, and universities and employers are paying attention to how applicants engage with it.
What students learn:
- How universities actually view AI use: the Russell Group principles, plus specific policies that operate at department level
- The distinction between AI-assisted and AI-written work that could determine whether an application succeeds or fails
- Decision hygiene (prompt logs, citation standards, transparency protocols): what separates a thoughtful AI user from a careless one
- How to draft a Responsible AI Statement for CVs, personal statements, and interviews
- AI career pathways with honest attention to the diversity gap and the growth in jobs requiring professional AI literacy
- The human skills that remain automation-resistant and how to articulate them
For Parents: "Parenting in the Age of AI"
68% of parents don’t know which AI tools their children use. This session closes that gap, practically, without panic.
What parents learn:
- The AI tools children are actually using (ChatGPT, Character.AI, Replika, Snapchat My AI, voice assistants, AI video generators) and what each one does
- Real risks with real context: deepfakes, AI companion emotional dependency, data collection, academic integrity pressures
- Age-specific boundaries broken down by year group: what to allow, what to supervise, what to prohibit at each stage
- Conversation scripts with actual phrases that work for different ages: non-confrontational approaches that build trust rather than secrecy
- A Family Tech Agreement template designed to be reviewed every three to six months
- Crisis protocols: exactly what to do and who to contact if a child is targeted by a deepfake, engages in harmful AI use, or shows signs of AI companion dependency, with updated UK reporting pathways for 2025–26
A Living System of Learning
GEN:R’s school model is built on a dynamic, three-stage cycle that ensures content remains responsive, relevant, and rooted in real classroom experience:
- Pre-Delivery Input: Teachers complete short surveys and self-assessments to gauge confidence and digital readiness.
- In-Module Feedback: Interactive polls and real-time reflections monitor understanding and emotional engagement.
- Post-Training Evaluation: Structured reviews assess growth, identify high-impact modules, and guide improvements.
Throughout the year, impact checks track classroom application and support needs. These insights feed into an annual update cycle, ensuring GEN:R evolves with your school.
Our Foundations for Responsible Digital Citizenship
Critical Thinking with Context
Learners of all ages are guided to question what they see online, understand how AI systems shape their experiences, and reflect on the ethical implications of digital choices, not just to consume information, but to think deeply about it.
Ethical Digital Citizenship
This curriculum goes beyond safety rules to foster empathy, fairness, and responsibility in digital spaces. It helps the entire school community understand how technology affects others and how to act with care and integrity online.
Human-Centred Data Literacy
Rather than just teaching how data works, we explore how it feels: how it’s collected, how it influences AI, and how it impacts people. This empowers learners to make informed, values-led decisions about their digital lives.
Co-Creation & Peer Leadership
From students to staff, everyone is invited to shape a safer digital future. Through peer mentoring, collaborative projects, and real-world problem solving, the curriculum builds a culture of shared responsibility and digital leadership.
Three Dimensions of Learning
1. Learner to Content
“Relevance drives retention.”
Students engage with simulations, ethical dilemmas, and real-life scenarios that make AI concepts tangible and personal.
2. Learner to Learner
“We learn better when we learn together.”
Peer-to-peer learning fosters shared insight and collaborative growth — essential in a field where answers are always evolving.
3. Learner to Facilitator
“Teachers don’t need to be experts; they need to be confident co-learners.”
Educators guide inquiry, model curiosity, and create safe spaces for ethical exploration.
Why GEN:R for Schools?
Whole-Community Learning
Students, educators, and families learn together
Human-Centred Approach
Focus on emotions, relationships, and fairness, not just functionality
Collaborative Culture-Building
Peer mentoring, co-creation, and reflective dialogue foster digital leadership
Empowering Your Journey
Ready to Empower Your School?
Let’s build Generation Responsible together. Book a consultation or enrol your school today.