Student Voice and the EdTech Product Safety Expectations
Alongside the Generative AI for Education summit in London yesterday, the DfE updated their ‘Product Safety Expectations’: guidance for EdTech developers, and a reference point for schools when deciding which tools are safe to use.
Many of the updates reflect discussions from the GenAI in Education: Have Your Say process - offering a powerful mandate from students for these expectations. Check out what students had to say in the video at https://connectedbydata.org/ai-in-education/ or in the full report.
The updates are significant, adding new sections on cognitive development, emotional and social development, mental health, and manipulation; and I suspect many tools currently being explored or deployed in schools won’t yet meet the new standards.
The new cognitive development expectations capture the idea of a ‘schools mode’ that pupils raised in GenAI in Education: Have Your Say. DfE say “We expect products not to provide final answers, full solutions, or complete worked examples by default”, but instead to help students learn step-by-step.
Under emotional and social development, the expectations oppose the anthropomorphising of AI tools, calling for tools to drop self-descriptions or conversational modes that might imply products have their own agency, and requiring tools to include time limits on their use.
Mental health expectations call for tools to detect signs of learner distress, to signpost learners to human help if required, and to involve child mental health expertise in product design.
The expectations on manipulation call on tools not to deceive or mislead users, not to project absolute or unjustified confidence, and not to use peer pressure to drive engagement. Tools should also not steer users towards paid options through biased wording or layouts.
These should not be radical expectations to have of generative AI tools in schools, and yet, when EdTech tools are built on foundation models tuned towards sycophancy and fast answers over pedagogic principles, implementing them can involve swimming against the tide.
It’s great to see such a strong connection between the views students shared in our distributed dialogues and the updated product safety expectations. The real test now, though, is whether these expectations can be operationalised.
That might require one further update: strengthening the recommendations on design and testing so that both government and students have a greater role in deciding whether AI for Education tools are ready for use.
We heard across the distributed dialogue that students want to see tools proven as safe, accurate and fair before they are deployed. They expect those assurances from the government, but they also want students to be listened to and involved in those decisions.
Deliberative approaches should be a central part of this: approaches that make space to explore not only whether AI tools do what they claim and meet the required standards, but also whether, in practice, they make a net positive contribution to the kind of education that students and teachers value.
Join us for a webinar on April 22nd, when we’ll be talking more about learning from Generative AI in Education: Have Your Say, and thinking about ways for students to have a powerful voice in decisions about data and AI from classroom to the capital.
(Footnote: because gov.uk pages don’t offer a tracked-changes view, I had to dig out this archived copy of the standards to see fully what’s changed.)