AI in the Classroom

IOKA 2024 - 2025

Project Summary

We were approached by a small school to bring AI into the classroom, prompted by the realisation that AI is here and that the opportunities offered by a new technology need to be examined so they can be implemented safely and securely. This initiative is still in progress. Our input thus far has been in drawing up an AI Policy along with a proposal towards implementation. The following outlines the challenges a small, privately funded institution has to address and suggests a strategic approach to implementation.


Regulatory compliance

This particular institution, while located within the UK and therefore outside the scope of the EU AI Act (AIA), would benefit from referring to the guidelines within the EU AIA, which emphasise a risk-mitigation approach. Further to this are aspects of the ISO/IEC standards that highlight the importance of explainable and transparent AI system deployments for users. Finally, since this is a school and its users will not yet be classified as adults, this particular use of an AI system falls into the medium to high risk category (see Risk Assessment further down).


Steps to implementation

The first step is gathering stakeholder input and gaining alignment on what will become the agreed implementation protocol. Current use of AI will likely differ across teachers, classrooms, students and homes; the goal is a uniform and consistent approach. This phase will involve information sharing and education in order to dispel misconceptions about what generative AI actually does: workshops and meetings, clearly defined terminology, and explainable AI that instils trust and transparency (ISO/IEC 42001). Once the objectives are agreed comes an assessment of scope, essentially a gap analysis comparing current uses of AI as a learning, teaching or studying tool against the desired minimum requirement for implementation. Further factors such as infrastructure and training will be taken into account.

We can agree that operating with no Policy and no consistent approach across the institution translates to zero control and high risk! The AI Policy will be written by an elected committee of senior stakeholders, comprising department heads, teachers, staff responsible for IT infrastructure management, parent governors and outside SMEs (subject matter experts) acting in an advisory capacity. The Policy is the likely starting point for shaping the agreed approach; it therefore needs to contain terminology, a clear understanding of how AI will be presented and used both inside and outside the classroom, who is responsible for its oversight and, in particular, what is expected of the users.

The AI Policy will be an extension of the existing Digital Use Policy, with particular emphasis on privacy and security. Special attention will be placed on content generated (output) by the AI tools that could negatively impact or potentially cause harm, alongside other more obvious concerns such as plagiarism and the misrepresentation of factual information. The final Policy will be circulated for feedback and adjustments, then agreed. However, due to the very nature of AI and the pace at which it changes, the Policy shall not be fixed and will require regular review.

As already noted, the approach to Risk Assessment and Management shall be based upon guidance from both the EU AIA and the ISO/IEC standards documentation. For this particular use case, the school is defined as the Deployer of an AI System (see Regulations and definitions). The key components within the Risk Management strategy are: Risk Identification (all use cases), Risk Assessment (impacts and classification), Risk Treatment (mitigation for each identified risk) and how the organisation will carry out continued Risk Monitoring and Review.
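
To make those four components concrete, the sketch below shows one way a small institution might keep its risk register as a simple structure rather than a spreadsheet. This is a minimal illustration in Python, not a prescribed tool; the use cases, risk levels, owners and mitigations shown are hypothetical examples.

    from dataclasses import dataclass
    from datetime import date
    from enum import Enum

    class RiskLevel(Enum):
        LOW = 1
        MEDIUM = 2
        HIGH = 3

    @dataclass
    class Risk:
        use_case: str        # Risk Identification: where AI is being used
        impact: str          # Risk Assessment: what could go wrong
        level: RiskLevel     # Risk Assessment: classification
        mitigation: str      # Risk Treatment: the agreed control
        owner: str           # who is responsible for this risk
        next_review: date    # Risk Monitoring and Review

    # Hypothetical register entries, for illustration only.
    register = [
        Risk("Chatbot used for homework help", "Inaccurate or biased answers",
             RiskLevel.HIGH, "Restrict to an approved, syllabus-grounded tool",
             "Head of Digital Learning", date(2025, 1, 15)),
        Risk("AI image generation in art class", "Inappropriate generated content",
             RiskLevel.MEDIUM, "Teacher-supervised use with content filters on",
             "Head of Art", date(2025, 3, 1)),
    ]

    # Risk Monitoring: review the register with the highest risks first.
    for r in sorted(register, key=lambda r: r.level.value, reverse=True):
        print(f"[{r.level.name}] {r.use_case} -> review by {r.next_review}")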

The AI Policy phase has identified particular areas of risk, such as the conversational element within AI products, which could negatively influence younger users who may not fully understand the technology and may be vulnerable to bias, misrepresentation and emotional manipulation. Large Language Models (LLMs) have been trained on vast amounts of data and will respond in a 'human-like' way. They need to be adapted further to specific use cases, in this case as a teaching aid within a classroom.


Ideally, all users engage with the technology in the same way - logged into secure platforms and learning environments with built-in control parameters. Increased control offers reduced risk.
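
As an illustration of what 'built-in control parameters' can look like in practice, the sketch below wraps every student request in a school-set system prompt and a moderation check before anything reaches the model. It is a minimal sketch assuming the OpenAI Python SDK with an API key in the environment; the model name and the wording of the guardrail are placeholders, not recommendations.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # A fixed, school-controlled instruction that frames every session.
    GUARDRAIL = (
        "You are a classroom study assistant for students aged 11-16. "
        "Answer only curriculum-related questions and refuse personal "
        "or off-topic conversations."
    )

    def classroom_reply(student_message: str) -> str:
        # Control 1: screen the input with a moderation check.
        flagged = client.moderations.create(input=student_message).results[0].flagged
        if flagged:
            return "This question can't be answered here. Please ask a teacher."

        # Control 2: the school-set system prompt shapes every reply.
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system", "content": GUARDRAIL},
                {"role": "user", "content": student_message},
            ],
            temperature=0.2,  # lower temperature for more predictable answers
        )
        return response.choices[0].message.content

    print(classroom_reply("Can you explain photosynthesis in simple terms?"))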

There are options such as Century Tech, an off-the-shelf licensed platform, or switching entirely over to a Microsoft enterprise-type solution with lots of co-pilots! Another route could be to cherry-pick a collection of GPTs for various uses from the GPT store. Or a bespoke Subject Bot could be built using the API from one of the popular foundation models, 'feeding it' with selected syllabus material, but generally the costs of design and development outweigh the more immediate deployment of a solution that's ready to go.
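
To give a feel for the Subject Bot route, the snippet below sketches how selected syllabus material could be 'fed' to a foundation model through its API as the context for every answer. The syllabus excerpt and model name are hypothetical placeholders, and a real deployment would still need the access controls and moderation described above.

    from openai import OpenAI

    client = OpenAI()

    # In practice this would be loaded from approved, digitised syllabus files.
    SYLLABUS_EXCERPT = (
        "Unit 3, Biology: photosynthesis converts light energy into chemical "
        "energy. Key terms: chlorophyll, stomata, glucose, limiting factors."
    )

    def subject_bot(question: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": "Answer strictly from the syllabus material below. "
                            "If the answer is not covered, say so.\n"
                            + SYLLABUS_EXCERPT},
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content

    print(subject_bot("What are the limiting factors of photosynthesis?"))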


The underlying issue with any of these solutions is cost, and in particular data tokens (see the table below).

[Table: token cost estimates from early 2024. Please go to the OpenAI Tokenizer for the latest figures.]
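
Because pricing is per token rather than per word, a rough cost estimate can be made locally before anything is sent to a paid API. The sketch below uses the open-source tiktoken library; the sample text and the price figure are placeholders to be replaced with real material and the provider's current rates.

    import tiktoken  # pip install tiktoken

    # cl100k_base is the encoding used by several recent OpenAI models.
    enc = tiktoken.get_encoding("cl100k_base")

    # Hypothetical lesson notes; substitute any real text.
    lesson_text = "Photosynthesis converts light energy into chemical energy."
    tokens = enc.encode(lesson_text)

    PRICE_PER_1K_INPUT_TOKENS = 0.0005  # placeholder; check current pricing
    cost = len(tokens) / 1000 * PRICE_PER_1K_INPUT_TOKENS

    print(f"{len(tokens)} tokens, estimated input cost ${cost:.4f}")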

Implementation strategy

Within this particular project, we have a small business (a school) that has a well-established digital approach to learning but does not have the financial resources to purchase licences for a whole new AI learning platform. Instead, the ideal approach would be one where AI can be integrated into the existing 'toolset'. Some of these tools, such as Canva, already adopt elements of AI in their offering; others, such as Kerboodle, allow students to engage with digitised material from syllabus textbooks but do not yet have an LLM or any advanced AI capabilities.


There is no free approach to implementing AI in the classroom. At a minimum, time will need to be invested in training everyone within the learning life-cycle, from teacher to student and including parents, on the guidance required to apply and monitor the use of AI safely. In this case, the following was proposed:

RAG and the power of the prompt

One of the best examples of how data processing and AI have evolved into game-changing applications is RAG (Retrieval-Augmented Generation). In short, this methodology ensures AI engagement is not influenced or biased by outside sources; instead, the user uploads selected source files and the model responds based only on that material. One such product is Google NotebookLM: a student or teacher could upload specific syllabus content. This institution already subscribes to Google Classroom, but certain administrative controls will need to be adjusted, since Google does not by default allow Google Classroom or Google Business accounts access to Gemini or Google NotebookLM (see Google's Data Policy for NotebookLM).
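
A minimal sketch of the retrieval step follows: syllabus passages are embedded once, the student's question is embedded at query time, and only the closest passages are handed to the model as its sole source material. It assumes the OpenAI Python SDK and numpy; the model names and example passages are placeholders rather than recommendations.

    import numpy as np
    from openai import OpenAI

    client = OpenAI()

    # Step 1: embed the approved syllabus passages once (hypothetical content).
    passages = [
        "Photosynthesis takes place in the chloroplasts of plant cells.",
        "The rate of photosynthesis is limited by light, CO2 and temperature.",
        "Respiration releases energy from glucose in all living cells.",
    ]
    emb = client.embeddings.create(model="text-embedding-3-small", input=passages)
    vectors = np.array([e.embedding for e in emb.data])

    def answer(question: str) -> str:
        # Step 2: embed the question and retrieve the closest passages.
        q = np.array(client.embeddings.create(model="text-embedding-3-small",
                                              input=[question]).data[0].embedding)
        scores = vectors @ q / (np.linalg.norm(vectors, axis=1) * np.linalg.norm(q))
        top = [passages[i] for i in np.argsort(scores)[-2:]]

        # Step 3: the model may answer only from the retrieved material.
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": "Answer using only these excerpts:\n" + "\n".join(top)},
                {"role": "user", "content": question},
            ],
        )
        return reply.choices[0].message.content

    print(answer("What limits the rate of photosynthesis?"))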

Governance and Compliance

As an AI System Deployer, the school will be required to establish a governance framework for the AI system across the institution, carried out by a select oversight committee. Currently, almost all regulation and compliance documentation published by the various regional and international bodies is for guidance only. However, it is generally expected that enforcement of protocols will become mandatory in the near future, and will most likely start in the areas of highest risk, where the most vulnerable users are being exposed to AI models - such as schools.

In conclusion

Costs and existing infrastructure are barriers that prevent many institutions from making AI available. The democratisation of access is an ongoing debate, since those who could benefit the most are often the last on the list. The use of a RAG application does offer additional support to teachers and students - ensuring accuracy and reducing the risk of bias, almost like having access to the teacher 24/7. However, the majority of prescribed subject textbooks are not yet digitised, and many are under strict copyright restrictions. A student having to thumb through his or her textbook is a reminder of how much still needs to happen around the technology before we are in a position to fully adopt an AI approach - which some will say is a good thing.