

Case study examples of AI use

Explore the case studies on this page to understand what you should, and should not, use AI for in assessed work.

Case studies

Context: Second-year undergraduate History module

AI Principles: 'You stay in charge', 'Think critically', 'Check the facts'

Formative exercise: Students are asked to use ChatGPT to generate a short historical explanation of the causes of the French Revolution. They must then evaluate the output for factual accuracy, bias, and depth of analysis.

Learning activity:

  • Students annotate the AI response with fact-checking notes and their own sources.
  • Students write a reflective commentary, delivered as either a written statement or a brief presentation, explaining where the AI was helpful, misleading, or superficial; these commentaries are peer-reviewed.

This exercise could be repeated using alternative sources (peer-reviewed journals, textbooks, newspapers, blogs, podcasts, and so on) to reinforce the learning of evaluative skills.

Benefits:

  • Encourages active engagement with AI rather than passive consumption.
  • Builds evaluative judgement and fact-checking skills.

Explore this further with the Toolkit: Part 2 - Developing critical and evaluative engagement with AI

Context: Year 1 Biosciences practical module

AI Principles: 'Be honest and transparent', 'Act fairly'

Summative assessment: Students may use AI (for example, Grammarly, ChatGPT) for editing and language support, but they must explain this on their assignment coversheet and keep their original and draft versions along with a transcript of the prompts used for their tutor to check if required.

Learning activity:

  • Assessment instructions include explicit guidance on permissible AI use.
  • Coversheet includes a declaration of how AI tools were used.

Benefits:

  • Develops students' self-awareness about their writing processes.
  • Maintains fairness across students with varying levels of digital fluency by revealing and valuing the processes through which learning occurs.

Explore this further with the Toolkit: Part 3 - Developing your skills to use AI

Context: Final year dissertation preparation

AI Principles: 'Use with care', 'You stay in charge'

Summative assessment: As part of their methodology, students use AI tools such as JSTOR's AI research tool, Elicit or Consensus to identify possible gaps in the literature. Guided by in-class activities and their assignment brief, they cross-check sources, evaluate the methods and conclusions of the literature GenAI suggests, and identify any 'hallucinations' (made-up references or misinterpretations of data) produced by Generative AI.

Learning activity:

  • Workshop on using AI for research with hands-on prompts.
  • Proposal includes a methodology section on AI involvement, search terms, and validation strategy.

Benefits:

  • Encourages transparency about how AI tools are used appropriately and ethically, in accordance with research integrity.
  • Promotes independent research and critical use of tools.

Explore this further with the Toolkit: Critical questions and ethics prompts from Part 1 and Part 2.

Context: MA Design exploring digital tools

AI Principles: 'Think critically', 'Use with care'

Summative assessment: As part of a wider exploration of digital design tools, students use Adobe Firefly to explore AI-powered tools to co-create visual elements, ensuring the tools used meet ethical training standards. They submit a process log in which they critically reflect on choice of tools, design reasoning, and any AI-generated assets.

Learning activity:

  • Peer critique sessions focus on how AI contributed to or detracted from original ideas.
  • Students assess each other's projects using assessment criteria / rubrics that account for originality, tool use, and reflection.

Benefits:

  • Builds digital design literacy while addressing issues of fairness and originality.
  • Teaches ethical decision-making in creative AI use.

Explore this further with the Toolkit: Library of AI Tools for image generation and ethical questions.

Context: Liberal Arts Foundation Year 1 modules with high English as a Second Language (ESL) enrolment

AI Principles: 'You stay in charge', 'Be honest and transparent', 'Think critically'

Formative activity: Students are invited to use GenAI for paraphrasing and language support. Workshops provide opportunities to discuss and distinguish between using AI for learning support versus copying outputs.

Learning activity:

  • Scaffolded activity: students attempt paraphrasing themselves, compare with AI, and revise.
  • Reflection explores how AI use affected their confidence and understanding.

Benefits:

  • Acknowledges and addresses language barriers.
  • Develops core academic skills while supporting access and equity.

Explore this further with the Toolkit: Discussion prompts on paraphrasing and communication tools.

Context: Year 3 undergraduate Nursing (Adult or Mental Health)

AI Principles: 'You stay in charge', 'Check the facts', 'Use with care', 'Think critically'

Formative activity: Students are given a case scenario and asked to use GenAI to generate a proposed nursing care plan. They then critically evaluate the AI output against NICE guidelines, Nursing and Midwifery Council (NMC) standards, and their own clinical reasoning.

Learning activity:

  • AI outputs are reviewed and discussed in class, or written reviews are submitted to a Moodle forum for peer review, with student commentary justifying accepted or rejected recommendations.
  • Students are encouraged to include or consider a risk reflection in decision-making reports: what harm could result from over-trusting AI in clinical contexts?

Benefits:

  • Reinforces safe, evidence-based practice.
  • Prepares students to work in AI-augmented health systems without losing clinical judgement.

Explore this further with the Toolkit: Critical evaluation prompts, Library of tools (for example, ChatGPT for summarisation, not diagnosis)

Context: Core module in a postgraduate MBA programme

AI Principles: 'You stay in charge', 'Be honest and transparent', 'Think critically'

Summative assessment: Students are tasked with developing a strategic plan for a new product launch. The assignment consists of two components: The Strategic Plan (as presented to the client) (30%) and The Portfolio (detailing the process of researching and designing the plan) (70%). GenAI can be used for trend forecasting, SWOT analysis or drafting reports, but students must document the tools, prompts, and critical checks used, evaluating and justifying their use in The Portfolio and clearly citing such use in the Strategic Plan.

Learning activity:

  • A template is provided which includes an AI audit trail and a series of short strategic reflections such as: Where did GenAI add value? Where did it fall short?
  • Students peer-review each other's processes, transparency and justifications.

Benefits:

  • Develops authentic skills for future business environments.
  • Encourages responsible use of AI in decision-making and communication.

Explore this further with the Toolkit: AI tools for planning, business modelling, and summarisation (for example, Copilot, Gemini)

Context: MSc in Data Science, Semester 1 module

AI Principles: 'Be honest and transparent', 'Check the facts', 'Act fairly'

Summative assessment: Students use GenAI tools (among other tools) to help interpret and visualise a real-world dataset. They are encouraged to use Large Language Models to draft exploratory data summaries or generate code snippets - but must explain the rationale and assess reliability.

Learning activity:

  • Code notebooks must contain comments indicating where and how AI was used.
  • Students annotate where they improved or corrected AI-generated code.
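In practice, such annotations can sit directly in the notebook as code comments. The following is a minimal sketch of one possible convention; the function, comment style, and scenario are illustrative assumptions, not a prescribed format:

```python
# Illustrative notebook cell: one possible way to declare AI assistance
# and record a correction to AI-generated code (convention is hypothetical).

# AI-assisted (ChatGPT): this function began as an AI-drafted snippet
# for summarising a numeric column during exploratory analysis.
def summarise(values):
    """Return basic descriptive statistics for a list of numbers."""
    # Correction: the AI draft divided by len(values) without guarding
    # against an empty list, which raised ZeroDivisionError; fixed here.
    if not values:
        return {"count": 0, "mean": None}
    return {"count": len(values), "mean": sum(values) / len(values)}

print(summarise([2, 4, 6]))  # {'count': 3, 'mean': 4.0}
```

Comments like these give the marker a clear audit trail of where AI contributed and where the student exercised their own judgement.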

Benefits:

  • Encourages critical engagement with AI-generated code, not passive use.
  • Teaches model verification, reproducibility and responsible open-data practices.

Explore this further with the Toolkit: AI tools for summarisation, coding assistance, and visualisation (for example, Jupyter and ChatGPT, Perplexity, Copilot)

Context: Sport and Exercise Science BSc (Hons), final year project

AI Principles: 'Act fairly', 'Be honest and transparent', 'Think critically'

Summative assessment: Students use GenAI to help structure a performance analysis report, generate training recommendations, or produce visuals. Students submit a report of their analysis (40%) and a portfolio (60%) containing a process log with samples of their research, an annotated bibliography, and a record of their GenAI use, along with a critical reflection on GenAI's contributions. In their portfolios, students critically evaluate the generated plans against sports science principles and client needs.

Learning activity:

  • Students prepare a video or in-person presentation explaining their performance review, including an audit of any AI use.
  • Optional: include a client-facing version of their plan with AI-supported visuals and an internal technical rationale.

Benefits:

  • Enhances communication and visualisation of performance data.
  • Builds ethical awareness of the risks of applying generic AI outputs to individual athletes.

Explore this further with the Toolkit: AI tools for analysis, design, communication (for example, ChatGPT, Canva, Copilot, wearable tech analysis platforms)

Context: PGCE Primary or Secondary - Professional Studies module

AI Principles: 'You stay in charge', 'Be honest and transparent', 'Use with care'

Summative assessment: Students are required to submit a portfolio of their lesson plans, learning activities, and the research and critical thinking underpinning these. Students may use GenAI to help draft lesson plans, generate ideas for differentiating activities, and support inclusive design. They should provide samples of the original prompts, the AI outputs, and commentary on the adaptations they made, using the portfolio template provided.

Learning activity:

  • In seminars, student-teachers discuss the risks and opportunities of using GenAI in school-based practice, including safeguarding, GDPR, and accessibility.
  • Final portfolio includes a reflective piece on responsible use of AI in their future classrooms.

Benefits:

  • Supports inclusive lesson planning and workload management.
  • Builds a values-based understanding of technology use in professional settings with children and young people.

Explore this further with the Toolkit: AI tools for planning, differentiation, and reflection (for example, ChatGPT, Copilot, Goblin Tools)

Inappropriate uses of AI

Context: Undergraduate submitting a reflective essay

Why it's inappropriate:

  • Essays, especially reflective essays, are an opportunity to express and explain your knowledge and experiences and to apply your critical thinking skills. Offloading this work to a Generative AI tool removes any evidence of your knowledge and skills, and passing AI output off as your own work is unethical.
  • Violates Principle 2: Be honest and transparent.
  • Breaches academic integrity by passing off unacknowledged AI output as original work.

Toolkit relevance: Misrepresents your own voice and ideas.

Risk: Constitutes plagiarism; lacks evidence of learning, reflection, or personal insight.

Context: Student completing a psychology dissertation

Why it's inappropriate:

  • We expect research to be honest and true, using data that is accurate and methods that are appropriate to answer the research question. The conclusions and recommendations of research are used to inform everything from personal health care routines through to international policies. Decisions made on fabricated data may therefore be dangerous as well as unethical.
  • Violates Principle 3: Check the facts, and Principle 6: Use with care.
  • Misleading, unethical, and potentially harmful if conclusions are drawn from invented data.

Toolkit relevance: Ignores need for verification and undermines research integrity.

Risk: Academic misconduct and reputational damage; distorts evidence base.

Context: ITT student using GenAI to generate entire lesson plans without comprehension

Why it's inappropriate:

  • AI is an extremely powerful tool, but it is only a tool. Responsibility for its outputs, and for how those outputs are used, rests with the user: you. As a teacher you must consider the needs of your students, their level of study and development, the intended learning outcomes, and the context or subject being studied.
  • Violates Principle 1: You stay in charge.
  • Undermines professional development and puts pupils at risk.

Toolkit relevance: Disregards critical learning about pedagogy, safeguarding, and differentiation.

Risk: Fails to meet programme learning outcomes and misrepresents competence.

Context: MSc student using ChatGPT to generate references for an assignment

Why it's inappropriate:

  • Referencing and citing are an essential part of academic work, as they demonstrate the rigour of the learning or research. Citing sources that you have not read means your interpretations, and therefore your conclusions, are neither rigorous nor trustworthy. Citing sources that GenAI has hallucinated is equivalent to falsifying your data, which is a serious issue.
  • Violates Principle 3: Check the facts and academic integrity expectations.

Toolkit relevance: AI tools often 'hallucinate' references that appear real but are fabricated.

Risk: Academic misconduct, credibility loss, and spreading misinformation.

Context: A student uses GenAI to generate discussion-board posts and assignment responses but doesn't read or think critically about the content before submitting it.

Why it's inappropriate:

  • Contradicts Principle 4: Think critically - the student relies on AI instead of engaging with the ideas.
  • Undermines Principle 5: Act fairly - gaining marks or participation credit without doing the intellectual work.

Toolkit relevance: Fails to evaluate or validate AI outputs; risks passing off AI thinking as human understanding.

Risk: Diminishes genuine learning, misleads staff assessing comprehension, and sets a poor precedent for future academic and professional work.