
Your Critical AI Toolkit

Part 2: Developing your critical and evaluative engagement with AI

The possibilities, limitations, and ethical, social and legal questions which surround AI.


Making informed decisions about AI use requires you to develop a level of critical awareness.

So far, we have shared knowledge about AI, what it can do, and where you will find it, along with some evaluative questions to encourage your own thinking and decision making.

To recap, we have:

  • Looked at how AI generates outputs based on probability, drawing on the data sources it has access to, and asked you to think about the benefits and problems of this.
  • Looked at the principles for use in the York St John University GenAI guidance, and asked you to think about whether these align with your own values.
  • Discussed how to recognise if AI is in tools you are using, and asked you to reflect on your own awareness and experience of AI in the tools in your life.
  • Shared a range of AI tools that have been made to help with different kinds of tasks, and asked you to think about where these tools could be helpful, what their specific limitations are, and how you would develop your skills if AI was something you could not use.

Well done - you have done lots of evaluation and critical thinking already!

To build on the evaluation so far, we are going to explore:

  1. AI's possibilities
  2. The wider current limitations of Generative AI
  3. The ethical, social and legal questions which surround AI

These factors shape the different opinions and feelings towards AI, and the rules that your lecturer/employer/sector/country puts into place, and should inform your own decisions on its use.

AI's possibilities

The development of AI brings a range of possibilities for contributing to societal advancement. Evaluating AI includes understanding what it can do, and what it could do in future.

Task: Using a search engine or an academic research tool, search for articles on AI and your discipline, profession or industry. For instance, search 'AI and primary education', 'AI and healthcare', or 'AI and film editing', and look for news articles or academic research about how AI is being used or could be used in future. What did you find?

AI has the potential to support work in many ways, including:

  • Helping with scientific breakthroughs
  • Helping with insights into and analysis of data
  • Helping with automation and efficiency

Task: Read the UK Government's AI Opportunities Action Plan Government Response (PDF, 253.53 KB). What opportunities do you recognise as relevant to you and your future work? If you plan to live and work in a different country to the UK, can you find out what the relevant Government's position and/or strategy is for AI?

Task: If you teach, support and/or assess learning on a York St John University programme, read the Office for Students' Embracing innovation in higher education: our approach to artificial intelligence. What opportunities and challenges does this bring to your own teaching, learning and assessing?

Understanding and being aware of AI's possibilities will help you to decide what role you will play in its future, and what role it will play in yours. Your knowledge, skills, and values will shape what you decide, and this may change over time.

Critical thinking questions

  1. How do you feel about the possibilities that AI brings to the subject you study, your future industry or profession?
  2. What possible uses can you imagine? To what extent are these positive or negative?

Questions for those who teach, support and/or assess learning

  1. How familiar are you with your discipline's/profession's/industry's relationship with AI? How can you develop this familiarity?
  2. Which of AI's possibilities for your discipline's/profession's/industry's future do your students need to critically engage with?

Limitations of Generative AI

Generative AI tools can be useful but they have limitations that you should be aware of before using them in your studies, in your personal life, at work, or in how you support the learning of others. You will need to decide if those limitations are important to you, what skills you need to develop in order to work with those limitations, and how you will develop those skills.

These limitations are accurate at the time this toolkit was last updated, but may not reflect individual developments in AI tools where these limitations have been reduced, mitigated or eliminated, or newly discovered limitations.

Generative AIs can often make up facts (known as 'hallucination'). AI tools do not all use the same data sets or models, nor do they have the same processing capabilities, and they are changing all the time, so it may be difficult to know whether a tool you are using is going to hallucinate information.

Ensure that you fact-check any information you receive from them.
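
Why does this happen? Generative AI predicts likely next words rather than looking facts up. As a minimal sketch, the Python below uses an invented, hand-written table of word probabilities (nothing like a real chatbot's scale), but it shows how always picking the statistically most likely continuation can assemble a fluent, confident sentence that happens to be false.

    # A deliberately tiny, invented illustration of probabilistic text generation.
    # The 'model' is a hand-written table of next-word probabilities, standing in
    # for the patterns a real system learns from vast amounts of training text.

    model = {
        "The":       {"capital": 0.6, "river": 0.4},
        "capital":   {"of": 1.0},
        "of":        {"Australia": 0.6, "Canada": 0.4},
        "Australia": {"is": 1.0},
        "is":        {"Sydney": 0.7, "Canberra": 0.2, "Toronto": 0.1},
    }

    def generate(start, steps=5):
        """Repeatedly pick the most probable next word - no facts are checked."""
        words = [start]
        for _ in range(steps):
            options = model.get(words[-1])
            if not options:
                break
            words.append(max(options, key=options.get))
        return " ".join(words)

    # Prints 'The capital of Australia is Sydney' - fluent, confident and wrong
    # (the capital is Canberra); it is simply the most likely word pattern here.
    print(generate("The"))

Real models learn far richer patterns and are far more fluent, but the underlying limitation is the same: they model which words tend to follow which, not whether a statement is true.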

Critical thinking questions

  1. How important is having accurate information in responses for the task you are doing? Why?
  2. How confident are you in your fact-checking skills? What reliable resources can you use to check the output from an AI tool?
  3. Do you have time to fact check?
  4. How could you find out if the AI tool you are using might hallucinate?

Questions for those who teach, support and/or assess learning

  1. How might you teach evidence-based or research-based learning that takes AI's hallucination into account?
  2. How can your teaching expand competencies around validation, verification and evaluation to include AI?

You might have different answers to these questions for different tasks and in different situations (at university or at work, for instance).


Some Generative AIs, including ChatGPT, sometimes create false references and sources for the content they produce. It is therefore advisable to obtain the original references and check them yourself to ensure your information is accurate.

News reporting has recently picked up stories of people being caught out presenting fake (potentially AI-generated) references in different situations, and the repercussions for doing so, such as lawyers presenting fake legal citations in courts and governments citing studies that do not exist.

Use search tools designed for the research purpose you have and fully reference your sources.
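
As one illustrative starting point, many academic sources have a DOI (Digital Object Identifier), and you can check whether a DOI a chatbot has given you is actually registered. The Python sketch below queries the public Crossref API (it assumes the third-party requests library is installed); a successful lookup only shows that the DOI exists, not that the source is appropriate or supports the claim it was cited for.

    # An illustrative way to check whether a DOI is registered, using the public
    # Crossref API (requires the third-party 'requests' library). Passing this
    # check only shows the DOI exists - it does not prove the source says what
    # the AI claimed it says, so you still need to read it yourself.
    import requests

    def lookup_doi(doi):
        """Return the registered title for a DOI, or None if Crossref has no record."""
        resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
        if resp.status_code == 404:
            return None
        resp.raise_for_status()
        titles = resp.json()["message"].get("title", [])
        return titles[0] if titles else None

    # A real DOI taken from this page's reading list, then an invented one:
    print(lookup_doi("10.3390/soc15010006"))           # prints the article title
    print(lookup_doi("10.9999/not-a-real-reference"))  # prints None

Library databases, Google Scholar and your referencing guides remain the main ways to verify and correctly cite sources; a quick check like this is only a first filter.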

Critical thinking questions

  1. How important is having real or appropriate sources in responses for the task you are doing? Why?
  2. How could you check that sources are real?
  3. What criteria should you use to determine if a source is appropriate for the task?
  4. How confident are you at referencing sources you use? What information resource can you use to find out how to reference an AI tool as a source?

Questions for those who teach, support and/or assess learning

  1. How might you teach evidence-based or research-based learning that takes AI's fake references into account? Is citing fake sources a new phenomenon caused by AI, or part of a longer history of ways to produce work without integrity?
  2. How can your teaching expand competencies around validation, verification and evaluation to include AI?


Some tools are designed to find real research papers, or draw from real data, but it is not always clear where they are searching and if they have found all content on the topic. AI tools can be a useful starting point but should not be the only place you search for information, especially if your assignment or your work task requires thorough research or has specific expectations on where your evidence needs to come from.

AI tools may also use the prompts and data you enter into them to support their own training and future responses. You must consider carefully what information you put into an AI chatbot and whether that information is appropriate to share with a tool that will process it.

Critical thinking questions

  1. Can you confidently describe what range of sources, information or data you are expected to use for a task? Can an AI tool alone help you find, access, or use what you need?
  2. How can you find out what data, sources and information an AI tool is using for its answers?
  3. What other research tools could you use to complete your task? Why might they be more or less appropriate for that task?
  4. What will the AI tool do with the information you put into it? How can you find out what it will do?
  5. Is there anything you need to consider before putting information into an AI tool? Are there workplace policies or professional standards expectations that guide what information you can and cannot put into an AI chatbot?

Questions for those who teach, support and/or assess learning

  1. How can you build awareness of where AI may pull its information from into your research skills teaching?
  2. How much of your discipline's body of knowledge do your students need to engage with to demonstrate competence? Are you aware of how suitable different AI tools are for finding that relevant knowledge?


Some Generative AIs may have been trained on data that is not up to date. They may also struggle to produce content on niche subjects if these were not well covered in the training data. AI tools do not all use the same data sets, and rules and laws are changing globally to either enable or restrict access to information - you will not always know what an AI tool has been trained on.

Critical thinking questions

  1. How important is having responses based on up-to-date data or information for the task you are doing? Why?
  2. How could you check if the most up-to-date data or information has been used?
  3. How can you find out what data, sources and information an AI tool is using for its answers?

Questions for those who teach, support and/or assess learning

  1. How much is 'enough' research?
  2. How do we present this research so that it is both compelling and reassuring of robustness to readers?
  3. What are processes that a diligent researcher should go through and how can students demonstrate these as part of their assignments?


Generative AIs often struggle to produce original-sounding content - remember, they produce the most probable response based on the patterns learned from their training data. The outputs tend to be well-structured but can be generic and predictable, which can make AI-generated text easier to recognise once you become familiar with how a particular AI tool presents and writes its outputs. Recognising this can cause problems too: people who have learned to write in more formulaic, predictable or standardised ways (such as people who use English as an additional language) may be more likely to be falsely accused of using AI.

Critical thinking questions

  1. How confident are you in your own writing skills in different genres (such as academic, reflective, or business communication) and for different audiences?
  2. For different genres, can you identify what good communication in that genre needs to include? Can you use this knowledge to evaluate where AI generated text is communicating more or less successfully?
  3. What does originality mean to you?
  4. When your lecturer or your employer expects to see your original work, do you understand what that means? How can you find out what they expect? Do they have guidance on using AI for writing in a particular assignment or task?

Questions for those who teach, support and/or assess learning

  1. What does successful writing look like in your discipline and in your assignments, and how should your students demonstrate this as a competency?
  2. Can you clearly describe to your students what originality means for an assignment?
  3. How does AI write for the task you have designed [where is it strong/weak]?
  4. How might you use AI's writing outputs as a teaching tool to develop students' communication competencies?


Content produced by Generative AIs may lack clear evidence of reasoning, understanding of context, and critical thinking. They may struggle to synthesise multiple sources and logically construct an argument. They cannot understand the language and concepts in the data they are trained on, so may present inaccuracies or misunderstandings confidently. 

Critical thinking is a key skill you are expected to exhibit in your university work, in your employment, and in your personal and social interactions with the world.

Critical thinking questions

  1. How do you know how accurate this content or AI summary is? What might the AI have missed?
  2. Is the content or AI summary relevant to you and your work?
  3. How certain are you that there are no 'hallucinations' in an AI's responses? How might you check?
  4. What assumptions has the AI made that you might disagree with? What evidence or experience contributes to your disagreement?
  5. What nuances, interpretations, or alternative perspectives have you picked up on that the AI has not?

Questions for those who teach, support and/or assess learning

  1. What does critical thinking look like in your discipline and how should your students demonstrate this as a competency?
  2. How does AI answer a critical thinking task you have designed [where is it strong/weak]?
  3. How might you use AI's critical writing outputs as a teaching tool to develop students' critical thinking and communication competencies?


Generative AI has no personal experience so it cannot be 'reflective' of its own experiences. It also does not have your memories or experiences, nor will it understand them if you give your memories and experiences to it.

At university, in your assignments, and in work and professional settings, you need to be able to reflect on your experiences. 'Reflecting' means thinking about your experience, and asking questions that help you understand what happened, understand why it happened, and consider what you learned from it. This is 'reflective practice', and when you write it down, it is 'reflective writing'. In your academic work, you will use a reflective model (a structure) that helps you to write about your experiences in a specific way, and you will connect your experiences to theory and to academic and professional literature. For example, you might reflect on a success in a group project, or a challenge you had in your time management on a deadline. Learn more about reflection in our Being reflective resources.

AI does not know what you experienced (so will make it up), how you experienced it (so will guess), what you felt (so will guess which adjectives describe it), or what you will do in future (so will guess), nor can it understand your experience if you describe it.

Learning is the accumulation of your experiences (discussions you have, skills you practise and develop, for example), assimilated and stored in long-term memory. Our ability to access these long-term memories, apply them to new experiences and extrapolate new ideas and theories is the kind of learning that defines us as humans and remains distinct from the current capabilities of AI. This process of learning (and even learning how to learn) is central to the value of a degree and cannot be handed over to an AI.

Critical thinking questions

  1. If AI has no personal experiences of its own and cannot access all your own experiences, to what extent is it a helpful tool for your own reflective writing, or work that needs to use your own experiences? What do you need to contribute to the reflection and writing process?
  2. If AI has no personal experiences of its own and cannot access all your own experiences, to what extent is AI helpful for writing personalised CVs, job applications and cover letters? What do you need to contribute to the reflection and writing process?

Questions for those who teach, support and/or assess learning

  1. How does AI answer a reflective assessment you have designed? What does its output tell you about the lived experience data it might have had access to?
  2. How might you use AI-generated reflective writing examples as teaching tools to improve your students' understanding of what you are assessing?
  3. If AI has no personal experiences of its own and cannot access all your students' experiences, how might you write more purposeful and explanatory AI guidance for assignments that need personal reflection?


Wider ethical, social and legal questions about AI

Lots of AI tools are marketed as solutions to many of our needs, and people look for ways that AI can be a solution to a variety of problems. This includes as a replacement for human-provided services.

According to news reporting, there were 16.7 million TikTok videos in March 2025 about using ChatGPT as a therapist (Mussen, 2025). We would not recommend using multi-purpose AI tools like Copilot, ChatGPT or Gemini as a replacement for therapy where you would talk to a person (or without specific instruction from a healthcare professional), as they are not designed to give you safe and effective care. We would always recommend that you seek support from trusted services such as York St John University's Wellbeing teams (for York St John University students), Care First (for York St John University staff), the NHS, and other mental health and health organisations. Trusted wellbeing services like these may provide you with AI tools for specific purposes, such as a chat tool they have built to help you make an appointment with a mental health professional or choose self-help resources written by mental health professionals - this is different to using ChatGPT or Copilot for therapy.

Lots of industries are discussing the impacts of AI on human-provided services including the legal profession (Hyde, 2025), the arts (Lewis, 2025; Muir, 2025), and healthcare (The Council of Europe, 2025).

Critical thinking questions

  1. Where might replacing human services with AI be beneficial?
  2. What problems, downsides or dangers could there be?
  3. What are the possibilities and the concerns in your industry/sector/discipline?
  4. Who is profiting from this use of AI?

Questions for those who teach, support and/or assess learning

  1. What is the agenda of AI developers and their companies?
  2. What is lost and gained in using these services?

Suggested reading

Hyde, J. (2025) AI getting more accurate but lawyers still needed, finds new report, Law Gazette. Available at: https://www.lawgazette.co.uk/news/ai-getting-more-accurate-but-lawyers-still-needed-finds-new-report/5122419.article (Accessed: 6 June 2025).

Lewis, J. (2025) Why AI-produced voiceovers raise ethical and quality concerns, Broadcast. Available at: https://www.broadcastnow.co.uk/industry-opinions/why-ai-produced-voiceovers-raise-ethical-and-quality-concerns/5205698.article (Accessed: 6 June 2025).

Muir, E. (2025) Performing arts bosses protest government's AI plans, The Independent. Available at: https://www.the-independent.com/arts-entertainment/theatre-dance/news/performing-arts-ai-government-copyright-b2717098.html (Accessed: 6 June 2025).

Mussen, M. (2025) I used AI for therapy - here’s why real therapists say it's an awful idea, The Standard. Available at: https://www.standard.co.uk/lifestyle/ai-therapy-chatgpt-gemini-woebot-b1225633.html (Accessed: 4 June 2025).

The Council of Europe (2025) AI in healthcare: balancing innovation with human rights. Available at: https://www.coe.int/en/web/portal/-/ai-in-healthcare-balancing-innovation-with-human-rights (Accessed: 6 June 2025).

Generative AIs tend to perpetuate biases, based on the data they have been trained on. They normally use data from the internet, which is likely to include instances of racism, sexism, homophobia, transphobia, classism, xenophobia, and other biases; as a result, these biases can emerge in their outputs (read more about specific examples of bias in the Suggested Reading in this section). Since many of their data sources relate to the Global North / Western world and the documented experiences of individuals living there, they may reflect this more in their responses than the experiences of those from the Global South or from marginalised groups. What is documented, how it is documented, and what is then made available to AI for its training can significantly influence the patterns it has learned about the world.

Generative AI is also trained to be overly polite, supportive and encouraging, which includes not critiquing your sources, agreeing with what you say and always offering a follow-up prompt. This is an intended bias designed to keep you using and trusting Generative AI (and it can make hallucinations worse, as the model prioritises a pleasing answer over an accurate one).

Critical thinking questions

  1. How important is recognising bias to your values?
  2. How important is recognising bias in your subject/discipline, to your industry/sector, to your workplace?
  3. How can you develop your own skills to identify bias in information?
  4. How might bias in AI outputs influence or impact the task you are using it for?
  5. Can you find out more about what data an AI tool has 'learned' from?

Questions for those who teach, support and/or assess learning

  1. Given that these biases stem from the training data (largely the internet) and reflect wider societal biases rather than Generative AI alone, what can we do to address them?
  2. What prompts can you use to overcome the algorithmic biases?
  3. To what extent is the social impact of bias revealed in GenAI outputs important for your own disciplinary teaching?

Suggested reading

Buolamwini, J. and Gebru, T. (2018) 'Gender shades: intersectional accuracy disparities in commercial gender classification', in Proceedings of Machine Learning Research, pp. 1–15. Available at: https://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf.

Challen, R. et al. (2019) 'Artificial intelligence, bias and clinical safety', BMJ Quality & Safety, 28(3), pp. 231–237. Available at: https://doi.org/10.1136/bmjqs-2018-008370.

Ferrer, X. et al. (2021) 'Bias and discrimination in AI: a cross-disciplinary perspective', IEEE Technology and Society Magazine, 40(2), pp. 72–80. Available at: https://doi.org/10.1109/MTS.2021.3056293.

Fountain, J.E. (2022) 'The moon, the ghetto and artificial intelligence: Reducing systemic racism in computational algorithms', Government Information Quarterly, 39(2), p. 101645. Available at: https://doi.org/10.1016/j.giq.2021.101645.

Gichoya, J.W. et al. (2023) 'AI pitfalls and what not to do: mitigating bias in AI', The British Journal of Radiology, 96(1150), p. 20230023. Available at: https://doi.org/10.1259/bjr.20230023.

Hall, P. and Ellis, D. (2023) 'A systematic review of socio-technical gender bias in AI algorithms', Online Information Review, 47(7), pp. 1264–1279. Available at: https://doi.org/10.1108/OIR-08-2021-0452.

Tacheva, J. and Ramasubramanian, S. (2023) 'AI Empire: Unraveling the interlocking systems of oppression in generative AI's global order', Big Data & Society, 10(2), pp. 1–13. Available at: https://doi.org/10.1177/20539517231219241.

Weber, S. (2025) 'Can we build AI that does not harm queer people?', Communications of the ACM, 68(05), pp. 21–23. Available at: https://doi.org/10.1145/3720537.

Wyer, S. and Black, S. (2025) 'Algorithmic bias: sexualized violence against women in GPT-3 models', AI and Ethics, 5(3), pp. 3293–3310. Available at: https://doi.org/10.1007/s43681-024-00641-0.

Zalnieriute, M. and Cutts, T. (2022) 'How AI and New Technologies Reinforce Systemic Racism'. 54th Session of the United Nations Human Rights Council. Available at: https://www.ohchr.org/sites/default/files/documents/hrbodies/hrcouncil/advisorycommittee/study-advancement-racial-justice/2022-10-26/HRC-Adv-comm-Racial-Justice-zalnieriute-cutts.pdf.

Educators and other researchers have expressed concerns about the impacts of using AI tools on the process of learning and the 'cognitive debt' that is generated, a term used to describe the cost to cognitive development that may accrue when tasks are handed over to tools like AI chatbots (Kosmyna et al., 2025; Marrone and Kovanovic, 2025). This research is part of a longer history of research into cognitive offloading, the concept which describes the process of reducing demands on the brain by moving certain tasks to external tools or systems (Gerlich, 2025; Weis and Wiese, 2019).

AI's ability to complete tasks, and potentially replace the process of learning which is involved in completing tasks, raises questions about what we are learning, what learning is gained or lost, and what impacts this might have for the future.

Critical thinking questions

  1. What might the long-term cognitive consequences for your learning be of relying on AI tools for everyday decision-making and memory tasks?
  2. How can you balance the benefits of cognitive offloading through AI with the need to maintain critical thinking and problem-solving skills?
  3. When you use AI to help with studying or writing, how do you decide which parts to do yourself and which to delegate?
  4. Can relying on AI tools make it harder to recognise when you are misunderstanding something? Why or why not?

Questions for those who teach, support and/or assess learning

  1. In what ways does cognitive offloading already occur in your students' learning, or in your sector's/discipline's/industry's practices? To what extent is this beneficial or challenging?
  2. In what ways might cognitive debt accumulate when students use AI to complete academic tasks, and how can this debt be identified or mitigated?
  3. What is the most appropriate or effective sequence of activities in which to introduce AI which will enhance learning?

Suggested reading

Gerlich, M. (2025) 'AI tools in society: impacts on cognitive offloading and the future of critical thinking', Societies, 15(1), p. 6. Available at: https://doi.org/10.3390/soc15010006.

Kosmyna, N. et al. (2025) 'Your brain on ChatGPT: accumulation of cognitive debt when using an AI assistant for essay writing task'. arXiv. Available at: https://doi.org/10.48550/arXiv.2506.08872.

Marrone, R. and Kovanovic, V. (2025) MIT researchers say using ChatGPT can rot your brain. The truth is a little more complicated, The Conversation. Available at: http://theconversation.com/mit-researchers-say-using-chatgpt-can-rot-your-brain-the-truth-is-a-little-more-complicated-259450 (Accessed: 31 July 2025).

Weis, P.P. and Wiese, E. (2019) 'Using tools to help us think: actual but also believed reliability modulates cognitive offloading', Human Factors: The Journal of the Human Factors and Ergonomics Society, 61(2), pp. 243–254. Available at: https://doi.org/10.1177/0018720818797553.

Generative AIs require large amounts of computer processing power to operate, which demands vast amounts of energy to run and water to cool the data centres involved.

In 2025, OpenAI's CEO acknowledged that users saying 'please' and 'thank you' to ChatGPT costs millions of dollars in energy alone. Other research estimates that generating two images uses the energy equivalent of a full smartphone charge, and that every 10-50 responses from ChatGPT uses roughly a bottle of water for server cooling (although AI developers contest this, claiming significantly lower energy and water consumption).

Conversely, some researchers argue that AI's environmental impact is more complex and nuanced, with AI chat tools being better or worse on different environmental measures (such as water and energy) than other digital tools and everyday tasks or behaviours, including an internet search, video calling, making coffee, and printing. Ethical and sustainable use of AI therefore needs to be weighed alongside the environmental impacts of the rest of your everyday life.
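
One way to engage critically with such claims is to do the arithmetic yourself. The Python sketch below is a rough back-of-envelope calculation built entirely on illustrative assumptions (the contested figures quoted above, a 500 ml bottle, a roughly 14 Wh phone battery, and an imagined 20 responses per day); the point is the method of making assumptions explicit and scaling them up, not the specific totals.

    # A rough back-of-envelope sketch of the contested estimates quoted above.
    # Every number here is an illustrative assumption, not a measured value -
    # swap in figures from the reports and impact statements you find.

    BOTTLE_ML = 500.0            # assumed size of 'a bottle of water'
    RESPONSES_PER_BOTTLE = 30    # midpoint of the quoted 10-50 range
    PHONE_CHARGE_WH = 14.0       # rough capacity of a smartphone battery
    IMAGES_PER_CHARGE = 2        # the quoted image-generation estimate

    water_per_response_ml = BOTTLE_ML / RESPONSES_PER_BOTTLE
    energy_per_image_wh = PHONE_CHARGE_WH / IMAGES_PER_CHARGE

    # An imagined student sending 20 chatbot prompts a day for a year:
    daily_responses, days = 20, 365
    yearly_water_litres = daily_responses * days * water_per_response_ml / 1000

    print(f"Water per response: ~{water_per_response_ml:.0f} ml")        # ~17 ml
    print(f"Energy per image:   ~{energy_per_image_wh:.0f} Wh")          # ~7 Wh
    print(f"Cooling water per year: ~{yearly_water_litres:.0f} litres")  # ~122 l

Comparing such a result against other daily activities (a coffee, a video call, an internet search) is exactly the kind of nuance the researchers above call for.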

Much current research into AI's ecological impacts is individualistic, focusing on impacts calculated from individual use and decision-making, or on solutions and climate mitigations based on what individual people can do, should do, or want to do. Ecological and environmental activism and scholarship has long explored the challenges of individualistic versus structural views on climate change and other aspects of sustainability, environment and ecology (Brownstein, Kelly and Madva, 2022).

Critical thinking questions

  1. Is the environmental impact of AI important to your values? Is it more or less important than other values, or more or less important to you than AI's benefits?
  2. Is the environmental impact of AI important to your subject/discipline, to your industry/sector, to your workplace?
  3. How can you find out the environmental impact of a specific AI tool? Can you find environmental impact statements or reports from the organisations that create the AI tool?

Questions for those who teach, support and/or assess learning

  1. How can we ethically incorporate AI into our work through offsetting and challenging outputs and practices?
  2. How can you make the long-term and short-term environmental costs visible to students in an accessible way?

Suggested reading

'Achieving net zero emissions with machine learning: the challenge ahead' (2022) Nature Machine Intelligence, 4(8), pp. 661–662. Available at: https://doi.org/10.1038/s42256-022-00529-w.

Boyd, K. (no date) Ethics & LLMs: sustainability. Available at: https://drkarenboyd.com/blog/ethics-amp-llms-sustainability (Accessed: 20 June 2025).

Brownstein, M., Kelly, D. and Madva, A. (2022) 'Individualism, structuralism, and climate change', Environmental Communication, 16(2), pp. 269–288. Available at: https://doi.org/10.1080/17524032.2021.1982745.

Crawford, K. (2024) 'Generative AI's environmental costs are soaring - and mostly secret', Nature, 626(8000), p. 693. Available at: https://doi.org/10.1038/d41586-024-00478-x.

Ren, S. and Wierman, A. (2024) 'The uneven distribution of ai's environmental impacts', Harvard Business Review, 15 July. Available at: https://hbr.org/2024/07/the-uneven-distribution-of-ais-environmental-impacts (Accessed: 6 June 2025).

UN Environment Programme (2024) AI has an environmental problem. Here's what the world can do about that. Available at: https://www.unep.org/news-and-stories/story/ai-has-environmental-problem-heres-what-world-can-do-about (Accessed: 6 June 2025).

Zewe, A. (2025) Explained: Generative AI’s environmental impact, MIT News | Massachusetts Institute of Technology. Available at: https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117 (Accessed: 6 June 2025).

When tools and products are provided to you by companies for 'free', you should consider how they are able to provide them without asking for money. 'Free' products are often provided in return for something from you. A good example is signing up to a company mailing list to receive a discount: in return, you give the company permission to send you marketing information.

When you create 'free' accounts with any online tool, your data is usually what the company uses in return for the access they provide. This could be:

  • Your email address to send you marketing
  • Your demographic data for consumer analysis
  • Information you put into the website or product

Many AI tools use the information you put into them to train and enhance the AI model. Some may add your inputs to their repository of training data. Some AI tools may offer 'incognito' or 'private' chat options.

Before putting any information into an online product like an AI tool, you need to take responsibility for your own data privacy and find out what the company does with the information you put into it, and use that knowledge to guide your use.

Critical thinking questions

  1. How aware are you of the data you give away to companies in exchange for goods and services? What are you and aren't you comfortable giving away?
  2. What information or data about you would you not want to be available to other people? How does this impact or change how you use online products, tools and apps?

Questions for those who teach, support and/or assess learning

  1. Do you understand the data privacy conditions of digital tools you are using in your classroom? How can you find this out?
  2. What can access to 'free' digital tools tell you about the wider context of different social/political/industrial agendas?
  3. What ethical principles need to guide your approach to selecting teaching tools which require students to hand over their data in order to access them? Which teams at your institution (such as ITS and the Governance teams at York St John University) can give you expert support in choosing tools to use?

Suggested reading

Bak, M. et al. (2022) 'You can’t have AI both ways: balancing health data privacy and access fairly', Frontiers in Genetics, 13, p. 929453. Available at: https://doi.org/10.3389/fgene.2022.929453.

Elliott, D. and Soifer, E. (2022) 'AI technologies, privacy, and security', Frontiers in Artificial Intelligence, 5, p. 826737. Available at: https://doi.org/10.3389/frai.2022.826737.

Koutsounia, A. (2025) 'First practice guidance for AI in social work warns of bias and data privacy risks', Community Care, 11 April. Available at: https://www.communitycare.co.uk/2025/04/11/first-practice-guidance-ai-social-work-bias-data-privacy/ (Accessed: 30 July 2025).

Martin, K.D. and Zimmermann, J. (2024) 'Artificial intelligence and its implications for data privacy', Current Opinion in Psychology, 58, p. 101829. Available at: https://doi.org/10.1016/j.copsyc.2024.101829.

Perez, S. (2025) 'Sam Altman warns there's no legal confidentiality when using ChatGPT as a therapist', TechCrunch, 25 July. Available at: https://techcrunch.com/2025/07/25/sam-altman-warns-theres-no-legal-confidentiality-when-using-chatgpt-as-a-therapist/ (Accessed: 30 July 2025).

Rahman-Jones, I. (2025) 'Meta AI searches made public - but do all its users realise?', BBC News, 13 June. Available at: https://www.bbc.com/news/articles/c0573lj172jo (Accessed: 30 July 2025).

There are ethical issues with how some Generative AI tools are trained, specifically the human labour involved. These include:

  • Low worker pay
  • Poor labour conditions
  • Trauma from content moderation

These workers' rights concerns are not new but part of a long and connected history of global exploitation of workers in how products are created, tested, and sold.

Critical thinking questions

  1. Are workers' rights in AI's supply chain important to your values? Are they more or less important than other values, or more or less important to you than AI's benefits?
  2. Are workers' rights in AI's supply chain important to your subject/discipline, to your industry/sector, to your workplace?
  3. How can you find out workers' rights and working conditions in the supply chain of a specific AI tool? Can you find reports, investigations, regulations, for instance, that explore this?

Questions for those who teach, support and/or assess learning

  1. Do labour rights and working conditions feature in your disciplinary teaching? How might you need to update your teaching to account for AI?

Suggested reading

Bartholomew, J. (2023) Q&A: Uncovering the labor exploitation that powers AI, Columbia Journalism Review. Available at: https://www.cjr.org/tow_center/qa-uncovering-the-labor-exploitation-that-powers-ai.php (Accessed: 6 June 2025).

Fraz, A. (2023) 'Hidden workers powering AI', Artificial intelligence, 8 March. Available at: https://nationalcentreforai.jiscinvolve.org/wp/2023/03/08/hidden-workers-powering-ai/ (Accessed: 6 June 2025).

Perrigo, B. (2023) Exclusive: the $2 per hour workers who made ChatGPT safer, TIME. Available at: https://time.com/6247678/openai-chatgpt-kenya-workers/ (Accessed: 6 June 2025).

Pogrebna, G. (2024) AI is a multi-billion dollar industry. It’s underpinned by an invisible and exploited workforce, The Conversation. Available at: http://theconversation.com/ai-is-a-multi-billion-dollar-industry-its-underpinned-by-an-invisible-and-exploited-workforce-240568 (Accessed: 6 June 2025).

Rani, U. and Dhir, R.K. (2024) 'The Artificial Intelligence illusion: How invisible workers fuel the 'automated' economy', International Labour Organisation, 10 December. Available at: https://www.ilo.org/resource/article/artificial-intelligence-illusion-how-invisible-workers-fuel-automated.

Regilme, S.S.F. (2024) 'Artificial intelligence colonialism: environmental damage, labor exploitation, and human rights crises in the global south', SAIS Review of International Affairs, 44(2), pp. 75–92. Available at: https://doi.org/10.1353/sais.2024.a950958.

The growth of AI continues a long history of concerns about digital poverty, digital exclusion, and the digital divide - concepts which aim to describe and analyse gaps and inequalities that arise from people's differing access to and use of digital technology to participate in society.

AI could widen the digital divide through:

  • Disparities in cost of access to different AI tools
  • Disparities in skills to use AI tools
  • Disparities in access to infrastructure to access AI tools (internet, devices)
  • Disparities in employer and sector policies

Critical thinking questions

  1. Are technological inequalities important to your values? Are they more or less important than other values, or more or less important to you than AI's benefits?
  2. Are technological inequalities important to your subject/discipline, to your industry/sector, to your workplace?
  3. What technological inequalities exist in your country and how could AI reduce or increase these?

Questions for those who teach, support and/or assess learning

  1. Does technological inequality feature in your disciplinary teaching? How might you need to update your teaching to account for AI?
  2. Can you describe the global technological inequalities that might impact your students' technology competency in the classroom?
  3. What is an acceptable and reasonable baseline to assume in your teaching?
  4. How do you signpost to support to meet and exceed this baseline beyond your own teaching and learning activities? 

Suggested reading

Acemoglu, D. (2025) 'The simple macroeconomics of AI', Economic Policy, 40(121), pp. 13–58. Available at: https://doi.org/10.1093/epolic/eiae042.

Bentley, S.V. et al. (2024) 'The digital divide in action: how experiences of digital technology shape future relationships with artificial intelligence', AI and Ethics, 4(4), pp. 901–915. Available at: https://doi.org/10.1007/s43681-024-00452-3.

Bircan, T. and Özbilgin, M.F. (2025) 'Unmasking inequalities of the code: Disentangling the nexus of AI and inequality', Technological Forecasting and Social Change, 211, p. 123925. Available at: https://doi.org/10.1016/j.techfore.2024.123925.

Humlum, A. and Vestergaard, E. (2025) 'The unequal adoption of ChatGPT exacerbates existing inequalities among workers', Proceedings of the National Academy of Sciences, 122(1). Available at: https://doi.org/10.1073/pnas.2414972121.

McKean, P. (2023) Without intervention, AI could widen the digital divide for students, Jisc. Available at: https://beta.jisc.ac.uk/blog/without-intervention-ai-could-widen-the-digital-divide-for-students (Accessed: 6 June 2025).

Stone, E. (2024) AI shifts the goalposts of digital inclusion | Joseph Rowntree Foundation. Available at: https://www.jrf.org.uk/ai-for-public-good/ai-shifts-the-goalposts-of-digital-inclusion (Accessed: 6 June 2025).

United Nations (2025) AI's $4.8 trillion future: UN warns of widening digital divide without urgent action | UN News. Available at: https://news.un.org/en/story/2025/04/1161826 (Accessed: 6 June 2025).

There are concerns over copyright as Generative AI tools base their responses on the training data used to develop the models.

This data includes original content from individuals who may not have given permission for their work to be used, or whose work has been used against the copyright licence terms set on it. This has led some content creators and hosting services to sue the companies behind Generative AI tools, such as OpenAI, the creator of ChatGPT. In response to claims put forward by The New York Times, OpenAI stated that it would be impossible to train AI tools without the use of copyrighted content, and that the use of this content constitutes fair use because it creates something original and transformative.

You need to think carefully about using third-party material in Generative AI tools. Third-party material is any work that is not your own. Some third-party material may be copyrighted, so it is important to check the copyright or licence terms of any material you use.

Considerations when inputting copyrighted material into Generative AI tools:

  • If you have been granted permission to use copyrighted material in the past, this doesn't necessarily mean it can be inputted into AI tools. Check the terms of how the work can be used; if you're not sure, contact the copyright owner for clarity and further permissions.
  • It's important to keep within the legal limits of copyright. In the UK, fair dealing permits only limited use of a work, and copyright licences (such as the CLA Licence) typically allow up to 5% of a work to be copied without further clearance. Keep these limits and geographical copyright rules in mind when using AI tools - for example, full texts should not be inputted into Generative AI tools.
  • Open Access content will have an open licence (such as a Creative Commons licence). Some open licences don't allow modifications or commercial use. It's important to understand what the licence allows you to do and to consider future uses and developments of AI tools.

In addition to copyrighted material:

  • Consider the content and identifiable information - don't put personal information about yourself or others into AI tools.

Critical thinking questions

  1. Consider how you would feel if something you created was used without your knowledge and available to others. While it is difficult to identify the original creator of content or its copyright status, think about how you intend to use the work - if it was your work being used, would you be okay with its use?
  2. Are concepts like intellectual property and creative ownership important to your values? How important are they to your discipline, industry or profession?
  3. If a large amount of content was returned, do you require all of it or can you identify which parts can support or develop your original ideas?

Questions for those who teach, support and/or assess learning

  1. How can AI tools be used without inputting copyrighted materials, such as uploading your own notes instead, to support learning and research?

Suggested reading

Appel, G., Neelbauer, J. and Schweidel, D.A. (2023) 'Generative AI has an intellectual property problem', Harvard Business Review, 7 April. Available at: https://hbr.org/2023/04/generative-ai-has-an-intellectual-property-problem (Accessed: 6 June 2025).

Laney, D.B. (2025) Copyright or copywrong? AI's intellectual property conundrum, Forbes. Available at: https://www.forbes.com/sites/douglaslaney/2025/02/11/copyright-or-copywrong-ais-intellectual-property-paradox/ (Accessed: 6 June 2025).

Quintais, J.P. (2025) 'Generative AI, copyright and the AI act', Computer Law & Security Review, 56, p. 106107. Available at: https://doi.org/10.1016/j.clsr.2025.106107.

World Intellectual Property Organization (2024) 'Generative AI: navigating intellectual property'. Available at: https://www.wipo.int/documents/d/frontier-technologies/docs-en-pdf-generative-ai-factsheet.pdf


Summary

Part 2 has given you more knowledge and questions to consider, and this might feel overwhelming. It is important to remember that you are already making critical and evaluative decisions all the time in all aspects of your life - this toolkit simply asks you to apply those skills explicitly to AI. Thinking through these challenging questions is hard work, but it contributes to you developing informed and critically engaged thoughts and attitudes.

You are an individual and:

  • Each benefit and possibility of AI will be more or less important to you, and will impact you differently depending on your skills and your contexts
  • Each limitation of AI will be more or less important to you, and will impact you differently depending on your skills and your contexts
  • Each ethical, social and legal concern will be more or less important to you, and will impact you differently depending on your contexts

In each situation, you need to use your values and the information you find to decide whether AI is the best overall choice.