Teacher View Active - you can see mark schemes, examiner notes and discussion prompts. Students see only questions and answer tips.
AI Series - Exam Practice

AI Ethics: Exam-Style Questions

Eight original questions written to match the style and mark allocations of GCSE and A-Level Computer Science ethics papers. Work through them in exam conditions, then reveal the mark scheme.

GCSE and A-Level
8 questions - allow 70 minutes in exam conditions
Free, no login required
Teacher view reveals mark schemes, examiner notes and discussion prompts across all pages.
Quick Questions
Questions 1-4: 2- and 4-mark questions. These test accurate recall and the ability to develop a point with an example or explanation.
Question 1
AI Bias
2 marks
State two ways in which a training dataset can introduce bias into an AI system.
How to answer a "State" question: No explanation needed - one clear, accurate point per mark. Do not pad with extra detail; examiners award the mark when the point is made.
Mark Scheme - 2 marks
Award 1 mark for each valid point up to 2. Accept any two of the following; a short classroom demonstration of sampling bias follows the mark scheme.
Historical bias - the dataset reflects past human decisions that were themselves biased (for example, historical hiring data that systematically favoured one group).
Sampling bias - the dataset does not represent all groups equally (for example, a facial recognition dataset trained mainly on lighter-skinned faces).
Labelling bias - human annotators apply their own unconscious biases when labelling or categorising training data.
Selection bias - data is collected in a way that excludes or underrepresents certain groups or situations.
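For teachers who want to demonstrate the mechanism rather than just describe it, here is a minimal sketch in plain Python (all data and figures are synthetic and hypothetical) showing how sampling bias alone - with no "bad code" - produces a model that performs worse for the under-represented group:

```python
import random
random.seed(0)

# Synthetic data: each person has one feature score and a true yes/no label.
# The two groups differ slightly: Group B's scores sit lower than Group A's.
def make_group(n, positive_centre):
    people = []
    for _ in range(n):
        label = random.random() < 0.5
        centre = positive_centre if label else positive_centre - 0.3
        people.append((random.gauss(centre, 0.05), label))
    return people

group_a = make_group(950, 0.7)  # well represented in the training data
group_b = make_group(50, 0.4)   # under-represented: sampling bias

# "Training": pick the single threshold that maximises accuracy on the pooled
# data. Because Group A dominates the pool, the threshold is tuned almost
# entirely to Group A.
train = group_a + group_b
threshold = max((t / 100 for t in range(100)),
                key=lambda t: sum((score >= t) == label for score, label in train))

def accuracy(people, t):
    return sum((score >= t) == label for score, label in people) / len(people)

print(f"Group A accuracy: {accuracy(group_a, threshold):.0%}")  # close to 100%
print(f"Group B accuracy: {accuracy(group_b, threshold):.0%}")  # close to 50%
```

Note that the code never refers to group membership at all - which is exactly the point for Question 1: the bias emerges from what the data does and does not contain, not from the program logic.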
Common mistakes
X "The AI is biased because it was programmed badly" - this confuses deliberate programming errors with training data bias. Direct students to the concept of bias emerging from data, not from code.
X Vague answers such as "not enough data" without specifying what is missing from the data. Prompt: which groups or situations are under-represented?
Discussion prompt

Ask students: if a company trains a spam filter on emails from 2005, what kinds of modern spam might it miss - and why? Use this to illustrate how historical bias and selection bias can both be present at once.
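If the class has Python available, the prompt can also be run as a live demo. This toy keyword filter is entirely hypothetical - the word list and emails are invented - but it shows why a filter "trained" on 2005 spam vocabulary misses modern spam outright:

```python
# Toy filter: flag an email as spam if it contains any word that appeared
# in the (hypothetical) 2005 training spam.
SPAM_WORDS_2005 = {"viagra", "lottery", "inheritance", "ringtone", "refinance"}

def is_spam(email: str) -> bool:
    return any(word in SPAM_WORDS_2005 for word in email.lower().split())

modern_spam = [
    "claim your free crypto airdrop now",
    "your parcel is held - pay the customs fee via this link",
    "exclusive nft drop for verified followers only",
]

for email in modern_spam:
    print(is_spam(email), "-", email)  # prints False for every one
```

Every modern example sails through, because the vocabulary of spam has moved on: the training data is both historical (it reflects 2005) and selective (it contains no crypto or delivery scams at all).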

Question 2
Fairness
2 marks
Give two reasons why it is difficult to guarantee that an AI system will treat all groups of people fairly.
How to answer a "Give" question: Like "State" - one identifiable reason per mark. These should be distinct reasons, not the same point rephrased.
Mark Scheme - 2 marks
Award 1 mark for each valid reason up to 2.
Training data may not represent all groups equally, so the model learns to perform better for some groups than others.
Different mathematical definitions of fairness can conflict - for example, equal accuracy rates across groups and equal false positive rates cannot always both be achieved at the same time (a worked numeric example follows this mark scheme).
Society itself contains historical inequalities; an AI trained on real-world data will reflect and potentially amplify those inequalities.
Fairness is context-dependent and there is no single agreed definition of what a fair outcome is.
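The worked example promised above. The figures are invented for illustration, but they show two groups receiving identical overall accuracy from the same hypothetical loan model while their false positive rates differ, because the groups have different base rates:

```python
# Hypothetical confusion-matrix counts for two groups of 100 loan applicants.
# Group A contains 50 genuinely creditworthy applicants; Group B contains 20.
groups = {
    "A": {"tp": 45, "fn": 5, "fp": 5, "tn": 45},   # base rate 50%
    "B": {"tp": 14, "fn": 6, "fp": 4, "tn": 76},   # base rate 20%
}

for name, m in groups.items():
    total = sum(m.values())
    accuracy = (m["tp"] + m["tn"]) / total      # correct decisions overall
    fpr = m["fp"] / (m["fp"] + m["tn"])         # non-creditworthy wrongly approved
    print(f"Group {name}: accuracy {accuracy:.0%}, false positive rate {fpr:.0%}")

# Output: both groups get 90% accuracy, but the false positive rate
# is 10% for Group A and 5% for Group B.
```

A single instance is not a proof, but it makes the exam point concrete: one metric can be equal while another diverges, and formal results in the fairness literature show that when base rates differ between groups, several common fairness criteria cannot in general all be satisfied at once.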
Common mistakes
X Students often give one reason twice, phrased in two different ways. Encourage them to ask: "are these the same point?"
X "AI cannot be fair because it has no feelings" - this misunderstands the question. Fairness here means statistical equity across groups, not empathy.
Discussion prompt

Present this scenario: an AI loan approval system is 90% accurate for applicants from Group A and 70% accurate for Group B. Is that fair? What would making it equally accurate for both groups require? Use this to surface the tension between group fairness and individual accuracy.

Question 3
Facial Recognition
4 marks
Describe two ethical concerns raised by the use of facial recognition technology in public spaces. [4 marks]
How to answer a "Describe" question (4 marks / 2 concerns): Each concern is worth 2 marks. The first mark identifies the concern; the second develops it with an explanation of why it is ethically problematic. Do not just name the concern - explain the impact or consequence.
Mark Scheme - 4 marks (2 x 2)
Privacy [2]: Members of the public can be identified and their movements tracked without their knowledge or consent; this constitutes a significant intrusion on the right to privacy and anonymity in public spaces.
Misidentification [2]: Facial recognition systems have higher error rates for certain groups, particularly darker-skinned faces and women; an innocent person could be incorrectly matched to a suspect and wrongly detained.
Chilling effect [2]: Knowing they are being identified may deter people from attending lawful protests or expressing legitimate dissent in public; this has implications for freedom of assembly.
Function creep [2]: Data collected for one stated purpose (such as crime prevention) may later be used for other purposes without consent, for example tracking political activity or immigration enforcement.
Accept any two well-developed concerns. Both marks for a concern require identification and explanation of the ethical impact.
Common mistakes
X Listing two concerns without developing either: "It invades privacy and it can be wrong." This would score at most 2 marks (1 for each concern identified, 0 for development) rather than 4. Push students to always ask: "why is this ethically significant?"
X Repeating the same underlying concern twice with different labels, for example "privacy" and "surveillance." These are the same point and would only score once.
What a full-mark answer does

A student scoring 4/4 identifies two distinct concerns, names them precisely, then explains the specific ethical harm: who is affected, how, and why that constitutes an ethical problem rather than just an inconvenience.

Discussion prompt

The Metropolitan Police has deployed live facial recognition cameras in London since 2020, and such deployments are now routine. Ask: should the public be notified when a facial recognition camera is in use? What would notification change? Explore whether awareness removes the ethical concern or just the surprise.

Question 4
Autonomous AI
4 marks
A hospital uses an AI system to prioritise which patients receive urgent appointments. Describe two ethical issues this raises. [4 marks]
Context questions: Read the scenario carefully. Your ethical issues must relate to the specific context given (healthcare, appointment prioritisation). Generic answers about "AI can be biased" without linking to healthcare will score fewer marks.
Mark Scheme - 4 marks (2 x 2)
Accountability [2]: If the AI makes a wrong decision that results in harm to a patient, it is unclear who bears legal and moral responsibility - the hospital, the software developer, or the clinician who delegated the decision to the system.
Transparency [2]: Patients may not know that an algorithm determined their priority; they cannot question or appeal a decision they did not know was made by a machine, undermining their right to understand their own care.
Bias [2]: If the training data reflects historical healthcare inequalities (for example, certain groups historically receiving less prompt care), the AI will perpetuate that disparity and potentially widen existing health inequalities.
Loss of professional judgement [2]: An algorithm cannot account for contextual factors a clinician would notice; over-reliance on AI prioritisation may lead to cases being deprioritised that a doctor would immediately recognise as urgent.
Common mistakes
X Answers not linked to healthcare: "AI can be hacked" or "AI uses lots of electricity" are irrelevant to this scenario.
X "AI might make mistakes" earns no marks without explaining what type of mistake, why it matters in a medical context, and what the consequence is.
Discussion prompt

A patient is deprioritised by the AI and their condition worsens while they wait. Who should be held responsible? Ask students to argue: the doctor who accepted the AI's recommendation, the hospital that deployed the system, or the company that built it. Can responsibility be genuinely shared?

Extended Response
Questions 5-8: 6- and 8-mark questions. These reward organised, developed arguments - not lists. Quality of reasoning matters as much as coverage of content.
Question 5
Criminal Justice
6 marks
Explain three ethical considerations that should be taken into account when using AI to assist in criminal sentencing decisions. [6 marks]
How to answer an "Explain" question (6 marks / 3 points): Each consideration is worth 2 marks. Identify it clearly, then explain why it matters in this context with enough detail to demonstrate understanding. A one-sentence answer rarely earns both marks for a point.
Mark Scheme - 6 marks (3 x 2)
Transparency [2]: Defendants have the legal right to understand and challenge the basis of their sentence; black-box AI systems that cannot explain their outputs make this right impossible to exercise, undermining fundamental principles of justice.
Bias and fairness [2]: AI systems trained on historical sentencing data will encode whatever racial, socioeconomic, or geographic biases existed in previous sentencing decisions; the COMPAS system in the USA was found to falsely flag Black defendants as likely reoffenders at roughly twice the rate of white defendants.
Accountability [2]: Criminal sentencing has profound and irreversible consequences for individuals; using AI introduces ambiguity about who is responsible when the system contributes to an unjust outcome - the judge, the developer, or the state that authorised its use.
Human oversight [2]: AI should inform rather than replace judicial decision-making; removing or marginalising human judgement from sentencing undermines the principle that each case deserves individual consideration from a person who can be held accountable.
Proportionality [2]: An AI optimises for statistical patterns across many cases but cannot weigh the specific mitigating or aggravating circumstances of one individual's situation, risking disproportionate sentences.
Award marks for any three well-explained considerations. A stated point alone earns 1 mark; a stated and explained point earns 2.
Common mistakes
X "AI can be hacked" - this is a security issue, not an ethical consideration in the context of sentencing decisions.
X Keyword-only answers: "Bias, transparency, accountability" with no explanation would score little or nothing - a bare keyword does not show that the consideration applies to sentencing. Make clear to students that identification without explanation caps each point at 1 mark at the 6+ mark level.
X Repeating bias in multiple guises (data bias, sampling bias, historical bias) without explaining the distinct ethical implication of each.
Examiner note

The COMPAS reference is good for developing the bias point but is not required. Credit any accurately described real example of AI in criminal justice contexts. If students cannot recall an example, a hypothetical that correctly describes the mechanism still earns the mark.

Discussion prompt

Argue both sides: a judge uses COMPAS to help decide between a custodial and a community sentence. The AI recommends prison. Should the judge be able to override it freely? Or does overriding it undermine the point of using AI? What does this tell us about the appropriate role of AI in high-stakes decisions?

Question 6
Large Language Models
6 marks
Explain three ethical concerns raised by the widespread use of large language models (LLMs) such as ChatGPT. [6 marks]
Tip for LLM questions: Do not just list things LLMs "cannot do." Ethical concerns require you to explain who is harmed, how, and why that constitutes an ethical problem rather than just a technical limitation.
Mark Scheme - 6 marks (3 x 2)
Misinformation and hallucination [2]: LLMs can generate plausible-sounding but factually false information presented with confidence; users who trust this output - in medical, legal, or scientific contexts - may make decisions that cause real harm.
Copyright and intellectual property [2]: LLMs are trained on vast amounts of text created by human writers, artists, and researchers; the use of this material without permission or compensation raises significant ethical questions about ownership and fair use.
Academic dishonesty [2]: Students using LLMs to complete assessed work gain an unfair advantage and do not develop the skills the work was designed to build; this undermines the integrity of qualifications.
Bias in output [2]: LLMs reproduce biases present in their training data, potentially generating content that stereotypes or discriminates against particular groups; this can normalise biased views at enormous scale.
Environmental cost [2]: Training and running large models requires enormous energy consumption; this has a significant carbon footprint that raises questions about whether the benefits of AI justify its environmental impact.
Privacy [2]: Users may share sensitive personal information with LLM services; data entered into these systems may be used for model training or be accessible to the company, raising consent and confidentiality concerns.
Common mistakes
X "ChatGPT could be used for crime" - too vague to earn marks. Which crime? By whom? Why is this an ethical concern beyond it just being illegal?
X Conflating "it makes mistakes" with hallucination. The ethical issue is not that it makes errors (all systems do) but that it presents errors confidently as fact, making them harder to detect.
Discussion prompt

Present the Samsung incident: engineers accidentally leaked proprietary code by pasting it into ChatGPT. Ask: who is responsible - the individual, the company that allowed it, or the AI provider that retained the data? Extend: should workplaces be able to ban LLM use? What would need to be true for that ban to be proportionate?

Question 7
Employment
8 marks
A company wants to use AI to automatically screen job applications and reject unsuitable candidates, with no human reviewing the AI's decisions.

Discuss the ethical implications of this proposal. [8 marks]
How to answer a "Discuss" question (8 marks): A discuss question requires you to present arguments on more than one side and reach a reasoned conclusion. Do not write everything you know about AI bias - select and develop the most relevant points. Structure: introduce the issue, argue the concerns, acknowledge any benefits or mitigations, evaluate and conclude. A one-sided answer cannot reach Level 4.
Level Descriptors - 8 marks
Level 4 (7-8 marks)
A comprehensive, balanced discussion covering multiple ethical dimensions. Both concerns and mitigating arguments are explored with development. The answer reaches a clear, justified conclusion that weighs the evidence rather than just summarising it. Accurate use of relevant terminology.
Level 3 (5-6 marks)
Addresses several relevant ethical concerns with some development. May lack full balance - one perspective may dominate, or the conclusion may be absent - but more than one perspective is evident. Some accurate use of terminology.
Level 2 (3-4 marks)
Identifies relevant ethical concerns but with limited development. Likely one-sided. May consist of a list of concerns without explanation of why they are ethically significant.
Level 1 (1-2 marks)
Limited or generic points. Little or no development. May address AI bias in general rather than the specific context of job screening without human oversight.
Indicative content (concerns):
Bias - if training data reflects historical hiring discrimination, the AI replicates it at scale; candidates from certain groups may be systematically excluded.
Transparency - rejected candidates cannot know why they were rejected, cannot challenge the decision, and have no recourse.
Accountability - who is legally responsible if the AI discriminates on the basis of a protected characteristic? No human oversight means no safety net.
Legal risk - automated rejection without human review may violate equality legislation.

Indicative content (potential benefits / counterarguments):
Human hiring decisions also contain bias (unconscious bias against names, appearance, accent); AI may actually be more consistent.
Algorithmic decisions can in principle be audited and adjusted in ways that human intuition cannot.
Efficiency allows large numbers of applicants to be considered fairly.

Indicative conclusion: The proposal as described - with no human review - is ethically indefensible for a decision of this significance. AI screening as a first filter with mandatory human review of borderline and rejected candidates would address many of the concerns while retaining efficiency benefits.
Common mistakes
X Writing only about why AI screening is bad. A one-sided response cannot exceed Level 2. Push students to steelman the company's position before critiquing it.
X Conclusions that just summarise: "In conclusion, AI screening has advantages and disadvantages." This adds nothing. A good conclusion evaluates which argument is stronger and why.
X Missing the key phrase "no human reviewing." The absence of human oversight is the most important ethical trigger in this question. Students who do not address this specifically are answering a different question.
Examiner note

The Amazon hiring AI (scrapped after it was found to downgrade CVs containing the word "women's"; reported in 2018) is the most directly relevant real-world example. Credit any accurate real example of automated recruitment and its outcomes.

Discussion prompt

Divide the class: half argue as the company's HR director (defending the AI system), half argue as a rejected candidate's lawyer. After five minutes, swap sides. Debrief: which arguments held up under challenge? Which collapsed? This surfaces the genuinely contested points and the weaker positions students held.

Differentiation
  • Support: Provide students with a writing frame: Concern 1 (bias) ... This matters because ... A counterargument is ... However ...
  • Stretch: Ask higher-ability students to reference specific legal protections (Equality Act 2010) and explain why the company's proposal creates a compliance risk beyond just an ethical one.
Question 8
AI Autonomy Ethics
8 marks
To what extent should AI systems be given the authority to make autonomous decisions that significantly affect people's lives, without human oversight? Discuss. [8 marks]
"To what extent" questions: This phrasing demands a judgement - not just "here are arguments for and against" but a considered position on where the line should be drawn. A strong answer will acknowledge that the answer varies by context (healthcare vs. spam filtering) and by the current state of AI reliability and explainability.
Level Descriptors - 8 marks
Level 4 (7-8 marks)
Nuanced, balanced discussion that distinguishes between contexts (stakes, reversibility, available oversight mechanisms). The extent of acceptable AI autonomy is treated as conditional rather than absolute. A clear, evidenced position is reached. Terminology accurate throughout.
Level 3 (5-6 marks)
Discusses the question from more than one angle with some development. May lack the nuance of treating autonomy as context-dependent, or the conclusion may be asserted rather than argued from the evidence presented.
Level 2 (3-4 marks)
Relevant points made but limited development. Likely treats the question as binary (AI should or should not be autonomous) without engaging with context or conditions.
Level 1 (1-2 marks)
Superficial or generic observations. Little evidence of engagement with the ethical dimensions of autonomy versus oversight.
Indicative content (arguments for limits on autonomy):
Accountability - autonomous systems remove the ability to hold a person responsible for consequential decisions.
Transparency - people have a right to understand and challenge decisions made about them.
Risk of cascading errors - an autonomous system that makes an error cannot self-correct or recognise that something unprecedented is happening.
Legal frameworks - many domains legally require a human decision-maker.

Indicative content (arguments for greater autonomy in some contexts):
Speed - in some contexts (fraud detection, medical imaging analysis) human review is too slow.
Consistency - humans are fatigued and inconsistent in ways machines are not.
Proportionate cost - in lower-stakes contexts, the efficiency gain may outweigh the oversight cost.

Indicative conclusion (Level 4 answers will articulate something like this): The appropriate level of AI autonomy should be proportionate to the stakes involved and the current state of AI reliability. High-stakes, irreversible decisions (criminal justice, healthcare, welfare benefits) require meaningful human oversight regardless of AI capability. Lower-stakes decisions may tolerate more automation. As AI explainability improves, the case for expanding autonomy in specific domains strengthens - but only with auditing mechanisms in place. The EU AI Act's requirement of human oversight for high-risk AI decisions reflects this principle.
Common mistakes
X Treating the question as "should AI exist?" rather than "how much autonomy is appropriate in what contexts?"
X Failing to distinguish between high-stakes and low-stakes contexts. Students who argue AI should never make autonomous decisions are ignoring the routine, uncontroversial use of AI in spam filtering, recommendation systems, and similar low-stakes applications, and will cap at Level 2.
X Conclusions that are just a restatement: "So overall, AI autonomy is a complex topic with pros and cons." This earns no additional marks at Level 3+.
Examiner note

The EU AI Act (2024) explicitly prohibits certain autonomous AI applications and mandates human oversight for high-risk decisions. A student who references this accurately and explains the principle behind it (proportionality, stakes-based regulation) would be demonstrating Level 4 thinking. Credit any accurate reference to legal or regulatory frameworks governing AI autonomy.

Discussion prompt

Give students three scenarios: (1) an AI that filters spam email, (2) an AI that decides whether a welfare benefit application is approved, (3) an AI that determines bail conditions for a criminal defendant. Ask: at which point does human oversight become not just preferable but ethically required? What changes between scenario 1 and scenario 3? Use their answers to surface the principles of stakes, reversibility, and accountability.

Differentiation
  • Support: Give students the three scenarios above and ask them to place each on a spectrum from "full AI autonomy acceptable" to "human oversight always required." Then ask them to justify their positioning in writing.
  • Stretch: Ask students to write a one-paragraph summary of what the EU AI Act says about high-risk AI and explain whether they think the Act goes far enough, too far, or about right.
More resources are coming for teachers
The CodeBash teacher platform is in development. When it launches, you will be able to assign these questions to a class, track student responses, override AI-generated feedback, and download printable worksheets for each lesson.
Present lessons in class (full-screen mode)
Printable student worksheets
Set questions as homework
Class response tracking
Written feedback on student answers
Full 6-lesson AI course content