Ethics and Law - Lesson 5 of 6

Cultural and Social Impact

Technology reshapes work, relationships, information and power. This lesson explores automation, algorithmic bias, social media's influence and digital surveillance - with three documented real-world cases that show how the effects can be profound and unexpected.

45-60 min - Automation, algorithmic bias, Facebook experiment, Delivr

In 2012, Facebook ran a secret experiment on 689,003 users. For one week, the platform algorithmically manipulated what appeared in those users' news feeds - showing some users more positive posts and others more negative ones. The users were not informed. The goal was to study emotional contagion: could Facebook make people feel differently by changing what they saw? The answer, published in 2014, was yes.

Think about it: Facebook's terms of service permitted research use of data. Did that make this experiment acceptable? What ethical principles did it violate? And if a company can influence how millions of people feel, what else might they choose to influence?
Why this matters in the exam

Social impact questions are "evaluate" heavy. You need to identify both positive and negative effects, use specific examples, and reach a justified conclusion. The most common mistake is listing only negatives or only positives. Examiners reward balance and specificity.

Automation and changing employment

Automation is the use of technology to perform tasks previously done by humans. This is not new - the Industrial Revolution mechanised physical labour; the digital revolution is now automating cognitive and analytical work. The difference is speed and scope.

Job displacement
Roles made redundant by automation
Routine, repetitive and predictable jobs are most at risk: manufacturing assembly, data entry, some aspects of accounting, customer service (chatbots), transport (self-driving vehicles), retail checkout. A 2013 Oxford study estimated 47% of US jobs were at high risk of automation within two decades.
Exam tip: Displacement does not mean elimination. People may retrain and move to new roles. But displacement causes hardship during the transition, and retraining is not equally accessible to all workers.
New job creation
Roles that did not exist before
Every technological wave has created new jobs: software engineers, data scientists, UX designers, social media managers, cybersecurity analysts, AI ethics consultants. Many high-value jobs today did not exist 20 years ago. The challenge is that new jobs often require different skills, education and geographic locations than the jobs displaced.
Exam tip: For a balanced answer: automation displaces existing jobs AND creates new ones. The net effect depends on the economy's ability to support workers in transition. Include both sides.
Remote working
Technology enabling work from anywhere
Digital communication tools (video conferencing, cloud collaboration, instant messaging) enabled an unprecedented shift to home working during COVID-19. Benefits: reduced commuting, better work-life balance for some, access to global job markets. Drawbacks: isolation, blurred boundaries, digital divide exclusions, loss of collaboration for some roles.
Scenario Delivr - when automation replaces drivers

Delivr is a fictional UK delivery company that employs 3,200 drivers. The company announces it will deploy autonomous delivery vehicles on all urban routes within 18 months. The vehicles use AI navigation, lidar sensors and machine learning to operate without a human driver. The company estimates this will make 2,400 driver roles redundant.

Delivr argues this will cut costs, reduce road accidents (90% of which involve human error), lower emissions, and allow 24-hour delivery. The 2,400 affected drivers argue they have mortgages, families and specialist skills that do not easily transfer. The local community points out that these jobs are concentrated in areas of higher unemployment where alternatives are limited.

Algorithmic bias, social media and surveillance

Algorithmic bias occurs when an algorithm produces systematically unfair outcomes. This typically happens because the algorithm was trained on data that reflects historical human biases, or because the design itself failed to account for diversity in outcomes.

Algorithmic bias
When an algorithm produces unfair outcomes due to biased training data or design, often disadvantaging particular demographic groups.
Filter bubble
A situation where recommendation algorithms show users only content that aligns with their existing views, reducing exposure to different perspectives.
Deepfake
AI-generated media that convincingly depicts real people saying or doing things they did not say or do. Used to spread misinformation and harm reputations.
Surveillance
Monitoring of individuals or groups using cameras, data collection or tracking tools. Can be governmental, corporate or interpersonal.
Real case Facebook's secret mood manipulation experiment (2014)

In January 2012, Facebook ran an experiment on 689,003 users by manipulating their news feeds for one week. Some users had positive content reduced (seeing more negative posts); others had negative content reduced (seeing more positive posts). The goal was to test whether emotional states could be induced through social media content without direct interaction.

The results, published in 2014 in the Proceedings of the National Academy of Sciences, confirmed that emotional contagion worked. Users who saw more negative content produced more negative posts themselves. Facebook and its academic collaborators had, without anyone's knowledge, changed how over half a million people felt.

When the experiment became public in 2014, it generated massive backlash. Critics argued it violated core research ethics principles: no informed consent, no opt-out, and potential harm to vulnerable users, including those with depression. The UK's Information Commissioner's Office investigated, but the experiment predated GDPR and Facebook's relevant operations were based outside the UK, which limited enforcement options. Facebook argued the study was covered by its data use policy, which users had agreed to.

Real case Amazon's discriminatory CV screening tool (scrapped 2018)

Amazon developed a machine learning tool to automatically screen job applications. The tool was trained on CVs submitted to Amazon over a ten-year period. However, the technology sector has historically been male-dominated, meaning the training data was overwhelmingly male CVs from successful applicants.

The algorithm learned to replicate historical hiring patterns. It began penalising CVs that included words like "women's" (as in "women's chess club") and downgrading graduates of all-women's colleges. It was effectively teaching itself that women were less suitable candidates - not because of any intentional design, but because it learned from historically biased data.

Amazon discovered the bias in 2015 and attempted to fix the tool, but could not guarantee it would not find other ways to discriminate, and eventually abandoned the project. Reuters reported the story publicly in 2018. Amazon stated the tool had never been used to make actual hiring decisions.
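The mechanism is easy to demonstrate. The sketch below is a deliberately tiny, hypothetical illustration - not Amazon's actual system; every CV, word and the scoring rule are invented. It shows how a naive scorer "trained" on skewed hiring data ends up penalising a word like "women's" even though nothing in the code mentions gender.

```python
# Hypothetical illustration of bias learned from skewed data.
# NOT Amazon's system: the CVs, words and scoring rule are invented.
from collections import Counter

# Training data: words from CVs of past *successful* applicants
# (historically skewed) and past rejected applicants.
hired_cvs = [
    "software engineering degree chess club",
    "software engineering degree football captain",
    "engineering internship chess club",
]
rejected_cvs = [
    "software engineering degree women's chess club",
    "engineering internship women's netball team",
]

hired_words = Counter(w for cv in hired_cvs for w in cv.split())
rejected_words = Counter(w for cv in rejected_cvs for w in cv.split())

def score(cv: str) -> int:
    """Naive learned score: reward words common among hired applicants,
    penalise words common among rejected ones."""
    return sum(hired_words[w] - rejected_words[w] for w in cv.split())

# Two CVs identical except for one word - the learned scores differ,
# because "women's" only ever appeared among rejected applicants.
print(score("engineering degree chess club"))          # scores higher
print(score("engineering degree women's chess club"))  # scores lower
```

Note that removing the offending word is not a fix: a real model can latch onto proxies (hobbies, school names) that correlate with the same group - which is exactly why Amazon could not guarantee neutrality.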

Real case AI deepfakes and the 2024 UK general election

In the run-up to the 2024 UK general election, AI-generated deepfake audio clips circulated widely on social media platforms. One widely shared clip falsely depicted Labour leader Sir Keir Starmer verbally abusing staff. Another had London Mayor Sadiq Khan making inflammatory remarks he never made. Both clips were fabricated using AI voice-cloning and synthesis tools that had become accessible to almost anyone at low cost.

The clips spread rapidly before fact-checkers could respond, reaching millions of people. Platform moderation struggled to keep pace: by the time individual pieces of content were removed, they had often already been downloaded, re-uploaded and shared further. Research by Full Fact and other organisations found that many voters who saw the clips were uncertain whether they were genuine even after seeing debunking articles.

The UK had no specific law against deepfake political misinformation at the time of the election. The Online Safety Act 2023 introduced offences related to sharing intimate deepfakes without consent, but political deepfakes designed to mislead voters remained a legal grey area. The Electoral Commission warned that AI-generated misinformation represented a serious and growing threat to democratic integrity, and called for new legislation ahead of future elections.

Think deeper

Social media platforms use recommendation algorithms to show users content they are likely to engage with. Research suggests this creates "filter bubbles" where users see only content that reinforces their existing beliefs. Is this an ethical problem? Who is responsible - the platform, the user, or both?

It is an ethical problem because it can reinforce extreme views, reduce democratic discourse and make users susceptible to misinformation. The platform bears significant responsibility: it designed the algorithm to maximise engagement (time on site) rather than to promote balanced information, and it profits from the resulting behaviour. However, users also have agency: they choose what to click on and share. The complexity is that the algorithm exploits psychological tendencies (preference for confirmation of existing beliefs) that are difficult for individuals to consciously resist. Regulatory responses are emerging: the EU's Digital Services Act requires platforms to offer users an algorithmic feed option not based on profiling. Some researchers argue that meaningful consent requires transparency about how recommendation algorithms work, which most users currently lack.
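The feedback loop is small enough to simulate. The sketch below is entirely hypothetical - the topics, the user model and the update rule are invented for illustration, and no real platform works this simply. It shows a greedy, engagement-maximising recommender: after the user's first click, it never shows them anything else.

```python
# Hypothetical filter-bubble simulation - not any real platform's algorithm.
from collections import Counter

TOPICS = ["politics_left", "politics_right", "sport", "science"]

def engages(topic: str) -> bool:
    """A simplified user who only ever clicks on one topic."""
    return topic == "politics_left"

# The recommender's running estimate of how engaging each topic is.
estimates = {t: 0.5 for t in TOPICS}
shown = Counter()

for _ in range(100):
    topic = max(estimates, key=estimates.get)  # greedily pick the "best" topic
    # Learn from the click: nudge the estimate towards 1 (clicked) or 0 (ignored)
    estimates[topic] += 0.1 * (engages(topic) - estimates[topic])
    shown[topic] += 1

# Once one topic's estimate rises above the rest, the others are never shown again:
print(dict(shown))  # {'politics_left': 100}
```

Real recommenders add exploration and many more signals, but the underlying pressure is the same: optimising for engagement narrows what the algorithm learns to show - the filter bubble that the DSA's non-profiling feed option is intended to counter.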

Ethical statement explorer

Consider each statement below. Choose your response - then see the analysis of both sides.

Agree or Disagree?
Click your view on each statement - there are no wrong answers, but read the analysis
Lesson 5 Quick Quiz
5 questions - click an option to answer
Question 1
What does "algorithmic bias" mean?
Question 2
Why was Amazon's CV screening tool biased against women?
Question 3
A "filter bubble" occurs when:
Question 4
State one benefit and one drawback of automation in the workforce.
Question 5
What was the main ethical problem with Facebook's 2014 mood experiment?
Lesson 5 complete - head to Lesson 6: Open Source and Exam Technique

Lesson 5 Worksheets

Three worksheets covering social and cultural impact, bias analysis and extended evaluation.

Recall
Key Terms and Impacts
Define algorithmic bias, filter bubble, deepfake and surveillance. Match impact statements to categories (automation, social media, surveillance).
Download PDF
Application
Amazon and Facebook - Analysis
Structured questions on both case studies. Identify ethical issues, relevant laws, and what the companies could/should have done differently.
Download PDF
Exam technique
Delivr - Evaluate Automation
"Discuss the social and economic impact of Delivr automating its delivery fleet." 8-mark question with model answer, mark scheme and examiner commentary.
Download PDF
Flashcard deck
Social impact and automation key terms from all 6 lessons
Open flashcards
Lesson 5 - Ethics and Law
Cultural and Social Impact
Starter activity
Ask students: has technology changed a job that someone in their family or community used to do? What happened to that job? This grounds the abstract concept of automation in students' lived experience before introducing the broader economic arguments.
Lesson objectives
1
Define automation and explain both positive and negative effects on employment.
2
Define algorithmic bias and explain how it arises from training data.
3
Analyse the Facebook mood experiment and Amazon CV tool using ethical frameworks.
4
Write balanced "evaluate" answers that cover both sides and give a justified conclusion.
Key vocabulary
Automation
Technology performing tasks previously done by humans. Creates new jobs but displaces existing ones. Rate of change matters.
Algorithmic bias
Unfair outcomes from biased training data. Amazon CV case: learned historical discrimination from existing (male-dominated) hiring data.
Filter bubble
Recommendation algorithms showing only content aligned with existing beliefs. Reduces exposure to different perspectives, can reinforce extremism.
Informed consent
A key ethical research principle: participants must be told what an experiment involves and freely agree to take part. Facebook violated this.
Discussion questions
If an algorithm is biased because of biased training data, who is responsible - the data, the programmer, the company, or society?
Facebook argues its mood experiment was covered by its terms of service. Is clicking "I agree" on a ToS meaningful consent for psychological research?
Social media companies say recommendation algorithms just show people what they want to see. Is this neutral, or are they shaping public opinion?
Exit tickets
Explain what algorithmic bias is and give one example. [2 marks]
Describe one benefit and one drawback of automation. [2 marks]
"Social media has had a mostly negative impact on society." Evaluate this statement. [6 marks]
Homework suggestion
Students spend 20 minutes watching videos on a topic they are interested in, noting each recommended video. Then: deliberately watch one video on an opposing viewpoint. Note how the recommendations change. Report back on how the recommendation algorithm responded to their changed behaviour.