Cultural and Social Impact
Technology reshapes work, relationships, information and power. This lesson explores automation, algorithmic bias, social media's influence and digital surveillance - with three documented real-world cases that show how the effects can be profound and unexpected.
In January 2012, Facebook ran a secret experiment on 689,003 users. For one week, the platform algorithmically manipulated what appeared in those users' news feeds - showing some users more positive posts and others more negative ones. The users were not informed. The goal was to study emotional contagion: could Facebook make people feel differently by changing what they saw? The answer was yes.
Social impact questions are "evaluate"-heavy. You need to identify both positive and negative effects, use specific examples, and reach a justified conclusion. The most common mistake is listing only negatives or only positives. Examiners reward balance and specificity.
Automation and changing employment
Automation is the use of technology to perform tasks previously done by humans. This is not new - the Industrial Revolution mechanised physical labour; the digital revolution is now automating cognitive and analytical work. The difference is speed and scope.
Delivr is a fictional UK delivery company that employs 3,200 drivers. The company announces it will deploy autonomous delivery vehicles on all urban routes within 18 months. The vehicles use AI navigation, lidar sensors and machine learning to operate without a human driver. The company estimates the change will cut 2,400 driver roles.
Delivr argues this will cut costs, reduce road accidents (around 90% of which involve human error), lower emissions and allow 24-hour delivery. The 2,400 affected drivers argue they have mortgages, families and specialist skills that do not easily transfer. The local community points out that these jobs were concentrated in areas of higher unemployment where alternatives are limited.
Algorithmic bias, social media and surveillance
Algorithmic bias occurs when an algorithm produces systematically unfair outcomes. This typically happens because the algorithm was trained on data that reflects historical human biases, or because the design failed to account for how outcomes differ across groups.
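A minimal sketch of the first cause, using entirely synthetic data: if a model is trained on historical decisions that disadvantaged one group, it learns that disadvantage as if it were a genuine pattern. The dataset, features and weights below are invented for illustration.

```python
# Synthetic demonstration: a model trained on biased historical
# decisions learns the bias, even though group membership says
# nothing about ability.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

qualification = rng.normal(0, 1, n)   # what we WANT the model to use
group = rng.integers(0, 2, n)         # protected attribute, unrelated to ability

# Historical outcome: qualified people were hired, but group 1 was
# also systematically disadvantaged - the bias baked into the data.
hired = (qualification - 0.8 * group + rng.normal(0, 0.5, n)) > 0

X = np.column_stack([qualification, group])
model = LogisticRegression().fit(X, hired)

# The model gives group membership a large negative weight: it has
# learned the historical disadvantage, not just the qualification.
print("weight on qualification:", round(model.coef_[0][0], 2))
print("weight on group:        ", round(model.coef_[0][1], 2))
```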
Return now to the Facebook experiment introduced at the start of this lesson. For one week in January 2012, some users had positive content reduced in their news feeds (seeing proportionally more negative posts); others had negative content reduced (seeing more positive posts). The goal was to test whether emotional states could be induced through social media content alone, without direct interaction.
The results, published in 2014 in the Proceedings of the National Academy of Sciences, confirmed that emotional contagion worked. Users who saw more negative content produced more negative posts themselves. Facebook and its academic collaborators had, without anyone's knowledge, changed how over half a million people felt.
When the experiment became public in 2014, it generated a massive backlash. Critics argued it violated research ethics: there was no informed consent, no opt-out, and potential harm to vulnerable users, including those with depression. The UK's Information Commissioner's Office made enquiries, but Facebook's relevant operations were based outside the UK and the experiment predated GDPR, which limited enforcement options. Facebook argued the study was covered by its data use policy, which users had agreed to.
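To make the feed manipulation concrete, here is a toy sketch of sentiment-based filtering. The real study used the LIWC word-count tool to classify posts; the word lists, posts and scoring below are invented and much cruder.

```python
# Toy sketch of sentiment-based feed filtering (illustrative only).
POSITIVE = {"great", "happy", "love", "wonderful"}
NEGATIVE = {"awful", "sad", "hate", "terrible"}

def sentiment(post: str) -> int:
    """Crude lexicon score: +1 per positive word, -1 per negative word."""
    words = post.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def filter_feed(posts: list[str], suppress: str) -> list[str]:
    """Drop posts of one emotional polarity before the user sees them."""
    if suppress == "positive":
        return [p for p in posts if sentiment(p) <= 0]
    return [p for p in posts if sentiment(p) >= 0]

feed = ["what a wonderful day", "i hate waiting", "lunch was great", "so sad today"]
print(filter_feed(feed, suppress="positive"))
# -> ['i hate waiting', 'so sad today']: the user's feed skews negative
```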
Amazon developed a machine learning tool to automatically screen job applications. The tool was trained on CVs submitted to Amazon over a ten-year period. However, the technology sector has historically been male-dominated, meaning the training data consisted overwhelmingly of CVs from male applicants.
The algorithm learned to replicate historical hiring patterns. It began penalising CVs that included words like "women's" (as in "women's chess club") and downgrading graduates of all-women's colleges. It was effectively teaching itself that women were less suitable candidates - not because of any intentional design, but because it learned from historically biased data.
Amazon discovered the bias in 2015, attempted to fix it, could not guarantee neutrality, and eventually shut the project down in 2018. Reuters reported the story publicly that year. Amazon stated the tool had never been used for actual hiring decisions.
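One common way teams probe a screening model for this kind of word-level bias is a perturbation test: score the same CV with and without a flagged term and compare. The sketch below is hypothetical - `score_cv` is a stand-in with a penalty deliberately baked in so the audit has something to find; it is not Amazon's system.

```python
# Hypothetical perturbation audit for word-level bias in a CV screener.
def score_cv(text: str) -> float:
    """Stand-in model with a learned penalty baked in for demonstration."""
    score = 50.0
    if "women's" in text.lower():
        score -= 10.0   # the kind of learned penalty Reuters described
    return score

def perturbation_delta(cv: str, term: str, replacement: str) -> float:
    """Score change caused by swapping `term` for a neutral `replacement`."""
    return score_cv(cv.replace(term, replacement)) - score_cv(cv)

cv = "Captain of the women's chess club; BSc Computer Science."
print(perturbation_delta(cv, "women's", "university"))
# -> 10.0: removing the term RAISES the score, exposing the bias
```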
In the run-up to the 2024 UK general election, AI-generated deepfake audio and video clips circulated widely on social media platforms. One widely shared audio clip falsely depicted Labour leader Sir Keir Starmer verbally abusing staff; another had London Mayor Sadiq Khan apparently making inflammatory remarks he never made. Both clips were fabricated with AI voice-cloning and synthesis tools that had become accessible to almost anyone at low cost.
The clips spread rapidly before fact-checkers could respond, reaching millions of people. Platform moderation struggled to keep pace: by the time individual pieces of content were removed, they had often already been downloaded, re-uploaded and shared further. Research by Full Fact and other organisations found that many voters who saw the clips were uncertain whether they were genuine even after seeing debunking articles.
The UK had no specific law against deepfake political misinformation at the time of the election. The Online Safety Act 2023 introduced offences related to sharing intimate deepfakes without consent, but political deepfakes designed to mislead voters remained a legal grey area. The Electoral Commission warned that AI-generated misinformation represented a serious and growing threat to democratic integrity, and called for new legislation ahead of future elections.
Social media platforms use recommendation algorithms to show users content they are likely to engage with. Research suggests this creates "filter bubbles" where users see only content that reinforces their existing beliefs. Is this an ethical problem? Who is responsible - the platform, the user, or both?
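A toy simulation (all topics and numbers invented) of how an engagement-maximising recommender can create a bubble: the greedy loop below keeps recommending whichever topic currently looks best, so the alternatives are rarely or never shown.

```python
# Toy filter-bubble simulation: a greedy, engagement-maximising
# recommender. All topics and numbers are invented.
import random
from collections import Counter

random.seed(1)
topics = {"politics": 0.5, "sport": 0.5, "science": 0.5}  # engagement estimates

history = []
for _ in range(20):
    choice = max(topics, key=topics.get)         # exploit: show the current best
    history.append(choice)
    clicked = random.random() < topics[choice]   # simulated user response
    topics[choice] += 0.1 if clicked else -0.05  # reinforce or decay the estimate

print(Counter(history))
# With this seed the user is shown 'politics' all 20 times: the other
# topics never get a chance, and the feed narrows to one viewpoint.
```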
Lesson 5 Worksheets
Three worksheets covering social and cultural impact, bias analysis and extended evaluation.