THIS IS FAKE NEWS! The entire story is made up. Could you tell this news story was false while you were reading it? Or did you assume it was true? Here are the clues that this is fake news:
Why is the information so vague—which colleges are involved? How many is “several students”?
“A management source”—how do you know there was an actual source for this information?
“Coronavirus can survive on hard surfaces for a month.” Not true—look it up!
How do you know the “international student” named in the story died from COVID-19 that they got at a Toronto-area college—or if that student even exists?
How do you know any students contracted the virus at college before they “closed their doors”? Notice how this is implied without any proof.
How can you be sure any part of this news story is true? There are no individuals quoted or named, no schools named, and the source of this story is unknown.
This “fake news” story is designed to scare you and make you angry without providing any evidence that it is true. Fake news is a common type of disinformation that counts on you to tell your friends and family about it without asking questions. We can’t rely on our emotions when we are reading information online—we need to use our heads! We need to think before we react to information. That is what this module is about.
Media literacy on the internet is vital for understanding how political, social, and economic groups use the online space. This module will look at the new realities of social media and how it can both bring people together and divide them. This includes an examination of “big data” privacy issues, and the online battle over facts and truth—information vs. disinformation. Global citizenship is based on the interconnectedness of people around the world, and understanding the impact of social media is vital for our exploration of social justice and equity.
Let’s ask ourselves some questions. If you read a news item on your social media feed about a celebrity who is abusing drugs, would you believe it? How about a news story linking refugees to terrorism—does that make sense to you? Did you believe the COVID news story that opened this module? How do you know if something’s true? Because it appears in what looks like a professionally produced news story? Because you’ve heard someone else talk about it? We all tend to believe what we want to believe; this is called bias, and there are people who take advantage of it to manipulate us. Sometimes this manipulation is intended to control our political views. Sometimes it’s just to get our attention online so someone can make money from the pages we visit. When half-truths or falsehoods are presented as facts, this is called disinformation, and social media is the easiest and most effective place to spread it. This module will introduce you to what disinformation is, show you how it works, and give you some tools to spot it and resist its influence on you. Our access to a huge range of information in the online space is a good thing, but we need to be smart about how we read and use that information.
Social Media and Public Opinion
Social media is the new battlefield for public opinion. The “public” is you and everyone you know. On the internet, many individuals and groups try to shape your opinions, whether it’s advertisers competing with one another to sell you products, or politicians trying to get your support.
Today, social media platforms compete with traditional information providers like newspapers and television as sources for current events. If you use social media to stay in touch with friends or keep up with popular trends, you may wonder why you should care about political content on your favourite site. Let’s start by looking at some statistics on social media use, globally.
Almost half of the world’s population is online, and a significant number of those people use social media. Clearly platforms like Facebook, YouTube, Twitter, and Whatsapp are impacting the opinions and ideologies of citizens around the globe. Given these numbers, it is important to consider who is creating that impact on opinions, and how they are doing it. We should also consider how this is affecting social institutions like democratic government, free speech, and public information.
Social Media and Interest Groups
Social media has a global reach far greater than any other source of information. This makes social media platforms a powerful vehicle for interest groups. Interest groups use social media to promote their agendas to politicians and the general public. Using social media, interest groups can impact the outcome of national elections and important policy decisions that affect the lives of citizens—including those of us who just use social media for fun. That is why it is vital that you understand the damaging effect of online disinformation.
See the TED Talk by journalist Carole Cadwalladr in Case Study #1 for an example of how England’s “Brexit” vote was impacted by social media.
Case Study #1: Social Media and Brexit
Major political movements can be influenced by interest groups using social media. A recent example is “Brexit” in the United Kingdom (UK).
In June 2016, the government of the UK asked its citizens to vote on whether to stay in the European Union (EU) or leave. In the referendum, about 52% of those who voted chose to leave the EU. This has created controversy and uncertainty over the economic and social future of the United Kingdom. This decision will have an enormous impact on the lives of both British and European citizens, and the economies of their countries, for years to come. How did the citizens of the UK make their decision? Where did they get their information about it?
Watch the TED Talk by Carole Cadwalladr for insights into the impact social media made on the Brexit referendum (Source: Cadwalladr, 2019).
In this video, Carole Cadwalladr, a Welsh journalist, explains her investigation into why average British people voted to leave the European Union. She discovered that Facebook was a huge influence on voters. This is where voters read anti-EU disinformation, purchased by right-wing interest groups. Cadwalladr explains that these interest groups, who posted lies and inaccuracies about the EU, are impossible to trace due to Facebook’s policies.
Do you think the Brexit result could have gone differently if British citizens had known that much of the information they were reading about it on social media was designed to manipulate them into voting to leave?
In the next section, we’ll do some social analysis on this Brexit case to try to uncover how fake news could have impacted the attitudes and votes of regular people. You’ll see examples of how disinformation could be used on platforms like Facebook to sway British citizens towards voting in favour of leaving the European Union. The examples in the next section were written for this module to demonstrate how fake news can be created around key issues like immigration to trigger the pre-existing fears and biases of viewers. Fears and biases are powerful forces that make us vulnerable to believing disinformation. In the case of Brexit, fake news that used common fears and biases to nudge British voters in a certain direction may have played a significant role in the outcome of the referendum, as Carole Cadwalladr explains in her TED Talk.
Social Analysis: Disinformation, Ideology and Brexit
Why is disinformation so effective? Because people want to believe it, especially when it confirms their own opinions. Recall from the Ideology module that ideologies include the beliefs, ideas, and values of individuals. The case studies you’ve read in this module are examples of fake news and disinformation aimed at people with “right-wing” beliefs and values. Those same people can spread that disinformation to other people who think the same way. They are hearing what they want to hear and ignoring the facts. It is important to note that people with any ideology are prone to believe what confirms their own views.
Let’s apply this to the Brexit case study. How could fake news on Facebook influence people to vote in favour of leaving the European Union (EU)? Here are some examples of how this could work:
If you are nationalistic, fake news about the threat that immigration poses to Britain’s security could sway you to vote in favour of leaving the EU and closing Britain’s borders.
If you already believe that Britain’s traditions and identity are threatened by immigration, then fake news could confirm this—again, you’d vote to leave.
If you were recently unemployed and you were angry about it, then fake news stories could lay the blame on the EU for your job loss—yet again, another vote for Brexit!
Let’s put the points above into perspective. A study by King’s College London’s Policy Institute compared what British people believe about immigration with the actual facts about immigration in their country. They asked a group of British citizens if they believed that European migrants to Britain received more in welfare payments than they paid in taxes. In other words, did they take more out of the system than they put in? This claim was a typical argument used in favour of Brexit. The answer, from Britain’s Migration Advisory Committee, is:
In 2016/17, “EEA [European] migrants as a whole are estimated to have paid £4.7bn more in taxes than they received in welfare payments and public services.” (Dunt, 2018)
In Canadian dollars, European migrants to Britain paid approximately eight billion dollars more into the British tax system than they took out in welfare payments and public services! Yet, of the people polled in this study who voted in favour of Brexit, only 16% got this right. Most wrongly believed that immigrants cost the British government more than they contributed. You can see there is a disconnect between the perception of European immigration to Britain and the reality of it—and this type of misunderstanding can be fueled by disinformation, as it was in the Brexit vote.
When we are emotional about an issue, our bias is firmly in control of our thinking. Disinformation capitalizes on that. It uses our own ideologies to bend and shape our opinions and our votes, regardless of the facts.
Go Deeper
Read the post by Ian Dunt on Politics.co.uk for more examples of how perception did not line up with reality in the Brexit vote. (Source: Dunt, 2018)
Disinformation and Democracy
Carole Cadwalladr’s TED Talk explains how interest groups influenced Brexit voters in Britain using social media. They did this by spreading disinformation—false information used to deceive or manipulate people.
One form of disinformation is fake news. This refers to false stories, usually online, that seem like genuine news and can be used to sway the opinion of the viewer. As the Brexit case illustrates, disinformation and fake news can seriously affect the functioning of a democracy. This section will explore what this means for you, as a national and global citizen. We’ll start by defining democracy, then look at the role of information in a democratic society.
What We Mean By Democracy
In this section, we use the word democracy to mean representative government. This is also how it is commonly used in media. In other words, “democracies” are countries where citizens elect their governments to represent their interests.
However, this common definition of democracy is simplistic. It does not reflect the reality that many groups are not represented by their governments, even in Canada. For example, Indigenous peoples, poor people, and other groups may not be recognized or served by their elected governments to the same extent as privileged groups within this country. Dominant groups have greater power, even in elected governments.
Citizens need information to fight this inequity. We also need tools to identify disinformation designed to mislead us. Only then can we build a fairer democracy or even a better system of government. When we explore the “threat to democracy” posed by disinformation in this section, it is not because this system of government is perfect, but because accurate information is vital to fixing it.
Media and Government
Media plays a central role in the democratic system of government. Citizens rely on information from a variety of media sources to make informed decisions. These decisions might include which government policies to support, which government actions to oppose, and who to vote for in elections. Without accurate information from the media, citizens can’t hold their governments accountable for their actions, or choose another government to replace them.
As you saw in the first media literacy module, bias in the media cannot be avoided. Even the most reliable sources of journalism select and omit information based on ideology or because they have limited space. The only way citizens in a democracy can get a balanced picture of what is going on in their countries and the world around them is to access a wide variety of media sources. Each source will contribute information that adds to a complete picture of events.
It is important to note that the term “fake news” has been used widely, by people with different definitions of what it means and different motivations for using it. The way it is used in this module, to mean misleading or false information presented as fact, is a definition that aims to distinguish untrustworthy online news from genuine news from reliable sources. As you will see as you read on in this module, this definition of fake news includes disinformation produced for political reasons and also for profit. The term, however, has created backlash—many sources of fake news accuse genuine journalists of the same thing! Donald Trump is famous for making statements that don’t pass fact-checking, yet he is also famous for accusing reputable news agencies like the Washington Post and New York Times of being “fake news” sources whenever they criticize his actions. Accusing an information source of being “fake news” is a weapon that can be used to discredit and undermine. Sometimes the mere accusation is enough to make people believe it, especially when it feeds their own biases. It is important that you make up your own mind about fake news, using critical thinking. For more information about the debate about the term “fake news” and its impact on democracy, read the article from The Conversation in the Go Deeper section of this module.
Go Deeper
Read this article to learn more about how the term “fake news” may be hurting democracy. (Source: Habgood-Coote, 2018)
Democracy and Information
“Voters can keep their governments accountable only if they are informed about what their governments are doing. In a modern democracy, such information comes mainly through the media” (Kennedy & Prat, 2018).
Democracy and Online News
Democracy depends on free access to information through a variety of media platforms. Most disinformation is spread online, through social media, blogs, and fake-news sites. These sources take advantage of the open nature of the internet, where anyone can post content for millions of people without the scrutiny and fact-checking (see links in Go Deeper) that traditional media sources undergo. Consider what this means as you look at these statistics from 2019 about Canadians’ online news consumption:
The most recent data shows that the internet was the leading media outlet used by Canadians for news, with 77 percent going online for news on a typical weekday compared to just 42 percent reading news in print publications. Further, 59 percent of Canadian consumers use the internet to get the news at least once daily. (Watson, 2019)
The combination of widespread online news consumption with unrestricted disinformation poses a threat to democracies worldwide. Citizens cannot debate, protest, or make informed voting decisions if their online information is corrupted by disinformation and fake news.
Watch the video on “Disinformation and Democracy” to learn how disinformation threatens democracies. It also describes how the European Union is trying to address this problem (Source: European Parliamentary Research Service, 2018).
This video features Naja Bentzen, a policy analyst for the European Parliamentary Research Service. She explains how disinformation on social media is designed to deceive us for a specific purpose. For example, it may aim to distract us from real issues, make us believe something untrue, or undermine our governments. The European Union is developing tools and policies to stop the spread of disinformation. These include fact-checking units, software to uncover fake photos and videos, and pressure on social media platforms like Facebook to take responsibility for fake news on their sites.
Europe—43,000 years ago. A Neanderthal encounters a new group of people—Homo sapiens—moving across the landscape he calls home. After enduring a visual inspection and a few pokes with a finger, our Neanderthal picks up a stick and draws in the dirt. It’s a picture of a deer. The Homo sapiens group instantly recognizes the animal. The Neanderthal points excitedly in the direction where the herd can be found. As the Homo sapiens move off towards the new hunting ground, the Neanderthal watches them, then hurries back to his family. He knows that there are no deer in the direction he pointed…
We will never know who created the first “fake news” story. Perhaps it was a wily Neanderthal protecting his resources. One thing is certain: humans love stories, and that makes us susceptible to untruths.
False stories, particularly sensationalized ones that generate fear or amazement, have always spread quickly and have sometimes even influenced the course of history. This section will dig deeper into fake news. We’ll look at why and how fake news is made, some of the different forms it can take, and how to spot it.
Go Deeper
Read this article for historical examples of fake news, some of which have had terrible consequences that persist today. (Source: Soll, 2016)
Case Study #2: How and Why Fake News Is Made
What motivates people to produce fake news and disinformation? Why would anyone want to do it? The examples below show how fake news can be used for both political purposes and profit.
Example 1: Fake news—real profit
It’s 2016, and somewhere in Eastern Europe, a jobless, tech-savvy student is thinking of a way to make money. The answer may be online. He knows that Google, YouTube and other advertisers will pay him for “views” if he can set up a website that generates interest. He notices that news stories about celebrities have a strong following, so perhaps he can grab some of that web traffic.
Our student searches the internet and finds that the most outrageous celebrity news stories get the most views—who’s had plastic surgery, who has a drug problem, etc. He locates those stories on other sites, makes minor modifications to them like changing the headlines or a few details, and reposts them on his site as original celebrity news. He even finds celebrity stories on humour websites that are openly fictional and reposts them as genuine news. He mixes real stories in with the fakes—a formula that makes his website look credible. He might also add a scandalous headline like “Tom Hanks Secret Sex Tape” to attract attention. This is known as clickbait—a headline designed to be so irresistible to viewers that it will get them to follow the link to its source—even if that headline is fake, like this one.
Services like Google write algorithms or computer “rules” to spot plagiarized and stolen content. Our entrepreneur gets around this by modifying copy and by blending content from multiple online sources into “new” material. Google’s algorithms give its users a false sense of security—they begin to believe that Google’s systems ensure that fake news is screened out. But our enterprising student has fooled these checks by changing the content just enough that it gets through undetected.
As his celebrity stories start to get noticed, he opens social media pages that drive even more traffic to his website. As the number of views multiplies, advertisers take notice and start paying him for space. With very little time and investment, he’s making significant money from his celeb-info business. Once views number in the millions, Google, Facebook, etc., have little motivation to remove his content—it makes them money, too. Nor do any of his advertisers require verification or fact-checking on any of his content, for the same reason.
Our online entrepreneur steals news, manufactures news, and repackages news about celebrities with one motivation—to make money. He has no particular interest in celebrities or the truth of his stories about them. His name is nowhere on his sites or any of the posts. If he gets into trouble, he can disappear in the time it takes to press “delete.” All in all, it’s pretty easy money.
Example 2: Trick or tweet—fake news goes viral
A citizen in Texas has been hearing about local protests against Donald Trump. While driving, he notices something that sets off an alarm in his head. He sees a group of unmarked buses arrive close to the location where an anti-Trump event is taking place in his city. He concludes this can’t be a coincidence. The buses must be bringing in anti-Trump protesters to inflate the numbers at the rally. The implications are clear. Local anti-Trump rallies are a deception. The numbers of protesters are made to look greater than they actually are by a hidden organization working against Trump. He posts photos of the buses, along with his theory about them, on Twitter.
In the space of half a day, his tweet is “liked” and reposted thousands of times. This Texas businessman with a Twitter following of 40 people has spawned a conspiracy theory about a shadowy, anti-Trump organization, and it will go viral within one day.
Within two days of his initial post, other social media services pick up his tweet and rewrite it, adding to the theory. As the story spreads through the internet from multiple sources, it starts to look less like a local tweet and more like genuine news. Facebook pages, discussion forums, and websites repeat it. This theory about paid, anti-Trump protesters is redistributed thousands of times, particularly among online sites that support Trump. More theories spring up about who is funding these “fake protests” and paying “fake protesters.” Donald Trump, himself, tweets his support of the theory.
Fox News contacts the bus company for comment. The company’s marketing director states clearly that their buses were not involved in the anti-Trump protests. Meanwhile, the Texas businessman who posted the tweet that started it all admits that he has no real evidence that the buses were connected with the protests. It simply seemed suspicious to him that they were in the same area of the city.
Basic fact-checking reveals this conspiracy theory to be false. The buses, it turns out, were bringing people to a computer software convention. But the truth doesn’t spread the way the fake news story did. Thousands of Trump supporters still believe there is an organized, well-funded plan to undermine him using paid fake protesters to make the anti-Trump movement look bigger than it is. They do not see the facts, or choose to ignore them.
The two examples above illustrate several key points about disinformation like fake news:
Much fake news is created for money.
Fake news is also created for political reasons.
Sensational stories are easier to “monetize” on the internet, as they attract more views and, thus, more advertisers.
People with ideological biases will embrace and spread fake news if it reinforces their views.
Conspiracy theories are easy to create and spread online.
People are willing to believe and spread fake news and conspiracies without checking the facts.
Go Deeper
Read The New York Times article about the actual people and events depicted in Example 2: Trick or tweet—fake news goes viral. (Source: Maheshwari, 2016)
Deepfakes—The Future of “Fake”?
The last section illustrated how simple it is to misinform people through social media. If it’s that easy to fake news and information online, you may find relief in the thought that at least video doesn’t lie—or does it? Video technology is now so sophisticated that virtually anyone can create fake videos, or “deepfakes,” that look convincing.
Deepfake videos can be used to entertain or to harm. It may be funny to see actor Nicolas Cage’s face on Lois Lane’s body. However, it is also disturbing to know that actors’ faces can be dropped into pornographic videos, as can the faces of regular people. In terms of politics, the implications for election disinformation are grave. Candidates can be “deepfaked,” appearing to say things that are highly offensive and damaging to their campaigns. Cybercriminals are also using deepfake technology to defraud businesses for huge sums of money. Their victims include large corporations with sophisticated security systems.
Watch the following videos to see how deepfakes work and the implications of this technology.
Video Description: This video by Bloomberg summarizes how deepfakes are made and the ways they can be used to harm individuals and public figures. It also describes the positive uses of this type of technology, like creating artificial voices for people who can’t speak due to injury or illness.
Video Description: This video by CNBC describes the danger of deepfake videos. This includes how they’ve been used for cybercrimes, political disinformation, and manipulating public opinion. The video explains the measures Google and Facebook are taking to try and identify deepfakes on their social media platforms.
Fake News and Video—Where Do We Go from Here?
By now you may be wondering how to spot fake news and video. Clearly, this type of disinformation can be very convincing. If you aren’t an expert on the topic, how can you tell if news and information you see online is real? What can you do to distinguish a conspiracy theory from a factual report? With their reputations at stake, social media giants like Facebook and Google are hurrying to develop software that will detect fake news, especially malicious and damaging articles and videos. However, relying solely on the platforms that spread fake news to find a solution is not an approach you can count on. Using your own critical thinking skills is the best defense against disinformation of any kind. Your brain is the best tech for spotting fake news. The next section has some tools to help you hone your fake-spotting skills.
Disinformation Online: How to Recognize It
The infographic below, from the International Federation of Library Associations and Institutions, outlines some strategies you can use when confronted with online news, video, or information. You’ll notice that some of these approaches require extra effort, perhaps following links or investigating sources. If you find yourself thinking you don’t have time or you can’t be bothered to do this fact-checking, stay flexible and critical. When you can’t rely on information, rely on yourself. Here are things to keep in mind:
Prioritize: You may not have time to fact-check everything you see on social media, but you can commit to investigating information when it is important to you or has an impact on you.
Be bias-smart: If you have a strong, immediate reaction to an online story, it may be triggering your unconscious biases. Don’t buy in. Refuse to be vulnerable to manipulation of your emotions and beliefs.
Keep a healthy skepticism: Your best defense against fake news is rational doubt. You may not have time to investigate, but you can maintain a position that you simply “don’t know” if something is true or not—a solid stance when you are unsure of the facts.
In many ways, deepfake videos are harder to spot than fake news, especially as the artificial intelligence software that creates them gets better every year.
Watch this video, produced by the US Public Broadcasting Service, for information and tips on spotting deepfakes when you’re watching video (Source: Above The Noise, 2019).
This video from the American Public Broadcasting Service explains how deepfake videos are made and what methods viewers can use to detect them.
Go Deeper
This article from The Guardian has more information and tips for spotting deepfake videos. (Source: Sample, 2020)
Activity—Challenge your fake-spotting skills
Take this quiz from the media learning site Channel One Media to see if you can spot the fake news story. (“Quiz: Can You Spot the Fake News Story?,” n.d.)
Big Data and Disinformation
In the first media module, you learned about “big data” and how it’s used by tech companies, advertisers, and others. Big data isn’t just used to sell you products. It is also used to sell you ideas. Collecting information about people’s behaviours reveals their ideologies and biases—and how to manipulate them. In the Brexit case study, you learned how English Facebook users were targeted for disinformation and fake news about leaving the European Union. Big data can be used to create disinformation for political purposes: to sway your opinions and even your vote.
The most famous case of big data being used for political purposes in recent years is Cambridge Analytica. Read Case Study #3, below, to find out how a massive leak of Facebook user information may have impacted the 2016 US election.
Case Study #3: Cambridge Analytica
How could research that had been used to predict and stop the recruitment of terrorists online influence a US election? In 2016, Cambridge Analytica, a UK political research firm, contributed to the campaign of Donald Trump by profiling and targeting Facebook users to sway votes. Watch the two videos to understand how this happened and how social media can be used to spread disinformation.
The first video, from The New York Times, explains how Cambridge Analytica used research on Facebook users and their contacts to manipulate their political views without their consent (Source: The New York Times, 2018).
The second video, from the Wall Street Journal, explains how Facebook made it easy for outside parties to misuse user information (Source: Wall Street Journal, 2018). This led to the Cambridge Analytica scandal and the privacy debates that have followed.
Research from Cambridge University showed that you could predict a lot about people, including their political views, using their Facebook pages. One Cambridge professor, Aleksandr Kogan, developed an app to gather this information from Facebook users and their contacts.
When Cambridge Analytica went into business with Kogan, they purchased his information on millions of Facebook users. They then looked for potential pro-Trump voters and targeted them with disinformation that promoted racist views and conspiracy theories. These were designed to make them “vote Trump.” Those Facebook users were unaware they were being politically manipulated.
In an interview for National Public Radio in the US, former research director at Cambridge Analytica Christopher Wylie explains why he risked his own career to expose his company:
They targeted people who were more prone to conspiratorial thinking. They used that data, and they used social media more broadly, to first identify those people, and then engage those people, and really begin to craft what, in my view, was an insurgency [uprising] in the United States. (Gross, 2019)
The actions of Cambridge Analytica created a huge controversy. Some key questions were asked:
Why was Facebook allowed to give away access to the personal information of millions of its users without them knowing about it?
Does social media need to be regulated to protect the privacy of its users?
How can we make social media less vulnerable to disinformation?
How can we make sure social media can’t be used to threaten democracy?
These issues are still being debated around the world.
After the Cambridge Analytica scandal, governments in the United States and Britain launched investigations into Facebook’s actions. Facebook had already changed its policies in 2015 to prevent “third party” companies from accessing its user profiles without consent. Still, this issue is not settled.
The moment any of us log on to the internet, we are tracked and profiled. Our information is then bought and sold. This kind of “competitive intelligence” (see below), as it is known in the business world, has blurred the line between what consumers are willing to share about themselves and their private information.
Competitive Intelligence
Competitive intelligence is a type of research done in the business world. The term “intelligence” refers to information businesses gather to better understand their customers, which is often purchased from third parties. Businesses also gather or purchase intelligence about their competition and other factors like the economy, all in an effort to be successful. Just as political parties profile voters by following their social media pages, businesses profile consumers to understand how they can better sell them products.
In the business world, consumer profiling is considered a necessary practice, especially since the internet has increased competition for sales. The debate about what businesses should be allowed to know about you is detailed in the video from The Guardian called “Big Data: Why Should You Care?” from the first media module. The problem Cambridge Analytica brought to light is privacy on social media. In particular, it poses the question: who owns your information, you or the platform? This is an issue for social media giants like Facebook. These companies wish to keep their platforms open and free for the public by monetizing their sites through advertising and selling information on their users. The downside is that this practice leaves social media open to be used by groups who post damaging disinformation.
Free access versus privacy—this may be the biggest issue facing social media in the 21st century. The question of consumer rights and privacy versus the use of consumer information by third parties is still being debated worldwide.
Go Deeper
Read this article from Futurity for an explanation of how “third parties” are watching us on the internet. (Source: Urton-Washington, 2016)
From what you’ve read so far, you may be thinking that spotting fake news, deepfakes or other types of disinformation is not easy. That’s true, but it’s worth doing. A quick Google search of someone you see in a video blog can tell you a lot about who they are and what they stand for—before you believe what they say. You may find out they aren’t very reliable or credible, and be glad you didn’t re-post their video! It’s okay to be unsure of information you find online—what’s not okay is to believe something just because it seems to make sense, or because it backs up your suspicions about something. You want to have opinions that are well-informed, not misinformed, and the only way to do that is to be willing to challenge your own assumptions, be willing to adjust or change your perspective, and not believe everything you see, read or hear online. Information that is balanced and accurate is necessary for you to understand your world! None of us can cast a meaningful vote in an election, understand the causes and solutions for social problems, or even communicate with our fellow human beings in a beneficial way if we can’t distinguish truth from lies. Disinformation is disrespect for you, the online, global citizen. Even the three words “I don’t know” will go a long way towards resisting it.
Summary
What do you know about social media and disinformation after reading this module? As great as social media can be for connecting people, it is also a means to influence people—even without their knowledge. We are profiled on social media. That “competitive intelligence” is sold and used. When distorted or untrue information is intentionally placed on social media to disrupt elections and major policy decisions, it threatens democracy. Disinformation, fake news and deepfakes are everywhere online. It’s up to you, the user, to look critically at the content you are seeing, check the facts, and resist being manipulated by your own biases. See the additional materials for ways that social media can be used to give communities a political voice, without the need for disinformation. The Social Action section later in this textbook will also provide more examples of social media being used for social good.
Key Concepts
accountable
To be responsible for actions and decisions and able to explain the reasons for them.
big data
“Extremely large data sets that may be analysed computationally [by computers] to reveal patterns, trends, and associations, especially relating to human behaviour and interactions” (“Big data,” n.d.).
clickbait
A headline designed to grab the attention of viewers and entice them to follow the link to its source.
competitive intelligence
“Competitive intelligence, sometimes referred to as corporate intelligence, refers to the ability to gather, analyze, and use information collected on competitors, customers, and other market factors that contribute to a business’s competitive advantage” (Bloomenthal, 2020).
conspiracy theory
“An attempt to explain harmful or tragic events as the result of the actions of a small, powerful group. Such explanations reject the accepted narrative surrounding those events; indeed, the official version may be seen as further proof of the conspiracy” (Reid, n.d.).
deepfake
“A term for videos and presentations enhanced by artificial intelligence and other modern technology to present falsified results. One of the best examples of deepfakes involves the use of image processing to produce video of celebrities, politicians or others saying or doing things that they never actually said or did” (“Deepfake,” n.d.).
democracy
On a basic level, it is the ability of citizens to participate in fair and open elections to choose their representatives in government. Another perspective argues that democracy must function beyond elections by involving citizens in ongoing government decisions that affect them.
disinformation
“Information that is false and deliberately created to harm a person, social group, organisation or country” (UNESCO, 2021).
European Union (EU)
The EU is an economic and political union involving 27 European countries (28 before the United Kingdom left in 2020). It allows free trade, which means goods can move between member countries with fewer restrictions or extra charges. The EU also allows free movement of people, to live and work in whichever EU member country they choose.
fake news
False stories, usually online, that seem like genuine news and can be used to sway the opinion of the viewer.
inequity
Lacking equity; unfair and unjust.
interest groups
Associations whose members share similar concerns and try to influence public policy to benefit themselves or their cause. Their goal could be a policy that benefits group members or one part of society (e.g. government subsidies for farmers) or a policy that has a broader public purpose (e.g. improving air quality). They attempt to achieve their goals by lobbying—which means applying pressure to the people who make the policies. Other names for interest groups are special interest groups or pressure groups (Thomas, 2017).
monetize
When applied to social media activity, to monetize is to generate revenue from web content, usually by attracting advertisers to the site.
nationalism
Refers to a set of shared values and myths of a nation or group. Nationalism can be political, cultural or racial. People who support a nationalist ideology believe their nation is superior to others. This can lead them to marginalize those who do not belong to the nation or group. They may even regard others as enemies and go to war or commit genocide under certain circumstances. Nationalists are inward looking and, therefore, opposed to internationalism or globalization unless it is favourable to their interests (Chet Singh, Centennial College).
participatory media
Media platforms where the audience plays an active role in collecting, reporting and sharing information.
policies
“A set of ideas or a plan of what to do in particular situations that has been agreed to officially by a group of people, a business organization, a government, or a political party” (“Policies,” n.d.).
profiling
Online “profiling” is collecting information about internet users by tracking their online behaviour, including which sites they visit, comments they post and purchases they make. This reveals their interests, preferences, opinions and biases, information that is valuable to both advertisers and political interest groups—including those that produce fake news.
public opinion
The opinion or attitude of the majority of people regarding a particular matter (“Public opinion,” 2020).
referendum
“A vote in which all the people in a country or an area are asked to give their opinion about or decide an important political or social question” (“Referendum,” n.d.).
right-wing/left-wing
Right-wing and left-wing represent contrasting approaches to political and social change. Left-wing views welcome change that will create more equitable conditions in society. They support a greater role for government and are collectivist—in other words, they give priority to the group over the individual. Social democrats and feminists would be considered to have left-wing ideologies. Right-wing thinking favours the individual over the group, and it sees equality as undesirable and unattainable. Right-wingers resist change and support the existing social order. They tend to believe in capitalism and that the government should not interfere in people’s lives. Conservatism and neoconservatism are examples of right-wing thinking (Chet Singh, Centennial College).
sensationalism
The use in media of shocking or exciting headlines and content to attract readers, with little or no regard for facts or accuracy. News that is sensationalized is designed to trigger emotion. This will often generate more interest than fact-based news that appeals to reason.
third party
In the world of online data, a third party is a company or organization that gathers or purchases information about online users, often without their knowledge or consent.
viral
“Spreading or becoming popular very quickly through communication from one person to another, especially on the internet” (“Viral,” n.d.).
Global Indigenous Example
IndigenousX is a website based in Australia that has become a hub for Indigenous peoples in the Pacific Rim and around the world. Indigenous Australians have used the online space to form effective partnerships in media and government, and create a unique online community.
While previous Indigenous media initiatives were unheard, we have grabbed the attention of key democratic institutions and decision-makers, who are becoming increasingly engaged with the proliferation of Indigenous voices enabled by participatory media. (“Our Story,” 2019)
Check out their website to see how IndigenousX is giving Indigenous Australians a powerful social and political voice (Source: IndigenousX Showcasing & Celebrating Indigenous Diversity, n.d.).
Global Citizenship Example
Watch the video from BBC Monitoring, called The Greta Generation – Youth Activism Around the World, to see how social media has become a powerful tool for young activists to join forces as global citizens (Source: BBC Monitoring, 2019).
The video describes how social media is the main tool young activists are using to bring a wide range of issues to a global audience. The video includes some of the challenges these activists face when they put their cause online.
Social Analysis Example
How to Identify Ideology
A good way to start a social analysis on the ideology behind fake news is to make a list of questions:
Who is producing fake news—is there a pattern in their beliefs or agendas?
Who consumes and spreads fake news—do they have similar values?
What political parties benefit the most from fake news—what do they stand for?
What kinds of topics appear in fake news—do they reveal a bias?
Who benefits from fake news—is it a certain political party, candidate, or agenda?
Who is attacked by fake news—do they tend to share certain identities, cultures, or values?
Why would people believe fake news—how does it confirm beliefs they already have?
Start with a basic search—for example, on ideology and fake news—to get a general overview. Then, start searching more specific questions, like the ones above, to get a deeper understanding.
Licenses
Social Media and Disinformation in Global Citizenship: From Social Analysis to Social Action (2021) by Centennial College, Paula Anderton is licensed under a Creative Commons Attribution Non-Commercial Share-Alike License (CC BY-NC-SA 4.0) unless otherwise stated.
“Fake news is about to get so much more dangerous” was originally published on 6 September 2018 in the opinion section of the Washington Post and has been republished with the permission of the author, Thomas Kent, president and chief executive of Radio Free Europe/Radio Liberty. Kent, T. (2021, February 15). Fake news is about to get so much more dangerous. Ethical Journalism Network. https://ethicaljournalismnetwork.org/fake-news-more-dangerous