

Information Distortions on the Internet

Written on 07 October 2016

by Ruth Fisher, PhD

“Honest” Distortions of Information on the Internet

Not-So-Honest Distortions of Information on the Internet

Propagation on the Internet Promotes Distortion of Information

Governments Use the Internet to Spread Propaganda and Misinformation

Defenses Against Information Distortion

Consequences of Information Distortion

 

 

We all know there’s a lot of misinformation on the web. I started reading about this, and I soon discovered that the subject is a lot more complex than I had initially thought. There are two issues that I found particularly interesting:

1. The distinction between “honestly” inaccurate or manipulated information and purposely inaccurate or manipulated information; and

2. The dynamic surrounding how information becomes distorted as it passes from user to user on the Internet.

This analysis discusses (i) each of these two issues, (ii) defenses against being a victim of misinformation, and (iii) consequences of the increasing prevalence of misinformation on the Internet.

I.  “Honest” Distortions of Information on the Internet

I define “honest” distortions of information on the Internet as distortions arising from processes undertaken with positive or innocuous intentions, but with the unintended consequence of creating distorted views of the issues at hand.

A.  Algorithms Choose What We See

Computer algorithms are increasingly responsible for deciding what information we see on the Internet. Algorithms are generally designed to sort through the massive amounts of information available and provide each user with the information that will be most useful to that particular user. Algorithms tailor information to users by drawing upon users’ browsing histories. While I believe the intent is generally laudable, the reality is that the resulting information feeds give users a false sense that the majority of information, attitudes, and beliefs present in the world are consistent with those of the user.

In “Algorithmic culture. ‘Culture now has two audiences: people and machines,’” Giuseppe Granieri refers to the phenomenon of having algorithms curate information for users as the “algorithmic culture.” He further notes that, due to feedback effects, the algorithmic culture leads to what is often called “personalization”: the tendency to reinforce, rather than challenge, current beliefs.

My preferred phrase is “algorithmic culture,” which I use in the first instance to refer to the ways in which computers, running complex mathematical formulae, engage in what’s often considered to be the traditional work of culture: the sorting, classifying, and hierarchizing of people, places, objects, and ideas. The Google example from above illustrates the point, although it’s also the case elsewhere on the internet. Facebook engages in much the same work in determining which of your friends, and which of their posts, will appear prominently in your news feed. The same goes for shopping sites and video or music streaming services, when they offer you products based on the ones you (or someone purportedly like you) have already consumed.

What’s important to note, though, is the way in which algorithmic culture then feeds back to produce new habits of thought, conduct, and expression that likely wouldn’t exist in its absence— a culture of algorithms, as it were. The worry here, pointed out by Eli Pariser and others, is that this culture tends to reinforce more than it challenges one’s existing preferences or ways of doing things. This is what is often called “personalization,” though Pariser calls it a “you loop” instead. By the same token, it is possible for algorithmic systems to introduce you to cultural goods that you might not have encountered otherwise. Today, culture may only be as good as its algorithms.

As more and more of the information shown to us is curated to conform with our current beliefs, we are lured into a false sense that the scope of information that exists is much narrower than it actually is. NPR emphasizes this point in “The Reason Your Feed Became An Echo Chamber — And What To Do About It”:

"What most algorithms are trying to do is to increase engagement, increase the amount of attention you're spending on that platform," he says. And while it's nice that we have an instrument to help us cope with the fire hose of information supplied by the Internet, that algorithm also carries some downsides. "The danger is that increasingly you end up not seeing what people who think differently see and in fact not even knowing that it exists."

It's what Pariser calls a "filter bubble."
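
To make the feedback loop concrete, here is a minimal Python sketch of an engagement-driven feed. Everything in it is assumed for illustration: items are reduced to a single “viewpoint” number in [-1, 1], the user has a fixed leaning of 0.6, and the ranker simply favors items close to the average of the user’s past clicks. It is a toy version of the “you loop,” not any platform’s actual ranking code.

```python
import random

random.seed(42)

def rank_feed(items, click_history):
    """Score items by closeness to the average viewpoint of past clicks."""
    if not click_history:
        return sorted(items, key=lambda v: random.random())  # no history yet: random feed
    center = sum(click_history) / len(click_history)
    return sorted(items, key=lambda v: abs(v - center))

click_history = []
for step in range(10):
    inventory = [random.uniform(-1, 1) for _ in range(100)]  # everything published today
    feed = rank_feed(inventory, click_history)[:10]          # the user only sees the top 10
    clicked = min(feed, key=lambda v: abs(v - 0.6))          # user clicks whatever best fits a 0.6 leaning
    click_history.append(clicked)
    print(f"step {step}: feed viewpoint spread = {max(feed) - min(feed):.2f}")
```

Within a few steps the feed’s viewpoint spread collapses from nearly the full range of opinion to a narrow band around the user’s leaning, even though no one at any point intended to censor anything.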

B.  Confirmation Bias Narrows Our Scope and Promotes Divisiveness

I have come to believe that one of the stronger psychological forces that shape our actions is “cognitive dissonance,” which Wikipedia describes as follows.

In psychology, cognitive dissonance is the mental stress or discomfort experienced by an individual who holds two or more contradictory beliefs, ideas, or values at the same time; performs an action that is contradictory to one or more beliefs, ideas, or values; or is confronted by new information that conflicts with existing beliefs, ideas, or values.

Leon Festinger's theory of cognitive dissonance focuses on how humans strive for internal consistency. An individual who experiences inconsistency (dissonance) tends to become psychologically uncomfortable, and is motivated to try to reduce this dissonance—as well as actively avoid situations and information likely to increase it.

Cognitive dissonance leads people to engage in confirmation bias. Wikipedia describes “confirmation bias” as

the tendency to search for, interpret, favor, and recall information in a way that confirms one's preexisting beliefs or hypotheses, while giving disproportionately less consideration to alternative possibilities. It is a type of cognitive bias and a systematic error of inductive reasoning. People display this bias when they gather or remember information selectively, or when they interpret it in a biased way. The effect is stronger for emotionally charged issues and for deeply entrenched beliefs. People also tend to interpret ambiguous evidence as supporting their existing position.

In short, people feel anxious and uncomfortable when faced with contradictory information or beliefs. To reduce this anxiety and stress, people seek out information that confirms what they already believe. At the same time, they tend to ignore or discredit information suggesting that their beliefs may be wrong. This tendency toward confirmation bias leads people to surround themselves with people and sources who believe the same things they believe. That is, we narrow the scope of information to which we choose to expose ourselves, leading us to dwell in echo chambers. Being caught in echo chambers breeds overconfidence that our views represent the dominant, or otherwise correct, views.

Much has been written about the dangers of confirmation bias for Internet users.

Philip Kim, in “Confirmation Bias on the Internet: You Are Deceiving Yourself,” explains how confirmation bias leads people to end up in echo chambers.

But the consequences of confirmation bias extend past the creation of false narratives and a narrow-minded perception of information. Users will often congregate in homogeneous, polarized clusters known as “echo chambers.” These communities reinforce confirmation bias by compromising on the quality of information and proliferating “biased narratives fomented by unsubstantiated rumors, mistrust, and paranoia.”

In an extreme scenario, in which everyone has joined a gated community of like-minded people, the global forum no longer exists. Users only leave their echo chambers to diffuse information that supports their individual beliefs.

Tomas Chamorro-Premuzic, in “How the web distorts reality and impairs our judgement skills,” notes that confirmation bias leads people to be more prejudiced and less creative.

Consider the case of confirmation biases, a well-known psychological tendency where individuals unconsciously misperceive or distort new information to support their current beliefs or attitudes on a subject. In the words of Warren Buffett: "What the human being is best at doing is interpreting all new information so that their prior conclusions remain intact."

Proving ourselves right

Given that it is impossible to attend to even a fraction of the information that is available on the web, most individuals prioritise information that is congruent with their current values, simply ignoring any discrepant information. Recent studies show that although most people consume information that matches their opinions, being exposed to conflicting views tends to reduce prejudice and enhance creative thinking. Yet the desire to prove ourselves right and maintain our current beliefs trumps any attempt to be creative or more open-minded. And besides, most people see themselves as open-minded and creative, anyway.

In “A Nation Of Echo Chambers: How The Internet Closed Off The World,” Will Leitch describes how confirmation bias has led us to descend into a dystopian world, in which you are on the right side, while anyone who disagrees with you is on the side of the devil.

We now only have to interact with people who agree with us; if I use Twitter as my primary news source, as so many people do, I can carefully curate my feed to exclude anyone who disagrees with me about anything... But this is now accepted public policy. You don't have to find anyone to contradict you, if you don't want to.

This isn't just common practice now: This is how you win. The entire strategy for succeeding at anything, whether it's winning elections, selling a product or attracting visitors for your Website, revolves around pitching yourself as loudly as you can to those people on your side and turning those who disagree with you into the worst version of themselves, demonizing them into something subhuman and venal. Nuance is tossed out, even if you know a situation is desperately nuanced, in favor of quick points and splash; we've all become the New York Post.

This is simply how communication is done now. The idea of unifying anyone on anything is passé, old thinking, a waste of time. A horrible tragedy happens, and your first reaction, rather than taking a moment to mourn or quietly search for some grace and peace, is instead to start screaming and claiming that those with whom you disagree have blood on their hands. You are rewarded with this by the top slot on the news, a video that goes viral, and everyone on your side applauding you. And when you accept that's all you want to do— to turn away from the fundamental complexity at the heart of the human experience—you find you have no reason to return: After all, every time you say something loudly and strongly enough, the people who agree with you tell you how great you are. Those who disagree? Fuck the haters. Sic 'em, guys.

In “How Social Media Created an Echo Chamber for Ideas,” Orion Jones restates the idea expressed by Will Leitch that communication on the Internet has led people to be more extreme and divisive in their views.

… [S]ociologists have concluded that social media often entrench people's ideological positions and even make those positions more extreme. Witness the age of a bitterly divided America.

Harvard law professor Cass Sunstein has studied this phenomenon at length, finding that deliberation among a group of likeminded people moves the group toward a more extreme point of view. 

"The mere discussion of, or deliberation over, a certain matter or opinion in a group may shift the position of the entire group in a more radical direction. The point of view of each group member may even shift to a more extreme version of the viewpoint they entertained before deliberating."

C.  Social Media Promotes Information Distortions

People have a natural tendency to want to show the best of themselves to others. As a result, we tend to choose only the most favorable information and Photoshopped pictures to post on social media. However, by presenting this distorted image of ourselves, we lead other people to think that we are happier, more beautiful, or more successful than we really are. And since we also have a natural tendency to compare ourselves to others, constantly seeing other people who seem to be happy, beautiful, and successful makes us feel depressed about our own lives.

In “Does Facebook Make You Depressed?” Dr. Perpetua Neo notes

Someone once wrote me that scrolling through Facebook on a Friday afternoon made him feel low throughout the weekend. Everyone else seemed to be having so much fun, it made him “feel like a loser.”

Danielle Chabassol describes this phenomenon in “Using Social Media Distorts Our Perception of Reality.”

When we decide to share something on social media, we naturally want people to see it, so we end up choosing the most flattering / exciting / out-of-this-world photo or video because those will get the most attention.



A similar thing happens when we watch or look at other people’s posts and only see the best version of their lives. We perceive that their lives are better than ours, which further reinforces our discontent with reality.

 

II.  Not-So-Honest Distortions of Information on the Internet

I define “not-so-honest” distortions of information on the Internet as distortions created by information providers with the specific intent of misleading or manipulating readers or otherwise providing a biased view of the issues at hand.

A.  Bias and Propaganda by the Mass Media

US mass media has consolidated to the point that five or six corporations now control 90 percent of US mass media. Figure 1 presents the holdings of the mega-conglomerates. Note that CBS was acquired by Viacom in 1999, then split off again in 2005, but the two corporations are currently considering merging again.

Figure 1: The Big Six media conglomerates and their holdings

RT News reports on the extent of consolidation of the mass media in “How five American companies control what you think”:

Immediately after World War II three out of four US newspapers were independently owned. But the media-control numbers have been shrinking ever since then due to mergers, acquisitions, and other processes. By 1983, 50 corporations controlled 90 percent of US media. But today just five giant conglomerates control 90 percent of what most Americans read, watch, and listen to.

Consolidation of mass media ownership greatly facilitates the ability of corporate heads to coordinate their actions, so as to purposely shape people’s perceptions of world events in certain ways. Consider the following example, taken from RT News:

… [C]onsider Islamic fundamentalists in Afghanistan. For years the US government supported them with weapons and training and portrayed them as 'freedom fighters' against their secular 'socialist government' and the 'Russian occupation'. The media for the most part went along with this narrative.

But then, after 9/11, in the twinkling of an eye, the fundamentalists became (in the eyes of the government and the conglomerates) 'medievalists, 'oppressors of women,' and harborers of 'terrorism' who must be eliminated via a US invasion.

Recently, the US government, unable after ten years of military occupation to eliminate the Taliban resistance, has again changed course, and is seeking negotiations with the Taliban to include them in the Afghani government. And again the five conglomerates have also changed course to follow the government.

Another example of how coordination of media can provide the people with biased reporting is the case of the Brexit referendum. The coverage of the entire event, before, during, and after the vote, was biased in favor of Remain. Readers were led to believe that those in favor of Remain, the young, urban, educated, and successful portion of the citizenry, were assured of a win. Those in favor of Leave were portrayed as uneducated, xenophobic racists (see, for example, Toby Young, “Voting Remain is an act of heartless snobbery”). In “The Media’s Disgraceful Brexit Meltdown,” Charles C. W. Cooke provides a scathing piece on how the mass media provided hideously partial reporting of the results and aftermath of the Brexit vote.

It has proven difficult to count the number of ways in which the press has blown this story, so I will focus on just two of the many crucial errors that have caught my attention.

The first is the press’s peculiar belief that the “Leave” side won because its voters are stupid and impetuous, and because they don’t know what’s good for themselves…



The press’s second big mistake has been to buy into the absurd idea that the British voted to leave the EU because they hate immigrants or non-white people.

It’s probably no coincidence that as mass media has consolidated, public trust in mass media has dropped, reaching a record low of 32 percent in 2016 (see Figure 2). From Art Swift, “Americans' Trust in Mass Media Sinks to New Low”:

Americans' trust and confidence in the mass media "to report the news fully, accurately and fairly" has dropped to its lowest level in Gallup polling history, with 32% saying they have a great deal or fair amount of trust in the media. This is down eight percentage points from last year.

Gallup began asking this question in 1972, and on a yearly basis since 1997. Over the history of the entire trend, Americans' trust and confidence hit its highest point in 1976, at 72% ...

Figure 2: Americans' trust in the mass media (Gallup)

B.  Bad Journalism and Contentious Reporting

I mentioned above how the mass media has consolidated tremendously, thereby facilitating the ability of corporate heads to coordinate their actions so as to shape people’s perceptions of world events. With the advent of the Internet, however, not only has the media become biased in the eyes of society, but reporting has also become less accurate and more contentious.

i.  Decreased Accuracy

In “Times have changed and technology has changed,” Erin Gray reports how news content on the Internet has become less accurate as a result of sloppy reporting. Inaccurate reporting has become more prevalent as news cycles have shortened and reporters have, accordingly, rushed to post content.

News content on the web versus news content in the paper vary in a way that web content has to try harder to keep a reader’s attention more than print paper does. Since more people are switching to the web for news, the battle no longer lies between just print and web –there is now an additional competition between desktop and mobile. As Freedman said, misinformation can spread far and fast with social media. When millennials and even some older generations look on social media or the web for news updates throughout the day, it puts a pressure on journalists to get breaking news out there first. The idea of accuracy is falling in between the cracks a little bit because of this race to get the clicks first.

ii.  Incomplete Reporting

“Close-hold embargoes” are limits placed by an entity announcing information (e.g., in a press release) on which sources the attending journalists are allowed to consult and on the timing of information releases by which journalists must abide. Charles Seife provides more information on information embargoes in “How the FDA Manipulates the Media.”

The embargo is a back-room deal between journalists and the people they cover — their sources. A source grants the journalist access on condition that he or she cannot publish before an agreed-on date and time.

A surprisingly large proportion of science and health stories are the product of embargoes. Most of the major science journals offer reporters advance copies of upcoming articles — and the contact information of the authors — in return for agreeing not to run with the story until the embargo expires.

Journalists agree to the restrictions placed on them by the reporting entities because they want access to the information the entities are providing.

Charles Seife indicates that the use of such embargoes seems to be increasing.

Despite the difficulty of measuring the use of close-hold embargoes, Oransky and Kiernan and other embargo observers agree that they—and other variations of the embargo used to tighten control over the press—appear to be on the rise. And they have been cropping up in other fields of journalism, such as business journalism as well. “More and more sources, including government sources but also corporate sources, are interested in controlling the message, and this is one of the ways they're trying to do it,” says the New York Times' Sullivan.

iii.  Focus on Contentious Information

The more contentious information is, the more likely it is to be shared. This promotes uncivilized and divisive information flows over more civilized and productive discourse. The CGP Grey video cited in Section III below provides a wonderful illustration of how this dynamic works in practice. In “The Toxoplasma of Rage,” Scott Alexander further explains the dynamic surrounding how society has become polarized as information flows have become more contentious:

Race relations are at historic lows not because white people and black people disagree on very much, but because the media absolutely worked its tuchus off to find the single issue that white people and black people disagreed over the most and ensure that it was the only issue anybody would talk about. Men’s rights activists and feminists hate each other not because there’s a huge divide in how people of different genders think, but because only the most extreme examples of either side will ever gain traction, and those only when they are framed as attacks on the other side.

People talk about the shift from old print-based journalism to the new world of social media and the sites adapted to serve it. These are fast, responsive, and only just beginning to discover the power of controversy. They are memetic evolution shot into hyperdrive, and the omega point is a well-tuned machine optimized to search the world for the most controversial and counterproductive issues, then make sure no one can talk about anything else. An engine that creates money by burning the few remaining shreds of cooperation, bipartisanship and social trust.

C.  Purposely Manipulated Information

i.  Facebook’s Manipulation of Content

A while back, Facebook was found to have conducted an experiment to see if it could influence users’ moods through the selection of content to which they were exposed. Alex Birkett provides more detail in “Online Manipulation: All The Ways You’re Currently Being Deceived”:

We all (hopefully) remember the Facebook scandal, where Facebook manipulated the content seen by more than 600,000 users in an attempt to see if they could affect their emotional state. They basically skewed the number of positive or negative items on random users’ news feeds and then analyzed these people’s future postings. The result? Facebook can manipulate your emotions.

More recently, Facebook claimed that the news stories it showed its users were selected by algorithms designed to curate information of interest to each particular reader, based on that reader’s history of Facebook activity. However, it was later discovered that readers were purposely provided news with a liberal slant. As Michael Nunez reports in “Former Facebook Workers: We Routinely Suppressed Conservative News”:

Facebook workers routinely suppressed news stories of interest to conservative readers from the social network’s influential “trending” news section, according to a former journalist who worked on the project.



In other words, Facebook’s news section operates like a traditional newsroom, reflecting the biases of its workers and the institutional imperatives of the corporation. Imposing human editorial values onto the lists of topics an algorithm spits out is by no means a bad thing—but it is in stark contrast to the company’s claims that the trending module simply lists “topics that have recently become popular on Facebook.”

ii.  Google’s Manipulation of Search Results

What’s more, it has been discovered that Google has been biasing its search results in favor of Hillary Clinton. Jack Hadfield provides more details on this in “Report: Google Search Bias Protecting Hillary Clinton Confirmed in Experiment”:

In a report published by Sputnik News, psychologist Robert Epstein reveals evidence that Google is manipulating search results related to Hillary Clinton that may “shift as many as 3 million votes” in the upcoming presidential election.

To his credit, Robert Epstein did indicate:

In a postscript, Epstein reveals that he is, in fact, a Clinton supporter himself, but concludes that he does not believe it would be “right for her to win the presidency because of the invisible, large-scale manipulations of a private company. That would make democracy meaningless.”

Imagine how much more power Google and other entities will have to manipulate users when assorted devices from the Internet of Things start streaming all sorts of physical, emotional and product usage data from users that can be used to better manipulate them.

iii.  Journalists’ Manipulation of News Reports

As noted in the section above, Bias and Propaganda by the Mass Media, the media have been introducing bias into their news reports. In “Unbiased computer confirms media bias,” Rachel Ehrenberg describes how reporters introduce bias into their news reports through their choices of which quotes to include:

Scientists developed an algorithm that, after churning through more than 200,000 quotes from 275 news outlets, discovered bias in their quote choice. Creating a graph that grouped media outlets by their selected quotes reveals pockets that pretty accurately reflect the political leanings of the outlets. The research suggests that information about an outlet’s political stripes is embedded in quote choice, surrounding context aside.

“Readers might experience very different personalities of the same politician, depending on what the news outlets they follow choose to quote from that politician’s public speeches,” says Cornell computer scientist Cristian Danescu-Niculescu-Mizil, a coauthor of the study... Even though readers are exposed to the politician’s own words, “the part of the speech the reader has access to changes,” he says.
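
The study itself used a large corpus and more sophisticated methods than I can reproduce here. The Python sketch below, with made-up outlets and quote IDs, only illustrates the underlying intuition: if quote choice encodes political leaning, then outlets that run overlapping sets of quotes should end up grouped together.

```python
# Toy illustration: outlets are represented by the set of quote IDs they ran.
# The data and the 0.4 threshold are invented for illustration; this is a
# hand-rolled sketch, not the researchers' actual method.

quotes_run = {
    "OutletA": {1, 2, 3, 4},
    "OutletB": {1, 2, 3, 5},
    "OutletC": {6, 7, 8, 9},
    "OutletD": {6, 7, 8, 2},
}

def jaccard(a, b):
    """Overlap between two outlets' quote choices, from 0 (none) to 1 (identical)."""
    return len(a & b) / len(a | b)

# Greedy single-link grouping: an outlet joins a cluster if it overlaps
# strongly with any member; otherwise it starts a new cluster.
clusters = []
for outlet in quotes_run:
    for cluster in clusters:
        if any(jaccard(quotes_run[outlet], quotes_run[m]) > 0.4 for m in cluster):
            cluster.append(outlet)
            break
    else:
        clusters.append([outlet])

print(clusters)  # [['OutletA', 'OutletB'], ['OutletC', 'OutletD']]
```

Even in this tiny example, the grouping recovers the two “camps” from quote choice alone, without ever looking at the surrounding editorial text.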

iv.  Manipulation through Placement of Text

In “Surveillance-based manipulation: How Facebook or Google could tilt elections,” Bruce Schneier describes more generally how companies can use such methods as (i) strategic product placement, (ii) frequency of product views, and (iii) amplification of favorable views while dampening adversarial views to manipulate public opinion. He gives an example of how Facebook might sway an election by strategically placing an “I Voted” button on certain users’ pages.

During the 2012 election, Facebook users had the opportunity to post an “I Voted” icon, much like the real stickers many of us get at polling places after voting. There is a documented bandwagon effect with respect to voting; you are more likely to vote if you believe your friends are voting, too. This manipulation had the effect of increasing voter turnout 0.4 percent nationwide. So far, so good. But now imagine if Facebook manipulated the visibility of the “I Voted” icon based on either party affiliation or some decent proxy of it: ZIP code of residence, blogs linked to, URLs liked, and so on. It didn’t, but if it did, it would have had the effect of increasing voter turnout in one direction. It would be hard to detect, and it wouldn’t even be illegal. Facebook could easily tilt a close election by selectively manipulating what posts its users see. Google might do something similar with its search results.
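
Schneier’s 0.4 percent figure makes it easy to see why selective display matters. The back-of-the-envelope Python sketch below uses that figure; every other number (electorate size, baseline turnout, the definition of a “close” margin) is a hypothetical round figure chosen only for illustration.

```python
# Back-of-the-envelope arithmetic for Schneier's scenario.
# Only the 0.4% turnout lift comes from the text; the rest is hypothetical.

electorate    = 10_000_000   # eligible users in some state (assumed)
base_turnout  = 0.60         # baseline turnout (assumed)
lift          = 0.004        # documented "I Voted" bandwagon effect
party_a_share = 0.50         # electorate split evenly (assumed)

# Shown to everyone, the lift raises both sides equally and roughly cancels.
# Shown only to Party A sympathizers, only their turnout rises:
extra_a_votes = electorate * party_a_share * lift
print(f"extra Party A votes:      {extra_a_votes:,.0f}")   # 20,000

# Compare with a close race, say one decided by 0.5% of votes cast (assumed):
votes_cast   = electorate * base_turnout
close_margin = votes_cast * 0.005
print(f"a 0.5% margin amounts to: {close_margin:,.0f} votes")  # 30,000
```

The one-sided nudge is of the same order of magnitude as the margin in a genuinely close race, which is Schneier’s point: the effect is material, nearly undetectable, and not illegal.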

v.  Use of Dark Patterns

From DarkPatterns.org, “Dark Patterns: fighting user deception worldwide”:

A Dark Pattern is a user interface that has been carefully crafted to trick users into doing things, such as buying insurance with their purchase or signing up for recurring bills.

Normally when you think of “bad design”, you think of the creator as being sloppy or lazy but with no ill intent. This type of bad design is known as a “UI anti-pattern.” Dark Patterns are different – they are not mistakes, they are carefully crafted with a solid understanding of human psychology, and they do not have the user’s interests in mind. We as designers, founders, UX & UI professionals and creators need to take a stance against Dark Patterns.

DarkPatterns.org describes 14 different kinds of dark patterns:

  1. Bait and Switch
  2. Disguised Ads
  3. Faraway Bill
  4. Forced Continuity
  5. Forced Disclosure
  6. Friend Spam
  7. Hidden Costs
  8. Misdirection
  9. Price Comparison Prevention
  10. Privacy Zuckering
  11. Roach Motel
  12. Road Block
  13. Sneak in the Basket
  14. Trick Questions

vi.  Use of Blackhat Copywriting

Alex Birkett provides specific examples of copywriting manipulations.

… [H]ere are three very specific, and not often talked about, copywriting manipulations:

1. Testiphonials [Fake testimonials]

2. False Scarcity [Claiming an item is about to sell out]

3. The Damning Admission [Admitting something bad about your product to provide users with a false sense of integrity]

vii.  Use of Manipulative Defaults

Content providers can easily manipulate users through their choice of default options. People are notorious for picking the first option given to them, or for simply not making any selection and thereby ending up with the default option. Perhaps one of the most insidious practices is for providers to set the default option to “continue to subscribe,” which means users often continue to get billed for products or services they do not want, as the sketch below illustrates.
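
A back-of-the-envelope Python sketch shows why the default choice is so lucrative for the provider. The 70 percent “inertia” rate and 40 percent genuine-demand rate below are assumptions chosen for illustration, not measured values.

```python
# Toy model of default effects: inert users take whatever the default is;
# active users choose what they actually want. All rates are assumptions.

users        = 100_000
inertia      = 0.70   # fraction who never change a preselected option (assumed)
want_to_stay = 0.40   # fraction who genuinely want the subscription (assumed)

def renewals(default_is_renew: bool) -> int:
    inert  = users * inertia
    active = users * (1 - inertia)
    from_default = inert if default_is_renew else 0   # inert users follow the default
    from_choice  = active * want_to_stay              # active users follow their preference
    return int(from_default + from_choice)

print("default = renew: ", renewals(True))    # 82,000 renewals
print("default = cancel:", renewals(False))   # 12,000 renewals
```

Under these (hypothetical) numbers, simply flipping the default multiplies renewals nearly sevenfold, with most of the difference coming from people being billed for something they never chose.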

 

III.  Propagation on the Internet Promotes Distortion of Information

Because information passes from one Internet user to the next – via postings to social media, commentary on blogs, etc. – much of the misinformation that makes it onto the Internet (that is, information that was either distorted when originally posted or became distorted as it moved across the Internet) tends to propagate fairly quickly. To the extent that (i) misinformation is more sensational than true information, and (ii) people tend to pass on more sensational information, misinformation will propagate faster and further than true information (see the section above on Focus on Contentious Information for support for this claim).
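
A toy branching-process simulation in Python illustrates the point. The share probabilities and audience sizes below are illustrative assumptions, not measured values; what matters is that a small edge in “shareability” is the difference between an item that dies out and one that keeps spreading.

```python
import random

random.seed(1)

def reach(share_prob, followers=20, generations=6):
    """Each sharer exposes `followers` people; each exposed person
    re-shares with probability `share_prob`. Returns total people reached."""
    sharers, total_reached = 1, 0
    for _ in range(generations):
        exposed = sharers * followers
        total_reached += exposed
        sharers = sum(1 for _ in range(exposed) if random.random() < share_prob)
    return total_reached

# With 20 followers each, a 4% share rate gives a branching factor of 0.8
# (the item dies out), while 7% gives 1.4 (the item keeps growing).
print("sober item   (share prob 4%):", reach(0.04))
print("sensational  (share prob 7%):", reach(0.07))
```

The sensational item crosses the critical threshold where each wave of sharers produces a larger one, so its reach compounds instead of fizzling, which is one way to see why the more lurid version of a story so often outruns the accurate one.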

In “How social media can distort and misinform when communicating science,” Jacob Groshek relates how this phenomenon applies in particular to scientific information:

Millions of Americans shape their ideas on complex and controversial scientific questions … based on what they see on social media. Even many traditional news organizations and media outlets report incomplete aspects of scientific studies, or misinterpret the findings and highlight unusual claims. Once these items enter into the social media echo chamber, they’re amplified. The facts become lost in the shuffle of competing information, limited attention or both.

Jacob Groshek goes on to describe how the process of trying to simplify science for the masses can lead to the generation and propagation of misinformation:

University of Alberta law and public health professor Tim Caulfield … actively works to diminish the phenomenon he calls “scienceploitation.” He defines the term as when media reporting takes a legitimate area of science and inaccurately simplifies it for the general public.

Scienceploitation is embodied in especially egregious “click-bait” headlines. Think the Huffington Post erroneously equating a glass of red wine to an hour at the gym, or the viral hoax study that linked eating chocolate with losing weight.

I find the idea of information spreading like a virus, mutating as it spreads, to be a particularly apt one. This idea is presented wonderfully in CGP Grey’s video “This Video Will Make You Angry.” Kristine de Valck also uses the virus analogy in “Word of mouth on the Internet: Being aware of message distortion”:

… [D]igital word of mouth does not simply increase or amplify marketing messages but instead systematically modifies them during their integration. “Existing theories have a truncated view of word of mouth. The concept of viral marketing is based on a misconception, where a message that integrates into a market spreads, intact, like a virus,” says Kristine De Valck. “We have shown that this is not what happens: Bloggers adapt what they say to their own narrative and the community to which they belong; they transform the marketer's message to suit their own identity (or that of the character they have created online) as well as their audience and their network.”

The World Economic Forum uses a similar analogy to describe the spread of misinformation on the Internet: that of a wildfire. Robin Andrews provides more details in “How Misinformation Spreads On The Internet”:

A claim, whether it is substantiated or not, is given credence in the mind of an individual if the surrounding society deems it acceptable. This is known as confirmation bias, and this study shows that the phenomenon is just as prevalent in online communities as it is in physical ones. In the case of misinformation, this is incredibly dangerous – so much so that the World Economic Forum has declared its online spread, a form of “digital wildfire,” one of the main threats to global society.

 

IV.  Governments Use the Internet to Spread Propaganda and Misinformation

A.  Nationalistic Propaganda

Government dissemination of nationalistic propaganda is nothing new. However, the Internet affords governments unparalleled abilities to propagate such information to their citizens. In “From Britain to Beijing: how governments manipulate the internet,” Maeve Shearlaw provides examples of specific campaigns by governments to influence citizens’ opinions.

B.  Misinformation

Government dissemination of misinformation, particularly through “covert” organizations, is also nothing new. Such ploys are part of the standard international geopolitical game.

 

V.  Defenses Against Information Distortion

So now that we see all the ways that information we find on the Internet may become distorted, what can we do about it? How do we determine whether or not the story we’re getting is true and accurate? Fortunately, there are, in fact, several defenses we can invoke.

A.  Be Aware and Be Skeptical

Perhaps one of the best defenses against becoming a victim of information manipulations or distortions is to know that they exist and understand how they work. Having this knowledge empowers users to be skeptical about the information they are given.

In “Manipulation of Information,” Media-Youth.org advocates staying alert to why content providers use certain text and design choices on their websites.

When considering how language can manipulate we have to look at the following: organization of the text, fonts, colours, graphic design, use of language (e.g. use of more figurative subjective or emotional expressions vs. use of objective matter-of-fact language). We have to always ask ourselves questions such as: why had the author put this in so small letters and that in letters so much bigger? Reading and thinking a bit more deeply will help you learn reading between the lines and avoid being tricked by manipulated texts.

And David Dunning, in “How misinformation on the internet is making us dumber,” advises people to play devil’s advocate with the information they’re given to test out alternative theories.

... [B]e a skeptic. Psychological research shows that groups designating one or two of its members to play devil’s advocates – questioning whatever conclusion the group is leaning toward – make for better-reasoned decisions of greater quality.

If no one else is around, it pays to be your own devil’s advocate. Don’t just believe what the Internet has to say; question it.

B.  Seek Alternative Viewpoints

Perhaps the most obvious way to keep from falling victim to excessive confirmation bias, or otherwise becoming immersed in echo chambers, is to purposely seek out perspectives that differ from your own. Pursuing alternative viewpoints, and really trying to assess those perspectives, will go a long way towards helping us better understand the merits of our original beliefs. And who knows? Some of us might even change our minds.

From NPR, “The Reason Your Feed Became An Echo Chamber — And What To Do About It”:

"You would think that social media would bridge a bunch of divides, right?" Demby said. "But maybe the ideal way these conversations need to happen is one-on-one with people who are equally vested in the relationship between the two people."

"So for me, one of the best things has been actually seeking out and finding folks who don't think like me who I'm genuinely interested in, as people and thinkers."

From Will Leitch, “A Nation Of Echo Chambers: How The Internet Closed Off The World”:

So one tries to find hope.

I tend to find it outside, where people, you know, are. Because we drop this act during those strange, disorienting times when we find ourselves in mixed company, lo, real life... In regular, everyday life, we accept all the time that those who disagree with us exist; sometimes we even like them.

The world is uncertain and terrifying; life is hard and bewildering and unpredictable. We allow for this in our daily interactions in a way we do not in our virtual conversations.

From Orion Jones, “How Social Media Created an Echo Chamber for Ideas”:

Thus individuals, companies, and other organizations looking to understand the world in new ways must develop tools to deliberately include opinions that they do not agree with.

C.  Go Back to the Original Source

The CGP Grey video cited above shows how information can become distorted as it passes from source to source. One good way to assess the accuracy of derivative reporting is to go back to the original source and see what it actually said. Susan Kraykowski makes this insightful recommendation in “15 News Organizations Worthy of Respect”:

This brings up a salient point: always go to the primary source. In the recent controversy over whether or not the father of a Sandy Hook victim actually was heckled by gun rights activists during a Newtown Town Council meeting, right-wing and left-wing bloggers threw accusations at each other all over the internet – until it's extremely difficult to decide who's "right" about this. I found the original news report from the original reporter's account of the meeting, with original videotape – from The Connecticut Post.

D.  Understand the History Behind News Events

When it comes to political matters and current events, one can avoid falling victim to manipulated information by seeking out the history of the issue or event from multiple, alternative sources. Having a better understanding of the complete history and the relevant issues will help users place any biased information in the proper context. RT News makes this point in “How five American companies control what you think.”

The best advice for anyone seeking to understand current events is to look at the history and realities behind them, and to look at media not controlled by the five conglomerates. Media – including print, television, and internet – is available in multiple languages including English from Russia, China, India, Pakistan, South Africa, the Middle East, Brazil, and other countries. You can easily find this media by internet search. No doubt all media contains bias; but at least your mind will not be shaped solely by the US narrative.

E.  Information Providers Can Put Accuracy and Completeness before Timeliness of Reporting

Another means for reducing the amount of incomplete and inaccurate flows of information on the Internet would be for information providers to prioritize accuracy and completeness of reporting over timeliness of information releases.

The underlying problem here is a prisoner’s dilemma game: journalists would be better off if they could commit to waiting to announce information until their stories were complete. However, everyone ends up releasing inaccurate information too early in an attempt to beat rivals to the punch.
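
A minimal Python sketch of that dilemma, with illustrative payoffs (the numbers are assumptions, chosen only so that scooping a rival beats being accurate), shows why the bad outcome is stable: whatever the rival does, each outlet’s best response is to rush.

```python
# Two symmetric outlets each choose to "wait" (publish accurate, complete
# stories) or "rush" (publish first). Payoffs are illustrative assumptions:
# (row outlet's payoff, column outlet's payoff).
payoffs = {
    ("wait", "wait"): (3, 3),   # both accurate; credit for the story is shared
    ("wait", "rush"): (0, 4),   # the rival gets the scoop
    ("rush", "wait"): (4, 0),
    ("rush", "rush"): (1, 1),   # both first-ish, both error-prone
}

def best_response(their_action):
    """Row outlet's best reply, given the rival's action."""
    return max(("wait", "rush"),
               key=lambda mine: payoffs[(mine, their_action)][0])

for theirs in ("wait", "rush"):
    print(f"if the rival plays {theirs!r}, the best response is {best_response(theirs)!r}")
# Rushing is the best response either way, so (rush, rush) is the equilibrium,
# even though (wait, wait) would pay both outlets more.
```

This is why exhortations to individual journalists tend to fail: no single outlet can profitably wait while its rivals rush.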

Part of the problem here is too little accountability for accuracy. If reporters were fined for releasing inaccurate information, they would have an incentive to clean up their acts. Unfortunately (or fortunately), we have the First Amendment, which generally enables people to say what they please, regardless of the truth or accuracy of their statements. Alternatively, if more people became aware of the inaccuracies and, as a result, stopped reading reports by journalists who had reported inaccurate information, journalists would have less incentive to do so.

In “How the FDA Manipulates the Media,” Charles Seife blames the journalists, as much as the embargoers themselves, for sacrificing good journalism in exchange for a timely news release. It is the reporters who choose to publish incomplete information immediately upon expiration of the embargo deadline, rather than delay their news releases until they’re able to provide a more complete story.

As much blame as government and other institutions bear for attempting to control the press through such means, the primary responsibility lies with the journalists themselves. Even a close-hold embargo wouldn't constrain a reporter without the reporter's consent; the reporter can simply wait until the embargo expires and speak to outside sources, albeit at the cost of filing the story a little bit later.

Says Oransky: “We as journalists need to look inward a little bit and think about why all of us feel we absolutely have to publish something at embargo [expiration] when we don't think we have the whole story?” Alas, Kiernan says, there isn't any movement within the journalism community to change things: “I don't know that journalists in general have taken a step back, [looking] from the 50,000-foot view to understand how their work is controlled and shaped by the embargo system.”

In “How social media can distort and misinform when communicating science,” Jacob Groshek puts the onus on scientists to clarify any miscommunications to the public.

Kevin Folta places part of the blame for this communication breakdown on the scientists themselves. He stated that among researchers:

There is a disconnected arrogance that turns off the public and does not get them excited about learning more. Social media and the internet are a conduit of bad information. On social media it’s easy to find information that scares you and scientists are not participating in trying to make it right.

Some scientists and agencies are pursuing new modes of communication, such as brief scientific animations to summarize and share research. The goal remains increasing understanding and minimizing potential distortion or oversimplification of scientific findings.

F.  Balance Positive and Negative in Social Media Posts

In “Using Social Media Distorts Our Perception of Reality,” Danielle Chabassol suggests that the bias toward posting only favorable information on social media might be mitigated by having people select more balanced (i.e., both bad and good) information to share with the public.

The power that social media has to distort reality is one of the reasons we called our project Exploring Alternatives. From the start, we knew that we wanted to share our lifestyle experiments with people but we didn’t want to make it seem like our lives were perfect. In addition to the blog name, we try to balance the positives with the negatives by sharing posts like: our top 3 travel nightmare stories and the challenges of living a nomadic lifestyle; and we’ve seen bloggers like Shelby & Simon do the same with their What’s it really like to live in a van? post.

Our goal is to look at alternative lifestyles realistically so that it’s easier to know what we’re getting into when we make a lifestyle change. There are inevitably going to be ups, downs, and imperfections with every option but if we take the time to choose a lifestyle that genuinely aligns with our interests, priorities, and values (and with the darn budget) then we’re already ahead of the social media game.

Given human nature – people’s desire to show others the best in themselves – I don’t think the majority of people will follow through on posting negative images of themselves.

 

VI.  Consequences of Information Distortion

A.  Good Judgment Will Become More Valuable

An important consequence of the increasing pervasiveness of manipulated and distorted information on the Internet is that people who are able to sift through all the information and ultimately discern the truth will become more valuable to society. Tomas Chamorro-Premuzic makes this point in “How the web distorts reality and impairs our judgement skills”:

As for the long-term consequences, it is conceivable that individuals' capacity to evaluate and produce original knowledge will matter more than the actual acquisition of knowledge. Good judgment and decision-making will be more in demand than sheer expertise or domain-specific knowledge. Just like computing, then, the human mind may evolve into a smaller repository, but faster processor, of information – so long as we still have access to the cloud.

B.  Organizations Will Emerge to Defend Against Information Manipulations

Another consequence of the increasing pervasiveness of manipulated and distorted information on the Internet is that organizations will emerge whose purpose is to separate accurate information from misinformation.

Snopes is one such organization, which specializes in determining the accuracy of rumors on the Internet.

The snopes.com website was founded by David Mikkelson, who lives and works in the Los Angeles area. What he began in 1995 as an expression of his interest in researching urban legends has since grown into what is widely regarded by folklorists, journalists, and laypersons alike as one of the World Wide Web's essential resources...

With over 20 years' experience as a professional researcher and writer, David has created in snopes.com what has come to be regarded as an online touchstone of rumor research…



The snopes.com web site is (and always has been) a completely independent, self-sufficient entity wholly owned by its operators and funded through advertising revenues. Neither the site nor its operators has ever received monies from (or been engaged in any business or editorial relationship with), any sponsor, investor, partner, political party, religious group, business organization, government agency, or any other outside group or organization.

Another organization that has emerged to assess fairness and accuracy in reporting is FAIR.org:

FAIR, the national media watch group, has been offering well-documented criticism of media bias and censorship since 1986. We work to invigorate the First Amendment by advocating for greater diversity in the press and by scrutinizing media practices that marginalize public interest, minority and dissenting viewpoints. As an anti-censorship organization, we expose neglected news stories and defend working journalists when they are muzzled.

C.  Important Information May Become Overlooked

One of the more noteworthy consequences of information distortions on the Internet is that important information may not end up being communicated. This could occur for several reasons.

i. Readers who are caught up in an echo chamber or filter bubble may simply not be exposed to the information.

This happens a lot in politics, where people are exposed to all the good things their party has done and all the bad things the opposing party has done, but they are never exposed to the bad things their own party has done. This gives people a false sense that their side is overwhelmingly good, while the other side is overwhelmingly evil. Differences become over-emphasized while similarities become lost in the shuffle.

ii. Readers may be presented with a distorted view of the information, and as a result the true significance is not communicated or understood.

A good example of this phenomenon is the use of “scienceploitation” discussed earlier. People end up with a false or otherwise misinformed idea about the true significance of a new discovery.

iii. Readers may be purposely guided by information providers in such a way that readers do not see the important information.

This happens when, for example, people get caught up in unimportant events that are sensationalized, but then miss important information that gets overshadowed by the hype. And often, this is precisely the intent of the information providers (i.e., government, mass media, etc.). Tyler Durden provides a wonderful example of this misdirection in “5 Stories The Mainstream Media Ignored While Reporting On Kim Kardashian's Robbery.”

One of the saving graces of the ailing corporate media - for the folks setting the agenda, anyhow - is its relentless ability to hyper-focus the public’s attention on altogether meaningless events.

Take, for instance, an armed robbery that sees property stolen but no one harmed. Such an event is unfortunate, yes, but such is life. People get robbed. It certainly isn’t something that should consume the news cycle — particularly when developments of actual importance are unfolding around the world.

Yet that’s exactly what happened this week after reality TV star Kim Kardashian was robbed at gunpoint in Paris on Monday morning... But then, if she were an average citizen, the media wouldn’t have covered the story in the first place.

And that’s the point.

In truth, the Kim Kardashian incident is precisely the type of filler story the corporate media has used time and again to keep the celebrity-obsessed masses distracted from reality... there were events taking place in the world that the masses following Kim Kardashian should be informed of…

1. DIPLOMACY BETWEEN RUSSIA AND THE US IN SYRIA IS DEAD

2. THE PENTAGON DUMPED HALF A BILLION DOLLARS INTO MAKING PROPAGANDA FOR TERRORISTS

3. THE PUBLIC GOT A RAW LOOK AT HILLARY CLINTON

4. MILLIONS OF RUSSIAN CITIZENS PREPARE FOR NUCLEAR WAR

5. BEES PUT ON ENDANGERED SPECIES LIST FOR FIRST TIME EVER

D.  Reputation Will Become More Important

Merriam-Webster defines “reputation” as

a :  overall quality or character as seen or judged by people in general

b :  recognition by other people of some characteristic or ability

Reputation is an important means of conveying information to others about an entity’s quality or character. News and information sources already have reputations for progressive or conservative biases. And as we saw above, mass media sources also have reputations regarding the extent to which they “report the news fully, accurately and fairly" (see Figure 2).

The use of reputation and ratings systems has been an exceedingly important tool for enabling transactions on the Internet between people who don’t know each other. As the nature of information distortions and the extent of misinformation on the Internet become better understood, reputation should come to play an ever larger and more important role in helping people assess the likely accuracy of various sources of information.
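
One can imagine such reputations being made quantitative. The Python sketch below is a hypothetical source-reputation score: the fraction of a source’s checked claims that held up, smoothed toward a neutral prior so that sources with thin track records are neither trusted nor condemned too quickly. The sources and counts are made up, and this is one simple scoring scheme among many, not an established standard.

```python
# Hypothetical source-reputation score based on fact-check outcomes.
# All sources and counts below are invented for illustration.

def reputation(confirmed: int, debunked: int, prior_weight: int = 10) -> float:
    """Laplace-style smoothing toward a 0.5 prior for thin track records."""
    return (confirmed + 0.5 * prior_weight) / (confirmed + debunked + prior_weight)

track_records = {            # source -> (claims confirmed, claims debunked)
    "WireServiceX": (180, 20),
    "ViralBlogY":   (5, 15),
    "NewSiteZ":     (2, 0),  # too little history to rate confidently either way
}

for source, (ok, bad) in sorted(track_records.items(),
                                key=lambda kv: -reputation(*kv[1])):
    print(f"{source:13s} reputation = {reputation(ok, bad):.2f}")
```

Note how the smoothing does the work reputation should do: the long-established wire service scores high, the frequently debunked blog scores low, and the brand-new site sits near neutral until it accumulates a track record.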