FOR RELEASE OCTOBER 19, 2017
BY Janna Anderson and Lee Rainie
FOR MEDIA OR OTHER INQUIRIES:
Lee Rainie, Director, Internet, Science and Technology Research
Janna Anderson, Director, Imagining the Internet Center, Elon University
Tom Caiazza, Communications Manager
202.419.4372
www.pewresearch.org
RECOMMENDED CITATION
Pew Research Center, October 2017, “The Future of Truth and Misinformation Online”
About Pew Research Center
Pew Research Center is a nonpartisan fact tank that informs the public about the issues,
attitudes and trends shaping America and the world. It does not take policy positions. The
center conducts public opinion polling, demographic research, content analysis and other
data-driven social science research. It studies U.S. politics and policy; journalism and media;
internet, science and technology; religion and public life; Hispanic trends; global attitudes
and trends; and U.S. social and demographic trends. All of the Center’s reports are available
at www.pewresearch.org. Pew Research Center is a subsidiary of The Pew Charitable Trusts,
its primary funder.
For this project, Pew Research Center worked with Elon University’s Imagining the Internet
Center, which helped conceive the research and collect and analyze the data.
© Pew Research Center 2017
The Future of Truth and Misinformation Online
In late 2016, Oxford Dictionaries selected “post-truth” as the word of the year, defining it as
“relating to or denoting circumstances in which objective facts are less influential in shaping
public opinion than appeals to emotion and personal belief.”
The 2016 Brexit vote in the United Kingdom and the tumultuous U.S. presidential election
highlighted how the digital age has affected news and cultural narratives. New information
platforms feed the ancient instinct people have to find information that syncs with their
perspectives: A 2016 study that analyzed 376 million Facebook users’ interactions with over
900 news outlets found that people tend to seek information that aligns with their views.
This makes many vulnerable to accepting and acting on misinformation. For instance, after
fake news stories in June 2017 reported that Ethereum founder Vitalik Buterin had died in a
car crash, the cryptocurrency’s market value was reported to have dropped by $4 billion.
When BBC Future Now interviewed a panel of 50 experts in early 2017 about the “grand
challenges we face in the 21st century,” many named the breakdown of trusted information
sources. “The major new challenge in reporting news is the new shape of truth,” said Kevin
Kelly, co-founder of Wired magazine. “Truth is no longer dictated by authorities, but is
networked by peers. For every fact there is a counterfact and all these counterfacts and facts
look identical online, which is confusing to most people.”
Americans worry about that: A Pew Research Center study conducted just after the 2016
election found 64% of adults believe fake news stories cause a great deal of confusion and
23% said they had shared fabricated political stories themselves – sometimes by mistake and
sometimes intentionally.
The question arises, then: What will happen to the online information environment in the
coming decade? In summer 2017, Pew Research Center and Elon University’s Imagining the
Internet Center conducted a large canvassing of technologists, scholars, practitioners,
strategic thinkers and others, asking them to react to this framing of the issue:
The rise of “fake news” and the proliferation of doctored
narratives that are spread by humans and bots online are
challenging publishers and platforms. Those trying to stop the
spread of false information are working to design technical and
human systems that can weed it out and minimize the ways in
which bots and other schemes spread lies and misinformation.
The question: In the next 10 years, will trusted methods emerge to
block false narratives and allow the most accurate information to
prevail in the overall information ecosystem? Or will the quality
and veracity of information online deteriorate due to the spread
of unreliable, sometimes even dangerous, socially destabilizing
ideas?
Respondents were then asked to choose one of the following answer options:
The information environment will improve – In the next 10 years, on balance, the
information environment will be IMPROVED by changes that reduce the spread of
lies and other misinformation online.
The information environment will NOT improve – In the next 10 years, on balance,
the information environment will NOT BE improved by changes designed to reduce
the spread of lies and other misinformation online.
Some 1,116 responded to this nonscientific canvassing: 51% chose the option that the
information environment will not improve, and 49% said the information environment will
improve. (See “About this canvassing of experts” for details about this sample.) Participants
were next asked to explain their answers. This report concentrates on these follow-up
responses.
Their reasoning revealed a wide range of opinions about the nature of these threats and the
most likely solutions required to resolve them. But the overarching and competing themes
were clear: Those who do not think things will improve felt that humans mostly shape
technology advances to their own, not-fully-noble purposes and that bad actors with bad
motives will thwart the best efforts of technology innovators to remedy today’s problems.
And those who are most hopeful believed that technological fixes can be implemented to
bring out the better angels guiding human nature.
More specifically, the 51% of these experts who expect things will not improve generally cited
two reasons:
The fake news ecosystem preys on some of our deepest human instincts:
Respondents said humans’ primal quest for success and power – their “survival” instinct –
will continue to degrade the online information environment in the next decade. They
predicted that manipulative actors will use new digital tools to take advantage of humans’
inbred preference for comfort and convenience and their craving for the answers they find in
reinforcing echo chambers.
Our brains are not wired to contend with the pace of technological change: These
respondents said the rising speed, reach and efficiencies of the internet and emerging online
applications will magnify these human tendencies and that technology-based solutions will
not be able to overcome them. They predicted a future information landscape in which fake
information crowds out reliable information. Some even foresaw a world in which
widespread information scams and mass manipulation cause broad swaths of the public to
simply give up on being informed participants in civic life.
The 49% of these experts who expect things to improve generally inverted that reasoning:
Technology can help fix these problems: These more hopeful experts said the rising
speed, reach and efficiencies of the internet, apps and platforms can be harnessed to rein in
fake news and misinformation campaigns. Some predicted better methods will arise to create
and promote trusted, fact-based news sources.
It is also human nature to come together and fix problems: The hopeful experts in
this canvassing took the view that people have always adapted to change and that this current
wave of challenges will also be overcome. They noted that misinformation and bad actors
have always existed but have eventually been marginalized by smart people and processes.
They expect well-meaning actors will work together to find ways to enhance the information
environment. They also believe better information literacy among citizens will enable people
to judge the veracity of material content and eventually raise the tone of discourse.
The majority of participants in this canvassing wrote detailed elaborations on their views.
Some chose to have their names connected to their answers; others opted to respond
anonymously. These findings do not represent all possible points of view, but they do reveal a
wide range of striking observations.
Respondents collectively articulated several major themes tied to those insights; these
themes are explained in the sections below the following graphic. Several longer sets of
additional responses tied to these themes follow that summary.
The following section presents an overview of the themes found among the written
responses, including a small selection of representative quotes supporting each point. Some
comments are lightly edited for style or length.
Major themes on the future of the online information environment

THINGS WILL NOT IMPROVE

Theme 1: The information environment will not improve: The problem is human nature
• More people = more problems. The internet’s continuous growth and accelerating innovation allow more people and artificial intelligence (AI) to create and instantly spread manipulative narratives
• Humans are by nature selfish, tribal, gullible convenience seekers who put the most trust in that which seems familiar
• In existing economic, political and social systems, the powerful corporate and government leaders most able to improve the information environment profit most when it is in turmoil
• Human tendencies and infoglut drive people apart and make it harder for them to agree on “common knowledge.” That makes healthy debate difficult and destabilizes trust. The fading of news media contributes to the problem
• A small segment of society will find, use and perhaps pay a premium for information from reliable sources. Outside of this group “chaos will reign” and a worsening digital divide will develop

Theme 2: The information environment will not improve because technology will create new challenges that can’t or won’t be countered effectively and at scale
• Those generally acting for themselves and not the public good have the advantage, and they are likely to stay ahead in the information wars
• Weaponized narratives and other false content will be magnified by social media, online filter bubbles and AI
• The most effective tech solutions to misinformation will endanger people’s dwindling privacy options, and they are likely to limit free speech and remove the ability for people to be anonymous online

THINGS WILL IMPROVE

Theme 3: The information environment will improve because technology will help label, filter or ban misinformation and thus upgrade the public’s ability to judge the quality and veracity of content
• Likely tech-based solutions include adjustments to algorithmic filters, browsers, apps and plug-ins and the implementation of “trust ratings”
• Regulatory remedies could include software liability law, required identities and the unbundling of social networks like Facebook

Theme 4: The information environment will improve, because people will adjust and make things better
• Misinformation has always been with us and people have found ways to lessen its impact. The problems will become more manageable as people become more adept at sorting through material
• Crowdsourcing will work to highlight verified facts and block those who propagate lies and propaganda. Some also have hopes for distributed ledgers (blockchain)

MAJOR PROGRAMS ARE NECESSARY

Theme 5: Tech can’t win the battle. The public must fund and support the production of objective, accurate information. It must also elevate information literacy to be a primary goal of education
• Funding and support must be directed to the restoration of a well-fortified, ethical and trusted public press
• Elevate information literacy: It must become a primary goal at all levels of education

PEW RESEARCH CENTER, ELON UNIVERSITY’S IMAGINING THE INTERNET CENTER
Most respondents who expect the environment to worsen said human nature is at fault. For
instance, Christian H. Huitema, former president of the Internet Architecture Board,
commented, “The quality of information will not improve in the coming years, because
technology can’t improve human nature all that much.”
These experts predicted that the problem of misinformation will be amplified because the
worst side of human nature is magnified by bad actors using advanced online tools at
internet speed on a vast scale.
Tom Rosenstiel, author, director of the American Press Institute and senior fellow at the
Brookings Institution, commented, “Whatever changes platform companies make, and
whatever innovations fact checkers and other journalists put in place, those who want to
deceive will adapt to them. Misinformation is not like a plumbing problem you fix. It is a
social condition, like crime, that you must constantly monitor and adjust to. Since as far back
as the era of radio and before, as Winston Churchill said, ‘A lie can go around the world
before the truth gets its pants on.’”
Michael J. Oghia, an author, editor and journalist based in Europe, said he expects a
worsening of the information environment due to five things: “1) The spread of
misinformation and hate; 2) Inflammation, sociocultural conflict and violence; 3) The
breakdown of socially accepted/agreed-upon knowledge and what constitutes ‘fact’; 4) A new
digital divide of those subscribed (and ultimately controlled) by misinformation and those
who are ‘enlightened’ by information based on reason, logic, scientific inquiry and critical
thinking; 5) Further divides between communities, so that as we are more connected we are
farther apart. And many others.”
Leah Lievrouw, professor in the department of information studies at the University of
California, Los Angeles, observed, “So many players and interests see online information as a
uniquely powerful shaper of individual action and public opinion in ways that serve their
economic or political interests (marketing, politics, education, scientific controversies,
community identity and solidarity, behavioral ‘nudging,’ etc.). These very diverse players
would likely oppose (or try to subvert) technological or policy interventions or other attempts
to insure the quality, and especially the disinterestedness, of information.”
Subtheme: More people = more problems. The internet’s continuous growth and
accelerating innovation allow more people and artificial intelligence (AI) to create and
instantly spread manipulative narratives
While propaganda and the manipulation of the public via falsehoods is a tactic as old as the
human race, many of these experts predicted that the speed, reach and low cost of online
communication plus continuously emerging innovations will magnify the threat level
significantly. A professor at a Washington, D.C.-area university said, “It is nearly
impossible to implement solutions at scale – the attack surface is too large to be defended
successfully.”
Jerry Michalski, futurist and founder of REX, replied, “The trustworthiness of our
information environment will decrease over the next decade because: 1) It is inexpensive and
easy for bad actors to act badly; 2) Potential technical solutions based on strong ID and
public voting (for example) won’t quite solve the problem; and 3) real solutions based on
actual trusted relationships will take time to evolve – likely more than a decade.”
An institute director and university professor said, “The internet is the 21st century’s
threat of a ‘nuclear winter,’ and there’s no equivalent international framework for
nonproliferation or disarmament. The public can grasp the destructive power of nuclear
weapons in a way they will never understand the utterly corrosive power of the internet to
civilized society, when there is no reliable mechanism for sorting out what people can believe
to be true or false.”
Bob Frankston, internet pioneer and software innovator, said, “I always thought that ‘Mein
Kampf’ could be countered with enough information. Now I feel that people will tend to look
for confirmation of their biases and the radical transparency will not shine a cleansing light.”
David Harries, associate executive director for Foresight Canada, replied, “More and more,
history is being written, rewritten and corrected, because more and more people have the
ways and means to do so. Therefore there is ever more information that competes for
attention, for credibility and for influence. The competition will complicate and intensify the
search for veracity. Of course, many are less interested in veracity than in winning the
competition.”
Glenn Edens, CTO for technology reserve at PARC, a Xerox company, commented,
“Misinformation is a two-way street. Producers have an easy publishing platform to reach
wide audiences and those audiences are flocking to the sources. The audiences typically are
looking for information that fits their belief systems, so it is a really tough problem.”
Subtheme: Humans are by nature selfish, tribal, gullible convenience seekers
who put the most trust in that which seems familiar
The respondents who supported this view noted that people’s actions – from consciously
malevolent and power-seeking behaviors to seemingly more benign acts undertaken for
comfort or convenience – will work to undermine a healthy information environment.
An executive consultant based in North America wrote, “It comes down to motivation:
There is no market for the truth. The public isn’t motivated to seek out verified, vetted
information. They are happy hearing what confirms their views. And people can gain more
creating fake information (both monetary and in notoriety) than they can keeping it from
occurring.”
Serge Marelli, an IT professional who works on and with the Net, wrote, “As a group,
humans are ‘stupid.’ It is ‘group mind’ or a ‘group phenomenon’ or, as George Carlin said,
‘Never underestimate the power of stupid people in large groups.’ Then, you have
Kierkegaard, who said, ‘People demand freedom of speech as a compensation for the freedom
of thought which they seldom use.’ And finally, Euripides said, ‘Talk sense to a fool and he
calls you foolish.’”
Starr Roxanne Hiltz, distinguished professor of information systems and co-author of the
visionary 1970s book “The Network Nation,” replied, “People on systems like Facebook are
increasingly forming into ‘echo chambers’ of those who think alike. They will keep
unfriending those who don’t, and passing on rumors and fake news that agrees with their
point of view. When the president of the U.S. frequently attacks the traditional media and
anybody who does not agree with his ‘alternative facts,’ it is not good news for an uptick in
reliable and trustworthy facts circulating in social media.”
Nigel Cameron, a technology and futures editor and president of the Center for Policy on
Emerging Technologies, said, “Human nature is not EVER going to change (though it may, of
course, be manipulated). And the political environment is bad.”
Ian O’Byrne, assistant professor at the College of Charleston, replied, “Human nature will
take over as the salacious is often sexier than facts. There are multiple information streams,
public and private, that spread this information online. We can also not trust the businesses
and industries that develop and facilitate these digital texts and tools to make changes that
will significantly improve the situation.”
Greg Swanson, media consultant with ITZonTarget, noted, “The sorting of reliable versus
fake news requires a trusted referee. It seems unlikely that government can play a
meaningful role as this referee. We are too polarized. And we have come to see the television
news teams as representing divergent points of view, and, depending on your politics, the
network that does not represent your views is guilty of ‘fake news.’ It is hard to imagine a fair
referee that would be universally trusted.”
Richard Lachmann, professor of sociology at the State University of New York at Albany,
replied, “Even though systems [that] flag unreliable information can and will be developed,
internet users have to be willing to take advantage of those warnings. Too many Americans
will live in political and social subcultures that will champion false information and
encourage use of sites that present such false information.”
There were also those among these expert respondents who said inequities, perceived and
real, are at the root of much of the misinformation being produced.
A professor at MIT observed, “I see this as problem with a socioeconomic cure: Greater
equity and justice will achieve much more than a bot war over facts. Controlling ‘noise’ is less
a technological problem than a human problem, a problem of belief, of ideology. Profound
levels of ungrounded beliefs about things both sacred and profane existed before the
branding of ‘fake news.’ Belief systems – not ‘truths’ – help to cement identities, forge
relationships, explain the unexplainable.”
Julian Sefton-Green, professor of new media education at Deakin University in Australia,
said, “The information environment is an extension of social and political tensions. It is
impossible to make the information environment a rational, disinterested space; it will
always be susceptible to pressure.”
A respondent affiliated with Harvard University’s Berkman Klein Center for
Internet & Society wrote, “The democratization of publication and consumption that the
networked sphere represents is too expansive for there to be any meaningful improvement
possible in terms of controlling or labeling information. People will continue to cosset their
own cognitive biases.”
Subtheme: In existing economic, political and social systems,
the powerful corporate and government leaders most able to improve the information
environment profit most when it is in turmoil
A large number of respondents said the most highly motivated actors, including those in the
worlds of business and politics, are generally not motivated to “fix” the proliferation of
misinformation. Those players will be a key driver in the worsening of the information
environment in the coming years and/or the lack of any serious attempts to effectively
mitigate the problem.
Scott Shamp, a dean at Florida State University, commented, “Too many groups gain
power through the proliferation of inaccurate or misleading information. When there is value
in misinformation, it will rule.”
Alex “Sandy” Pentland, member of the U.S. National Academy of Engineering and the
World Economic Forum, commented, “We know how to dramatically improve the situation,
based on studies of political and similar predictions. What we don’t know is how to make it a
thriving business. The current [information] models are driven by clickbait, and that is not
the foundation of a sustainable economic model.”
Stephen Downes, researcher with the National Research Council of Canada, wrote,
“Things will not improve. There is too much incentive to spread disinformation, fake news,
malware and the rest. Governments and organizations are major actors in this space.”
An anonymous respondent said, “Actors can benefit socially, economically, politically by
manipulating the information environment. As long as these incentives exist, actors will find
a way to exploit them. These benefits are not amenable to technological resolution as they are
social, political and cultural in nature. Solving this problem will require larger changes in
society.”
A number of respondents mentioned market capitalism as a primary obstacle to improving
the information environment. A professor based in North America said, “[This] is a
capitalist system. The information that will be disseminated will be biased, based on
monetary interests.”
Seth Finkelstein, consulting programmer and winner of the Electronic Frontier
Foundation’s Pioneer Award, commented, “Virtually all the structural incentives to spread
misinformation seem to be getting worse.”
A data scientist based in Europe wrote, “The information environment is built on the
top of telecommunication infrastructures and services developed following the free-market
ideology, where ‘truth’ or ‘fact’ are only useful as long as they can be commodified as market
products.”
Zbigniew Łukasiak, a business leader based in Europe, wrote, “Big political players have
just learned how to play this game. I don’t think they will put much effort into eliminating it.”
A vice president for public policy at one of the world’s foremost entertainment
and media companies commented, “The small number of dominant online platforms do
not have the skills or ethical center in place to build responsible systems, technical or
procedural. They eschew accountability for the impact of their inventions on society and have
not developed any of the principles or practices that can deal with the complex issues. They
are like biomedical or nuclear technology firms absent any ethics rules or ethics training or
philosophy. Worse, their active philosophy is that assessing and responding to likely or
potential negative impacts of their inventions is both not theirs to do and even shouldn’t be
done.”
Patricia Aufderheide, professor of communications and founder of the Center for Media
and Social Impact at American University, said, “Major interests are not invested enough in
reliability to create new business models and political and regulatory standards needed for
the shift. … Overall there are powerful forces, including corporate investment in surveillance-based business models, that create many incentives for unreliability, ‘invisible handshake’
agreements with governments that militate against changing surveillance models,
international espionage at a governmental and corporate level in conjunction with mediocre
cryptography and poor use of white hat hackers, poor educational standards in major
industrial countries such as the U.S., and fundamental weaknesses in the U.S.
political/electoral system that encourage exploitation of unreliability. It would be wonderful
to believe otherwise, and I hope that other commentators will be able to convince me
otherwise.”
James Schlaffer, an assistant professor of economics, commented, “Information is curated
by people who have taken a step away from the objectivity that was the watchword of
journalism. Conflict sells, especially to the opposition party, therefore the opposition news
agency will be incentivized to push a narrative and agenda. Any safeguards will appear as a
way to further control narrative and propagandize the population.”
Subtheme: Human tendencies and infoglut drive people apart and make it harder for
them to agree on “common knowledge.” That makes healthy debate difficult and
destabilizes trust. The fading of news media contributes to the problem
Many respondents expressed concerns about how people’s struggles to find and apply
accurate information contribute to a larger social and political problem: There is a growing
deficit in commonly accepted facts or some sort of cultural “common ground.” Why has this
happened? They cited several reasons:
• Online echo chambers or silos divide people into separate camps, at times even inciting
them to express anger and hatred at a volume not seen in previous communications
forms.
• Information overload crushes people’s attention spans. Their coping mechanism is to
turn to entertainment or other lighter fare.
• High-quality journalism has been decimated due to changes in the attention economy.
They said these factors and others make it difficult for many people in the digital age to
create and come to share the type of “common knowledge” that undergirds better and
more-responsive public policy. A share of respondents said a lack of commonly shared
knowledge leads many in society to doubt the reliability of everything, causing them to
simply drop out of civic participation, depleting the number of active and informed citizens.
Jamais Cascio, distinguished fellow at the Institute for the Future, noted, “The power and
diversity of very low-cost technologies allowing unsophisticated users to create believable
‘alternative facts’ is increasing rapidly. It’s important to note that the goal of these tools is not
necessarily to create consistent and believable alternative facts, but to create plausible levels
of doubt in actual facts. The crisis we face about ‘truth’ and reliable facts is predicated less on
the ability to get people to believe the *wrong* thing as it is on the ability to get people to
*doubt* the right thing. The success of Donald Trump will be a flaming signal that this
strategy works, alongside the variety of technologies now in development (and early
deployment) that can exacerbate this problem. In short, it’s a successful strategy, made
simpler by more powerful information technologies.”
Philip J. Nickel, lecturer at Eindhoven University of Technology in the Netherlands, said,
“The decline of traditional news media and the persistence of closed social networks will not
change in the next 10 years. These are the main causes of the deterioration of a public
domain of shared facts as the basis for discourse and political debate.”
Kenneth Sherrill, professor emeritus of political science at Hunter College, City University
of New York, predicted, “Disseminating false rumors and reports will become easier. The
proliferation of sources will increase the number of people who don’t know who or what they
trust. These people will drop out of the normal flow of information. Participation will decline
as more and more citizens become unwilling/unable to figure out which information sources
are reliable.”
What is truth? What is a fact? Who gets to decide? And can most people agree to trust
anything as “common knowledge”? A number of respondents challenged the idea that any
individuals, groups or technology systems could or should “rate†information as credible,
factual, true or not.
An anonymous respondent observed, “Whatever is devised will not be seen as impartial;
some things are not black and white; for other situations, facts brought up to come to a
conclusion are different than other facts used by others in a situation. Each can have real
facts, but it is the facts that are gathered that matter in coming to a conclusion; who will
determine what facts will be considered or what is even considered a fact.”
A research assistant at MIT noted, “‘Fake’ and ‘true’ are not as binary as we would like,
and – combined with an increasingly connected and complex digital society – it’s a challenge
to manage the complexity of social media without prescribing a narrative as ‘truth.’”
An internet pioneer and longtime leader at ICANN said, “There is little prospect of a
forcing factor that will emerge that will improve the ‘truthfulness’ of information in the
internet.”
A vice president for stakeholder engagement said, “Trust networks are best
established with physical and unstructured interaction, discussion and observation.
Technology is reducing opportunities for such interactions and disrupting human discourse,
while giving the ‘feeling’ that we are communicating more than ever.”
Subtheme: A small segment of society will find, use and perhaps pay
a premium for information from reliable sources. Outside
of this group “chaos will reign” and a worsening digital divide will develop
Some respondents predicted that a larger digital divide will form. Those who pursue
more-accurate information and rely on better-informed sources will separate from those
who are not selective enough or who do not invest either the time or the money in doing so.
Alejandro Pisanty, a professor at UNAM, the National University of Mexico, and longtime
internet policy leader, observed, “Overall, at least a part of society will value trusted
information and find ways to keep a set of curated, quality information resources. This will
use a combination of organizational and technological tools but above all, will require a
sharpened sense of good judgment and access to diverse, including rivalrous, sources.
Outside this, chaos will reign.”
Alexander Halavais, associate professor of social technologies at Arizona State University,
said, “As there is value in accurate information, the availability of such information will
continue to grow. However, when consumers are not directly paying for such accuracy, it will
certainly mean a greater degree of misinformation in the public sphere. That means the
continuing bifurcation of haves and have-nots, when it comes to trusted news and
information.”
An anonymous editor and publisher commented, “Sadly, many Americans will not pay
attention to ANY content from existing or evolving sources. It’ll be the continuing dumbing
down of the masses, although the ‘upper’ cadres (educated/thoughtful) will read/see/know,
and continue to battle.”
An anonymous respondent said, “There will be a sort of ‘gold standard’ set of sources,
and there will be the fringe.”
Many who see little hope for improvement of the information environment said technology
will not save society from distortions, half-truths, lies and weaponized narratives. An
anonymous business leader argued, “It is too easy to create fake facts, too labor-intensive
to check and too easy to fool checking algorithms.” And this response of an
anonymous research scientist based in North America echoed the view of many
participants in this canvassing: “We will develop technologies to help identify false and
distorted information, BUT they won’t be good enough.”
Paul N. Edwards, Perry Fellow in International Security at Stanford University,
commented, “Many excellent methods will be developed to improve the information
environment, but the history of online systems shows that bad actors can and will always find
ways around them.”
Vian Bakir, professor in political communication and journalism at Bangor University in
Wales, commented, “It won’t improve because of 1) the evolving nature of technology –
emergent media always catches out those who wish to control it, at least in the initial phase
of emergence; 2) online social media and search engine business models favour
misinformation spreading; 3) well-resourced propagandists exploit this mix.”
Many who expect things will not improve in the next decade said that “white hat” efforts will
never keep up with “black hat” advances in information wars. A user-experience and
interaction designer said, “As existing channels become more regulated, new unregulated
channels will continue to emerge.”
Subtheme: Those generally acting for themselves and not the public good
have the advantage, and they are likely to stay ahead in the information wars
Many of those who expect no improvement of the information environment said those who
wish to spread misinformation are highly motivated to use innovative tricks to stay ahead of
the methods meant to stop them. They said certain actors in government, business and other
individuals with propaganda agendas are highly driven to make technology work in their
favor in the spread of misinformation, and there will continue to be more of them.
A number of respondents referred to this as an “arms race.” David Sarokin of Sarokin
Consulting and author of “Missed Information,” said, “There will be an arms race between
reliable and unreliable information.” And David Conrad, a chief technology officer, replied,
“In the arms race between those who want to falsify information and those who want to
produce accurate information, the former will always have an advantage.”
Jim Hendler, professor of computing sciences at Rensselaer Polytechnic Institute,
commented, “The information environment will continue to change but the pressures of
politics, advertising and stock-return-based capitalism rewards those who find ways to
manipulate the system, so it will be a constant battle between those aiming for ‘objectiveness’
and those trying to manipulate the system.”
John Markoff, retired journalist and former technology reporter for The New York Times,
said, “I am extremely skeptical about improvements related to verification without a solution
to the challenge of anonymity on the internet. I also don’t believe there will be a solution to
the anonymity problem in the near future.”
Scott Spangler, principal data scientist at IBM Watson Health, said technologies now exist
that make fake information almost impossible to discern and flag, filter or block. He wrote,
“Machine learning and sophisticated statistical techniques will be used to accurately simulate
real information content and make fake information almost indistinguishable from the real
thing.”
Jason Hong, associate professor at the School of Computer Science at Carnegie Mellon
University, said, “Some fake information will be detectable and blockable, but the vast
majority won’t. The problem is that it’s *still* very hard for computer systems to analyze text,
find assertions made in the text and crosscheck them. There’s also the issue of subtle nuances
or differences of opinion or interpretation. Lastly, the incentives are all wrong. There are a lot
of rich and unethical people, politicians, non-state actors and state actors who are strongly
incentivized to get fake information out there to serve their selfish purposes.”
A research professor of robotics at Carnegie Mellon University observed,
“Defensive innovation is always behind offensive innovation. Those wanting to spread
misinformation will always be able to find ways to circumvent whatever controls are put in
place.”
A research scientist for the Computer Science and Artificial Intelligence
Laboratory at MIT said, “Problems will get worse faster than solutions can address, but
that only means solutions are more needed than ever.”
Subtheme: Weaponized narratives and other false content will be magnified by social
media, online filter bubbles and AI
Some respondents expect a dramatic rise in the manipulation of the information
environment by nation-states, by individual political actors and by groups wishing to spread
propaganda. Their purpose is to raise fears that serve their agendas, create or deepen silos
and echo chambers, divide people and set them upon each other, and paralyze or confuse
public understanding of the political, social and economic landscape.
This has been referred to as the weaponization of public narratives. Social media platforms
such as Facebook, Reddit and Twitter appear to be prime battlegrounds. Bots are often
employed, and AI is expected to be implemented heavily in the information wars to magnify
the speed and impact of messaging.
A leading internet pioneer who has worked with the FCC, the UN’s International
Telecommunication Union (ITU), the General Electric Co. (GE) and other major
technology organizations commented, “The ‘internet-as-weapon’ paradigm has
emerged.”
Dean Willis, consultant for Softarmor Systems, commented, “Governments and political
groups have now discovered the power of targeted misinformation coupled to personalized
understanding of the targets. Messages can now be tailored with devastating accuracy. We’re
doomed to living in targeted information bubbles.”
An anonymous survey participant noted, “Misinformation will play a major role in
conflicts between nations and within competing parties within nation states.”
danah boyd, principal researcher at Microsoft Research and founder of Data & Society,
wrote, “What’s at stake right now around information is epistemological in nature.
Furthermore, information is a source of power and thus a source of contemporary warfare.”
Peter Lunenfeld, a professor at UCLA, commented, “For the foreseeable future, the
economics of networks and the networks of economics are going to privilege the
dissemination of unvetted, unverified and often weaponized information. Where there is a
capitalistic incentive to provide content to consumers, and those networks of distribution
originate in a huge variety of transnational and even extra-national economies and political
systems, the ability to ‘control’ veracity will be far outstripped by the capability and
willingness to supply any kind of content to any kind of user.”
These experts noted that the public has turned to social media – especially Facebook – to get
its “news.” They said the public’s craving for quick reads and tabloid-style sensationalism is
what makes social media the field of choice for manipulative narratives, which are often
packaged to appear like news headlines. They note that the public’s move away from
more-traditional mainstream news outlets, which had some ethical standards, to consumption of
social newsfeeds has weakened mainstream media organizations, making them lower-budget
operations that have been forced to compete for attention by offering up clickbait headlines
of their own.
An emeritus professor of communication for a U.S. Ivy League university noted,
“We have lost an important social function in the press. It is being replaced by social media,
where there are few if any moral or ethical guidelines or constraints on the performance of
informational roles.”
A project leader for a science institute commented, “We live in an era where most
people get their ‘news’ via social media and it is very easy to spread fake news. The existence
of clickbait sites makes it easy for conspiracy theories to be rapidly spread by people who do
not bother to read entire articles, nor look for trusted sources. Given that there is freedom of
speech, I wonder how the situation can ever improve. Most users just read the headline,
comment and share without digesting the entire article or thinking critically about its content
(if they read it at all).”
Subtheme: The most effective tech solutions to misinformation will endanger people’s
dwindling privacy options, and they are likely to limit free speech and remove the
ability for people to be anonymous online
The rise of new and highly varied voices with differing agendas and motivations might
generally be considered to be a good thing. But some of these experts said the recent major
successes by misinformation manipulators have created a threatening environment in which
many in the public are encouraging platform providers and governments to expand
surveillance. Among the technological solutions for “cleaning up†the information
environment are those that work to clearly identify entities operating online and employ
algorithms to detect misinformation. Some of these experts expect that such systems will act
to identify perceived misbehaviors and label, block, filter or remove some online content and
even ban some posters from further posting.
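
To make the kind of pipeline these respondents anticipate concrete, here is a minimal Python sketch in which a misinformation score plus a poster’s record drives a label/filter/remove/ban decision. The phrase list, thresholds and action names are illustrative assumptions, not any platform’s actual policy or API.

```python
# Toy moderation pipeline: score content, then map the score and the
# poster's history to an action. Everything here is hypothetical.
SUSPECT_PHRASES = ("miracle cure", "what they don't want you to know")

def misinformation_score(text: str) -> float:
    """Stand-in for a trained classifier: fraction of suspect phrases present."""
    hits = sum(phrase in text.lower() for phrase in SUSPECT_PHRASES)
    return hits / len(SUSPECT_PHRASES)

def moderate(text: str, prior_strikes: int) -> str:
    """Return one of the actions the respondents describe."""
    score = misinformation_score(text)
    if score == 0.0:
        return "allow"
    if score < 1.0:
        return "label"    # partial signal: attach a warning, keep visible
    # strong signal: remove the post; ban repeat offenders
    return "ban" if prior_strikes >= 3 else "remove"

print(moderate("Try this miracle cure today!", prior_strikes=0))  # -> label
```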
An educator commented, “Creating ‘a reliable, trusted, unhackable verification system’
would produce a system for filtering and hence structuring of content. This will end up being
a censored information reality.”
An eLearning specialist observed, “Any system deeming itself to have the ability to ‘judge’
information as valid or invalid is inherently biased.” And a professor and researcher
noted, “In an open society, there is no prior determination of what information is genuine or
fake.”
In fact, a share of the respondents predicted that the online information environment will not
improve in the next decade because any requirement for authenticated identities would take
away the public’s highly valued free-speech rights and allow major powers to control the
information environment.
A distinguished professor emeritus of political science at a U.S. university wrote,
“Misinformation will continue to thrive because of the long (and valuable) tradition of
freedom of expression. Censorship will be rejected.” An anonymous respondent wrote,
“There is always a fight between ‘truth’ and free speech. But because the internet cannot be
regulated free speech will continue to dominate, meaning the information environment will
not improve.”
But another share of respondents said that is precisely why authenticated identities – which
are already operating in some places, including China – will become a larger part of
information systems. A professor at a major U.S. university replied, “Surveillance
technologies and financial incentives will generate greater surveillance.” A retired
university professor predicted, “Increased censorship and mass surveillance will tend to
create official ‘truths’ in various parts of the world. In the United States, corporate filtering of
information will impose the views of the economic elite.”
The executive director of a major global privacy advocacy organization argued
removing civil liberties in order to stop misinformation will not be effective, saying,
“‘Problematic’ actors will be able to game the devised systems while others will be over-regulated.”
Several other respondents also cited this as a major flaw of this potential remedy. They
argued against it for several reasons, including the fact that it enables even broader
government and corporate surveillance and control over more of the public.
Emmanuel Edet, head of legal services at the National Information Technology
Development Agency of Nigeria, observed, “The information environment will improve but at
a cost to privacy.”
Bill Woodcock, executive director of the Packet Clearing House, wrote, “There’s a
fundamental conflict between anonymity and control of public speech, and the countries that
don’t value anonymous speech domestically are still free to weaponize it internationally,
whereas the countries that do value anonymous speech must make it available to all, [or] else
fail to uphold their own principle.”
James LaRue, director of the Office for Intellectual Freedom of the American Library
Association, commented, “Information systems incentivize getting attention. Lying is a
powerful way to do that. To stop that requires high surveillance – which means government
oversight which has its own incentives not to tell the truth.”
Tom Valovic, contributor to The Technoskeptic magazine and author of “Digital
Mythologies,” said encouraging platforms to exercise algorithmic controls is not optimal. He
wrote: “Artificial intelligence that will supplant human judgment is being pursued
aggressively by entities in the Silicon Valley and elsewhere. Algorithmic solutions to
replacing human judgment are subject to hidden bias and will ultimately fail to accomplish
this goal. They will only continue the centralization of power in a small number of companies
that control the flow of information.”
Most of the respondents who gave hopeful answers about the future of truth online said they
believe technology will be implemented to improve the information environment. They noted
their faith was grounded in history, arguing that humans have always found ways to innovate
to overcome problems. Most of these experts do not expect there will be a perfect system –
but they expect advances. A number said information platform corporations such as Google
and Facebook will begin to efficiently police the environment to embed moral and ethical
thinking in the structure of their platforms. They hope this will simultaneously enable the
screening of content while still protecting rights such as free speech.
Larry Diamond, senior fellow at the Hoover Institution and the Freeman Spogli Institute
(FSI) at Stanford University, said, “I am hopeful that the principal digital information
platforms will take creative initiatives to privilege more authoritative and credible sources
and to call out and demote information sources that appear to be propaganda and
manipulation engines, whether human or robotic. In fact, the companies are already
beginning to take steps in this direction.”
An associate professor at a U.S. university wrote, “I do not see us giving up on seeking
truth.” And a researcher based in Europe said, “Technologies will appear that solve the
trust issues and reward logic.”
Adam Lella, senior analyst for marketing insights at comScore Inc., replied, “There have
been numerous other industry-related issues in the past (e.g., viewability, invalid traffic
detection, cross-platform measurement) that were seemingly impossible to solve, and yet
major progress was made in the past few years. If there is a great amount of pressure from
the industry to solve this problem (which there is), then methodologies will be developed and
progress will be made to help mitigate this issue in the long run. In other words, if there’s a
will, there’s a way.”
Subtheme: Likely tech-based solutions include adjustments to algorithmic
filters, browsers, apps and plug-ins and the implementation of “trust ratings”
Many respondents who hope for improvement in the information environment mentioned
ways in which new technological solutions might be implemented.
Bart Knijnenburg, researcher on decision-making and recommender systems and
assistant professor of computer science at Clemson University, said, “Two developments will
help improve the information environment: 1) News will move to a subscription model (like
music, movies, etc.) and subscription providers will have a vested interest in culling down
false narratives; 2) Algorithms that filter news will learn to discern the quality of a news item
and not just tailor to ‘virality’ or political leaning.”
Laurel Felt, lecturer at the University of Southern California, said, “There will be
mechanisms for flagging suspicious content and providers and then apps and plug-ins for
people to see the ‘trust rating’ for a piece of content, an outlet or even an IP address. Perhaps
people can even install filters so that, when they’re doing searches, hits that don’t meet a
certain trust threshold will not appear on the list.”
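
As a rough illustration of the plug-in behavior Felt describes, the sketch below drops search hits whose host falls below a user-chosen trust threshold. The ratings table, the 0.5 default for unknown hosts and the 0.6 threshold are hypothetical, not a real rating service.

```python
# Hide search hits from sources below a trust threshold.
from urllib.parse import urlparse

TRUST_RATINGS = {                  # hypothetical ratings, 0.0 to 1.0
    "example-wire.org": 0.92,
    "dubious-clicks.net": 0.18,
}

def filter_hits(urls: list[str], threshold: float = 0.6) -> list[str]:
    """Keep only URLs whose host meets the threshold; unknown hosts score 0.5."""
    return [u for u in urls
            if TRUST_RATINGS.get(urlparse(u).netloc, 0.5) >= threshold]

hits = ["https://example-wire.org/story", "https://dubious-clicks.net/outrage"]
print(filter_hits(hits))           # -> ['https://example-wire.org/story']
```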
A longtime U.S. government researcher and administrator in communications
and technology sciences said, “The intelligence, defense and related U.S. agencies are
very actively working on this problem and results are promising.”
Amber Case, research fellow at Harvard University’s Berkman Klein Center for Internet &
Society, suggested withholding ad revenue until veracity has been established. She wrote,
“Right now, there is an incentive to spread fake news. It is profitable to do so, profit made by
creating an article that causes enough outrage that advertising money will follow. … In order
to reduce the spread of fake news, we must deincentivize it financially. If an article bursts
into collective consciousness and is later proven to be fake, the sites that control or host that
content could refuse to distribute advertising revenue to the entity that created or published
it. This would require a system of delayed advertising revenue distribution where ad funds
are held until the article is proven as accurate or not. A lot of fake news is created by a few
people, and removing their incentive could stop much of the news postings.”
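
Case’s delayed-distribution idea can be pictured as a small escrow state machine: revenue accrues while an article is unproven, and a later verification verdict releases or withholds the funds. The class, amounts and verdicts below are invented for illustration only.

```python
# Toy escrow for ad revenue: funds are held per article until a
# verification verdict arrives, then released or forfeited.
class AdEscrow:
    def __init__(self) -> None:
        self.held: dict[str, float] = {}   # article_id -> amount in escrow
        self.paid: dict[str, float] = {}   # article_id -> amount released

    def accrue(self, article_id: str, amount: float) -> None:
        """Ad revenue accumulates but is not yet paid out."""
        self.held[article_id] = self.held.get(article_id, 0.0) + amount

    def resolve(self, article_id: str, verified: bool) -> float:
        """Release held funds if the article checks out; withhold otherwise."""
        amount = self.held.pop(article_id, 0.0)
        if verified:
            self.paid[article_id] = self.paid.get(article_id, 0.0) + amount
            return amount
        return 0.0                          # debunked: publisher gets nothing

escrow = AdEscrow()
escrow.accrue("story-123", 40.0)
escrow.accrue("story-123", 25.0)
print(escrow.resolve("story-123", verified=True))   # -> 65.0
```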
Andrea Matwyshyn, a professor of law at Northeastern University who researches
innovation and law, particularly information security, observed, “Software liability law will
finally begin to evolve. Market makers will increasingly incorporate security quality as a
factor relevant to corporate valuation. The legal climate for security research will continue to
improve, as its connection to national security becomes increasingly obvious. These changes
will drive significant corporate and public sector improvements in security during the next
decade.”
Larry Keeley, founder of innovation consultancy Doblin, predicted technology will be
improved but people will remain the same, writing, “Capabilities adapted from both
bibliometric analytics and good auditing practices will make this a solvable problem.
However, non-certified, compelling-but-untrue information will also proliferate. So the new
divide will be between the people who want their information to be real vs. those who simply
want it to feel important. Remember that quote from Roger Ailes: ‘People don’t want to BE
informed, they want to FEEL informed.’ Sigh.”
Anonymous survey participants also responded (one way to combine the “credibility
history” and “trust index” ideas is sketched after this list):
• “Filters and algorithms will improve to both verify raw data, separate ‘overlays’ and to
correct for a feedback loop.”
• “Semantic technologies will be able to cross-verify statements, much like meta-analysis.”
• “The credibility history of each individual will be used to filter incoming information.”
• “The veracity of information will be linked to how much the source is perceived as
trustworthy – we may, for instance, develop a trust index and trust will become more
easily verified using artificial-intelligence-driven technologies.”
• “The work being done on things like verifiable identity and information sharing through
loose federation will improve things somewhat (but not completely). That is to say, things
will become better but not necessarily good.”
• “AI, blockchain, crowdsourcing and other technologies will further enhance our ability to
filter and qualify the veracity of information.”
• “There will be new visual cues developed to help news consumers distinguish between
trusted news sources and others.”
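
One hypothetical way to read the “credibility history” and “trust index” bullets together: score each source by its fact-checking record with a smoothed ratio – here the mean of a Beta(1 + confirmed, 1 + debunked) belief – so an unknown source starts at 0.5 and moves only as evidence accumulates. The sources and counts below are invented.

```python
# Trust index from a source's fact-check history, with smoothing so
# new sources are neither trusted nor distrusted by default.
def trust_index(confirmed: int, debunked: int) -> float:
    """Posterior mean of a Beta(1 + confirmed, 1 + debunked) belief."""
    return (confirmed + 1) / (confirmed + debunked + 2)

HISTORY = {                 # source -> (claims confirmed, claims debunked)
    "veteran-daily": (180, 6),
    "new-blog": (0, 0),
    "rumor-mill": (4, 39),
}

for source, (ok, bad) in HISTORY.items():
    print(f"{source}: trust index {trust_index(ok, bad):.2f}")
# veteran-daily: 0.96, new-blog: 0.50, rumor-mill: 0.11
```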
Subtheme: Regulatory remedies could include software liability law,
required identities, unbundling of social networks like Facebook
A number of respondents believe there will be policy remedies that move beyond whatever
technical innovations emerge in the next decade. They offered a range of suggestions, from
regulatory reforms applied to the platforms that aid misinformation merchants to legal
penalties applied to wrongdoers. Some think the threat of regulatory reform via government
agencies may force the issue of required identities and the abolition of anonymity protections
for platform users.
Sonia Livingstone, professor of social psychology at the London School of Economics and
Political Science, replied, “The ‘wild west’ state of the internet will not be permitted to
continue by those with power, as we are already seeing with increased national pressure on
providers/companies by a range of means from law and regulation to moral and consumer
pressures.”
Willie Currie, a longtime expert in global communications diffusion, wrote, “The apparent
success of fake news on platforms like Facebook will have to be dealt with on a regulatory
basis as it is clear that technically minded people will only look for technical fixes and may
have incentives not to look very hard, so self-regulation is unlikely to succeed. The excuse
that the scale of posts on social media platforms makes human intervention impossible will
not be a defense. Regulatory options may include unbundling social networks like Facebook
into smaller entities. Legal options include reversing the notion that providers of content
services over the internet are mere conduits without responsibility for the content. These
regulatory and legal options may not be politically possible to effect within the U.S., but they
are certainly possible in Europe and elsewhere, especially if fake news is shown to have an
impact on European elections.”
Sally Wentworth, vice president of global policy development at the Internet Society,
warned against too much dependence upon information platform providers in shaping
solutions to improve the information environment. She wrote: “It’s encouraging to see some
of the big platforms beginning to deploy internet solutions to some of the issues around
online extremism, violence and fake news. And yet, it feels like as a society, we are
outsourcing this function to private entities that exist, ultimately, to make a profit and not
necessarily for a social good. How much power are we turning over to them to govern our
social discourse? Do we know where that might eventually lead? On the one hand, it’s good
that the big players are finally stepping up and taking responsibility. But governments, users
and society are being too quick to turn all of the responsibility over to internet platforms.
Who holds them accountable for the decisions they make on behalf of all of us? Do we even
know what those decisions are?”
A professor and chair in a department of educational theory, policy and
administration commented, “Some of this work can be done in private markets. Being
banned from social media is one obvious one. In terms of criminal law, I think the important
thing is to have penalties/regulations be domain-specific. Speech can be regulated in certain
venues, but obviously not in all. Federal (and perhaps even international) guidelines would
be useful. Without a framework for regulation, I can’t imagine penalties.”
Many of those who expect the information environment to improve anticipate that
information literacy training and other forms of assistance will help people become more
sophisticated consumers. They expect that users will gravitate toward more reliable
information – and that knowledge providers will respond in kind.
Frank Kaufmann, founder and director of several international projects for peace activism
and media and information, commented, “The quality of news will improve, because things
always improve.” And Barry Wellman, virtual communities expert and co-director of the
NetLab Network, said, “Software and people are becoming more sophisticated.”
One hopeful respondent said a change in economic incentives can bring about desired
change. Tom Wolzien, chairman of The Video Call Center and Wolzien LLC, said, “The
market will not clean up the bad material, but will shift focus and economic rewards toward
the reliable. Information consumers, fed up with false narratives, will increasingly shift
toward more-trusted sources, resulting in revenue flowing toward those more trusted sources
and away from the junk. This does not mean that all people will subscribe to either scientific
or journalistic method (or both), but they will gravitate toward material from the sources and
institutions they find trustworthy, and those institutions will, themselves, demand methods
of verification beyond those they use today.”
A retired public official and internet pioneer predicted, “1) Education for veracity will
become an indispensable element of secondary school. 2) Information providers will become
legally responsible for their content. 3) A few trusted sources will continue to dominate the
internet.”
Irene Wu, adjunct professor of communications, culture and technology at Georgetown
University, said, “Information will improve because people will learn better how to deal with
masses of digital information. Right now, many people naively believe what they read on
social media. When the television became popular, people also believed everything on TV
was true. It’s how people choose to react to and access information and news that’s
important, not the mechanisms that distribute them.”
Charlie Firestone, executive director at the Aspen Institute Communications and Society
Program, commented, “In the future, tagging, labeling, peer recommendations, new
literacies (media, digital) and similar methods will enable people to sift through information
better to find and rely on factual information. In addition, there will be a reaction to the
prevalence of false information so that people are more willing to act to assure their
information will be accurate.”
Howard Rheingold, pioneer researcher of virtual communities, longtime professor and
author of “Net Smart: How to Thrive Online,” noted, “As I wrote in ‘Net Smart’ in 2012, some
combination of education, algorithmic and social systems can help improve the signal-to-noise
ratio online – with the caveat that misinformation/disinformation versus verified
information is likely to be a continuing arms race. In 2012, Facebook, Google and others had
no incentive to pay attention to the problem. After the 2016 election, the issue of fake
information has been spotlighted.”
Subtheme: Misinformation has always been with us and people have found ways to
lessen its impact. The problems will become more manageable as people become
more adept at sorting through material
Many respondents agree that misinformation will persist as the online realm expands and
more people are connected in more ways. Still, the more hopeful among these experts argue
that progress is inevitable as people and organizations find coping mechanisms. They say
history validates this. Furthermore, they said technologists will play an important role in
helping filter out misinformation and modeling new digital literacy practices for users.
Mark Bunting, visiting academic at Oxford Internet Institute, a senior digital strategy and
public policy advisor with 16 years of experience at the BBC and as a digital consultant,
wrote, “Our information environment has been immeasurably improved by the
democratisation of the means of publication since the creation of the web nearly 25 years
ago. We are now seeing the downsides of that transformation, with bad actors manipulating
the new freedoms for antisocial purposes, but techniques for managing and mitigating those
harms will improve, creating potential for freer, but well-governed, information
environments in the 2020s.”
Jonathan Grudin, principal design researcher at Microsoft, said, “We were in this position
before, when printing presses broke the existing system of information management. A new
system emerged and I believe we have the motivation and capability to do it again. It will
again involve information channeling more than misinformation suppression; contradictory
claims have always existed in print, but have been manageable and often healthy.”
Judith Donath, fellow at Harvard University’s Berkman Klein Center for Internet & Society
and founder of the Sociable Media Group at the MIT Media Lab, wrote, “‘Fake news’ is not
new. The Weekly World News had a circulation of over a million for its mostly fictional news
stories, which were printed and sold in a format closely resembling a newspaper. Many readers
recognized it as entertainment, but not all. More subtly, its presence on the newsstand
reminded everyone that anything can be printed.”
Joshua Hatch, president of the Online News Association, noted, “I’m slightly optimistic
because there are more people who care about doing the right thing than there are people
who are trying to ruin the system. Things will improve because people – individually and
collectively – will make it so.”
Many of these respondents said the leaders and engineers of the major information platform
companies will play a significant role. Others said they expect broader systemic and
social changes to alter things.
John Wilbanks, chief commons officer at Sage Bionetworks, replied, “I’m an optimist, so
take this with a grain of salt, but I think as people born into the internet age move into
positions of authority they’ll be better able to distill and discern fake news than those of us
who remember an age of trusted gatekeepers. They’ll be part of the immune system. It’s not
that the environment will get better, it’s that those younger will be better fitted to survive it.”
Danny Rogers, founder and CEO of Terbium Labs, replied, “Things always improve. Not
monotonically, and not without effort, but fundamentally, I still believe that the efforts to
improve the information environment will ultimately outweigh efforts to devolve it.”
Bryan Alexander, futurist and president of Bryan Alexander Consulting, replied, “Growing
digital literacy and the use of automated systems will tip the balance towards a better
information environment.”
A number of these respondents said information platform corporations such as Google and
Facebook will begin to efficiently police the environment through various technological
enhancements. They expressed faith in the inventiveness of these organizations and
suggested the people at these companies will implement technology to embed moral and
ethical thinking in the structure and business practices of their platforms, enabling the
screening of content while still protecting rights such as free speech.
Patrick Lambe, principal consultant at Straits Knowledge, commented, “All large-scale
human systems are adaptive. When faced with novel predatory phenomena, counter-forces
emerge to balance or defeat them. We are at the beginning of a large-scale negative impact
from the undermining of a social sense of reliable fact. Counter-forces are already emerging.
The presence of large-scale ‘landlords’ controlling significant sections of the ecosystem (e.g.,
Google, Facebook) aids in this counter-response.”
A professor in technology law at a West-Coast-based U.S. university said,
“Intermediaries such as Facebook and Google will develop more-robust systems to reward
legitimate producers and punish purveyors of fake news.”
A longtime director for Google commented, “Companies like Google and Facebook are
investing heavily in coming up with usable solutions. Like email spam, this problem can
never entirely be eliminated, but it can be managed.”
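The spam analogy is instructive. Early spam filters commonly relied on naive Bayes text classification, and a toy version of the same technique could in principle be pointed at headlines. The following is a minimal sketch with an invented four-example training set, not a description of what Google, Facebook or any respondent actually deploys:

    # Toy naive Bayes classifier in the spirit of early spam filters.
    # The tiny labeled training set is invented for illustration only.
    from collections import Counter
    import math

    train = [
        ("miracle cure doctors hate this trick", "fake"),
        ("you won a free prize click now", "fake"),
        ("senate passes budget bill after debate", "real"),
        ("study links exercise to lower heart risk", "real"),
    ]

    word_counts = {"fake": Counter(), "real": Counter()}
    label_counts = Counter()
    for text, label in train:
        label_counts[label] += 1
        word_counts[label].update(text.split())

    vocab_size = len({w for c in word_counts.values() for w in c})

    def log_prob(text, label):
        # log P(label) + sum of log P(word | label), with Laplace smoothing
        # so unseen words do not zero out the score.
        total = sum(word_counts[label].values())
        lp = math.log(label_counts[label] / sum(label_counts.values()))
        for w in text.split():
            lp += math.log((word_counts[label][w] + 1) / (total + vocab_size))
        return lp

    def classify(text):
        return max(("fake", "real"), key=lambda lab: log_prob(text, lab))

    print(classify("you won a miracle prize"))  # -> fake

Real systems use far richer features and training data, but the underlying point of the quote stands: like spam filtering, this is management of an ongoing problem, not elimination.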
Sandro Hawke, technical staff at the World Wide Web Consortium, predicted, “Things are
going to get worse before they get better, but humans have the basic tools to solve this
problem, so chances are good that we will. The biggest risk, as with many things, is that
narrow self-interest stops people from effectively collaborating.”
Anonymous respondents shared these remarks:
• “Accurate facts are essential, particularly within a democracy, so this will be a high,
shared value worthy of investment and government support, as well as private-sector
initiatives.”
• “We are only at the beginning of drastic technological and societal changes. We will learn
and develop strategies to deal with problems like fake news.”
• “There is a long record of innovation taking place to solve problems. Yes, sometimes
innovation leads to abuses, but further innovation tends to solve those problems.”
• “Consumers have risen up in the past to block the bullshit, fake ads, fake investment
scams, etc., and they will again with regard to fake news.”
• “As we understand more about digital misinformation we will design better tools, policies
and opportunities for collective action.”
• “Now that it is on the agenda, smart researchers and technologists will develop
solutions.”
• “The increased awareness of the issue will lead to/force new solutions and regulation that
will improve the situation in the long term, even if there are bound to be missteps such as
flawed regulation and solutions along the way.”
Subtheme: Crowdsourcing will work to highlight verified facts and block those who
propagate lies and propaganda. Some also have hopes for distributed ledgers
(blockchain)
A number of these experts said solutions such as tagging, flagging or other labeling of
questionable content will continue to expand and prove increasingly useful in tackling
the propagation of misinformation.
J. Nathan Matias, a postdoctoral researcher at Princeton University and previously a
visiting scholar at MIT’s Center for Civic Media, wrote, “Through ethnography and large-scale
social experiments, I have been encouraged to see volunteer communities with tens of
millions of people work together to successfully manage the risks from inaccurate news.”
A researcher of online harassment working for a major internet information
platform commented, “If there are nonprofits keeping technology in line, such as an
ACLU-esque initiative, to monitor misinformation and then partner with spaces like Facebook to
deal with this kind of news spam, then yes, the information environment will improve. We
also need to move away from clickbait-like articles, and not algorithmically rely on
popularity but on information.”
An engineer based in North America replied, “The future will attach credibility to the
source of any information. The more a given source is attributed to ‘fake news,’ the lower it
will sit in the credibility tree.”
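The engineer does not define the “credibility tree,” but the core intuition – each confirmed attribution of fake news lowers a source’s standing – can be illustrated with a simple multiplicative decay. The decay factor below is an invented assumption, not anything the respondent specified:

    # Sketch of the "credibility tree" intuition: each confirmed attribution
    # of fake news multiplies a source's credibility by a decay factor.
    # The factor 0.8 is an illustrative assumption.
    def credibility(initial: float, fake_attributions: int, decay: float = 0.8) -> float:
        return initial * decay ** fake_attributions

    for n in (0, 1, 3, 5):
        print(n, round(credibility(1.0, n), 3))  # 1.0, 0.8, 0.512, 0.328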
Micah Altman, director of research for the Program on Information Science at MIT,
commented, “Technological advances are creating forces pulling in two directions: It is
increasingly easy to create real-looking fake information; and it is increasingly easy to
crowdsource the collection and verification of information. In the longer term, I’m optimistic
that the second force will dominate – as transaction-cost reduction appears to relatively
favor crowds over concentrated institutions.”
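As a rough sketch of the crowdsourced-verification idea, volunteer ratings of a claim can be aggregated by a vote in which each rater is weighted by his or her past accuracy. The weighting scheme is an illustrative assumption, not a method any respondent described:

    # Sketch: aggregate volunteer ratings of a claim, weighting each rater
    # by a 0-1 track record of past accuracy (an illustrative assumption).
    def crowd_verdict(ratings):
        """ratings: list of (judged_accurate: bool, rater_weight: float)."""
        score = sum(w if vote else -w for vote, w in ratings)
        return "verified" if score > 0 else "disputed"

    print(crowd_verdict([(True, 0.9), (True, 0.7), (False, 0.6)]))  # verified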
A past chairman of a major U.S. scientific think tank and former CEO replied,
“[The information environment] should improve because there are many techniques that can
be brought to bear, both human-mediated – such as collective intelligence via user voting and
rating – and technological responses that are either very early in their evolution or not
deployed at all. See spam as an analog.”
Some predicted that digital distributed ledger technologies, known as blockchain, may
provide some answers. A longtime technology editor and columnist based in Europe
commented, “The blockchain approach used for Bitcoin, etc., could be used to distribute
content. DECENT is an early example.” And an anonymous respondent from Harvard
University’s Berkman Klein Center for Internet & Society said, “They will be
cryptographically verified, with concepts.”
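These respondents do not specify a design, but the basic mechanism they gesture at can be sketched: a publisher records a cryptographic hash of an article in an append-only ledger, and any reader can recompute the hash to detect tampering. In the illustration below a plain Python list stands in for a real distributed ledger; that substitution, and all names used, are assumptions for illustration:

    # Sketch: record an article's SHA-256 hash in an append-only ledger;
    # readers recompute the hash to detect tampering. A Python list stands
    # in for a real distributed ledger.
    import hashlib

    ledger = []  # append-only stand-in for a distributed ledger

    def publish(article: str) -> int:
        digest = hashlib.sha256(article.encode()).hexdigest()
        ledger.append(digest)
        return len(ledger) - 1  # the ledger position serves as the article ID

    def verify(article: str, entry_id: int) -> bool:
        return hashlib.sha256(article.encode()).hexdigest() == ledger[entry_id]

    entry = publish("Mayor opens new bridge on Tuesday.")
    print(verify("Mayor opens new bridge on Tuesday.", entry))   # True
    print(verify("Mayor cancels new bridge on Tuesday.", entry)) # False

Note that such a scheme proves only that content is unaltered since publication, not that it was true in the first place – the limitation the skeptics quoted next are pointing at.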
But others were less confident that blockchain will work. A leading researcher studying
the spread of misinformation observed, “I know systems like blockchain are a start, but
in some ways analog systems (e.g., scanned voting ballots) can be more resilient to outside
influence than digital solutions such as increased encryption. There are always potential
compromises when our communication networks are based on human-coded technology and
hardware; this [is] less the case with analog-first, digital-second systems.”
A professor of media and communication based in Europe said, “Right now, reliable
and trusted verification systems are not yet available; they may become technically available
in the future but the arms race between corporations and hackers is never ending. Blockchain
technology may be an option, but every technological system needs to be built on trust, and
as long as there is no globally governed trust system that is open and transparent, there will
be no reliable verification systems.”
There was common agreement among many respondents – whether they said they expect to
see improvements in the information environment in the next decade or not – that the
problem of misinformation requires significant attention. A share of these respondents urged
action in two areas: a bolstering of the public-serving press and an expansive,
comprehensive, ongoing information literacy education effort for people of all ages.
A sociologist doing research on technology and civic engagement at MIT said,
“Though likely to get worse before it gets better, the 2016-2017 information ecosystem
problems represent a watershed moment and call to action for citizens, policymakers,
journalists, designers and philanthropists who must work together to address the issues at
the heart of misinformation.”
Michael Zimmer, associate professor and privacy and information ethics scholar at the
University of Wisconsin-Milwaukee, commented, “This is a social problem that cannot be
solved via technology.”
Subtheme: Funding and support must be directed to the restoration of a well-fortified,
ethical and trusted public press
Many respondents noted that while the digital age has amplified countless information
sources, it has hurt the reach and influence of the traditional news organizations. These are
the bedrock institutions much of the public has relied upon for objective, verified, reliable
information – information undergirded by ethical standards and a general goal of serving the
common good. These respondents said the information environment can’t be improved
without more well-staffed, financially stable, independent news organizations. They believe
such organizations’ material can rise above misinformation and create a base of “common
knowledge” the public can share and act on.
Susan Hares, a pioneer with the National Science Foundation Network (NSFNET) and
longtime internet engineering strategist, now a consultant, said, “Society simply needs to
decide that the ‘press’ no longer provides unbiased information, and it must pay for unbiased
and verified information.”
Christopher Jencks, a professor emeritus at Harvard University, said, “Reducing ‘fake
news’ requires a profession whose members share a commitment to getting it right. That, in
turn, requires a source of money to pay such professional journalists. Advertising used to
provide newspapers with money to pay such people. That money is drying up, and it seems
unlikely to be replaced within the next decade.”
Rich Ling, professor of media technology at the School of Communication and Information
at Nanyang Technological University, said, “We have seen the consequences of fake news in
the U.S. presidential election and Brexit. This is a wake-up call to the news industry, policy
makers and journalists to refine the system of news production.”
Maja Vujovic, senior copywriter for the Comtrade Group, predicted, “The information
environment will be increasingly perceived as a public good, making its reliability a universal
need. Technological advancements and civil-awareness efforts will yield varied ways to
continuously purge misinformation from it, to keep it reasonably reliable.”
An author and journalist based in North America said, “I believe this era could spawn
a new one – a flight to quality in which time-starved citizens place high value on verified
news sources.”
A professor of law at a major U.S. state university commented, “Things won’t get
better until we realize that accurate news and information are a public good that requires
not-for-profit leadership and public subsidy.”
Marc Rotenberg, president of the Electronic Privacy Information Center, wrote, “The
problem with online news is structural: There are too few gatekeepers, and the internet
business model does not sustain quality journalism. The reason is simply that advertising
revenue has been untethered from news production.”
With precarious funding and shrinking audiences, healthy journalism that serves the
common good is losing its voice. Siva Vaidhyanathan, professor of media studies and
director of the Center for Media and Citizenship at the University of Virginia, wrote, “There
are no technological solutions that correct for the dominance of Facebook and Google in our
lives. These incumbents are locked into monopoly power over our information ecosystem and
as they drain advertising money from all other low-cost commercial media they impoverish
the public sphere.”
Subtheme: Elevate information literacy: It must become a primary goal at all levels of
education
Many of these experts said the flaws in human nature and still-undeveloped norms in the
digital age are the key problems that make users susceptible to false, misleading and
manipulative online narratives. One potential remedy these respondents suggested is a
massive compulsory crusade to educate all in digital-age information literacy. Such an effort,
some said, might prepare more people to be wise in what they view/read/believe and
possibly even serve to upgrade the overall social norms of information sharing.
Karen Mossberger, professor and director of the School of Public Affairs at Arizona State
University, wrote, “The spread of fake news is not merely a problem of bots, but part of a
larger problem of whether or not people exercise critical thinking and information-literacy
skills. Perhaps the surge of fake news in the recent past will serve as a wake-up call to address
these aspects of online skills in the media and to address these as fundamental educational
competencies in our education system. Online information more generally has an almost
limitless diversity of sources, with varied credibility. Technology is driving this issue, but the
fix isn’t a technical one alone.”
Mike DeVito, graduate researcher at Northwestern University, wrote, “These are not
technical problems; they are human problems that technology has simply helped scale, yet
we keep attempting purely technological solutions. We can’t machine-learn our way out of
this disaster, which is actually a perfect storm of poor civics knowledge and poor information
literacy.”
Miguel Alcaine, International Telecommunication Union area representative for Central
America, commented, “The boundaries between online and offline will continue to blur. We
understand online and offline are different modalities of real life. There is and will be a
market (public and private providers) for trusted information. There is and will be space for
misinformation. The most important action societies can take to protect people is education,
information and training.”
An early internet developer and security consultant commented, “Fake news is not a
product of a flaw in the communications channel and cannot be fixed by a fix to the channel.
It is due to a flaw in the human consumers of information and can be repaired only by
education of those consumers.”
An anonymous respondent from Harvard University’s Berkman Klein
Center for Internet & Society noted, “False information – intentionally or inadvertently
so – is neither new nor the result of new technologies. It may now be easier to spread to more
people more quickly, but the responsibility for sifting facts from fiction has always sat with
the person receiving that information and always will.”
An internet pioneer and rights activist based in the Asia/Pacific region said, “We
as a society are not investing enough in education worldwide. The environment will only
improve if both sides of the communication channel are responsible. The reader and the
producer of content both have responsibilities.”
Deirdre Williams, retired internet activist, replied, “Human beings are losing their
capability to question and to refuse. Young people are growing into a world where those skills
are not being taught.”
Julia Koller, a learning solutions lead developer, replied, “Information is only as reliable as
the people who are receiving it. If readers do not change or improve their ability to seek out
and identify reliable information sources, the information environment will not improve.”
Ella Taylor-Smith, senior research fellow at the School of Computing at Edinburgh Napier
University, noted, “As more people become more educated, especially as digital literacy
becomes a popular and respected skill, people will favour (and even produce) better quality
information.”
Constance Kampf, a researcher in computer science and mathematics, said, “The answer
depends on socio-technical design – these trends of misinformation versus verifiable
information were already present before the internet, and they are currently being amplified.
The state and trends in education and place of critical thinking in curricula across the world
will be the place to look to see whether or not the information environment will improve –
cyberliteracy relies on basic information literacy, social literacy and technological literacy.
For the environment to improve, we need substantial improvements in education systems
across the world in relation to critical thinking, social literacy, information literacy, and
cyberliteracy (see Laura Gurak’s book ‘Cyberliteracy’).”
Su Sonia Herring, an editor and translator, commented, “Misinformation and fake news
will exist as long as humans do; they have existed ever since language was invented. Relying
on algorithms and automated measures will result in various unwanted consequences.
Unless we equip people with media literacy and critical-thinking skills, the spread of
misinformation will prevail.”
This section features responses by several of the top analysts who participated in this
canvassing. Following this wide-ranging set of comments is a much more expansive set of
quotations directly tied to the five primary themes identified in this report.
Ignorance breeds frustration and ‘a growing fraction of the population has neither the
skills nor the native intelligence to master growing complexity’
Mike Roberts, pioneer leader at ICANN and Internet Hall of Fame member, replied, “There
are complex forces working both to improve the quality of information on the net, and to
corrupt it. I believe the outrage resulting from recent events will, on balance, lead to a net
improvement, but viewed with hindsight, the improvement may be viewed as inadequate.
The other side of the complexity coin is ignorance. The average man or woman in America
today has less knowledge of the underpinnings of his or her daily life than they did 50 or a
hundred years ago. There has been a tremendous insertion of complex systems into many
aspects of how we live in the decades since World War II, fueled by a tremendous growth in
knowledge in general. Even among highly intelligent people, there is a significant growth in
personal specialization in order to trim the boundaries of expected expertise to manageable
levels. Among educated people, we have learned mechanisms for coping with complexity. We
use what we know of statistics and probability to compartment uncertainty. We adopt ‘most
likely’ scenarios for events of which we do not have detailed knowledge, and so on. A growing
fraction of the population has neither the skills nor the native intelligence to master growing
complexity, and in a competitive social environment, obligations to help our fellow humans
go unmet. Educated or not, no one wants to be a dummy – all the wrong connotations. So
ignorance breeds frustration, which breeds acting out, which breeds antisocial and
pathological behavior, such as the disinformation, which was the subject of the survey, and
many other undesirable second order effects. Issues of trustable information are certainly
important, especially since the technological intelligentsia command a number of tools to
combat untrustable info. But the underlying pathology won’t be tamed through technology
alone. We need to replace ignorance and frustration with better life opportunities that restore
confidence – a tall order and a tough agenda. Is there an immediate nexus between
widespread ignorance and corrupted information sources? Yes, of course. In fact, there is a
virtuous circle where acquisition of trustable information reduces ignorance, which leads to
better use of better information, etc.”
The truth of news is murky and multifaceted
Judith Donath, fellow at Harvard University’s Berkman Klein Center for Internet & Society
and founder of the Sociable Media Group at the MIT Media Lab, wrote, “Yes, trusted
methods will emerge to block false narratives and allow accurate information to prevail, and,
yes, the quality and veracity of information online will deteriorate due to the spread of
unreliable, sometimes even dangerous, socially destabilizing ideas. Of course, the definition
of ‘true’ is sometimes murky. Experimental scientists have many careful protocols in place to
assure the veracity of their work, and the questions they ask have well-defined answers – and
still there can be controversy about what is true, what work was free from outside influence.
The truth of news stories is far murkier and multi-faceted. A story can be distorted,
disproportional, meant to mislead – and still, strictly speaking, factually accurate. … But a
pernicious harm of fake news is the doubt it sows about the reliability of all news. Donald
Trump’s repeated ‘fake news’ smears of The New York Times, Washington Post, etc., are
among his most destructive non-truths.”
‘Algorithms weaponize rhetoric,’ influencing on a mass scale
Susan Etlinger, industry analyst at Altimeter Research, said, “There are two main
dynamics at play: One is the increasing sophistication and availability of machine learning
algorithms and the other is human nature. We’ve known since the ancient Greeks and
Romans that people are easily persuaded by rhetoric; that hasn’t changed much in two
thousand years. Algorithms weaponize rhetoric, making it easier and faster to influence
people on a mass scale. There are many people working on ways to protect the integrity and
reliability of information, just as there are cybersecurity experts who are in a constant arms
race with cybercriminals, but to put as much emphasis on ‘information’ (a public good) as
‘data’ (a personal asset) will require a pretty big cultural shift. I suspect this will play out
differently in different parts of the world.”
There’s no technical solution for the fact that ‘news’ is a social bargain
Clay Shirky, vice provost for educational technology at New York University, replied,
“‘News’ is not a stable category – it is a social bargain. There’s no technical solution for
designing a system that prevents people from asserting that Obama is a Muslim but allows
them to assert that Jesus loves you.”
‘Strong economic forces are incentivizing the creation and spread of fake news’
Amy Webb, author and founder of the Future Today Institute, wrote, “In an era of social,
democratized media, we’ve adopted a strange attitude. We’re simultaneously skeptics and
true believers. If a news story reaffirms what we already believe, it’s credible – but if it rails
against our beliefs, it’s fake. We apply that same logic to experts and sources quoted in
stories. With our limbic systems continuously engaged, we’re more likely to pay attention to
stories that make us want to fight, take flight or fill our social media accounts with links. As a
result, there are strong economic forces incentivizing the creation and spread of fake news. In
the digital realm, attention is currency. It’s good for democracy to stop the spread of
misinformation, but it’s bad for business. Unless significant measures are taken in the
present – and unless all the companies in our digital information ecosystem use strategic
foresight to map out the future – I don’t see how fake news could possibly be reduced by
2027.”
Propagandists exploit whatever communications channels are available
Ian Peter, internet pioneer, historian and activist, observed, “It is not in the interests of
either the media or the internet giants who propagate information, nor of governments, to
create a climate in which information cannot be manipulated for political, social or economic
gain. Propaganda and the desire to distort truth for political and other ends have always been
with us and will adapt to any form of new media which allows open communication and
information flows.”
Expanding information outlets erode opportunities for a ‘common narrative’
Kenneth R. Fleischmann, associate professor at the School of Information at the
University of Texas, Austin, wrote, “Over time, the general trend is that a proliferation of
information and communications technologies (ICTs) has led to a proliferation of
opportunities for different viewpoints and perspectives, which has eroded the degree to
which there is a common narrative – indeed, in some ways, this parallels a trend away from
monarchy toward more democratic societies that welcome a diversity of perspectives – so I
anticipate the range of perspectives to increase, rather than decrease, and for these
perspectives to include not only opinions but also facts, which are inherently reductionist and
can easily be manipulated to suit the perspective of the author, following the old aphorism
about statistics Mark Twain attributed to Benjamin Disraeli [‘There are three kinds of lies:
lies, damned lies and statistics.’], which originally referred to experts more generally.”
‘Broken as it might be, the internet is still capable of routing around damage’
Paul Saffo, longtime Silicon-Valley-based technology forecaster, commented, “The
information crisis happened in the shadows. Now that the issue is visible as a clear and
urgent danger, activists and people who see a business opportunity will begin to focus on it.
Broken as it might be, the internet is still capable of routing around damage.”
It will be impossible to distinguish between fake and real video, audio, photos
Marina Gorbis, executive director of the Institute for the Future, predicted, “It’s not going
to be better or worse but very different. Already we are developing technologies that make it
impossible to distinguish between fake and real video, fake and real photographs, etc. We
will have to evolve new tools for authentication and verification. We will probably have to
evolve both new social norms and regulatory mechanisms if we want to maintain the online
environment as a source of information that many people can rely on.”
A ‘Cambrian explosion’ of techniques will arise to monitor the web and non-web
sources
Stowe Boyd, futurist, publisher and editor-in-chief of Work Futures, said, “The rapid rise of
AI will lead to a Cambrian explosion of techniques to monitor the web and non-web media
sources and social networks, and to rapidly identify and tag fake and misleading
content.”
Well, there’s good news and bad news about the information future …
Jeff Jarvis, professor at the City University of New York’s Graduate School of Journalism,
commented, “Reasons for hope: Much attention is being directed at manipulation and
disinformation; the platforms may begin to recognize and favor quality; and we are still at the
early stage of negotiating norms and mores around responsible civil conversation. Reasons
for pessimism: Imploding trust in institutions; institutions that do not recognize the need to
radically change to regain trust; and business models that favor volume over value.”
A fear of the imposition of pervasive censorship
Jim Warren, an internet pioneer and open-government/open-records/open-meetings
advocate, said, “False and misleading information has always been part of all cultures
(gossip, tabloids, etc.). Teaching judgment has always been the solution, and it always will be.
I (still) trust the longstanding principle of free speech: The best cure for ‘offensive’ speech is
MORE speech. The only major fear I have is of massive communications conglomerates
imposing pervasive censorship.”
People have to take responsibility for finding reliable sources
Steven Miller, vice provost for research at Singapore Management University, wrote, “Even
now, if one wants to find reliable sources, one has no problem doing that, so we do not lack
reliable sources of news today. It is that there are all these other options, and people can
choose to live in worlds where they ignore so-called reliable sources, or ignore a multiplicity
of sources that can be compared, and focus on what they want to believe. That type of
situation will continue. Five or 10 years from now, I expect there to continue to be many
reliable sources of news, and a multiplicity of sources. Those who want to seek out reliable
sources will have no problems doing so. Those who want to make sure they are getting a
multiplicity of sources to see the range of inputs, and to sort through various types of inputs,
will be able to do so, but I also expect that those who want to be in the game of influencing
perceptions of reality and changing the perceptions of reality will also have ample means to
do so. So the responsibility is with the person who is seeking the news and trying to get
information on what is going on. We need more individuals who take responsibility for
getting reliable sources.”
About this canvassing of experts
The expert predictions reported here about the impact of the internet over the next 10 years
came in response to a question asked by Pew Research Center and Elon University’s
Imagining the Internet Center in an online canvassing conducted between July 2 and Aug. 7,
2017. This is the eighth “Future of the Internet” study the two organizations have conducted
together. For this project, we invited more than 8,000 experts and members of the interested
public to share their opinions on the likely future of the internet. Overall, 1,116 people
responded and answered this question:
The rise of “fake news” and the proliferation of doctored
narratives that are spread by humans and bots online are
challenging publishers and platforms. Those trying to stop the
spread of false information are working to design technical and
human systems that can weed it out and minimize the ways in
which bots and other schemes spread lies and misinformation.
The question: In the next 10 years, will trusted methods emerge to
block false narratives and allow the most accurate information to
prevail in the overall information ecosystem? Or will the quality
and veracity of information online deteriorate due to the spread
of unreliable, sometimes even dangerous, socially-destabilizing
ideas?
Respondents were then asked to choose one of the following answers and follow up by
answering a series of six questions allowing them to elaborate on their thinking:
The information environment will improve – In the next 10 years, on balance, the
information environment will be IMPROVED by changes that reduce the spread of
lies and other misinformation online.
The information environment will NOT improve – In the next 10 years, on balance,
the information environment will NOT BE improved by changes designed to reduce
the spread of lies and other misinformation online.
The web-based instrument was first sent directly to a list of targeted experts identified and
accumulated by Pew Research Center and Elon University during the previous seven “Future
of the Internet” studies, as well as those identified across 12 years of studying the internet
realm during its formative years. Among those invited were people who are active in the
global internet policy community and internet research activities, such as the Internet
Engineering Task Force (IETF), Internet Corporation for Assigned Names and Numbers
(ICANN), Internet Society (ISOC), International Telecommunication Union (ITU),
Association of Internet Researchers (AoIR), and Organization for Economic Cooperation and
Development (OECD). We also invited a large number of professionals, innovators and policy
people from technology businesses; government, including the National Science Foundation,
Federal Communications Commission and the European Union; the media and media-watchdog
organizations; and think tanks and interest networks (for instance, those that
include professionals and academics in anthropology, sociology, psychology, law, political
science and communications), as well as globally located people working with
communications technologies in government positions; top universities’
engineering/computer science departments, business/entrepreneurship faculties, and
graduate students and postgraduate researchers; plus many who are active in civil society
organizations such as the Association for Progressive Communications (APC), the Electronic
Privacy Information Center (EPIC), the Electronic Frontier Foundation (EFF) and Access
Now; and those affiliated with newly emerging nonprofits and other research units
examining ethics and the digital age. Invitees were encouraged to share the canvassing
questionnaire link with others they believed would have an interest in participating, thus
there was a “snowball” effect as the invitees were joined by those they invited to weigh in.
Since the data are based on a nonrandom sample, the results are not projectable to any
population other than the individuals expressing their points of view in this sample.
The respondents’ remarks reflect their personal positions and are not the positions of their
employers; the descriptions of their leadership roles help identify their background and the
locus of their expertise.
About 74% of respondents identified themselves as being based in North America; the others
hail from all corners of the world. When asked about their “primary area of internet interest,”
39% identified themselves as research scientists; 7% as entrepreneurs or business leaders;
10% as authors, editors or journalists; 10% as advocates or activist users; 11% as futurists or
consultants; 3% as legislators, politicians or lawyers; and 4% as pioneers or originators. An
additional 22% specified their primary area of interest as “other.”
More than half the expert respondents elected to remain anonymous. Because people’s level
of expertise is an important element of their participation in the conversation, anonymous
respondents were given the opportunity to share a description of their internet expertise or
background, and this was noted where relevant in this report.
Here are some of the key respondents in this report (note, position titles and
organization names were provided by respondents at the time of this canvassing and
may not be current):
Bill Adair, Knight Professor of Journalism and Public Policy at Duke University; Daniel
Alpert, managing partner at Westwood Capital; Micah Altman, director of research for the
Program on Information Science at MIT; Robert Atkinson, president of the Information
Technology and Innovation Foundation; Patricia Aufderheide, professor of
communications at American University; Mark Bench, former executive director of World
Press Freedom Committee; Walter Bender, senior research scientist with MIT/Sugar Labs;
danah boyd, founder of Data & Society; Stowe Boyd, futurist, publisher and editor-in-chief
of Work Futures; Tim Bray, senior principal technologist at Amazon; Marcel
Bullinga, trend watcher and keynote speaker; Eric Burger, research professor of computer
science and director of the Georgetown Center for Secure Communication; Jamais Cascio,
distinguished fellow at the Institute for the Future; Barry Chudakov, founder and principal
at Sertain Research and StreamFuzion Corp.; David Conrad, well-known CTO; Larry
Diamond, senior fellow at the Hoover Institution and the Freeman Spogli Institute (FSI) at
Stanford University; Judith Donath, Harvard University’s Berkman Klein Center for
Internet & Society; Stephen Downes, researcher at the National Research Council of
Canada; Johanna Drucker, professor of information studies at the University of
California, Los Angeles; Andrew Dwyer, expert in cybersecurity and malware at the
University of Oxford; Esther Dyson, entrepreneur, former journalist and founding chair at
ICANN; Glenn Edens, CTO for Technology Reserve at PARC, a Xerox company; Paul N.
Edwards, fellow in international security at Stanford University; Mohamed Elbashir,
senior manager for internet regulatory policy at Packet Clearing House; Susan Etlinger,
industry analyst at Altimeter Research; Bob Frankston, internet pioneer and software
innovator; Oscar Gandy, professor emeritus of communication at the University of
Pennsylvania; Mark Glaser, publisher and founder of MediaShift.org; Marina Gorbis,
executive director at the Institute for the Future; Jonathan Grudin, principal design
researcher at Microsoft; Seth Finkelstein, consulting programmer and EFF Pioneer Award
winner; Susan Hares, a pioneer with the NSFNET and longtime internet engineering
strategist; Jim Hendler, professor of computing sciences at Rensselaer Polytechnic
Institute; Starr Roxanne Hiltz, author of “Network Nation” and distinguished professor of
information systems; Helen Holder, distinguished technologist at Hewlett Packard (HP);
Jason Hong, associate professor at the School of Computer Science at Carnegie Mellon
University; Christian H. Huitema, past president of the Internet Architecture Board;
Alan Inouye, director of public policy for the American Library Association; Larry Irving,
CEO of The Irving Group; Brooks Jackson of FactCheck.org; Jeff Jarvis, a professor at
the City University of New York’s Graduate School of Journalism; Christopher Jencks, a
professor emeritus at Harvard University; Bart Knijnenburg, researcher on decision-making
and recommender systems at Clemson University; James LaRue, director of the
Office for Intellectual Freedom of the American Library Association; Jon Lebkowsky, web
consultant, developer and activist; Mark Lemley, professor of law at Stanford University;
Peter Levine, professor and associate dean for research at Tisch College of Civic Life; Mike
Liebhold, senior researcher and distinguished fellow at the Institute for the Future; Sonia
Livingstone, professor of social psychology at the London School of Economics; Alexios
Mantzarlis, director of the International Fact-Checking Network; John Markoff, retired
senior technology writer at The New York Times; Andrea Matwyshyn, a professor of law at
Northeastern University; Giacomo Mazzone, head of institutional relations for the World
Broadcasting Union; Jerry Michalski, founder at REX; Riel Miller, team leader in futures
literacy for UNESCO; Andrew Nachison, founder at We Media; Gina Neff, professor at
the Oxford Internet Institute; Alex ‘Sandy’ Pentland, member of the U.S. National
Academy of Engineering and the World Economic Forum; Ian Peter, internet pioneer,
historian and activist; Justin Reich, executive director at the MIT Teaching Systems Lab;
Howard Rheingold, pioneer researcher of virtual communities and author of “Net Smart”;
Mike Roberts, Internet Hall of Fame member and first president and CEO of ICANN;
Michael Rogers, author and futurist at Practical Futurist; Tom Rosenstiel, director of
the American Press Institute; Marc Rotenberg, executive director of EPIC; Paul Saffo,
longtime Silicon-Valley-based technology forecaster; David Sarokin, author of “Missed
Information: Better Information for Building a Wealthier, More Sustainable Future”;
Henning Schulzrinne, Internet Hall of Fame member and professor at Columbia
University; Jack Schofield, longtime technology editor and now columnist at The
Guardian; Clay Shirky, vice provost for educational technology at New York University;
Ben Shneiderman, professor of computer science at the University of Maryland; Ludwig
Siegele, technology editor at The Economist; Evan Selinger, professor of philosophy at
Rochester Institute of Technology; Scott Spangler, principal data scientist at IBM Watson
Health; Brad Templeton, chair emeritus for the Electronic Frontier Foundation; Richard
D. Titus, CEO for Andronik; Joseph Turow, professor of communication at the University
of Pennsylvania; Stuart A. Umpleby, professor emeritus at George Washington University;
Siva Vaidhyanathan, professor of media studies and director of the Center for Media and
Citizenship at the University of Virginia; Tom Valovic, The Technoskeptic magazine; Hal
Varian, chief economist for Google; Jim Warren, longtime technology entrepreneur and
activist; Amy Webb, futurist and CEO at the Future Today Institute; David Weinberger,
senior researcher at Harvard University’s Berkman Klein Center for Internet & Society;
Kevin Werbach, professor of legal studies and business ethics at the Wharton School at the
University of Pennsylvania; John Wilbanks, chief commons officer at Sage Bionetworks;
and Irene Wu, adjunct professor of communications, culture and technology at Georgetown
University.
A brief selection of institutions at which respondents work or have affiliations:
Adroit Technologies, Altimeter Group, Amazon, American Press Institute, Asia-Pacific
Network Information Centre (APNIC), AT&T, BrainPOP, Brown University, BuzzFeed,
Carnegie Mellon University, Center for Advanced Communications Policy, Center for Civic
Design, Center for Democracy/Development/Rule of Law (CDDRL), Center for Media
Literacy, Cesidian Root, Cisco, City University of New York’s Graduate School of Journalism,
Cloudflare, CNRS, Columbia University, comScore, Comtrade Group, Craigslist, Data &
Society, Deloitte, DiploFoundation, Electronic Frontier Foundation, Electronic Privacy
Information Center, Farpoint Group, Federal Communications Commission (FCC),
Fundación REDES, Future Today Institute, George Washington University, Google,
Hackerati, Harvard University’s Berkman Klein Center for Internet & Society, Harvard
Business School, Hewlett Packard (HP), Hyperloop, IBM Research, IBM Watson Health,
ICANN, Ignite Social Media, Institute for the Future, International Fact-Checking Network,
Internet Engineering Task Force, Internet Society, International Telecommunication Union
(ITU), Karlsruhe Institute of Technology, Kenya Private Sector Alliance, KMP Global,
LearnLaunch, LMU Munich, Massachusetts Institute of Technology (MIT), Mathematica
Policy Research, MCNC, MediaShift.org, Meme Media, Microsoft, Mimecast, Nanyang
Technological University, National Academies of Sciences/Engineering/Medicine, National
Research Council of Canada, National Science Foundation, Netapp, NetLab Network,
Network Science Group of Indiana University, Neural Archives Foundation, New York Law
School, New York University, OpenMedia, Oxford University, Packet Clearing House,
Plugged Research, Princeton University, Privacy International, Qlik, Quinnovation, RAND
Corporation, Rensselaer Polytechnic Institute, Rochester Institute of Technology, Rose-Hulman
Institute of Technology, Sage Bionetworks, Snopes.com, Social Strategy Network,
Softarmor Systems, Stanford University, Straits Knowledge, Syracuse University, Tablerock
Network, Telecommunities Canada, Terbium Labs, Tetherless Access, UNESCO, U.S.
Department of Defense, University of California (Berkeley, Davis, Irvine and Los Angeles
campuses), University of Michigan, University of Milan, University of Pennsylvania,
University of Toronto, Way to Wellville, We Media, Wikimedia Foundation, Worcester
Polytechnic Institute, World Broadcasting Union, W3C, Xerox’s PARC, Yale Law.
Complete sets of for-credit and anonymous responses can be found here:
• http://www.elon.edu/e-web/imagining/surveys/2017_survey/future_of_the_information_environment.xhtml
• http://www.elon.edu/e-web/imagining/surveys/2017_survey/future_of_information_environment_anon.xhtml
• http://www.elon.edu/e-web/imagining/surveys/2017_survey/future_of_information_environment_credit.xhtml
Theme 1: The information environment will not improve.
The problem is human nature
Misinformation and “fake news” have been around for as long as people have communicated.
But today’s instant, low-budget, far-reaching communications capabilities have the potential
to make the problem orders of magnitude more dangerous than in the past.
As Frederic Filloux explains: “‘Misinformation’ – a broader concept that encompasses
intentional deception, low-quality information and hyperpartisan news – is seen as a serious
threat to democracies. … The Dark Web harbours vast and inexpensive resources to take
advantage of the social loudspeaker. For a few hundred bucks, anyone can buy thousands of
social media accounts that are old enough to be credible, or millions of email addresses. Also,
by using Mechanical Turk or similar cheap crowdsourcing services widely available on the
open web, anyone can hire legions of ‘writers’ who will help to propagate any message or
ideology on a massive scale. That trade is likely to grow and flourish with the emergence of
what experts call the ‘weaponized artificial intelligence propaganda,’ a black magic that
leverages microtargeting where fake news stories (or hyperpartisan ones) will be tailored
down to the individual level and distributed by a swarm of bots. What we see unfolding right
before our eyes is nothing less than Moore’s Law applied to the distribution of
misinformation: An exponential growth of available technology coupled with a rapid collapse
of costs.”
Roughly half the experts in this canvassing generally agreed with Filloux’s description of how
technologies are emerging to enable misinformation distribution, and they worry about what
may come next. Many expressed deep concerns about people’s primal traits, behaviors and
cognitive responses and how they play out in new digital spaces. They said digital platforms
are often amplifying divisions and contentiousness, driving users to mistrust those not in
their “tribe.”
As William L. Schrader, a former CEO with PSINet, wrote, “Mankind has always lied, and
always will; which is why the winners of wars get to write the history their way and others
have no say, but with the internet, the losers have a say! So which is better? Both sides, or
just the winner? We have both sides today.”
Respondents discussed the scale of the problem and how difficult it can be to assess and
weed out bad information, saying that even sophisticated information consumers are likely to
struggle in the coming information environment and credulous consumers may have little
chance of working their way to true information. Nathaniel Borenstein, chief scientist at
Mimecast, commented, “Internet technologies permit anyone to publish anything. Any
attempt to improve the veracity of news must be done by some authority, and people don’t
trust the same authorities, so they will ultimately get the news that their preferred authority
wants them to have. There is nothing to stop them choosing an insane person as their
authority.”
More people = more problems. The internet’s continuous growth and accelerating
innovation allow more people and artificial intelligence (AI) to create and instantly
spread manipulative narratives
Some experts argued that the scale of the problem – too much bad information too easily
disseminated – is their major concern. The internet facilitates too many information actors
with divergent motives to allow for consistent identification of reliable information and
effective strategies to flag false information.
Andrew Odlyzko, professor of math and former head of the University of Minnesota’s
Supercomputing Institute, observed, “‘What is truth’ has almost always been a contentious
issue. Technological developments make it possible for more groups to construct their
‘alternate realities,’ and the temptation to do it is likely to be irresistible.”
Andrew Nachison, author, futurist and founder of WeMedia, noted, “Technology will not
overcome malevolence. Systems built to censor communication, even malevolent
communication, will be countered by people who circumvent them.”
David Weinberger, writer and senior researcher at Harvard University’s Berkman Klein
Center for Internet & Society, noted, “It is an urgent problem, so it will be addressed
urgently, and imperfectly.”
Jan Schaffer, executive director of J-Lab, said, “There are so many people seeking to
disseminate fake news and produce fake videos in which officials appear to be talking that it
will be impossible to shut them all down. Twitter and Facebook and other social media
players could play a stronger role. Only a few national news organizations will be trusted
sources – if they can manage to survive.”
Brian Cute, longtime internet executive and ICANN participant, said, “I am not optimistic
that humans will collectively develop the type of rigorous habits that can positively impact
the fake news environment. Humans have to become more effective consumers of
information for the environment to improve. That means they have to be active and effective
‘editors’ of the information they consume. And that means they have to be active and
effective editors of the information they share on the internet, because poorly researched
information feeds the fake news cycle.”
Rajnesh Singh, Asia-Pacific director for a major internet policy and standards
organization, observed, “The issue will be how to cope with the volume of information that is
generated and the proportion of it that is inaccurate or fake.”
Steve Axler, a user-experience researcher, replied, “Social media and the web are on too
large a scale to control content.”
A software engineer referred to the human quest for power and authority as the
underlying problem, writing, “Automation, control and monopolization of information
sources and distribution channels will expand, with a goal to monetize or obfuscate.”
Allan Shearer, associate professor at the University of Texas, Austin, observed, “The
problem is the combination of the proliferation of platforms to post news and an increasing
sense of agency in each person that his/her view matters, and the blurring of facts and
opinions.”
A vice president for stakeholder engagement said, “With a deluge of data, people look
for shortcuts to determine what they believe, making them susceptible to filter bubbles and
manipulation.”
Jens Ambsdorf, CEO at The Lighthouse Foundation, based in Germany, replied, “The
variability of information will increase. The amount of ‘noise’ and retweeted stuff will
increase and without skills and tools it will become more difficult for citizens to sort out
reliable from unreliable sources.”
A professor at Harvard Business School wrote, “The vast majority of new users and a
majority of existing users are not sophisticated readers of news facts, slants or content, nor
should we expect them to be. Meanwhile, the methods for manipulation are getting better.”
Diana Ascher, information scholar at the University of California, Los Angeles, observed,
“Fake news, misinformation, disinformation and propaganda are not new; what’s new is the
algorithmic propagation of such information. In my research, I call this the new yellow
journalism.”
Axel Bender, a group leader for Defence Science and Technology (DST) Group of Australia,
said, “The veracity of information is unlikely to improve as 1) there will be an increase in the
number and heterogeneity of (mis)information sources; and 2) artificially intelligent
misinformation detectors will not be smart enough to recognise semantically sophisticated
misinformation.”
Adrian Schofield, an applied research manager based in Africa, commented, “The passive
majority remains blissfully unaware of the potential (and real) threats posed by malicious
operators in the ICT [information and communications technology] space. As fast as the
good guys develop barriers … the bad guys will devise ways to leapfrog the barriers. It’s cheap
and it’s borderless.”
Collette Sosnowy, a respondent who shared no additional personal details, wrote, “The
sources of information and the speed with which they are spread are so numerous I don’t see
how they could effectively be curtailed.”
Monica Murero, a professor and researcher based in Europe, wrote, “The information
environment will not improve easily, in part because of the technical nature of digitalized
information and the tendency of re-elaborating and sharing information by anyone able to
act in a prosumeristic fashion. For example, fake news (or unreliable information) is easy to
produce thanks to the technical nature of digital information (duplicable, easy to modify, free
of costs, durable over time, etc.) and because programs [software] and tools (pre-designed
formats for elaborating images and contents) are widely available to anyone within easy
reach (a few words on any search engine). In the next 10 years I foresee disparities
among countries in terms of improvements and deteriorations of the information
environment (depending on country and their regulation, i.e., China, Europe, North Korea,
U.S., etc.).”
Sebastian Benthall, junior research scientist, New York University Steinhardt, responded,
“The information environment is getting more complex. This complexity provides more
opportunities for production and consumption of misinformation.”
Tiffany Shlain, filmmaker and founder of the Webby Awards, wrote, “I am concerned that as
artificial intelligences advance, distinguishing between what is written by a human and what
is generated by a bot will become more difficult.”
Matt Moore, a business leader, observed, “The pressures driving the creation of ‘fake news’
will only increase – political partisanship, inter-state rivalry, plus the technologies needed to
create and disseminate fake news will also increase in power and decrease in cost. New
verification tools will emerge but these will not be sufficient to counter these other forces.”
Jon Lebkowsky, web consultant/developer, author and activist, commented, “Given the
complexity of the evolving ecosystem, it will be hard to get a handle on it. The
decentralization of education is another difficult aspect: universal centralized digital literacy
education could potentially mitigate the problem, but we could be moving away from
universal standard educational systems.”
The executive director of a major global privacy advocacy organization said,
“What’s essentially happening today is basic human behaviour and powerful systems at play.
It is only out-of-touch advocates and politicians who believe we can somehow constrain these
results.”
Veronika Valdova, managing partner at Arete-Zoe, noted, “Rogue regimes like Russia will
continue exploiting the information environment to gain as much power and influence as
possible. Jurisdictional constraints will make intervention less practicable. Also, whilst the
overall information environment in English-speaking countries might improve due to the
employment of artificial intelligence and easier neutralization of bots, this may not
necessarily be the case for small nations in Europe where the environment is compartmented
by language.”
Joel Reidenberg, chair and professor of law at Fordham University, wrote, “The
complexity of the information ecosystem and the public’s preference for filter bubbles will
make improvements very difficult to achieve at scale.”
Garrett A. Turner, a vice president for global engineering, predicted, “The information
environment will not improve because [promotion of misinformation] has proven to be very
effective and it is also extremely time-consuming to validate or police. In the transmission of
information online it is difficult to decipher factual news from entertainment.”
An author and journalist based in North America wrote, “Fragmenting social groups
and powerful economic interests have the motive and means to create their own narratives.
Who is the status quo that can defeat this in a modern society that likes to define itself as
disruptive, countercultural, rebel, radical – choose the term that fits your tribe.”
Anonymous respondents also commented:
• “There is just too much information and the environment has become so fragmented.”
• “The sheer volume of information and communication is too much.”
• “Many users seem to be indifferent or confused about objectively accurate information,
which is difficult to confirm in an environment of information overload.”
Humans are by nature selfish, tribal, gullible convenience seekers who put the most
trust in that which seems familiar
A share of these respondents supported a view articulated by Peter Eckart, director of
information technology at the Illinois Public Health Institute. He argued, “The problem isn’t
with the sources of information, but with the hearers of it. If we don’t increase our collective
ability to critically analyze the information before us, all of the expert systems in the world
won’t help us.†People believe what they want to believe, these experts argued, and now have
new ways to disseminate the things they believe to others.
David Sarokin, writer, commented, “People spread the information they want to spread,
reliable or not. There’s no technology that will minimize that tendency.”
Helen Holder, distinguished technologist at Hewlett Packard (HP), said, “People have a
strong tendency to believe things that align with their existing understanding or views.
Unreliable information will have a substantial advantage wherever it reinforces biases,
making it difficult to discredit or correct. Also, people are more inclined to believe
information received from more than one source, and the internet makes it trivial to
artificially simulate multiple sources and higher levels of popular support or belief.”
Bill Jones, chairman of Global Village Ltd., predicted, “Trust can be so easily abused that
it’s our collective ability to discern false from true, which ultimately is the key, but that is
fraught with challenges. No one can do it for us.”
A futurist/consultant based in North America said, “The toxicity of the modern
information landscape is as much attributable to vulnerabilities in human neurobiology as it
is to anything embedded in software systems. Many of us, including those with the most
control over the information environment, badly want things to improve, but it’s unclear to
me that purely technical methods can solve these problems.”
Cliff Cook, planning information manager for the City of Cambridge, Massachusetts, noted,
“Fake news and related problems thrive when they have a receptive audience. The underlying
problem is not one of fake news – rumors were no doubt a problem in ancient Rome and the
court of King Henry VIII – but the presence of a receptive audience. Until a means is found
to heal the fundamental breakdown in trust among Americans, I do not see matters
improving, no matter what the technical fix.”
An anonymous respondent wrote, “Google and Facebook are focusing money and
attention on the problem of false information. … We have not yet reached a societal tipping
point where facts are valued, however.”
Matt Armstrong, an independent research fellow working with King’s College and former
executive director of the U.S. Advisory Commission on Public Diplomacy, replied, “The
influence of bad information will not change until people change. At present, there is little
indication that people will alter their consumption habits. When ‘I heard it on the internet’ is
a mark of authority rather than derision as it was, we are in trouble. This is coupled with the
disappointing reality that we are now in a real war of words where many consumers do not
check whether the words are/were/will be supported by actions or facts. The words of now
are all that matter to too many audiences.”
An assistant professor of political science wrote, “Improving information
environments does little to address demand for misinformation by users.”
An anonymous research scientist observed, “False narratives are not new to the
internet, but authority figures are now also beginning to create them.”
A former journalism professor and author of a book on the future of news
commented, “The information superhighway’s very speed and ease have made people
sloppier thinkers, not more discerning.”
A researcher based in Europe replied, “The problem with fake news is not a
technological one, but one related to human nature, fear, ignorance and power. … In
addition, as new tools are developed to fight fake news, those interested in spreading them
will also become more savvy and sophisticated.”
Walter Bender, a senior research scientist at MIT, wrote, “I don’t think the problem is
technological. It is social, and it is not much different from the American Aurora of 1800 in
Philadelphia [a one-sided and largely discredited publication in American Revolution times].
People want to believe what reinforces their current positions, and there will be ‘publishers’
willing to accommodate them.”
Many respondents mentioned distrust in authority as a motivating factor behind the uptick
in the spread of misinformation, and some said political polarization and the destruction of
trust are feeding the emergence of more misinformation.
Daniel Kreiss, associate professor of communication at University of North Carolina,
Chapel Hill, commented, “Misinformation/fake news/ideological/identity media is a political
problem. They are the outcome, not the cause, of political polarization.”
A senior fellow at a center focusing on democracy and the rule of law wrote,
“Many people do not care about the veracity of the news they consume and circulate to
others, and these people will continue spreading false information; those who do so from
within established democracies can be punished/penalized, but many will remain in
non-democracies where access to reliable information will deteriorate. My prediction is that
in parts of the world things will improve, in others they will deteriorate. On average things
will not improve.”
Anonymous respondents also wrote:
• “To really solve this issue we need to look deeper at what truth means and who cares
about it. It will take more than a decade to sort that out and implement solutions.”
• “Collective-action problems require a collective-action response, and I don’t think we’ll
manage that in the international environment.”
• “The information environment reflects society at its best or worst; changes in human
behavior, not technology, will impact on the information environment.”
• “At best, the definition of ‘lie’ will simply change and official disinformation will be called
information anyway.”
• “I have yet to see any evidence that the most-active political media consumers want more
facts and less opinion.”
• “There has never been a wholly truthful human environment, and there are too many
vested interests in fantasy, fiction and untruths.”
• “I do not think technology can keep up with people’s creativity or appetite for information
they find congenial to their pre-existing beliefs.”
• “As long as people want to believe a lie, the lie will spread.”
• “From propaganda to humour, the natural drive to share information will overcome any
obstacles that hinder it.”
• “It will be a constant game of whack-a-mole, and polarization has now come to facts. It’s
almost like facts are a philosophy class exercise now – what is truth?”
In existing economic, political and social systems, the powerful corporate and
government leaders most able to improve the information environment profit most
when it is in turmoil
A number of these experts predicted that little will change as long as social media platforms
favor content that generates lots of clicks – and therefore ad dollars – whether the
information is true or not. A typical version of this view came from Jonathan Brewer,
consulting engineer for Telco2. He commented, “The incentives for social media providers
are at odds with stemming the spread of misinformation. Outrageous claims and hyperbole
will always generate more advertising revenue than measured analysis of an issue.”
Gina Neff, professor at the Oxford Internet Institute, said, “The economic stakes are simply
too high to rein in an information ecosystem that allows false information to spread. Without
the political commitment of major social media platforms to address the problem, the
technical challenges to solving this problem will never be met.”
Ari Ezra Waldman, associate professor of law at the New York Law School, wrote, “The
spread of misinformation will only improve if platforms take responsibility for their role in
the process. So far, although intermediaries like Facebook have nodded toward doing
something about ‘fake news’ and cyberharassment and other forms of misleading or harmful
speech, they simultaneously continue to maintain that they are merely neutral conduits and,
therefore, uneasy about maintaining any sort of control over information flow. The ‘neutral
conduit’ canard is a socio-legal strategy that is little more than a fancy way of absolving
themselves of responsibility for their essential role in the spread of misinformation and the
decay of discourse.”
Joseph Turow, professor of communication at the University of Pennsylvania, commented,
“The issues of ‘fake’ and ‘weaponized’ news are too complex to be dealt with through
automated, quantitative or algorithmic means. These activities have always existed under one
label or another, and their rapid distribution by activist groups, companies and governments
as a result of new technologies will continue. One reason is that the high ambiguity of these
terms makes legislating against them difficult without infringing on speech and the press.
Another reason is that the people sending out such materials will be at least as creative as
those trying to stop them.”
A professor of legal issues and ethics at one of the pre-eminent graduate schools
of business in the United States said, “The basic incentive structure that promotes
untrustworthy information flow won’t change, and the bad guys will improve their
approaches faster than the good guys.”
Dave Burstein, editor of FastNet.news, said, “Speaking of reports on policy and technology,
the important thoroughly misleading information usually comes from the government and
especially lobbyists and their shills. All governments lie, I.F. Stone taught us, and I can
confirm that’s been true of both Obama’s people and the Republicans this century I have
reported. Corporate advocates with massive budgets – Verizon and AT&T in the hundreds of
billions – bamboozle reporters and governments into false claims. The totally outnumbered
public-interest advocates sometimes go over the line as well.”
Johanna Drucker, professor of information studies at the University of California, Los
Angeles, commented, “The constructedness (sic) of discourse removes news from the
frameworks in which verification can occur. Responsible journalism will continue on the
basis of ethical accountability, but nothing will prevent other modes of discourse from
proliferating. No controls can effectively legislate for accuracy or verity. It is a structural
impossibility to suture language and the lived.”
Mercy Mutemi, legislative advisor for the Kenya Private Sector Alliance, commented, “Fake
news spreads faster than genuine news. It is more attractive and ‘hot.’ We do not see
corresponding efforts from genuine news peddlers to give factual information that is timely
and interesting. On the contrary, reporters have become lazy, lifting articles off social media
and presenting only obvious facts. Fake news peddlers have invested resources (domains and
bots) to propagate their agenda. There isn’t a corresponding effort by genuine news
reporters. People will get so used to being ‘duped’ that they will treat everything they read
with skepticism, even real news. It will no longer be financially viable to invest in real news as
the readership may go down. In such an environment, it is likely fake news will continue to
thrive.”
A professor of media and communication based in Europe said, “The online
information environment will not improve if its architectural design, operation and control is
left to five big companies alone. If they do not open up their algorithms, data governance and
business models to allow for democratic and civic participation (in other words, if there is
only an economic driver to rule the information environment) the platform ecosystem will
not improve its conditions to facilitate an open and democratic online world.”
A leading researcher studying the spread of misinformation observed, “The payoffs
for actors who are able to set the agenda in the emerging information environment are rising
quickly. Our collective understanding of and ability to monitor these threats and establish
ground rules across disparate communities, geographies and end devices will be challenged.”
A research scientist at Oxford University commented, “Misinformation and
disinformation and motivated reasoning are integral to platform capitalism’s business
model.”
Rick Hasen, professor of law and political science at the University of California, Irvine,
said, “By 2027 there will be fewer mediating institutions such as acceptable media to help
readers/viewers ferret out truth. And there will be more deliberate disinformation from
people in and out of the U.S.”
Raymond Hogler, professor of management at Colorado State University, replied,
“Powerful state actors … will continue to disseminate false, misleading and ideologically
driven narratives posing as ‘news.’”
A member of the Internet Architecture Board said, “The online advertising ecosystem
is very resistant to change, and it powers the fake news ‘industry.’ Parties that could do
something about it (e.g., makers of browsers) don’t have a strong incentive to do so.”
A professor of law at a state university replied, “Powerful incentives will continue for
irresponsible politicians and others in the political industry (paid or not) to spread false
information and for publications to allow it to circulate: attention, clicks, ad revenue,
political power. Meanwhile the First Amendment will protect [sharing of all information]
powerfully inside the United States as the overall moral and ethical character of the country
continues to be debased.”
An author/editor/journalist wrote, “Confirmation bias, plus corporate manipulation, will
not allow an improvement in the information environment.”
An internet pioneer and principal architect in computing science replied, “Clicks
will remain paramount, and whether those clicks are on pages containing disinformation or
not will be irrelevant.”
Edward Kozel, an entrepreneur and investor, predicted, “Although trusted sources (e.g.,
The New York Times) will remain or new ones will emerge, the urge for mass audience and
advertising revenue will encourage widespread use of untrusted information.”
David Schultz, professor of political science at Hamline University, said, “The social media
and political economic forces that are driving the fragmentation of truth will not significantly
change in the next 10 years, meaning the forces that drive misinformation will continue.”
Paul Gardner-Stephen, senior lecturer at the College of Science & Engineering at Flinders
University, noted, “Increasing technical capability and automation, combined with the
demonstrated dividends that can be obtained from targeted fake news, makes an arms race
inevitable. Governments and political parties are the major players. This is Propaganda 2.0.”
Peter Levine, associate dean and professor at the Tisch College of Civic Life at Tufts
University, observed, “I don’t think there is a big enough market for the kinds of institutions,
such as high-quality newspapers, that can counter fake news, plus fake news pays.”
A postdoctoral scholar at a major university’s center for science, technology and
society predicted, “Some advances will be made in automatically detecting and filtering
‘fake news’ and other misinformation online. However, audience attention and therefore the
financial incentives are not aligned to make these benefits widespread. Even if some online
services implement robust filtering and detection, others will happily fill the void they leave,
pandering to a growing audience willing to go to ‘alternative’ sites to hear what they want to
hear.”
David Brake, a researcher and journalist, pointed out, “The production and distribution of
inaccurate information has lower cost and higher incentives than its correction does.”
Mark Lemley, a professor of law at Stanford University, wrote, “Technology cannot easily
distinguish truth from falsehood, and private technology companies don’t necessarily have
the incentive to try.”
Darel Preble, president and executive director at the Space Solar Power Institute,
commented, “Even the technical media … is substituting ad hominem attacks (or volume)
and repetition for technical accuracy to complex problems. Few people are familiar with or
want to risk their paycheck to see these problems fixed, so these problems will continue
growing for now.”
Amali De Silva-Mitchell, a futurist, replied, “There is political and commercial value in
misinformation. Absolutely ethical societies have never existed. Disclosures are critical and it
will be important to state the source of news as being human or machine, with the legal
obligation remaining with the human controller of the data.”
Some said the information environment is impossible to fully tame due to the human drive to
continually innovate, competing to upgrade, monetize and find new ways to assert power.
Alan D. Mutter, media consultant and faculty at the graduate school of journalism at the
University of California, Berkeley, replied, “The internet is, by design, an open and
dynamically evolving platform. It’s the Wild West, and no one is in charge.”
Anonymous respondents commented:
• “‘Fake news’ is just the latest incarnation of propaganda in late capitalism.”
• “The profit motive will be put in front of value. The reliance of corporations on
algorithms that allow them to do better targeting leads to greater fragmentation and
greater possibility for misinformation.”
• “People have to use platforms for internet communication. The information environment
is managed by the owners of these platforms who may not be so interested in ethical
issues.”
• “We cannot undo the technology and economics of the current information environment,
nor can we force those who are profiting from misinformation to forego their monetary
gains.”
Human tendencies and infoglut drive people apart and make it harder for them to
agree on ‘common knowledge.’ That makes healthy debate difficult and destabilizes
trust. The fading of news media contributes to the problem
Many of these experts said one of the most serious problems caused by digital
misinformation and the disruption of public support of traditional news media models is the
shrinkage of the kind of commonly embraced facts that are the foundation of civil debate – a
consensus understanding of the world. An anonymous respondent predicted, “The
ongoing fragmentation of communities and the lack of a common voice will lead to lower
levels of trust.”
A professor of education policy commented, “Since there is no center around which to
organize truth claims (fragmented political parties, social groups, identity groups,
institutional affiliations, fragmentation of work environments, increasing economic
precarity, etc.) … there are likely to be more, not fewer, resources directed at destabilizing
truth claims in the next 10 years.”
An historian and former legislative staff person based in North America observed,
“A major issue here is that what one side believes is true, is not the same as what the other
side believes. Example: What Yankees and Confederates believed about the Civil War has
never been the same, and there are differing social and cultural norms in different ages,
times, regions and religions that have different ‘takes’ on what is right and proper behavior.
We are facing an almost existential question here of ‘what is truth?’”
Daniel Wendel, a research associate at MIT, said, “Trust is inherently personal. While
central authorities can verify the identity of a particular website or person, consumers are
less likely to trust a ‘trusted’ centralized fact checker [than the sources that express the same
belief system as they and their friends]. For example, Snopes.com has already been
discounted by right-wing pundits as being too ‘liberal.’ Trust must come from networks
rather than authorities, but the ideas behind that are nascent and the technologies do not yet
exist.”
Philip Rhoades, retired IT consultant and biomedical researcher with the Neural Archives
Foundation, said, “The historical trend is for information to be less reliable and for people to
care less.”
A professor of rhetoric and communication noted, “People can easily stay in their own
media universe and never have to encounter ideas that conflict with their own. Also, the
meshing of video and images with text creates powerful effects that appeal to the more
rudimentary parts of the brain. It will take a long time for people to adapt to the new media
environment.”
A professor of journalism at New York University observed, “The fragmentation of
the sources of media – and increasing audience participation – meant that it was no longer
just canonical sources that could get their voices amplified.”
A number of respondents challenged the idea that any individuals, groups or technology
systems could or should “rate” information as credible or not.
A professor of political economy at a U.S. university wrote, “I don’t think there is a
clear, categorical distinction between ‘false’ news and the other kind. Some falsehoods have
been deliberately fostered by elites for purposes of political management – the scope has
widened dramatically in recent years.”
Greg Shatan, partner at Bortstein Legal Group based in New York, replied, “Unfortunately,
the incentives for spreading false information, along with the incentives for destabilizing
trust in internet-based information, will continue to incentivize the spread of ‘fake news.’
Perversely, heightened concerns about privacy and anonymity are counterproductive to
efforts to increase trust and validation.”
A project manager for the U.S. government responded, “It is going to get much worse
before it gets better. There is no sign that people are willing to work at what we agree on;
most would prefer to be divisive and focus on differences.”
An anonymous research scientist said, “I do not buy the assumption that information,
‘accurate’ or not, is the basis of political or – in fact – any action. I actually think it never has
been. Yes, this is the story we like to tell when justifying actions vis-a-vis everyone else. It
helps us present ourselves as rational, educated and considerate human beings. But no, in
practice we do and say and write and report whatever seems reasonable in the specific
situation for the specific purposes at hand. And that is OK, as long as others have the
opportunity to challenge and contest our claims.”
Some respondents noted that trust has to be in place before people can establish any sort of
shared knowledge or begin to debate and decide the facts on which decisions can be based.
An anonymous internet activist/user based in Europe commented, “Who can
determine what is or is not fake news?”
A principal research scientist based in North America commented, “The
trustworthiness of information is a subjective measure as seen by the consumer of that
information.”
An anonymous futurist/consultant said, “Technology and platform design is only one
part of the problem. Building trust and spreading information-quality skills takes time and
coordination.”
A director with a digital learning research unit at a major university on the U.S.
West Coast said, “As the technology evolves, we will find ways (technologically) and also
culturally to become savvier about the way in which we manage and define ‘trustworthiness.’”
A small segment of society will find, use and perhaps pay a premium for information
from reliable, quality sources. Outside of this group ‘chaos will reign’ and a worsening
digital divide will develop
A deeper digital divide was predicted by some respondents who said that 10 years from now
those who value accurate information and are willing to spend the time and/or money to get
it will separate from those who do not. Alex ‘Sandy’ Pentland, member of the U.S.
National Academy of Engineering and the World Economic Forum, predicted of the
information environment, “Things will improve, but only for the minority willing to pay
subscription prices.”
An anonymous journalist observed, “One of today’s most glaring class divides is between
those who are internet-savvy and so skilled at evaluating different sources and information
critically that it’s almost instinctive/automatic, and those who have very limited skills in that
department. This divide is usually glaringly obvious in anyone’s Facebook feed now that such
a large portion of the population is on Facebook, and the lack of ability to evaluate sources
online critically is most common in older persons with limited education and/or limited
internet proficiency – and can sometimes also be observed in young people with the same
attributes (limited education/internet proficiency).”
Garland McCoy, president of the Technology Education Institute, predicted, “As most of us
know there is the public internet, which operates as a ‘best effort’ platform and then there are
private internets that command a premium because they offer much more reliable service. So
it will be with the ‘news’ and information/content on the internet. Those who have the
resources and want fact checking and vetting will pay for news services, which exist today,
that charge a subscription and provide, for the most part, vetted/authenticated facts ‘news.’
Those who do not have the resources or who don’t see the ‘market value’ will take their
chances exploring the world of uncensored, unfiltered and uncontrolled human mental
exertion.”
A professor whose research is focused on this topic wrote, “I can envisage [several]
scenarios – trusted networks (where false information is pointed out), or the wild unbounded
morass. It may well be that one will have to pay to join such a trusted network because those
who can provide trusted information will be paid to do so.”
Meamya Christie, user-experience designer with Style Maven Linx, replied, “There will be
a division in how information is consumed. It will be like a fork in the road. People will have
a choice to go through one portal or another based on their own set of core values, beliefs and
truths.”
A strategist for an institute replied, “The trust in 2027 will be only for the elites who can
pay, or for the most-educated people.”
A fellow at a UK-based university said, “I don’t think a technological or top-down
solution can ‘fix’ the information environment without addressing a range of root issues
relating to democratic disenfranchisement, deteriorating education and anti-intellectualism.”
A senior research fellow working for the positive evolution of the information
environment said, “Only a small fraction of the population (aged, educated, affluent – i.e.,
ready to pay for news) will have good, balanced, fair, accurate, timely, contextualized
information.”
Theme 2: The information environment will not improve
because technology will create new challenges that
can’t or won’t be countered effectively and at scale
Many respondents who expect no improvement in the information environment argue that
certain actors in government, business and other individuals with propaganda agendas and
special interests are turning technology to their favor in the spread of misinformation. There
are too many of them and they are clever enough that they will continue to infect the online
information environment, according to these experts.
A clear articulation of this view came from Howard Greenstein, adjunct professor of
management studies at Columbia University. He argued, “This is an asymmetric problem. It
is much easier for single actors and small groups to create things that are spread widely, and
once out, are hard to ‘take back.’” Moreover, the process of distinguishing between legitimate
information and questionable material is very difficult, those who support this line of
reasoning said.
An anonymous respondent wrote, “Whack-a-mole seems to be our future. There is an
inability to prevent new ways of disrupting our information systems. New pathways will
emerge as old ones are closed.”
Those generally acting for themselves and not the public good have the advantage,
and they are likely to stay ahead in the information wars
Eric Burger, research professor of computer science and director of the Georgetown Center
for Secure Communications in Washington, D.C., replied, “Distinguishing between fake
news, humor, strange-but-true news or unpopular news is too hard for humans to figure out,
no less a computer.”
Wendell Wallach, a transdisciplinary scholar focused on the ethics and governance of
emerging technologies at The Hastings Center, wrote, “While means will be developed to
filter out existing forms of misinformation, the ability to undermine core values will continue
to be relatively easy while steps to remediate destructive activities will be much harder and
more costly. Furthermore, a gap will expand as technological possibilities speed ahead of
their ethical-legal oversight. Those willing to exploit this gap for ideological purposes and
personal gain will continue to do so.”
Justin Reich, assistant professor of comparative media studies at MIT, noted, “Strategies to
label fake news will require algorithmic or crowd-sourced approaches. Purveyors of fake
news are quite savvy at reverse engineering and gaming algorithms, and equally adept at
mobilizing crowds to apply ‘fake’ labels to their positions and ‘trusted’ labels to their
opponents.”
Sean Goggins, an associate professor and sociotechnical data scientist, wrote, “Our
technical capacity to manipulate information will continue to grow. With investment tilted
toward for-profit enterprise and the intelligence community and away from public-sector
research like that sponsored by the National Science Foundation, it’s doubtful that
technology for detecting misinformation will keep up with technologies designed to spread
misinformation.”
An associate professor of communication studies at a Washington-based
university said, “The fake news problem is not one that can be fixed with engineering or
technological intervention short of a total reimagination of communication network
architecture.”
Fredric Litto, professor emeritus at the University of São Paulo in Brazil, wrote, “The
incredibly complex nature of contemporary information technology will inevitably make for a
continuing battle to reduce (note: I dare not say eliminate) false and undesirable ‘news’ and
other information permeating electronic media. Without a foolproof method of truly
eliminating the possibility of anonymity – and I cannot see this really happening by 2027 –
there will be no end to the malicious use of most, if not all, modes of communication.”
Michel Grossetti, research director at CNRS (French National Center for Scientific
Research), commented, “It is the old story of the bullet and the cuirass. Improvement on one
side, improvement on the other.”
Daniel Berleant, author of the book “The Human Race to the Future,” predicted, “Digital
and psychological technologies for the spreading of misinformation will continue to improve,
and there will always be actors motivated to use it. Ways to prevent it will develop as well but
will be playing catch-up rather than taking the lead.”
John Lazzaro, a retired electrical engineering and computing sciences professor at the
University of California, Berkeley, wrote, “I don’t think society can reach a consensus on what
constitutes misinformation, and so trying to automate the removal of misinformation won’t
be possible.”
Andreas Birkbak, assistant professor at Aalborg University in Copenhagen, said, “The
information environment will not improve because there is no way to automate fact checking.
Facts are context-dependent.”
A North American program officer wrote, “While technology may stop bots from
spreading fake news, I don’t think it will be that easy to stop people who want to believe the
fake news and/or make up the fake news.”
A researcher based in North America said, “News aggregators such as Facebook will get
better at removing low-information content from their news feeds but the amount of
mis/disinformation will continue to increase.”
Joseph Konstan, distinguished professor of computer science and engineering at the
University of Minnesota, observed, “Those trying to manipulate the public have great
resources and ingenuity. While there are technologies that can help identify reliable
information, I have little confidence that we are ready for widespread adoption of these
technologies (and the censorship risks that relate to them).”
A former software systems architect replied, “Bad actors will always find ways to work
around technical measures. In addition, it is always going to be human actors involved in the
establishment of trust relationships and those can be gamed. I do not envision media
organizations being willing participants.”
Can technology detect and flag trustworthy information? A North American research
scientist said the idea of basing likely veracity on people’s previous information-sharing
doesn’t always work, writing, “People don’t just share information because they think it’s
true. They share to mark identity. Truth-seeking algorithms, etc. don’t address this crucial
component.”
A vice president for an online information company wrote, “It is really hard to
automatically determine that some assertion is fake news or false. Using social media and
‘voting’ is overcome by botnets for example.”
J. Cychosz, a content manager and curator for a scientific research organization,
commented, “False information has always been around and will continue to remain.
Technology will emerge that will help identify falsehoods, and culture will shift, but there
will always be those who find a path around.”
Philippa Smith, research manager and senior lecturer in new media at Auckland
University of Technology, noted, “Efforts to keep pace with technology and somehow
counteract the spread of misinformation or fake news may be more difficult than we imagine.
I have concerns that the horse has bolted when it comes to trying to improve the information
environment.”
Frank Odasz, president of Lone Eagle Consulting, observed, “Having watched online scams
of all kinds evolve to be increasingly insidious, I expect this trend will continue and our best
cybersecurity will forever be catching up with, not eradicating [it]. The battle between good
and evil is accelerated digitally.”
Ed Terpening, an industry analyst with the Altimeter Group, replied, “Disinformation will
accelerate, as trust in institutions we’ve thought of as unbiased widen polarization through
either hiding or interpreting facts that fulfill an agenda.”
Basavaraj Patil, principal architect at AT&T, wrote, “The rapid pace of technological
change and the impact of false information on a number of aspects of life are key drivers.”
Bradford W. Hesse, chief of the health communication and informatics research branch of
the U.S. National Cancer Institute, said, “Communication specialists have been dealing with
the consequences of propaganda, misinformation and misperceived information from before
and throughout the Enlightenment. What has changed is the speed with which new
anomalies are detected and entered into the public discourse. The same accelerated capacity
will help move the needle on social discourse about the problem, while experimenting with
new solutions.”
Liam Quin, an information specialist at the World Wide Web Consortium, said the
information environment is unlikely to be improved because “human nature won’t change in
such a short time, and people will find ways around technology.”
Alan Inouye, director of public policy for the American Library Association, commented,
“New technologies will continue to provide bountiful opportunities for mischief. We’ll be in
the position of playing defense as new abuses or attacks arise.” However, he also added, “This
will be a future that is, on balance, not worse than today’s situation.”
A distinguished engineer for a major provider of IT solutions and hardware
warned that any sort of filtering system will flag, filter or delete useful content along with the
misinformation: “It’s not possible to censor the untrustworthy news without filtering some
trustworthy news. That struggle means the situation is unlikely to improve.”
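The engineer’s point can be made concrete with a toy filtering model. The classifier scores and thresholds below are invented for illustration; the pattern, not the numbers, is the claim.

```python
# Toy numbers illustrating the filtering trade-off: any threshold loose
# enough to catch more untrustworthy items also flags more trustworthy
# ones. All scores are hypothetical classifier outputs.
untrustworthy = [0.9, 0.8, 0.7, 0.4]  # higher score = judged more suspect
trustworthy = [0.6, 0.3, 0.2, 0.1]

for threshold in (0.75, 0.5, 0.25):
    caught = sum(score >= threshold for score in untrustworthy)
    collateral = sum(score >= threshold for score in trustworthy)
    print(f"threshold {threshold}: caught {caught}/4 suspect items, "
          f"wrongly flagged {collateral}/4 legitimate items")
```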
Weaponized narratives and other false content will be magnified by social media,
online filter bubbles and AI
Some respondents noted that the people best served by the manipulation of public
sentiment, arousing fear and anger and obfuscating reality, are encouraged by their success
now and that gives them plenty of incentive to make things worse in the next decade. As a
professor and author based in the United States put it, “Too many people have
realized that lying helps their cause.”
An anonymous respondent based in Asia/Southeast Asia replied, “We are being
‘gamed,’ simply put.”
Alexis Rachel, user researcher and consultant, said, “The logical progression of things at
this point (unless something radical occurs) is that there will be increasingly more ‘sources’
of information that are unverified and unvetted – a gift from the internet and the ubiquitous
publishing platform it is. All it takes is something outrageous and plausible enough to go
viral, and once out there, it becomes exceedingly difficult to extinguish – fact or fiction.”
Martin Shelton, a security researcher with a major technology company, said, “Just as it’s
now straightforward to alter an image, it’s already becoming much easier to manipulate and
alter documents, audio, and video, and social media users help these fires spread much faster
than we can put them out.”
Matt Stempeck, a director of civic technology, noted, “The purveyors of disinformation will
outpace fact-checking groups in both technology and compelling content unless social media
platforms are able to stem the tide.”
Alf Rehn, chair of management and organization studies at Åbo Akademi University,
commented, “Better algorithms will sort out some of the chaff [and may improve the overall
information environment] but at the same time the weaponization of fake news will develop.
As strange as it seems, we may enter a time of less, but ‘better’ [more effective] fake news.”
An anonymous respondent wrote, “Distrust of academics and scientists is so high it’s
hard to imagine how to construct a fact-checking body that would be trusted by the broader
population.”
The most-effective tech solutions to misinformation will endanger people’s dwindling
privacy options, and they are likely to limit free speech and remove the ability for
people to be anonymous online
While some people believe more surveillance and requirements for identity authentication
are go-to solutions for reining in the negative impacts of misinformation, a number of these
experts said bad actors will evade these measures and platform providers, governments and
others taking these actions will expand unwanted surveillance and curtail civil liberties.
Fred Davis, a futurist based in North America, wrote, “Automated efforts to reduce fake
news will be gamed, just like search is. That’s 20 years of gaming the system – search engine
optimization and other things that corrupt the information discovery process have been in
place for over 20 years, and the situation is still bad. Also, it may be difficult to implement
technology because it could also be used for mass censorship. Mass censorship would have a
very negative effect on free speech and society in general.”
Adam Powell, project manager at the Internet of Things Emergency Response Initiative at
the University of Southern California, said, “The democratization of the internet, and of
information on the internet, means just that: Everyone has and will have access to receiving
and creating information, just as at a watercooler. Not only won’t the internet suddenly
become ‘responsible,’ it shouldn’t, because that is how totalitarian regimes flourish (see:
Firewall, Great, of China).”
An eLearning specialist observed, “Any system deeming itself to have the ability to ‘judge’
information as valid or invalid is inherently biased.” And a professor and researcher
noted, “In an open society, there is no prior determination of what information is genuine or
fake.”
The owner of a consultancy replied, “We’re headed to a world where most people will use
sources white-listed (explicitly or not) by third parties (e.g., Facebook, Apple, etc.).”
A distinguished professor emeritus of political science at a U.S. university wrote,
“Misinformation will continue to thrive because of the long (and valuable) tradition of
freedom of expression. Censorship will be rejected.”
A professor at a major U.S. university replied, “Surveillance technologies and financial
incentives will generate greater surveillance.” A retired university professor predicted,
“Increased censorship and mass surveillance will tend to create official ‘truths’ in various
parts of the world. In the United States, corporate filtering of information will impose the
views of the economic elite.”
Among the respondents to this canvassing who recommended the removal of anonymity was
Romella Janene El Kharzazi, a content producer and entrepreneur, who said, “One
obvious solution is required authentication; fake news is spread anonymously and if that is
taken away, then half of the battle is fought and won.” A research scientist based in
Europe predicted, “The different actors will take appropriate measures – including efficient
interfaces for reporting and automatic detection – and implement efficient decision
mechanisms for the censorship of such content.”
A senior researcher and distinguished fellow for a major futures consultancy
observed, “Reliable fact checking is possible. Google in particular has both the computational
resources and talent to successfully launch a good service. Facebook may also make progress,
perhaps in a public consortium including Google. Twitter is problematic and would need
major re-structuring including a strict, true names policy for accounts – which is
controversial among some privacy sectors.”
A retired consultant and strategist for U.S. government organizations replied,
“Regardless of technological improvements, the change agents here are going to have to be,
broadly speaking, U.S. Supreme Court judges’ rulings on constitutional interpretations of
free speech, communication access and any number of other constitutional issues brought to
the fore by many actors at both the state and national level, and these numerous judicial
change agents’ decisions are, in turn, affected by citizen opinion and behavior.”
Anonymous respondents also commented:
• “The means and speed of dissemination have changed [the information environment]. It
cannot be legislated without limiting free speech.”
• “It’s impossible to filter content without bias.”
• “The internet is designed to be decentralized; not with the purpose of promoting accuracy
or social order.”
• “There is no way – short of overt censorship – to keep any given individual from
expressing any given thought.”
• “Blocking (a.k.a. censoring) information is just too dangerous.”
• “I do not think it can be stopped without doing a lot of damage to freedom of speech.”
• “Forces of evil will get through the filters and continue to do damage while the majority
will lose civil rights and many will be filtered or banned for no good reason.”
• “It’s a hard problem to solve fairly.”
Theme 3: The information environment will improve
because technology will help label, filter or ban
misinformation and thus upgrade the public’s ability to
judge the quality and veracity of content
Many respondents who said they hope for or expect an improvement in the information
environment 10 years from now mentioned ways in which new technological and verification
solutions might be implemented. A number of these proposed solutions rest on the hope that
technology will be created to evaluate content – making it “assessable.”
Andrea Forte, associate professor at Drexel University, said, “As mainstream social media
take notice of information quality as an important feature of the online environment, there
will be a move towards designing for what I call ‘assessability’ – interfaces that help people
make appropriate assessments of information quality.”
Filippo Menczer, professor of informatics and computing at Indiana University, noted,
“Technical solutions can be developed to incorporate journalistic ethics into social media
algorithms, in a way similar to email spam filters.”
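Menczer’s spam-filter analogy can be sketched in a few lines: a naive Bayes text classifier of the kind email filters have long used, retargeted at headlines. The tiny labeled dataset below is hypothetical, and a real system would need large, audited corpora – and would still face the gaming problems other respondents describe.

```python
# Minimal sketch of a spam-filter-style misinformation classifier.
# The labeled examples are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

headlines = [
    "Scientists publish peer-reviewed study on vaccine safety",
    "Government report details quarterly employment figures",
    "SHOCKING: miracle cure THEY don't want you to know about",
    "You won't BELIEVE what this one weird trick does",
]
labels = ["reliable", "reliable", "suspect", "suspect"]

# Same pipeline shape as a classic email spam filter: text features
# (TF-IDF over word unigrams and bigrams) feeding naive Bayes.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
model.fit(headlines, labels)

# Flag a new item the way a spam filter quarantines a message.
print(model.predict(["Miracle trick doctors don't want you to know"]))
```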
Scott Fahlman, professor emeritus of AI and language technologies at Carnegie Mellon
University, commented, “For people who are seriously trying to figure out what to believe,
there will be better online tools to see which things are widely regarded as true and which
have been debunked.”
Robert Bell, co-founder of the Intelligent Community Forum, commented, “Technology
moves fast and humans adapt more slowly, but we have a proven capability to solve problems
we create with technology.”
Joanna Bryson, associate professor and reader at University of Bath and affiliate with the
Center for Information Technology Policy at Princeton University, responded, “We are in the
information age, and I believe good tools are likely to be found in the next few years.”
David J. Krieger, director of the Institute for Communication & Leadership in Lucerne,
Switzerland, commented, “The information environment will improve because a data-driven
society needs reliable information, and it is possible to weed out the false information.”
Andrew McStay, professor of digital life at Bangor University in Wales, wrote,
“Undoubtedly, fake news and weaponised information will increase in sophistication, but so
will attempts to combat it. For example, the scope to analyse at the level of metadata is a
promising opportunity. While it is an arms race, I do not foresee a dystopian outcome.”
Clifford Lynch, director of the Coalition for Networked Information, noted, “The severity
of the problem has now been recognized fairly widely, and while I expect an ongoing ‘arms
race’ in the coming decade, I think that we will make some progress on the problem.”
A CEO and research director noted, “There are multiple incentives, economic and
political, to solve the problem.”
An anonymous respondent said, “The public will insist that online platforms take more
responsibility for their actions and provide more tools to ensure information veracity.”
Likely tech-based solutions include adjustments to algorithmic
filters, browsers, apps and plug-ins and the implementation of ‘trust ratings’
Matt Mathis, a research scientist at Google, responded, “The missing concept is an
understanding of the concept of ‘an original source.’ For science, this is an experiment, for
history (and news) an eyewitness account by somebody who was (verifiably) present.
Adding ‘how/why we know this’ to non-original sources will help the understanding that
facts are verifiable.”
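A minimal sketch of Mathis’s “how/why we know this” idea might attach a provenance record to every claim. The field names and the originality rule below are illustrative assumptions, not an existing standard.

```python
# Sketch of attaching "how/why we know this" metadata to a claim, per
# Mathis's notion of an original source. Fields are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Provenance:
    kind: str          # e.g., "experiment", "eyewitness", "secondary report"
    source: str        # who or what produced the evidence
    verification: str  # how presence/authenticity was established

@dataclass
class Claim:
    text: str
    provenance: list = field(default_factory=list)

    def is_originally_sourced(self) -> bool:
        # Assumed rule: a claim is originally sourced if at least one piece
        # of evidence is an experiment or an eyewitness account.
        return any(p.kind in ("experiment", "eyewitness") for p in self.provenance)

report = Claim(
    "City council approved the budget",
    [Provenance("eyewitness", "reporter at the session", "press credential check")],
)
print(report.is_originally_sourced())  # True
```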
Federico Pistono, entrepreneur, angel investor and researcher with Hyperloop TT,
commented, “Algorithms will be tailored to optimize more than clicks – as this will be
required by advertisers and consumers alike – and deep learning approaches will improve.”
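One hedged reading of Pistono’s prediction is a feed-ranking objective that blends predicted engagement with a credibility signal instead of optimizing clicks alone. The weighting and scores below are invented for illustration.

```python
# Illustrative re-ranking objective that "optimizes more than clicks."
# The weight w and all item scores are hypothetical.
def rank_score(p_click: float, credibility: float, w: float = 0.6) -> float:
    """Blend predicted engagement with a credibility prior; w tunes the trade-off."""
    return (1 - w) * p_click + w * credibility

items = [
    {"title": "Outrage bait", "p_click": 0.9, "credibility": 0.2},
    {"title": "Measured analysis", "p_click": 0.4, "credibility": 0.9},
]
ranked = sorted(items, key=lambda i: rank_score(i["p_click"], i["credibility"]),
                reverse=True)
for item in ranked:
    print(item["title"])  # "Measured analysis" now outranks the bait
```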
Tatiana Tosi, netnographer at Plugged Research, commented, “The information
environment will improve due to new artificial-intelligence bots that will verify the
information. This should balance privacy and human rights in the automated environment.â€
A web producer/developer for a U.S.-funded scientific agency predicted, “The
reliance on identity services for real-world, in-person interactions, which start with trust in
web-based identification, will force reliability of information environments to improve.”
An associate professor of business at a university in Australia commented, “Artificial
intelligence technologies are advancing quickly enough to create an ‘Integrity Index’ for news
sources even down to the level of individual commentators. Of course, other AI engines will
attempt to game such a system. I can envisage an artificial blogger that achieves high levels of
integrity before dropping the big lie just in time for an election. Big lies take a day or more to
be disproven so it may just work, but the penalty for a big lie, or any lie, can be severe so
everyone who gained from the big lie will be tainted.”
A distinguished engineer for one of the world’s largest networking technologies
companies commented, “Certificate technologies already exist to validate a website’s
sources and are in use for financial transactions. These will be used to verify sources for
information in the future. Of course, there will always be people who look for information
(true or false) that validates their biases.”
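The certificate technologies this engineer cites are scriptable today with standard libraries; the speculative step is extending such checks from transaction security to news-source verification. A minimal sketch:

```python
# Inspect a site's TLS certificate with Python's standard library, the
# kind of source validation already used for financial transactions.
import socket
import ssl

def peer_certificate(host: str, port: int = 443) -> dict:
    context = ssl.create_default_context()  # validates against system CAs
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()

cert = peer_certificate("www.pewresearch.org")
print(cert["subject"], cert["notAfter"])  # verified identity and expiry
```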
Ayaovi Olevie Kouami, chief technology officer at the Free and Open Source Software Foundation for Africa, said, "The actual framework of the internet ecosystem could have a positive impact on the information environment by setting up all the requisite institutions, beginning with DNSSEC, IXPs, FoE, CIRT/CERT/CSIRT, etc."
Jean Paul Nkurunziza, a consultant based in Africa, commented, "The expected mass adoption of the IPv6 protocol will allow every device to have a public IP address and then allow the tracking of the origin of any online publication."
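A hedged sketch of the idea, using Python's ipaddress module: under IPv6 every device can hold a globally routable address, whereas IPv4 publishers often sit behind shared NAT addresses. (Whether such attributability is desirable for privacy is a separate debate other respondents raise.)

    import ipaddress

    def describe_origin(addr: str) -> str:
        ip = ipaddress.ip_address(addr)
        if ip.version == 6 and ip.is_global:
            return f"{ip}: globally routable IPv6 - origin attributable"
        if ip.version == 4 and ip.is_private:
            return f"{ip}: private IPv4 behind NAT - origin ambiguous"
        return f"{ip}: no simple attribution story"

    print(describe_origin("2001:4860:4860::8888"))  # a public IPv6 resolver
    print(describe_origin("192.168.1.10"))          # typical home-network IPv4

In practice, of course, VPNs, proxies and botnets mean an IP address identifies a machine at best, never an author.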
Mark Patenaude, vice president for innovation, cloud and self-service technology at ePRINTit Cloud Technology, replied, "New programming tech and knowledge will create a new language that will teach us to recognize malicious, false, misleading information by gathering all news and content sources and providing us with accurate and true information."
Hazel Henderson, futurist and CEO of Ethical Markets Media, said, "Global ethical standards and best practices are being developed in the many domains affected. New verification technologies, including blockchain and smart contracts, will help."
An anonymous respondent based in North America who has confidence things may be improved listed a series of technologies likely to be effective, writing: "Artificial intelligence, machine learning, exascale computing from everywhere, quantum computing, the Internet of Things, sensors, big data science and global collaborative NREN (National Research and Education Network) alliances."
An anonymous respondent based in Europe warned, "Technical tools and shields to filter and recognize manipulations will be more effective than attempts at education in critical thinking for end users."
Anonymous survey participants also responded:
• "Relatively simple steps and institutional arrangements can minimize the malign influence of misinformation."
• "Machines are going to get increasingly better at validating accuracy of information and will report on it."
• "Artificial intelligence technologies will advance a lot, making it easy to make fake news more difficult to be discovered and identified."
• "Technology for mass verification should improve, as will the identification of posters. Fakers will still exist but hopefully the half-life of their information will shrink."
• "Things will improve due to [better tracking of the] provenance of data and security and privacy laws."
Regulatory remedies could include software liability law, required identities and the
unbundling of social networks like Facebook
A number of respondents said that evidence suggests people and internet content platform
providers can't solve this problem and argued there will be pressure for regulatory reforms
that hold consistently bad actors responsible.
An associate professor at a major Canadian university said, "As someone who has followed the information-retrieval community's development over the past 15 years – dealing with spam, link farms, etc. – given a strong enough incentive, technologies will advance to address the challenge of misinformation. This may, however, be unevenly distributed, and may be more effective in domains such as e-business where there is a financial incentive to combat misinformation."
An anonymous respondent wrote, "I hope regulators will recognise that social media companies are publishers, not technology companies, and therefore must take responsibility for what they carry. Perhaps then social media companies will limit the publication of false advertising and misinformation."
A professor of media and communication based in Europe said, "It will be very difficult to assign penalties to culprits when platforms deny responsibility for any
wrongdoing by their 'users.' Accountability and liability should definitely be assumed by platform operators who spread news and information, regardless of its source and even if unwittingly. Government has very limited power to [address] 'fake news' or 'misinformation' but it can definitely help articulate which actors in society are responsible."
A senior vice president for government relations predicted, "Governments should and will impose additional obligations on platforms to increase their responsibility for content on their services."
One possibility that a notable share of respondents mentioned is the requirement of an authenticated ID for every user of a platform. An anonymous respondent said, "Bad actors should be banned from access, but this means that a biography or identification of some sort would be necessary for all participants."
Those in support of requiring internet users to provide a real identity when participating online also mentioned the establishment of a reputation system. A partner in a services and development company based in Switzerland commented, "A bad reputation is the best penalty for a liar. It is the job of society to organize itself in a way to make sure that the bad reputation is easily visible. It should also extend to negligence and any other related behaviour allowing the spread of misinformation. Penal law alone is too blunt a tool and should not be regarded as a solution. Modern reputation tools (similar in approach to what financial audits and ratings have achieved in the 20th century) need to be built and their use must become an expected standard (just like financial audits are now a legal requirement)."
An anonymous activist/user wrote, "Loss of anonymity might be a way of ensuring some discipline in the system, yet the institutions which would be deciding such punishments today have no credibility with most of the population."
An anonymous ICT for development consultant and retired professor commented, "Government best plays a regulating role and laws are punitive; so both regulation and laws should be stringently applied."
A post-doctoral fellow at a center for governance and innovation replied, "Jail time and civil damages should be applied where injuries are proven. Strictly regulate non-traditional media, especially social media."
An associate professor at Brown University wrote, "Essentially we are talking about the regulation of information, which is nearly impossible since information can be produced
by anyone. Government can establish ethical guidelines, perhaps similar to the institutional review boards that regulate scientific research. Or it can be done outside government, like a better business bureau."
An anonymous respondent based in Europe wrote, "Publicity, monetary fines and definitely jail terms, depending on the scope and consequences of spreading false information. As for the government's role in prevention, it should not be different than in any other area, including sound legal regulation, strengthened capacities [to] identify false information and stop [it] at early stages using legal mechanisms, education and awareness raising of citizens, as well as higher ethical standards (or zero tolerance) for public officials walking on the edge."
A postdoctoral scholar based in North America wrote, "If we are talking about companies such as Facebook, I do think there is room for discussion on the federal level of their responsibility as, basically, a private utility. Regulation shouldn't be out of the question."
A legal researcher based in Asia/Southeast Asia said, "Stop them from using any internet. Government should create regulations for internet companies to prevent the distribution of false information."
A professor of humanities said, "Penalties are a nice idea, but who will decide which instances of 'fake news' require greater penalties than others? The bureaucracy to make these decisions would have to be huge."
Theme 4: The information environment will improve,
because people will adjust and make things better
Most respondents who expect an improvement in the information environment in the
coming years put their faith in maturing – and more discerning – information consumers
finding ways to cope personally and band together to effect change.
Alexios Mantzarlis, director of the International Fact-Checking Network based at the Poynter Institute for Media Studies, commented, "While the risk of misguided solutions is high, lots of clever people are trying to find ways to make the online information ecosystem healthier and more accurate. I am hopeful their aggregate effect will be positive."
Barry Chudakov, founder and principal at Sertain Research and StreamFuzion Corp., observed, "Globally, we have more people with more tools with more access to more information – and yes, more conflicting intent – than ever before; but, while messy and confusing, this will ultimately improve the information environment. We will continue to widen access to all types of information – access for citizen journalists, professionals, technical experts, others – so while the information environment becomes more diverse, the broader arc of human knowledge bends towards revelation and clarity; only mass suppression will stop the paid and unpaid information armies from discovering and revealing the truth."
A North American research scientist replied, "I'm an optimist, and believe we are going through a period of growing pains with the spread of knowledge. In the next decade, we'll create better ways to suss out truth."
Sharon Tettegah, professor at the University of Nevada, commented, "As we learn more about the types of information, we will be able to isolate misinformation and reliable sources."
Pamela Rutledge, director of the Media Psychology Research Center, noted, "Fake news and information manipulation are no longer 'other people's problems.' This new awareness of the importance of media will shift resources, education and behaviors across society."
Dariusz Jemielniak, professor of organization studies in the department of management in networked and digital societies (MiNDS) at Kozminski University, said, "There are a number of efforts aimed at eliminating fake news, and we as a society are going to make them work."
Misinformation has always been with us and people have found ways to lessen its
impact. The problems will become more manageable as people become more adept at
sorting through material
Many respondents described the online realm as simply yet another step in human and communications evolution and said that history's lessons here should be comforting. They argued that previous information revolutions have inspired people to invent new ways to handle problems with information overload, the proliferation of misinformation and opportunities for schemers to manipulate the emerging systems. The more hopeful among these experts believe that dynamic will play out again in the digital age.
A professor of media studies at a European university wrote, "The history of technology shows repeatedly that as a new technology is introduced – whatever the intentions of the designers and manufacturers – bad actors will find ways to exploit the technology in darker, more dangerous ways. In the short run, they can succeed, sometimes spectacularly; in the long run, however, we usually find ways to limit and control the damage."
A futurist/consultant replied, "We're seeing the same kinds of misinformation that used to be in supermarket tabloids move online – it's the format that has changed, not the human desire for salacious and dubious news."
Robin James, an associate professor of philosophy at a North American university, wrote, "The original question assumes that things have recently gotten worse. Scholars know that phenomena like patriarchy and white supremacy have created 'epistemologies of ignorance' that have been around for hundreds of years. 'Fake news' is just a new variation on this."
The dean of one of the top 10 journalism and communications schools in the United States replied, "Society always adjusts to new media and responds to weaknesses and flaws. Individuals will adjust, as will the technology."
Lokman Tsui, assistant professor at the School of Journalism and Communication at The Chinese University of Hong Kong, commented, "The information environment will improve. This is not a new question; we had concerns about fake news when radio broadcasting and mass media first appeared (for example, Orson Welles' reading of 'War of the Worlds'). People will develop literacy. Standards, norms and conventions to separate advertising from 'organic' content will develop. Bad actors who profit from fake news will be identified and targeted."
Adam Nelson, a developer at Amazon, replied, "We had yellow journalism a hundred years ago and we have it now. We're at a low point of trust, but people will begin to see the value of truth once they become more comfortable with what social platforms do and how they work."
Axel Bruns, professor at the Digital Media Research Centre at the Queensland University of Technology, commented, "Moral panics over new media platforms are nothing new. The web, television, radio, newspapers and even the alphabet were seen as making it easier to spread misinformation. The answer is media literacy amongst the public, which always takes some years to catch up with the possibilities of new media technologies."
An anonymous respondent who predicts improvement replied, "Powerful social trends have a life cycle, and the pendulum typically swings back over time."
An anonymous respondent said, "It is the nature of the technical development that politics and regulatory forces are only able to react ex post, but they will."
A senior researcher at a U.S.-based nonprofit research center replied, "The next generation of news and information users will be more attuned to the environment of online news and will hopefully be more discerning as to its veracity. While there are questions as to whether the digital native generation can accurately separate real news from fake, they at least will have the technical and experiential knowledge that the older generations mostly do not."
Many respondents expressed faith that technologists would be at the forefront of helping people meet the challenges of misinformation. A managing partner and fellow in economics predicted, "In order to avoid censorship, the internet will remain relatively open, but technology will develop to more effectively warn and screen for fact-inaccurate information. Think of it as an automated 'PolitiFact' that will point out b******* passively to the reader."
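The "automated PolitiFact" idea reduces, at its simplest, to a text classifier trained on claims that human fact-checkers have already labeled. The sketch below, using the scikit-learn library, is our own toy illustration – the four training examples are invented, and a real system would need large labeled corpora and far more careful evaluation:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    claims = [
        "Official statistics show unemployment fell last quarter",
        "Scientists confirm the moon landing was staged",
        "The city council approved the budget on a 7-2 vote",
        "Miracle fruit cures all known diseases, doctors stunned",
    ]
    labels = [1, 0, 1, 0]          # 1 = checked out, 0 = debunked

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(claims, labels)

    # "Passively" annotate a new claim with an estimated credibility
    prob = model.predict_proba(["Doctors stunned by miracle cure"])[0][1]
    print(f"estimated probability the claim checks out: {prob:.2f}")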
An author and journalist based in North America said, "Social media, technology and legacy media companies have an ethical and economic incentive to place a premium on trusted, verified news and information. This will lead to the creation of new digital tools to weed out hoaxes and untrusted sources."
Susan Price, lead experience strategist at Firecat Studio, observed, "There will always be a demand for trusted information, and human creativity will continue to be applied to create solutions to meet that demand."
Dane Smith, president of the public policy research and equity advocacy group Growth & Justice, noted, "I'm an optimist. Truth will find a way and prevail."
Louisa Heinrich, founder of Superhuman Ltd., commented, "The need to tell our stories to one another is a deeply rooted part of human nature, and we will continue to seek out better ways of doing so. This drive, combined with the ongoing upward trend of accessibility of technology, will lead more people to engage with the digital information environment, and new trust frameworks will emerge as old ones collapse."
Michael R. Nelson, public policy executive at Cloudflare, replied, "Some news sites will continue to differentiate themselves as sources of verified, unbiased information, and as these sites learn how to better distinguish themselves from 'fake news' sites, more and more advertisers will pay a premium to run their ads on such sites."
Steven Polunsky, writer with the Social Strategy Network, replied, "As with most disruptive events, people will adjust to accommodate needs and the changing environment."
Liz Ananat, an associate professor of public policy and economics at a major U.S. university, wrote, "It will likely get worse first, but over 10 years, civil society will respond with resources and innovation in an intensive effort. Historically, when civil society has banded together and given its all to fight destructive forces, it has been successful."
Jane Elizabeth, senior manager at the American Press Institute, said, "The information environment will improve because the alternative is too costly. Misinformation and disinformation will contribute to the crumbling of a democratic system of government."
A number of these respondents said they expect information platform providers to police the
environment in good faith, implementing the screening of content and/or other solutions
while still protecting rights such as free speech.
A principal network architect for a major edge cloud platform company replied, "Retooling of social networking platforms will likely, over time, reduce the value of stupid/wrong news."
A senior solutions architect for a global provider of software engineering and IT consulting services wrote, "The problem of fake news is largely a problem of untrusted sources. Online media platforms delegated the role of human judgment to algorithms and bots. I expect that these social media platforms will begin to exercise more discretion in what is posted when."
An anonymous respondent said, "Information platforms optimized for the internet are in their infancy. Like early e-commerce models, which merely sought to replicate existing, known systems, there will be massive shifts in understanding and therefore optimizing new delivery platforms in the future."
An anonymous respondent wrote, "Google and other outlets like Facebook are taking measures to become socially responsible content promoters. Combined with research trends in AI and other computing sectors, this may help improve the 'fake news' trends by providing better attribution channels."
Adam Gismondi, a researcher at the Institute for Democracy & Higher Education at Tufts University, predicted, "Ultimately, the information distributors – primarily social media platform companies, but others as well – will be forced, through their own economic self-interest and public pushback, to play a pivotal role in developing filters and signals that make the information environment easier for consumers to navigate."
Anonymous respondents shared these related remarks:
• "Everything we know about how human ingenuity and persistence has shaped the commercial and military (and philanthropic) drivers of the internet and the web suggests to me that we will continue to ensure this incredible resource remains useful and beneficial to our development."
• "The tide of false information has to be stemmed. The alternative will be dystopia."
• "People will gain in sophistication, especially after witnessing the problems caused by the spread of misinformation in this decade. Vetting will be more sophisticated, and readers/viewers will be more alert to the signs that a source is not reliable."
• "I have hope in human goodness."
• "Over the next 10 years, users will become much more savvy and less credulous on average."
• "People will develop better practices for dealing with information online."
Crowdsourcing will work to highlight verified facts and block those who propagate
lies and propaganda. Some also have hopes for distributed ledgers (blockchain)
Some respondents expressed optimism about the potential for people's capabilities in improving the visibility of the most-useful content, including the implementation of human-machine evaluation of content to identify sources, grade their credibility and usefulness, and possibly flag, tag or ban propagators of misinformation. An anonymous respondent wrote, "AI, blockchain and crowdsourcing appear to have promise."
An assistant professor at a university in the U.S. Midwest wrote, "Crowd-based systems show promise in this area. Consider some Reddit forums where people are called out for providing false information … if journalists were called out/tagged/flagged by large numbers of readers rather than their bosses alone, we would be inching the pebble forward."
But whose "facts" are being verified in this setting? Ned Rossiter, professor of communication at Western Sydney University, argued, "Regardless of advances in verification systems, information environments are no longer enmeshed in the era of broadcast media and national publics or 'imagined communities' on national scales. The increasing social, cultural and political fragmentation will be a key factor in the ongoing contestation of legitimacy. Informational verification merely amplifies already existing conditions."
Richard Rothenberg, professor and associate dean at the School of Public Health at Georgia State University, said, "It is my guess that the dark end of the internet is relatively small but it has an outsized presence. … If nothing else, folks have demonstrated enormous resourcefulness, particularly in crowd endeavors, and I believe methods for assuring veracity will be developed."
An anonymous research scientist based in North America wrote, "A system that enables commentary on public assertions by certified, non-anonymous reviewers – such that the reviewers themselves would be subject to Yelp-like review – might work, with the certification provided by Verisign-like organizations. Wikipedia is maybe a somewhat imperfect prototype for the kind of system I'm thinking of."
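One way to read this proposal is as reputation-weighted review: a reviewer's verdict counts in proportion to how peers have rated that reviewer's past work. A minimal Python sketch under our own assumptions (the names, scales and scores are all invented):

    from statistics import mean

    reviewer_ratings = {        # Yelp-like 0-5 scores from peer reviews
        "alice": [5, 5, 4],
        "bob":   [2, 1, 2],     # a sloppy or partisan reviewer
        "carol": [4, 5, 4],
    }
    verdicts = {"alice": 1.0, "bob": 0.0, "carol": 1.0}  # 1 = assertion holds

    def weighted_verdict(verdicts: dict, ratings: dict) -> float:
        weights = {r: mean(ratings[r]) for r in verdicts}
        return sum(verdicts[r] * weights[r] for r in verdicts) / sum(weights.values())

    print(round(weighted_verdict(verdicts, reviewer_ratings), 2))  # 0.84

Because bob's peers rate him poorly, his dissent barely moves the result – which is also the scheme's risk: a coordinated clique could rate each other up and capture the weights.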
A Ph.D. candidate in informatics commented, "It is possible to create systems that are reliable and trusted, but probably not unhackable. I imagine there could be systems that leverage the crowd to check facts in real time. Computational systems would be possible, but it would be very difficult to create algorithms we could trust."
Jack Park, CEO at TopicQuests Foundation, predicted, "There will be new forms of crowdsourcing – a radical kind of curation – participation in which will improve critical-thinking skills and will mitigate the effects of misinformation."
Some respondents also pointed out that the rise of additional platforms where people can publish useful information could be a positive force. An anonymous respondent wrote, "The rise of more public platforms for media content (online opinion/editorials and platforms such as Medium) gives me confidence that as information is shared, knowledge will increase so that trust and reliability will grow. Collaboration is key here."
Blockchain systems were mentioned by a number of respondents – a senior expert in technology policy based in Europe commented, "… use blockchain to verify news" – but with mixed support, as many hedged their responses. A journalist who writes about science and technology said, "We can certainly create blockchain-like systems that are pretty reliable. Nothing is ever perfect, though, and trusted systems are often hard to use."
The president of a center for media literacy commented, "The technology capability [of potential verification systems] is immature and the costs are high. Blockchain technology offers great promise and hope."
A journalist and experience strategist at one of the world's top five technology companies said, "The blockchain can be used to create an unhackable verification system. However, this does not stop the dissemination of 'fake news,' it simply creates a way to trace information."
A chief executive officer said, "Can P2P, blockchain, with attribution be unhackable? We need a general societal move to more transparency."
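Stripped of hype, the blockchain proposals above amount to an append-only hash chain: each record commits to its predecessor, so the history of a published item can be traced but not silently rewritten. This sketch is our own illustration – a real deployment would add distribution and consensus – and, as the journalist above notes, it traces information rather than judging its truth:

    import hashlib, json

    def add_block(chain: list, record: dict) -> None:
        prev = chain[-1]["hash"] if chain else "0" * 64
        body = {"record": record, "prev": prev}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        chain.append({**body, "hash": digest})

    def verify(chain: list) -> bool:
        for i, block in enumerate(chain):
            body = {"record": block["record"], "prev": block["prev"]}
            ok = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if ok != block["hash"] or (i and block["prev"] != chain[i - 1]["hash"]):
                return False
        return True

    chain = []
    add_block(chain, {"source": "wire service", "headline": "Council passes budget"})
    add_block(chain, {"source": "blog", "summarizes": "block 0"})
    print(verify(chain))                                       # True
    chain[0]["record"]["headline"] = "Council rejects budget"  # tamper
    print(verify(chain))                                       # False - detected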
Theme 5: Tech can't win the battle. The public must fund and support the production of objective, accurate information. It must also elevate information literacy to be a primary goal of education
A large share of respondents said that technology alone can't improve the
information environment. Among these respondents, most pointed out two areas of concern:
1) The need for better funding of and support for journalism that serves the common good.
The attention economy of the digital age does not support journalism of the general quality of
the news media of the late 20th century, which was fairly well-respected for serving the
public good with information that helped create an informed citizenry capable of informed
decisions; 2) The need for massive efforts to imbue the public with much better information
literacy skills; this requires an education effort that reaches out to those of all ages,
everywhere.
Funding and support must be directed to the restoration of a well-fortified, ethical and
trusted public press
Many respondents said the information environment can't be improved without more well-staffed, financially stable, independent news organizations capable of rising above the clamor of false and misleading content to deliver accurate, trusted content.
Susan Landau, a North American scientist/educator, wrote, "The underlying question – whether this dissemination will expand or not – lies with many players, many in the private sector. How will the press handle 'fake news'? How will the internet companies do so? And how will politicians, at least politicians post-Trump? The rise of 'fake news' is a serious threat to democracy. Post-election [U.S. 2016], some in the press have been pursuing news with the same care and incisiveness that we saw in the Watergate era, but others are not. We have a serious threat here, but it is not clear that interests are aligned in responding to it. And it is not cheap to do so: securing sites against hacking is very difficult when the threat comes from a powerful nation state. Is there a way to create trusted, unhackable verification systems? This depends on what the use case is; it is not a 0-1 answer, but an answer in scales of grey. … If society cannot adequately protect itself against the co-opting of public information by bad actors, then democracy itself is at serious risk. We have had this problem for quite some time. … What has changed is the scope and scale of these efforts, partially through domestic funding, partially through foreign actors and partially through the ability of digital technologies to change the spread of 'false news.' What is needed to protect society against the co-opting of public information is not only protecting the sources of the information, but
also creating greater public capability to discern nonsense from sense. … I do not see a role for government in preventing the spread of 'fake news' – that comes too close to government control of speech – but I do see one for government in preventing tampering with news and research organizations, disrupting flows of information, etc."
Timothy Herbst, senior vice president at ICF International, noted, "We have no choice but to come up with mechanisms to improve our information environment. The implications of not doing so will further shake trust and credibility in our institutions needed for a growing and stable democracy. Artificial intelligence (AI) should help but technological solutions won't be enough. We also need high-touch solutions and a reinforcement of norms that value accuracy to address this challenge."
Peter Jones, associate professor in strategic foresight and innovation at OCAD University in Toronto, predicted, "By 2027 decentralized internet services will displace mainstream news, as corporate media continues to erode trust and fails to find a working business model. Field-level investigative journalism will be crowdfunded by smaller consortiums, as current news organizations will have moved into entertainment, as CNN already has."
A senior international communications advisor commented, "I don't believe that the next 10 years will yield a business model that will replace the one left behind – particularly with respect to print journalism, which in the past offered audiences more in-depth coverage than was possible with video or radio. Today, print journalists effectively work for nothing [and] are exposed to liability and danger that would have been unheard of 25 years ago. Moreover, the separation between the interests of those corporations interested in disseminating news and editorial has all but closed – aside from a few noteworthy exceptions. Moreover, consumers of media appear to be having a harder time distinguishing spurious from credible sources – this could be the end result of decades of neglect regarding the public school system, a growing reliance on unsourced and uncross-checked social media or any number of other factors. Bottom line is that very few corporations seem willing [to] engage in a business enterprise that has become increasingly unfeasible from a financial point of view."
A futurist/consultant based in Europe said, "News has always been biased, but the apparent value of news on the internet has been magnified and so the value of exploiting it has also increased. Where there is such perceived value, the efforts to generate misleading news, false news and fake news will increase."
An anonymous respondent wrote, "There are too many pressures from the need to generate 'clicks' and increase advertising revenue."
There were complaints about news organizations in survival mode that neglect their role of
informing the public in favor of pandering to it to stay afloat. Other experts worried about the
quality of reporting in an age when newsrooms have been decimated.
An anonymous respondent wrote, “The talent pool the media system draws its personnel
from will further deteriorate. Media personnel are influenced by defective information, and –
even more – the quality of inferences and interpretations will decrease.â€
Some expressed concerns about finding unbiased information about the world in an online environment that grows ever more cluttered with content that does not provide it.
An anonymous survey participant wrote, "I worry that sources of information will proliferate to the point at which it will be difficult to discern relatively unbiased sources from sources that are trying to communicate a point of view independent of supporting facts."
Thomas Frey, executive director and senior futurist at the DaVinci Institute, replied, "The credibility of the journalism industry is at stake and the livelihood of many people is hanging in the balance of finding the tools, systems and techniques for validating the credibility of news."
Eileen Rudden, co-founder of LearnLaunch, wrote, "The lack of trust in established institutions is at the root of the issue. Trust will need to be re-established."
An international internet policy expert said, "Demand for trusted actors will rise."
This is not an easy fix, by any means. Kelly Garrett, associate professor in the School of Communication at Ohio State University, said, "Although technology has altered how people communicate, it is not the primary source of distrust in authority, expertise, the media, etc. There are no simple technical solutions to the erosion of trust in those who produce and disseminate knowledge."
Rob Lerman, a retired information science professional, commented, "The combination of an established media which has encouraged opinion-based 'news,' the relative cheapness of websites, the proliferation of state-based misinformation and the seeming laziness of news consumers seems like an insurmountable obstacle to the improvement of the information environment."
Elevate information literacy: It must become a primary goal at all levels of education
A number of participants in this canvassing urged an all-out effort to expand people's knowledge about the ways in which misinformation is prepared and spread – an education in ways they can be wise and well-informed citizens in the digital age.
Jeff MacKie-Mason, university librarian and professor of information science and economics at the University of California, Berkeley, commented, "One wonder of the internet is that it created a platform on which essentially anyone can publish anything, at essentially zero cost. That will become only more true. As a result, there will be a lot of information pollution. What we must do is better educate information consumers and provide better systems for reputation to help us distinguish the wheat from the chaff."
Sharon Roberts, a Ph.D. candidate, wrote, "Social changes will be the ones that will affect our perception of the information environment. Just like there are still 1-888 psychic call lines on television or 'Nigerian princes' promising money by email, it is a social understanding that those things are scams that has curtailed their [proliferation], not any actual TV or email technology 'trusted methods.'"
Sharon Haleva-Amir, lecturer in the School of Communication at Bar Ilan University in Israel, said, "I fear that the phenomenon of fake news will not improve due to two main reasons: 1) There are too many interested actors in this field (both business- and politics-wise) who gain from dispersion of false news and therefore are interested in keeping things the way they are; 2) Echo chambers and filter bubbles will continue to exist, as these attitudes are typical of people's behavior offline and online. In order to change that, people will have to be educated since early childhood about the importance of both [the] credibility of sources as well as variability of opinions that create the market of ideas."
Sandra Garcia-Rivadulla, a librarian based in Latin America, replied, "It will be more important to educate people to be able to curate the information they get more effectively."
Jacqueline Morris, a respondent who did not share additional personal details, replied, "I doubt there will be systems that will halt the proliferation of fake news. … The only way is to reduce the value of fake news by ensuring that people do not fall for it, basically, by educating the population."
Mike O'Connor, a self-employed entrepreneur, wrote, "The internet is just like real life; bad actors will find ways to fool people. Healthy skepticism will be part of the mix."
Tomslin Samme-Nlar, technical lead at Dimension Data in Australia, commented, "I expect the information environment to improve if user-awareness programs and campaigns are incorporated in whatever solutions are designed to combat fake news."
Geoff Scott, CEO of Hackerati, commented, "This isn't a technical or information problem; it's a social problem. Fake news works because it supports the point of view of the people it targets, which makes them feel good, right or vindicated in their beliefs. It takes critical thinking to overcome this, which requires effort and education."
Andreas Vlachos, lecturer in artificial intelligence at the University of Sheffield, commented, "I believe we will educate the public to identify misinformation better."
Iain MacLaren, director of the Centre for Excellence in Learning & Teaching at the National University of Ireland, Galway, commented, "The fact that more people are now fully aware of the existence of fake news, or propaganda, as it used to be known, means that there is increasing distrust of unverified/unrecognised providers of news and information. … I would like to hope, therefore, that a more sophisticated, critical awareness is growing across society, and I certainly hear much to that effect amongst the young people/students I work with. This also shows the importance of education."
Greg Wood, director of communications planning and operations for the Internet Society, replied, "The information environment will remain problematic – rumors, false information and outright lies will continue to propagate. However, I have hope that endpoints (people) can become more sophisticated consumers and thus apply improved filters. The evolution of email spam and how it has been dealt with provides a rough analogy."
Some people said, though, that information-literacy efforts, while possibly helpful in some cases, will not have an effect in many situations.
Sam Punnett, research officer at TableRock Media, replied, "The information environment will improve, but what will determine this will be a matter of individual choice. Media literacy, information literacy, is a matter of choosing to be educated."
David Manz, a cybersecurity scientist, replied, "Technology exists and will be created to attribute statements to their source in an easy-to-understand manner. However, this will still require the public to want to know the quality and source of their information."
Carol Chetkovich, professor emerita of public policy at Mills College, commented, "My negative assessment of the information environment has to do primarily with my sense that consumers of media (the public at large) are not sufficiently motivated and well-enough educated to think critically about what they read. There will always be some garbage put out by certain sources, so – even though it's important that garbage be countered by good journalism – without an educated public, the task of countering fake news will be impossible."
Peter and Trudy Johnson-Lenz, founders of the online learning community Awakening Technology, combined on this response: "If we rely on technological solutions to verify trust and reliability of facts, then the number of states of the control mechanisms must be greater than or equal to the number of states being controlled. With bots and trolls and all sorts of disinformation, that's virtually impossible. There are probably some tech solutions, but that won't solve the entire problem. And walling off some sections of the information ecosystem as 'trusted' or 'verified fact-filled' defeats the purpose of open communication. … If you study microtargeting during the 2016 election, it's clear that Facebook in particular was used to spread disinformation and propaganda and discourage voting in a very effective manner. This kind of activity is hard to discern and uncover in real time, it adds greatly to the polluted ecosystem and it is virtually impossible to control. Ultimately, people are going to have to make critical-thinking discernments themselves. Unfortunately, there are people who have no interest in doing that, and in fact discourage anyone else from doing that. The echo chamber is noisy and chaotic and full of lies. The only hope is some combination of technological advances to trust and verify, people being willing to take the time to listen, learn and think critically, and a rebuilding of trust. In our accelerating world, that's a very big ask! For an eye-opening perspective on acceleration, see Peter Russell's recent essay, 'Blind Spot: The Unforeseen End of Accelerating Change.'"
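The Johnson-Lenzes' opening premise restates what cyberneticists call Ashby's law of requisite variety: a regulator must be able to match the variety of the system it regulates, or, in symbols, V(regulator) ≥ V(system). Their argument is that an open platform's space of possible manipulations grows faster than any fixed technical control can match, which is why they conclude the remainder must come from human judgment and rebuilt trust.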
Bruce Edmonds, a respondent who shared no additional identifying details, noted, "Lack of trust and misinformation are social problems that will not be solved with technical or central fixes. Rather, political and new normative standards will need to be developed in society."
Anonymous respondents wrote:
• "Bad information has always been produced and promulgated. The challenge remains for individuals to stay skeptical, consider numerous sources and consider their biases."
• "The way to solve the issue is not so much in designing systems for detecting and eliminating fake news but rather in educating people to manage information appropriately. Media and information literacy is the answer."
• "Continued misinformation will help people to learn first-hand how bad information functions in any system."
Acknowledgments
This report is a collaborative effort based on the input and analysis of the following
individuals.
Primary researchers
Janna Anderson, Director, Elon University's Imagining the Internet Center
Lee Rainie, Director, Internet and Technology Research
Research team
Aaron Smith, Associate Director, Research
Claudia Deane, Vice President, Research
Editorial and graphic design
Margaret Porteus, Information Graphics Designer
Shannon Greenwood, Copy editor
Communications and web publishing
Shannon Greenwood, Associate Digital Producer
Tom Caiazza, Communications Manager