June 6, 2018 – It is the 74th anniversary of D-Day, the Allied invasion of Normandy during the Second World War, a war fought by an alliance of nations to defeat fascism and to preserve both democracy and the central authoritarian state of the Soviet Union. The battlegrounds of that period are far different from those of today, where the discussion centres on the threat social media and the Internet pose to the future of democracy.
In April, a conference was convened at Stanford University, sponsored by the Brooklyn-based Social Science Research Council, a not-for-profit organization that partners with academics, NGOs, and governments in more than 80 countries. Its work looks in part at the role of digital knowledge, data, and social media from the perspective of governance, democracy, and civil society. The Stanford conference focused on media and democracy, inviting researchers and academics to share their perspectives.
Among the subjects discussed:
- online hate speech
- online and offline civic discourse
- fake news
- false beliefs
- ideological echo chambers
- use of social media by hostile foreign actors
The rise of incivility and hate speech was explored using the 2016 American election as the model. Participants asked whether hate speech actually increased during the election period or whether this was a false perception. The role of anonymity in social media postings was seen as an insufficient explanation for the rates of inflammatory incivility and hate speech; the lack of accountability by social media platforms was seen as a contributing factor.
In tackling fake news, participants reported that exposure during the 2016 election differed by political affiliation: registered Republicans and Independents saw more fake news than Democrats. The average rate of exposure, however, was deemed low overall, and there was no measure of its effectiveness. Nor was it clear to conference participants whether fake news encountered offline had greater impact than fake news encountered online. Participants concluded that exposure to false information spread through social media platforms needs greater study, with researchers developing a better understanding of how individuals assemble their media diet from entertainment, newspapers, streaming services, websites, and social media platforms. Participants also noted the lack of sufficient research on the role identity and community play in political behaviour and beliefs.
One of the questions discussed was the impact of correcting the disinformation spread by fake news. Would a person who had been exposed to, and accepted, inaccurate information be willing to read a correction? And, in the face of multiple verified sources, would a person invested in community beliefs be willing to change a viewpoint when presented with the real facts?
Another subject the conference dealt with was the motivation of information consumers. The questions asked included: do people “consume news in order to find the truth?” and “how strong is the demand for accuracy and truth?” And does the pursuit of real facts depend on the topic, for example, climate change?
Homophily is a term for self-segregation: limiting one’s associations to people who look, think, and act like oneself. Many at the conference pointed out that social media exerts a similar selectivity online, and homophily was evident in 2016 voting patterns. This self-selection may be more significant than fake news in influencing voting behaviour.
And although the 2016 American election was the focus of the conference’s review of the influence of social media on voting, it was noted that what happened in the U.S. was also happening elsewhere, for example, in the Brexit vote.
The influence of foreign actors through social media was also a subject of discussion, particularly Russian sources with access to social media platforms. That Russian social media actors could influence voting patterns, although largely unprovable, is not much of a stretch, since the country’s intelligence operations actively pursue such behaviour within Russia itself. China and Russia are both known to exert influence over the behaviour and opinions of their own citizens, and in 2016 the latter likely extended this practice abroad. Russian disinformation on social media was cited as a tactic to destabilize the natural election cycle through clandestine support of protesters, incitement of protests, the creation of alternative news sources, and the creation of fake experts presenting “alternative facts.”
Although the report asks more questions than it answers, it is clear that something will have to change in how online social media operates. What was once seen as a way of breaking down barriers between countries and collectively engaging in free discourse is now considered an existential threat to democratic institutions.
In the conference report, participants discussed restrictive options such as controls on free speech similar to Germany’s recently passed Network Enforcement Act, which came into effect on January 1, 2018. The law imposes fines of up to 50 million euros on online platforms that fail to remove hate speech postings within 24 hours, with a seven-day period for the removal of other “illegal” content. It applies to Facebook, Twitter, YouTube, Instagram, Snapchat, and other prominent social media platforms. In enacting the law, Heiko Maas, Germany’s Justice Minister, stated, “Incitement to murder, threats, insults, and incitement of the masses or Auschwitz lies are not an expression of freedom of opinion but rather attacks on the freedom of opinion of others.” The German law certainly limits free speech as it is practiced in the United States, but not as it is practiced in Canada, where hate speech is defined as illegal. Other democratic countries in Europe, as well as the European Union, are currently drafting similar statutes to limit freedom of speech when it transgresses civil discourse.
As the conference was predominantly American, attendees raised a number of outstanding questions the country needs to address, among them:
- What should be the boundaries of legal speech on a public online forum, or in print?
- Should those same boundaries apply within a private forum or corporation?
- What security measures should be put in place to protect potential targets from foreign online influencers?
- How can these protections be extended to widely used online information resources, Wikipedia being one example cited?
- What are the jurisdictional boundaries for social media campaigns, particularly when the source is outside the country where the campaign is run?
- Will such changes require alterations to the First Amendment, which, when drafted by the Founding Fathers, could never have anticipated the Internet, the World Wide Web, and social media?