Deplatforming certain personalities raises questions about what can and can't be said online.
In early 2019, Facebook announced that it had removed a number of people and organizations from its platform for violating its community standards policy against dangerous individuals and organizations.
In the aftermath of the shooting in Christchurch, New Zealand, the social network announced it was banning speech related to white nationalism and white separatism.
Soon after, Facebook announced the removal of specific individuals, including Faith Goldy, a Canadian far-right political commentator who ran for mayor of Toronto, and a number of others who promote views considered to be hateful, including InfoWars host Alex Jones and conspiracy theorist Laura Loomer.
The ban includes Instagram, which is owned by Facebook.
A Facebook representative explained the decision in a statement:
“Individuals and organizations who spread hate, attack, or call for the exclusion of others on the basis of who they are have no place on Facebook. That’s why we have a policy on dangerous individuals and organizations, which states that we do not allow those who are engaged in offline ‘organized hate’ to have a presence on Facebook. The individuals and organizations we have banned today violate this policy, and they will no longer be allowed a presence on our services. Our work against organized hate is ongoing and we will continue to review individuals, pages, groups and content against our community standards.”
Social media sites have long been criticized for allowing hateful speech and other kinds of harmful content to spread. Historically, these sites have taken the position that they are platforms for others’ views, not publishers themselves, so they are not responsible for the content that appears on the platform.
This position has become more challenging to defend, as awareness has grown of the role of sites like Instagram, Twitter, and Facebook in facilitating the spread of false and misleading information. The platforms have been compelled to take more action.
Advocates for human rights have pressured sites like Facebook to adopt stronger measures to remove harmful content. But some citizens have concerns about placing limits on freedom of expression.
The Charter of Rights and Freedoms protects Canadians' freedom to express their opinions in public, a freedom fundamental to democracy. But there are limits to what you can say. For example, if you publish something false about someone that damages their reputation, you may be sued for libel. And human rights laws protect people from discrimination on the basis of characteristics such as ethnicity or sexual orientation.
Much online speech that may seem hateful or harmful does not meet the legal definition of "hate speech." In addition, court challenges are generally too difficult and expensive for most people or groups to pursue.
While we often think of social media platforms as public spaces, the platforms are owned by private companies that are able to limit what speech is allowed. In practice, it is the platforms' own terms and conditions that determine what content is allowed online.
On social media, anyone can start an account and post whatever they would like, as long as it does not violate the terms of service agreement each user agrees to when creating their account. It is up to each platform to determine what content is allowed to stay up, and what gets taken down.
As a result, social media companies are often caught in the middle of debates around freedom of speech and whether online speech should be limited.
Because social media algorithms tend to promote sensational content, controversial voices are often amplified, giving them a potentially large audience.
The ongoing attempts of Facebook and other social media companies like YouTube and Twitter to keep their platforms free of harmful speech have become a central concern for digital citizenship.
Watch this news segment about Facebook’s decision to ban some personalities for violating their terms of service.
[Teacher note: you may wish to research and present selected information about one or more of the banned personalities, whose ideas may be disturbing to some.]
Deplatform — The process of denying people access to a venue for expressing their opinions. On social media platforms, users are most often denied access to their accounts or channels for violating the company’s terms and conditions.
Freedom of expression — The right of individuals and groups to think, believe, express, and receive any information without fear of government censorship or interference. In Canada, it is protected as a “fundamental freedom” (with some limits) as defined by the Charter of Rights and Freedoms.
Harmful speech — Speech or writing that attacks a specific group on the basis of race, ethnicity, religion, sexual orientation, or gender.
Hate speech — In Canada, “hate speech” is a legal term that refers to any form of expression that incites hatred against an identifiable group.
Terms and conditions — The rules that a user must agree to in order to use a service, such as a social media platform. These rules are written by the company that owns the service.