Top News

EDITORIAL: Social media giants content to pass baton on policing hate

Facebook and other social media companies say they would welcome federal guidelines about illegal content. — Reuters file photo

Perhaps the easiest explanation is that businesses prefer a firm set of ground rules to uncertainty — even if those ground rules are going to get tougher, thanks to other people’s problems.

Canada’s federal government was already looking at regulating social media before the U.S. presidential election and the Jan. 6 violence at the U.S. Capitol. After the riot, the necessity of that regulation came even more to the fore.

And, perhaps surprisingly, social media companies are looking forward to having guidelines that they can actually apply. That may be because regulation would take the onus off the companies themselves — they would simply be applying this country’s rules, rather than haphazardly trying to control the Hydra-like spread of internet hate on their own.

And it may also be because those same social media companies — and their employees — are uncomfortable about the continued presence of violent and hateful language on their platforms, yet feel to a large extent unable to handle, monitor or moderate the far reaches of their own empires.

Google, Facebook and Twitter have all indicated that they would be in favour of clear rules from the federal government about what the government considers to be illegal content.

The government of Prime Minister Justin Trudeau made regulating social media an election promise in 2019, saying in Heritage Minister Steven Guilbeault’s mandate letter that the minister was charged with taking “action on combatting hate groups and online hate and harassment, ideologically motivated violent extremism and terrorist organizations.”

Australia, France and Germany have all taken steps to address social media hate, with a German law giving social media companies a 24-hour deadline to remove material or face fines.

Though the structure of Canada’s legislation hasn’t been revealed yet, Guilbeault told the Globe and Mail: “While preserving a fundamental right to freedom of expression, our approach will require online platforms to monitor and eliminate illegal content that appears on their platforms. That includes hate speech, terrorist propaganda, violent content, child sexual exploitation and the non-consensual sharing of intimate images.”

And that shouldn’t come as a surprise to anyone.

Companies spend a fair amount of time analyzing what constitutes a risk to their core business. Anyone who has worked in private business at the managerial level knows the two prongs managers are expected to watch: not only opportunities, but threats as well.

Those threats can strike at the core of a company. At least one right-leaning social media firm, the much more permissive Parler, saw scores of its suppliers, including its web-hosting company, flee after the Jan. 6 violence drew attention to hate-fuelled Parler postings.

Even in the world of social media, chickens can come home to roost.

Someone else’s rules can take some of that pressure away.
