That we humans seek pleasure and, when possible, avoid pain seems a truth hardly worth commenting on. (Ironically, pleasure was often historically equated with “sin,” probably because so many seemingly pleasing things come with a destructive flip side. As the wry expression goes, “If you like it, it’s carcinogenic.”) Many of our online behaviours exemplify the same two-sided nature.
Billions of people worldwide spend many hours a week online seeking out loving contact, beauty, amusement, news, stimulating entertainment, “likes.” But there are many accompanying harms: scams, phishing, identity theft, ransomware, fraud, various addictions, and more. Might internet software platforms be held accountable for at least some of the damages traceable to their use? Although these platforms are predominantly American, their influence reaches everywhere (including this computer), and certainly permeates Canadian online activity.
The relationship between technologies and their misuse long predates the arrival of the World Wide Web. Consider the ongoing dispute over the liability of gun manufacturers for their contribution to mass shootings, crimes that would be impossible without their often carelessly marketed products. In the U.S., immunity statutes currently shield gun manufacturers and dealers from civil liability for harms caused by gun owners. President Biden wants to revoke this broad immunity, a politically controversial proposal given America’s infatuation with guns.
Some software platforms, themselves multi-billion-dollar businesses, have also been implicated in deplorable social outcomes, but have so far squirmed away from any well-defined accountability.
Various academics and legal officials have critically examined the issue, but have so far failed to significantly dent these companies’ practices.
Harvard professor Shoshana Zuboff has written a dense condemnation of the business models of Facebook and Google (among others) for their unabashed appropriation of private data (she calls it theft) and its conversion into the raw material of commercially and politically targeted advertising. (See The Age of Surveillance Capitalism.)
America’s Congress has forcefully invited such figures as Mark Zuckerberg (Facebook’s founder and controlling shareholder) to public hearings to answer questions about the role of their platforms in promoting hatred, among other evils. Little or nothing came of these meetings beyond political theatre.
There is plenty of evil in some of the communication freely distributed around the world through these platforms: encouragement of genocide against the Rohingya in Myanmar, promotion of fake medical products, and moronic Trump tweets, to name a few examples.
As with the gun-industry immunities, laws protect internet platforms from liability for “speech” posted by individuals or organizations (including solicitation of funds, the spreading of malicious rumours or misinformation, and so on). When a vicious fraudster can incite armed violence without significant consequence (other than getting rich selling bogus coronavirus cures), something is wrong with the regulation of these organizations.
An NGO called the Aspen Institute has investigated these issues in depth. Among its findings: “…micro-targeting can be used through various filters to enable harmful targeting or exclusion, to violate civil rights law and discriminate against, or harmfully target, groups of users… It therefore may be time for Congress to reconsider the scope of Section 230” (the legislation providing liability protection for internet companies). Might Canada pursue such legislative change, too?
Technical challenges notwithstanding, civil society is increasingly under siege from various torts masquerading as free speech, and the vehicles through which these are promulgated must be properly regulated. I would actually “like” such a proposal.