On February 26, 2024, Bill C-63, known as the Online Harms Act, received first reading in the Canadian House of Commons. According to the bill, its purpose is to amend existing laws, such as the Criminal Code and the Canadian Human Rights Act, to hold online platforms accountable for harmful content and to ensure a safer online environment that prioritizes the well-being of everyone in Canada, particularly children.
The Bill defines seven categories of harmful content. I will elaborate on these categories below, but they can be grouped under three headings. The first is any content that leads to the sexual abuse and sexual victimization of children. The second is any content that incites hatred. The third is content that incites violence, violent extremism, or terrorism.
You may think it a strange legislative approach to lump child sexual abuse material, hate crimes, and incitement to terrorism together in the same Bill. Unfortunately, this odd combination of content is the least of the proposal's oddities.
A central purpose of the Bill is to redefine "hatred" in the Criminal Code and to extend its scope to offences under the entire Criminal Code or any other Act of Parliament. Yes, you read that right: should the bill be enacted, any offence could be classified as a hate crime. And if that alarms you, it gets worse. Such an offence could be punishable by life imprisonment. The Bill would also raise the maximum penalties for hate propaganda offences by up to five years.
If that isn't enough to concern you, consider the powers of the omnipotent Digital Safety Commission. This new authority, created to oversee and enforce the regulatory framework of the Online Harms Act, will have the power to search electronic data without a warrant, compromising citizens' digital privacy. The Bill raises many other vital issues that flatly contradict how laws should be made in a free society.
With the enactment of this bill, unelected bureaucrats will gain the power to unilaterally determine what qualifies as freedom of expression and what constitutes a hate crime. This shift in authority allows politicians, pressure groups, and officials to impose their own cultural agendas on society with broad and unchecked discretion. It seems clear to me that woke culture is exploiting child safety as a pretext to suppress any form of speech that challenges their progressive ideology. Let's delve into the Bill to uncover the imminent threats to our freedoms.
A requirement to provide tools and block access to "harmful" content within 24 hours creates incentives for platforms to excessively and preemptively remove content to avoid sanctions, directly curtailing freedom of expression (Photo by Sean Kilpatrick)
The purported aim of the Online Harms Act is to prevent and reduce harmful content online, in particular child abuse material, non-consensual intimate content, and content promoting hate speech and violence. The legislation mandates the swift removal of such content and establishes robust oversight mechanisms to ensure compliance and enforcement.
The Online Harms Act imposes increased responsibility and transparency on social media operators nationwide. The bill mandates that platforms take measures to protect children, restrict access to harmful content, and maintain essential records. Operators must implement measures to reduce risks associated with seven types of harmful content:
Sexual victimization or revictimization of a child.
Non-consensual intimate content.
Hate speech.
Content inciting violent extremism or terrorism.
Content inciting violence.
Bullying a child.
Inducing a child to self-harm.
The Bill outlines three essential responsibilities for social media platforms and distribution services: a duty to act responsibly, a duty to safeguard children, and a duty to restrict access to certain content.
Duty to Act Responsibly: Social media platforms must assess the risks of harmful content exposure and implement effective measures to mitigate these risks. They must provide users with tools for reporting harmful content and blocking users.
Duty to Protect Children: Platforms must ensure that design features include age-appropriate safety measures, such as parental controls, warning labels, and safe search options. They must follow guidelines set by the Digital Safety Commission to limit children's exposure to explicit content, bullying, or content encouraging self-harm.
Duty to Make Certain Content Inaccessible: Platforms must block access to the most harmful content within 24 hours, as described in the Bill. They must also evaluate the legitimacy of flagged content to prevent frivolous or bad-faith reports.
Platforms must establish services to handle user complaints, provide advice on online harm, and designate a contact person for these purposes.
Bill C-63 mandates that repeated and artificially amplified content, especially when disseminated on a large scale by bots, must be appropriately labelled to ensure transparency and accountability.
Platforms are required to submit digital safety plans to the Digital Safety Commission, detailing measures taken, their effectiveness, risk assessments, and trends in online safety. This information must be publicly accessible.
Significant concerns exist regarding the potential overreach of government authority and the infringement on freedom of speech and digital privacy that could result from these challenging and stringent duties.
The mandate for social media platforms to assess and mitigate harmful content exposes them to broad and subjective government oversight. The requirement to provide tools for reporting and blocking content could lead to an environment of over-censorship, where platforms might excessively remove content to avoid penalties, thereby stifling free speech. Such measures could easily lead to a slippery slope of increasing censorship and control over digital communication.
The obligation to block access to harmful content within 24 hours is impractical and a potential violation of due process. Platforms might preemptively censor legitimate content to comply with tight deadlines, suppressing controversial yet lawful speech. Such stringent requirements could be abused, leading to the removal of content based on vague or politically motivated definitions of harm. This could result in a chilling effect on free expression, particularly for dissenting opinions against the prevailing government narrative.
These additional requirements should be viewed with a high level of skepticism. They will likely result in platforms over-policing content to avoid liability, thereby undermining the internet's open and free nature. The requirement to submit publicly accessible digital safety plans to the Digital Safety Commission adds another layer of bureaucratic oversight that could limit the dynamic and decentralized nature of online discourse.
Bill C-63 includes provisions that allow for the accreditation of certain individuals with access to social media data to conduct compliance checks. Allowing accredited individuals access to social media data constitutes a significant invasion of privacy. This provision grants the government and its appointees unprecedented power to monitor and scrutinize private communications without adequate oversight. The Canadian Civil Liberties Association (CCLA) has highlighted the broad search powers provided by the Bill, which do not require a warrant and pose serious threats to individual privacy rights.
While protecting children online is a widely supported goal, we should be cautious against the extensive regulations imposed by the Bill. These measures might infringe on parental rights and autonomy. Parents should primarily oversee their children's online activities, not the government. The Bill risks encroaching on family dynamics and individual responsibility by imposing top-down governmental control. The overreach of government into family life sets a precedent for further intrusion into personal liberties.
The Bill's requirements could impose significant compliance costs on smaller social media platforms and distribution services. The mandate to design and implement age-appropriate safety measures, such as parental controls, warning labels, and safe search options, can be financially and technically burdensome. These smaller entities often lack the resources of larger tech companies to meet such demands, potentially driving them out of the market. This regulatory burden could stifle innovation and competition. New and smaller platforms may struggle to comply with the extensive requirements, ultimately leading to a less diverse and less dynamic online ecosystem.
It's nauseating to think that severe penalties like life imprisonment could exist for speech-related crimes. As George Orwell wrote in 1984: "If you want a picture of the future, imagine a boot stamping on a human face—forever." (Photo by Dave Chan/AFP/Getty Images)
One of the primary concerns is the vast authority granted to the Digital Safety Commission. This body will have the power to interpret laws, create new regulations, and enforce them without adequate oversight. Concentrating such power in a single, government-appointed entity undermines the principle of the rule of law. The Commission's ability to act as judge, jury, and executioner risks arbitrary decision-making and potential overreach, suppressing free expression and infringing digital privacy rights.
The Bill's provisions allowing the Commission to search electronic data without a warrant pose a significant threat to privacy. This level of intrusion into individuals' digital lives is unprecedented and raises fears of widespread monitoring and censorship. Enforcing content removal carries the risk of being misused, potentially limiting legitimate speech, such as journalism and political activism. This could suppress public discourse, causing individuals and platforms to self-censor to avoid repercussions.
Establishing a Digital Safety Ombudsperson's office to address systemic online safety issues also raises several concerns. The Ombudsperson, appointed for a five-year term, is meant to support the public interest, but without sufficient accountability mechanisms, this role could become another tool for government overreach. It could further complicate the regulatory system, allowing for decisions that lack transparency and leave affected individuals without recourse.
Bill C-63's broad definitions of harmful content and the extensive enforcement powers granted to the Commission and Ombudsperson directly threaten free expression. The Bill criminalizes a wide range of speech, imposing severe penalties for vaguely defined offences. This could discourage open debate and robust discussion on critical issues, as individuals may fear legal repercussions for their statements. The risk of over-censorship is particularly alarming, as it could stifle necessary conversations and the exchange of ideas in a free society.
The potential for abuse of power under Bill C-63 is significant. The ability to impose hefty fines and other penalties could lead to self-censorship among individuals and platforms. The lack of clear guidelines and the broad discretion given to these regulatory bodies exacerbate fears of arbitrary and biased enforcement. This environment could suppress dissenting voices and limit the diversity of online opinions, ultimately harming the public discourse. As Dr. Michael Geist, Canada Research Chair in Internet and E-Commerce Law at the University of Ottawa, concludes:
“The poorly conceived Digital Safety Commission lacks even basic rules of evidence, can conduct secret hearings, and has been granted an astonishing array of powers with limited oversight. This isn't a fabrication. For example, Section 87 of the Bill literally says 'the Commission is not bound by any legal or technical rules of evidence.'”
The Bill even allows for preventive legal action; that is, it shifts the system's logic from punishing actions to punishing thoughts and intentions, completely undermining the fundamental principles of justice and individual freedom.
Bill C-63's amendments to the Criminal Code deeply concern those who value free speech and limited government. These amendments propose that any offence under the Criminal Code or any other Act of Parliament can be classified as a hate crime if the act is motivated by hatred based on race, national or ethnic origin, language, colour, religion, sex, age, mental or physical disability, sexual orientation, or gender identity or expression. These amendments allow hate crimes to carry a maximum penalty of life imprisonment and to be prosecuted internationally. Additionally, the Bill increases penalties for hate propaganda offences by up to five years.
This broad and vague definition risks criminalizing a wide range of speech and behaviour, potentially turning political dissent and controversial opinions into prosecutable offences. I cannot highlight this issue enough. The use of harsh penalties, such as life imprisonment, for speech-related offences is excessive and unfair. This broad and subjective approach could result in a wide range of actions being labelled as hate crimes, leading to potentially severe consequences. The vague nature of what constitutes "hatred" allows for significant discretion, which could be misused to target dissenting voices and politically unpopular opinions. This approach risks criminalizing not just hateful actions but also a wide spectrum of speech and behaviours that may be controversial but are nonetheless protected under the principle of free speech.
The potential for any offence to be deemed a hate crime introduces a deterrent effect on free expression. To prevent their actions from being perceived as hate-driven, people may choose to self-censor, limiting open discussions and the free flow of ideas. This is particularly troubling in a free society where open discussion and dissent are essential for progress and accountability. This undermines fundamental freedoms and sets a dangerous precedent for future legislation.
The proposed Bill C-63 poses a significant threat to free speech by amending the Canadian Human Rights Act to classify the communication of hate speech as a discriminatory practice. This amendment places the responsibility for handling complaints under the purview of the newly established Digital Safety Commission of Canada.
Bill C-63 also enhances law enforcement's ability to combat child exploitation online. This includes creating a regulatory body for enforcement, clarifying the responsibilities of internet service providers, simplifying the notification process, extending data retention periods, and expanding regulatory powers.
The Bill claims to speed up hearings and allow the Canadian Human Rights Commission to dismiss unjust complaints. However, it does not promise to hire a larger team to handle them. Instead, it allows complainants to keep their identities secret while filing complaints against others over their speech. This is a departure from the open courts principle, and unfounded complaints can be expected to continue.
The fundamental principle of justice is that individuals are innocent until proven guilty. Punishing someone based on the fear of what they might do in the future violates this principle and opens the door to the emergence of totalitarianism (Photo by Geoff Robins)
Bill C-63 establishes a system of administrative monetary penalties for social media operators and service platforms that fail to comply with their duties under the legislation. The maximum penalty for a violation is set at 6% of the violator's gross global revenue or $10 million USD, whichever is greater. Factors considered when imposing penalties include the nature and scope of the violation, compliance history, benefits obtained from the violation, the violator's ability to pay, and the penalty's impact on their business.
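The "whichever is greater" rule above can be sketched in a few lines. This is purely an illustration of the arithmetic as summarized in this article (the function name and example revenue figures are hypothetical, not from the Bill):

```python
def max_penalty(gross_global_revenue: float) -> float:
    """Illustrative cap on Bill C-63's administrative monetary penalty:
    the greater of 6% of gross global revenue or $10 million
    (figures as summarized in this article)."""
    return max(0.06 * gross_global_revenue, 10_000_000)

# A platform with $500M in global revenue: 6% is $30M, which exceeds the $10M floor
print(max_penalty(500_000_000))  # 30000000.0

# A platform with $50M in global revenue: 6% is only $3M, so the $10M floor applies
print(max_penalty(50_000_000))   # 10000000.0
```

Note that for any platform earning more than about $167 million globally, the 6% prong dominates, so the largest operators face exposure far beyond the $10 million figure.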
The amendments to the Criminal Code introduced by Bill C-63 are deeply troubling. They propose penalties as severe as life imprisonment for crimes deemed to be motivated by hatred. This broad categorization of offences, if classified as hate-motivated, could result in the harshest penalties under Canadian law. The notion that individuals could face life imprisonment for speech-related offences (such as the promotion of genocide) raises serious concerns for those who prioritize free expression and proportional justice.
Those proposing this Bill claim that the accused will be able to defend themselves on the basis of the redefined concept of hate speech. Under the amendment in question, an accusation of a hate crime now turns on feelings of "detestation or vilification" rather than mere "disdain or dislike." Individuals can thus face severe penalties based on the fine line that distinguishes 'disdain' from 'detestation'. This exposes the disturbing lack of fairness and clarity in these legal distinctions, presenting a grave threat to free speech. This is an absurdity that must be resisted.
This Bill also financially incentivizes the accusers. The Online Harms Act can lead to defendants being slapped with hefty fines of up to $50,000 CAD ($36,571 USD) while also compelling them to compensate complainants with an additional $20,000 CAD ($14,629 USD). This setup invites complaints, as it imposes minimal financial risks on the accusers. Such a policy could prioritize monetary incentives over genuine grievances, undermining the essence of justice.
Remarkably, complainants can remain anonymous, and the evidence needed to prove guilt is less strict than in criminal cases: proof rests on a "balance of probabilities" rather than proof beyond a reasonable doubt. There is also no defence of truth, as there is in libel cases. So, people accused of discrimination could be fined up to $50,000 CAD, while complainants can receive up to $20,000 CAD. This scheme can easily be abused and invites unnecessary complaints.
Equally alarming is the introduction of peace bonds for individuals suspected of possibly committing a future hate crime. This provision allows for preemptive legal action, requiring individuals to wear monitoring devices, among other conditions, based merely on suspicion of potential future offences. This notion of pre-crime penalties is reminiscent of dystopian fiction, where people are punished not for what they have done but for what they might do.
The Bill shifts the legal system from punishing actions to punishing thoughts and intentions. This dangerous precedent undermines the core principles of justice and individual freedom. By criminalizing potential future actions, the government assumes a level of control over personal behaviour and expression antithetical to a free society.
The introduction of peace bonds based on the fear of potential future offences undermines the fairness of the judicial system. A fundamental principle of justice is that individuals are presumed innocent until proven guilty. Punishing someone based on the fear of what they might do in the future violates this principle and opens the door to abuses of power by the state. This not only erodes trust in the judicial system but also places an undue burden on individuals to prove a negative—that they are not planning to commit a hate crime.
Canada is my homeland, but I don't live there because of absurdities like this, which go completely against my ethical and moral principles. If you think like me and want to escape this dark future, you are very welcome at Expat Money.
Bill C-63 is a significant overreach of government power that threatens free society's core principles. It grants bureaucrats the power to define and enforce what constitutes harmful content, leading to potential censorship and surveillance. The bill's broad classification of hate crimes, with severe penalties like life imprisonment, poses a threat to free expression and proportional justice. The creation of powerful regulatory bodies without sufficient oversight endangers privacy and individual liberties.
The legislation exploits fear to impose extensive controls over digital speech and behaviour, using the protection of citizens, especially children, as a justification. It also establishes a framework that promotes progressive beliefs, potentially supporting woke culture by broadly and subjectively defining hate speech and harmful content. This could lead to ideological conformity and the suppression of dissenting views, consolidating power to align with cancel culture's values and stifling free discourse, thereby undermining society's pluralistic foundations.
It seems that each day, more slices of freedom are cut away, like salami, until there is nothing left. Canada is my homeland, but I don't live there because of absurdities that go against my moral and ethical principles. The state perverted the law long ago through its foolish ambition and false philanthropy.
If you're reading this far, there’s a good chance you're a like-minded individual, ready to take action and protect your freedom, wealth, and family—like I did, setting up my entire family to live here in Panama. One way to avoid being hit by absurdities like this and risking having your freedom cut off is to act on a plan B as quickly as possible.