We are in the midst of the United Nations’ 16 Days of Activism against Gender-Based Violence, a global campaign to end violence against women and girls. As such, there is no better time to confront one of the newest and most insidious forms of digital violence: the non-consensual creation of deepfakes.
Deepfakes — synthetic images, videos and voices generated by artificial intelligence (AI) — allow anyone to convincingly recreate a person’s likeness without their consent. A few uploaded photos are all it takes for these tools to produce videos or images of someone doing or saying things they never did. The result is a profound violation of autonomy and dignity, especially for women and girls, whose bodies and faces are disproportionately targeted for manipulation.
Recent high-profile cases have made headlines. In May 2024, OpenAI famously mimicked Scarlett Johansson’s voice without her permission before she took legal action. Several months earlier, deepfake pornographic images of Taylor Swift circulated on X before the platform blocked searches of her name.
These cases show that while public figures often have the resources to respond to such mistreatment and the influence to fight back, everyday people do not. In Canada, there is still no clear legal protection for people such as students, teachers, activists or workers whose likeness is used in fake images or videos.
A handful of Canadian civil liability torts — such as intrusion upon seclusion, appropriation of personality and intentional infliction of emotional distress — could, in theory, be used to address deepfake harms. But these legal remedies are unevenly available across provinces, inconsistently applied and rarely accessible to individuals without significant financial resources.
These gaps leave many, particularly women and girls, exposed to devastating harms.
Canada’s Current Legal Landscape: Built for the Past, Not the Present
Canada has made progress addressing one related issue: the non-consensual distribution of intimate images (NCDII). The tragic case of Rehtaeh Parsons in 2013 — in which a 17-year-old died by suicide after sexual images of her were non-consensually shared — catalyzed reforms at both the federal and provincial levels: Section 162.1 of the Criminal Code now makes it illegal to share intimate images without consent, and work by Suzie Dunn demonstrates that all provinces except Ontario and the territories have enacted civil laws and remedies to help survivors seek damages or takedown orders for NCDII.
These laws represent real progress. They recognize that image-based sexual abuse is a form of gender-based violence and a violation of privacy, autonomy and dignity.
At the same time, Dunn’s work shows that some progress has been made on the regulation of non-consensual synthetic intimate images (NSII).
She highlights that while Section 162.1 and the laws of Alberta, Newfoundland and Labrador, and Nova Scotia define “intimate image” in a way that covers only actual intimate images of a person (original sexual images), the laws of British Columbia, New Brunswick, Prince Edward Island and Saskatchewan define it in a way that includes “altered” images (photoshopped or edited original sexual images). It remains to be seen whether these laws are broad enough to capture fully AI-generated images, where no original image exists.
Parallel to that, there have been recent promising legal reforms in both Manitoba and Quebec. Manitoba’s Non-Consensual Distribution of Intimate Images Act now includes a definition for “fake intimate image,” defined as falsely depicting a person in a “reasonably convincing manner” created through a variety of means, “including by modifying, manipulating or altering an authentic visual representation.” This definition could very well capture NSII created without an original image.
Quebec’s new Act to counter non-consensual sharing of intimate images defines an intimate image as one “altered or not, that represents or appears to represent a person either nude or partially nude,” which captures NSII. The province now has a rapid civil resolution regime that allows victims to quickly obtain image takedown and destruction orders, even if there is a threat of an intimate image being shared, with fines of up to $5,000 per day for individuals or up to $50,000 per day for legal persons.
However, anyone hoping to rely on Section 162.1 to criminalize NSII may have had those hopes dashed by a fall 2025 decision of the Ontario Court of Justice. In that case, the judge concluded that a deepfake photo depicting a woman as topless was “morally reprehensible” and “frankly, obscene,” but that sharing such images was not criminal. The judge read the law narrowly, deciding that Section 162.1 does not address fake images.
In an interview, internet and porn historian Noelle Perdue, commenting on the decision, says that “[the judge] could look at this for what it is, which is sexual abuse. But instead he’s choosing not to, and is taking an illogically limited scope of how he’s acknowledging this crime, in order to protect an abuser.” In Perdue’s view, the judge’s decision “misapplied [one section of the Criminal Code] in a way that sets a dangerous and frankly disgusting precedent for this type of crime to continue.”
What Canada Can — and Must — Do
Canada needs a modernized framework for image and likeness rights that responds to the realities of generative AI. Several approaches are possible.
For one, provinces should coordinate their efforts. Building on related proposals by professors Hilary Young and Emily Laidlaw, provinces could draw on their experience addressing NCDII to introduce uniform civil remedies for deepfake creation. New laws could allow victims to quickly obtain takedown orders and seek damages for deepfakes, while carving out exceptions for legitimate expression such as satire and art.
Copyright law could also be amended to explicitly recognize a person’s likeness and voice as protected elements. Currently, protection generally applies only if the person owns the photo or video used to make the fake. That gap leaves ordinary people unprotected when someone else’s photo of them is manipulated. Updating copyright law to cover likeness rights in the context of non-consensual deepfakes, as Denmark is now doing, would provide a clear, nationally consistent mechanism for redress.
Finally, federal online safety policies need to be modernized, and measures to mitigate harm from deepfakes must be integrated into new and existing AI policies. Canada’s efforts to legislate online harms stalled amid controversy in recent years. It’s time to revisit them — with lessons learned and targeted measures addressing deepfakes. Learning from legal efforts in the European Union as well as in Pennsylvania and Washington, Canada’s new AI and digital innovation minister, along with the Departments of Justice and Heritage, could lead the development of a framework focused on consent, transparency and dignity for AI-generated content.
Protecting Dignity in the Age of AI
If Canada fails to regulate, people’s images and likenesses will continue to be used without their consent, causing psychological and potentially financial harm. The time for Canada to build guardrails is now, before these technologies become even more powerful and embedded in society.
We may one day have interactive avatars that can impersonate someone in real time, or behavioural doubles that carry out automated decision making in someone’s name, as Albania’s new AI-generated government minister is designed to do. These technologies could be used to create synthetic memories and fabricated evidence, or even to personalize persuasion at massive scale.
The 16 Days of Activism remind us that digital violence is real violence. The manipulation of women’s and girls’ images for entertainment or humiliation is not a harmless prank — it is an attack on autonomy, equality and safety.
As Perdue, the internet historian, points out in the interview, “In reality, we have all the tools we need now to properly regulate without having to lose somebody’s life.” This country has shown before that it can lead in protecting people from new forms of online harm. We must not wait for another tragedy — another Rehtaeh Parsons or worse — to force our courts and governments to finally take this digital violence seriously.