We Are Not Ready for Deepfakes: How Canadian Privacy Torts Do Not Adequately Address the Unique Harm Posed by Targeted Deepfakes

  • October 03, 2023

by Keita Szemok-Uto, winner of the 2023 Privacy and Access Law Section Student Essay Contest

1: Introduction

A: Purpose of this Paper

Deepfake technology poses a significant and ever-increasing threat to the individual privacy of Canadians. Are our privacy torts able to identify and redress the unique harm that deepfakes present? Or do we require alterations to our privacy torts, and our conceptions of privacy law more generally, to ensure that we can enjoy fulsome protection of our online privacy?

This paper will first identify what deepfake technology is and how it enables individuals to violate the privacy of others. It will find that pornographic deepfakes that target women are the norm, not the exception, and that the evolution of artificial intelligence is making the creation and dissemination of these deepfakes easier and easier.

Section 2 will explore the way that deepfakes have the potential to violate a number of traditional privacy principles.

Section 3 will discuss the utility of tort law generally and offer reasons why privacy torts may be employed to address deepfakes.

Section 4 will undertake a discussion of Canadian privacy torts in four categories: misappropriation, the protection of reputation, the exposure of private facts, and privacy invasion as a wrong in itself. In each of these categories, relevant privacy torts will be identified and scrutinized to assess whether they could apply to a theoretical deepfake case. The general finding will be that none of these privacy torts adequately addresses the unique harms posed by deepfakes.

Finally, section 5 will offer two suggestions for how the courts and the legislature might fill the gap in the law which currently leaves deepfake victims without protection.

B: What are Deepfakes?

Deepfake technology is an application of machine learning, a branch of artificial intelligence, which allows a user to replace an individual’s face in a video with somebody else’s.1 So long as the deepfake algorithm has access to enough data – pictures of a victim’s face – the program eventually “learns” what that face looks like at different angles, and can create a seamless transposition of the victim’s face onto any other body.2

i) Tens of Thousands of Deepfakes Exist Online

While Canada has not yet seen a deepfake case make it to court, deepfakes are remarkably pervasive. A Sensity study conducted in 2019 identified 14,678 deepfake videos, with an accumulated 134 million views, across only the top four deepfake hosting sites.3 Significant as that figure is, the study was limited: there are more than four active deepfake hosting sites, and some number of deepfake videos remain unpublished, held for private use or sale, and so would not be captured by such a study. In addition, deepfakes only came into existence around 2018, meaning the 2019 study covered only the first year or so of deepfake creation. There is no question that tens or hundreds of thousands more deepfakes are now in existence.

ii) How Deepfakes are Made

Images of faces are the ammunition that makes deepfake creation possible. The earliest popularized deepfakes were mostly of public figures, who are easy targets because so many pictures of their faces can be found online. While deepfakes can be made with as few as a few hundred photos, the quality and believability of these videos increase with the number of photos fed into the algorithm.4
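To make the mechanics concrete, the sketch below illustrates, in heavily simplified form, the shared-encoder autoencoder design popularized by early face-swap tools. It is an illustrative assumption rather than the code of FakeApp, DeepNude, or any other actual application (real tools add face detection, alignment, convolutional networks, and blending), but it shows why a few hundred photos of a target can suffice: a shared encoder learns pose and expression generally, while a per-person decoder learns one specific face.

```python
import torch
import torch.nn as nn

class FaceSwapAutoencoder(nn.Module):
    """One shared encoder, two decoders: a simplified sketch of the
    classic early deepfake architecture (illustrative, not any real tool)."""

    def __init__(self, latent_dim: int = 128):
        super().__init__()
        # The shared encoder compresses a 64x64 RGB face crop into a latent
        # vector capturing pose, expression, and lighting.
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * 64 * 64, 512), nn.ReLU(),
            nn.Linear(512, latent_dim),
        )
        # Each decoder learns to reconstruct one specific person's face.
        self.decode_target = self._decoder(latent_dim)  # the victim's face
        self.decode_source = self._decoder(latent_dim)  # the face in the source video

    @staticmethod
    def _decoder(latent_dim: int) -> nn.Sequential:
        return nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, 3 * 64 * 64), nn.Sigmoid(),
            nn.Unflatten(1, (3, 64, 64)),
        )

model = FaceSwapAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Training: each person is reconstructed through the SHARED encoder. With
# enough photos of the target (the "few hundred" noted above), the model
# learns the target's face at many angles and expressions.
target_faces = torch.rand(8, 3, 64, 64)  # stand-in batch: photos of the target
source_faces = torch.rand(8, 3, 64, 64)  # stand-in batch: frames of the source video
loss = nn.functional.mse_loss(model.decode_target(model.encoder(target_faces)), target_faces) \
     + nn.functional.mse_loss(model.decode_source(model.encoder(source_faces)), source_faces)
optimizer.zero_grad(); loss.backward(); optimizer.step()

# The "swap": encode a frame of the source video, then decode it with the
# TARGET's decoder, producing the target's face in the source's pose.
with torch.no_grad():
    swapped = model.decode_target(model.encoder(source_faces[:1]))
```

The more target photos fed in, the better the target decoder generalizes across angles and lighting, which is why the quality of a deepfake scales with the volume of images a perpetrator can harvest.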

iii) Deepfakes and Celebrities

Deepfakes have become something of a phenomenon in recent years, particularly those featuring celebrities. In early 2018, Nicolas Cage was being deepfaked into scenes from movies he never starred in.5 In 2019, Steve Buscemi’s face was deepfaked onto Jennifer Lawrence’s body at the Golden Globes.6 Even world leaders are not safe from the deepfake trend: in a viral video from 2018, comedian Jordan Peele utilized deepfake software to make Barack Obama recite Peele’s eccentric script.7 In a more recent example, TikTok user “deeptomcruise” posts highly realistic deepfakes of Tom Cruise in a variety of comedic skits.8 A glance at the comment sections of those videos shows that most users believe they are seeing the real Tom Cruise.9

iv) Most Deepfakes are Pornographic and Target Women

These comedic examples belie the true threat that deepfakes present. Most deepfakes do not involve male A-list Hollywood actors in funny skits. In fact, 96% of deepfakes are pornographic, and 100% of the targets of these pornographic deepfakes are women.10 One report found that 99% of the targets of pornographic deepfakes were women working in the entertainment industry as musicians or actresses.11 However, that report analyzed only the top five deepfake pornography websites,12 and deepfakes of well-known musicians and actresses would logically garner the most views. That the targets of these deepfakes are famous does not diminish the privacy invasions they face, but we should also consider the vast number of unreported deepfakes which target non-famous women.

v) Growing Potential for the Creation of Deepfakes

Social media allows individuals to easily harvest pictures of the faces of women they know in order to include them in a deepfake. Online reports detail anonymous users planning to make deepfakes of their friends, coworkers, friends’ stepmothers, crushes, and classmates, using as few as a few hundred photographs.13 One Pennsylvania mother, using only images available on social media, made deepfakes of her daughter’s cheerleading rivals in which she portrayed them naked and abusing substances.14 While the most popular and mainstream deepfakes are often comical and centre on celebrities, deepfakes have had an untold effect on ordinary, non-famous women and girls online.

It is becoming increasingly easy for the average person to make a deepfake. While the technology that generates deepfakes is complicated, the service is widespread and easily accessible. Online deepfake software like FakeApp and DeepNude allows users to make deepfakes for themselves using proprietary algorithms. FakeApp, first released in 2018, advertised that the service put the power of deepfake creation in the hands of “ordinary” people, a tagline that foreshadows how widespread and accessible the technology has become.15 One shudders to think that an “ordinary” person would ever make something like a pornographic deepfake of a coworker or friend, but the free and accessible nature of this software unfortunately helps normalize such behaviour.

DeepNude has capitalized on deepfakes as a profit-making venture. Its marketing advertises that one can “See Anyone Nude”, with advertisements that show women clothed, and then in various states of undress.16 While the service itself is free, each deepfake generated includes a large watermark, which the user can opt to remove for $50.17 In a single month in 2019, DeepNude received 545,162 visits and had 95,464 active users.18

Gathering the face pictures necessary to make deepfakes has never been easier. The growing number of individuals posting their faces on public social media sites like Snapchat, TikTok, and Instagram makes a deepfake creator’s job easier and allows for the creation of more realistic deepfakes.19 Social media sites like Instagram, Facebook, and TikTok have added augmented reality face filters to their platforms which can airbrush, beautify, or fictionalize a user’s face.20 These filters let users see their own faces in a more flattering way, but they also encourage users to post videos of their faces online. Inevitably, some of these filters become “trends,” which further encourages users to post videos of themselves applying them. For example, videos using the “teeth whitening” TikTok filter have accumulated a collective 12 billion views.21 These kinds of trends have increased the number of users who are willing to post videos of their own faces online, and have made it easier for perpetrators to make deepfakes using this data.

2: Deepfakes Invade Privacy

A: Right to be Let Alone

Deepfake technology has the potential to violate privacy principles in a number of ways. First, being subject to a targeted deepfake violates one’s “right to be let alone.” This concept, developed in an 1890 article by Warren and Brandeis, encapsulates the notion that society has an interest in protecting the personality rights of individuals.22 While the principle has come under scrutiny for being rather vague and broad,23 it does provide a general sense of what actions may or may not violate privacy. A targeted deepfake publishes a victim’s likeness online, to a potentially unlimited audience, and in doing so violates their right to be let alone.

B: Control of Personal Information

Deepfakes can also undermine an individual’s control over their personal information. This element of privacy encompasses how one chooses to disclose aspects of their private life, the extent of that disclosure, and to whom that information is disclosed.24 While Hunt conceptualizes this element of privacy as akin to a person having a right to their secret diary,25 one can also see how control over personal health details, banking information, and sexual preferences is essential to this privacy principle.

A person’s face can also be categorized under this concept of personal information. The face is the most recognizable part of one’s body, and when, where, and how your face is shown can reveal information about you that you might not want to disclose. In addition, faces are now used as passwords, essentially acting as a key to other aspects of your personal life. Facial recognition is widely used for smartphones in lieu of a passcode, and face scanning is utilized to access banking and social media apps and other personal information stored on your phone.

Tied to this privacy interest is the protection of individual reputation, which is encapsulated in torts such as false light and defamation.26 A person who makes false claims about another can impede the victim’s ability to express themselves when they must compete with salacious, untrue rumours that have been spread about them.27 In the same way, a deepfake seizes control over a person’s reputation by presenting a version of that person that is not accurate and may be sexually explicit and embarrassing. Deepfakes, like widespread rumours, threaten a person’s ability to express their preferred version of themselves.

C: Dignity and Autonomy

Being the subject of a deepfake video can also violate a person’s dignity and autonomy. This notion of privacy draws on Kant’s ideal that people should be treated as ends in themselves, rather than as means to furthering another’s desires.28 Hunt concludes that informational intrusions, like someone reading a page from your private diary or inappropriately accessing your health records, and physical intrusions into privacy, such as being peeped on by a neighbour, both violate the Kantian idea of dignity and autonomy because in each instance the perpetrator places their own desires above the victim’s.29

Deepfake makers inherently put their own desires above those of their targets and in doing so they use the targets as means to an end. Without delving too deeply into the mind of a deepfake maker, one can imagine that the likely “end” use for a target in a pornographic deepfake is the perpetrator’s sexual fulfillment, or their satisfaction from the very act of privacy invasion itself.

Hunt references peeping when discussing privacy invasions that infringe one’s dignity and autonomy. The classic image of peeping is a strange man sitting by his window with a telescope, watching the goings-on of one of his neighbours. Pornographic deepfakes on the internet take the concept of peeping a step further: everybody has a telescope (access to the internet), and everyone is viewable (a deepfake can be made of anybody). When one peeps on a target and makes a deepfake of them, the target is peeped on not just by the perpetrator but potentially by the world. A video posted online is notoriously hard to delete permanently because anyone who views it can record the content or share it to a separate website. Deepfakes thus present a more flagrant violation of a person’s dignity and autonomy than peeping in the traditional sense, simply because of the sheer potential scale implicit in online publication.

D: Feminist Lens

While deepfakes do present novel issues for privacy law, they are reflective of the ongoing, structural misogyny that pervades our culture. Online spaces have historically been ripe for the objectification and sexualization of women, and tools like deepfakes only reinforce that pattern. Anita Allen helps us analyze privacy principles through a critical feminist lens. Gender plays a significant role in shaping what society deems private, and historically women have been expected to exhibit modesty and self-concealment.30 Although women were historically relegated to the private sphere, they were not thereby afforded adequate privacy protections.31 It is not within the scope of this paper to trace the connection between women’s historical relegation to the private sphere and imposed patriarchal expectations of female modesty.32 However, it is important to consider that deepfakes are a continuation of gender-based violations of women’s privacy, rather than a wholly novel threat.

3: Using Torts to Address Deepfakes

A: Utility of Tort Law

Torts are uniquely placed in Canadian law to address the novel issues raised by deepfake privacy violations. Historically, torts have been highly responsive to societal change, driven by the judiciary’s “impressions about the social needs presented by the specific facts of the case.”33 While amending legislation requires political agreement and lengthy legislative drafting, torts can be developed quickly and on the facts of a single case. In addition, judge-made law allows for more decentralized and location-specific remedies to societal wrongs, illustrated by the fact that many torts have different iterations from province to province, varying with the perceived needs of each jurisdiction. The decentralized nature of torts allows them to “fill in the gaps” of novel social problems that other areas of law may miss.34 Anita Allen expressed a hope that privacy torts could have some of the “most worthwhile applications as aids to female victims of gender-related privacy invasions.”35 While she wrote long before the advent of deepfakes and social media, Allen was correct in assessing the potential of torts to address novel issues, like deepfakes, that other areas of law have been slow to address.

B: Criminal Law Is Needed but Has Been Slow to Respond

Although torts are valuable in their capacity to provide monetary relief to wronged parties, it is only through criminal law, the exercise of state power, that fulsome prohibitions against targeted deepfakes can be enforced. California and the US federal government have made recent strides toward criminally prohibiting deepfakes by proposing bills which would prohibit their creation and publication.36 Unfortunately, Canada’s “Criminal Code” has no similar provisions. Perhaps the closest relatives to criminal deepfake laws are the provisions governing the publication, distribution, and sale of non-consensual intimate images, an offence which carries a penalty of up to five years’ imprisonment.37 Although intimate images and pornographic deepfakes both involve sexual infringements of individual privacy, deepfakes do not meet the “Code”’s definition of an intimate image and thus are not covered by criminal law.38

The utility of criminal law in punishing the creators of pornographic deepfakes is also inhibited by the state’s burden of proving a suspect’s guilt beyond a reasonable doubt. Whereas success in a civil claim requires proof on a balance of probabilities, roughly 51%, the reasonable doubt threshold requires certainty of around 90%.39 Inevitably, this will allow perpetrators in less clear-cut cases of deepfake creation and distribution to walk free, and it also means that successful prosecutions of those who create and distribute pornographic deepfakes will require significant state resources.

As an aside, criminal law may prove to be a useful aid in cases where children are the targets of deepfakes. In R v Butler, the accused was charged with possession of child pornography pursuant to s 163.1(4)(a) of the “Criminal Code”. Butler was found guilty of possessing nearly 12,000 images depicting child sexual abuse, and one aggravating factor at sentencing was the fact that he had, dozens of times, superimposed the faces of young girls onto the naked body of another young girl.40 The finding in “Butler” suggests that criminal law, even without a specific prohibition of deepfakes, may be able to play a supporting role by punishing deepfake creators in some circumstances. With that in mind, the rest of this paper will focus on the application of privacy torts to deepfakes.

4: Applying Privacy Torts to Deepfakes

At least eight privacy torts have the potential to apply to targeted deepfake cases. The goal of this section is to identify these privacy-related torts, examine their overlap with deepfakes, and assess whether each could adequately address a deepfake fact set. Ultimately, this paper will suggest that no Canadian privacy tort adequately addresses the unique harm that deepfakes pose.

The eight privacy torts will be divided into four subcategories based on the crux of their application to potential deepfake scenarios. The first section, misappropriation of likeness, will cover copyright. The second, protection of reputation, will analyze false light and defamation. The third section will look at the exposure of private facts through the torts of public disclosure of private facts and non-consensual sharing of intimate images. The fourth and final section will look at torts which treat privacy invasion as a wrong in itself, with a discussion of intrusion upon seclusion, intentional infliction of nervous shock, and internet harassment.

A: Misappropriation

i) Copyright

What is Copyright?

Canada’s Copyright Act protects the rights of creators of works like videos and photographs, and limits who can produce or reproduce those works and for what purposes.41 It authorizes a court to order damages, or allows the plaintiff to opt for statutory damages, which range from $100 to $20,000 per work infringed depending on whether the infringements were for commercial purposes or not.42

One potential benefit of copyright actions is that they can provide plaintiffs with high damage awards. In Trout Point Lodge v Handshoe, the plaintiff company was awarded statutory damages of $80,000 for the defendant’s unlawful dissemination of four copyrighted photographs, the highest award possible under that part of the Act, as well as punitive damages of $100,000 for the “outrageous” and “highly reprehensible” conduct of the defendant.43

Every picture one takes of themselves and posts online can be considered an artistic work for the purposes of the Act.44 Using and reproducing these images to create a deepfake would fit the definition of copyright infringement under the Act,45 allowing a successful plaintiff to collect common law or statutory damages for the violation.

Limits to Copyright: Exceptions

Although copyright does seem to have potential merit in remedying deepfake victims, several significant hurdles currently exist. For one, there is a personal use exception to copyright infringement.46 So long as the defendant does not share the deepfake they created with others, and uses it for their own private purposes, the exception would apply and the plaintiff’s claim would fail.

In the same vein, the Copyright Act contains an exception for non-commercial user-generated content. A perpetrator could use photos of a target’s face in a deepfake so long as the resulting deepfake is disseminated for non-commercial purposes, the source of the photos is mentioned, and the use would not have an adverse effect on the target’s ability to profit from the original photos.47 While some deepfakes are likely created to generate revenue, most seem to be posted for non-commercial purposes, considering that they are numerous, free to make, and mostly free to view. If the target of a deepfake were a celebrity, an influencer, or anyone else able to profit from their likeness, this exception would likely not apply. For the vast majority of the public, however, this exception would likely bar a copyright suit in a deepfake case.

There is an interesting question as to whether deepfake defendants could also defend against a copyright infringement claim by demonstrating “originality.”48 To do so, a defendant would not need to prove that the deepfake created was creative, novel, or unique49 (although deepfakes tend to be all three). Rather, the Supreme Court found that the creator must exercise skill and judgment, applying their “developed aptitude or practised ability” and their “capacity for discernment or ability to form an opinion”, and that the work must be the product of intellectual effort – it cannot be the result of a purely mechanical exercise.50

It is unclear, and would depend on the facts of the case, whether creating a deepfake would be seen as an act of originality. While the algorithm used to make deepfakes is clearly a “purely mechanical exercise”, it is possible a court could determine that scouring a person’s social media for applicable photographs of a target’s face, and deciding whose body to transpose the target’s face onto, constitutes an exercise of skill and judgment. Further caselaw on the subject would be required to make any determinative judgment on the effectiveness of the originality defence.

Limits to Copyright: Limited Awards

One final limitation on the use of copyright in deepfake actions is that damage awards may be minimal. While the statutory damages awarded in Trout Point Lodge v Handshoe were sizable,51 we must consider that many deepfakes will not be created for commercial purposes. In those cases, the damages an individual could claim to have incurred from the infringement would be minimal, and, should they opt for statutory damages, their award would be limited to between $100 and $5,000.52

B: Protection of Reputation

i) False Light

What is False Light?

False light is one privacy tort which has at its core the protection against undue harm to an individual’s reputation. The tort was introduced in the 2019 case Yenovkian v Gulian, where Yenovkian made a variety of untrue and salacious remarks about Gulian and her family, including that they had abused, drugged, and kidnapped children, defrauded governments, and forged documents.53 The elements of the tort require that the false light in which the plaintiff was placed would be highly offensive to a reasonable person, and that the defendant knew of, or was reckless as to, the falsity of the matters publicized.54

Sexually explicit materials have often been regarded by the courts as highly offensive in other contexts, such as in the tort of public disclosure of private facts or intrusion upon seclusion.55 It seems likely that a case where a target’s face is transposed onto the body of another in a sexually explicit setting would constitute a highly offensive portrayal in the eyes of the court.

Limits to False Light: Deepfakes Do Not Disseminate False Information

While false light may fill a niche in cases where the defendant disseminates false facts, it is unclear whether the facts disseminated in a deepfake are actually false. In “Yenovkian”, the defendant’s claims about Gulian and her family were verifiably false: the plaintiffs never kidnapped and drugged children and never defrauded governments. However, it remains to be seen whether a court would find that there is anything necessarily false about a deepfake video. Perhaps one could consider the combination of two different people’s bodies into the same entity to be deceptive. But a deepfake does not make a claim that can be disproven, unless the video states explicitly that the body depicted on screen is the actual body of the target. If simple forbearance from making such a claim makes this tort unworkable, false light cannot really be considered an effective remedy for deepfake victims. As we will see, the requirement of falsity is an interpretive hurdle that pervades the torts throughout this section.

Limits to False Light: Other Torts More Applicable

A second limitation of this tort in a deepfake context is that other torts may be more applicable. A major criticism of false light in America has been its significant overlap with other torts, like defamation.56 Even in Yenovkian, where false light was adopted, the court noted its similarity to public disclosure of private facts, with the distinction that false light focuses on disseminated facts which are not true.57

ii) Defamation

What is Defamation?

Closely tied to false light, defamation is a privacy tort also focused on the protection of individual reputation. The test requires that a defendant’s words would tend to lower the plaintiff’s reputation in the eyes of a reasonable person, that the words referred to the plaintiff, and that the words were published.58 Defamation cases are highly fact-sensitive and not easily compared to one another,59 which may allow the tort to adapt to a fact set involving a deepfake. Canadian defamation law is also more accessible than its American counterpart because it does not require proof that a defendant acted with malice.60 Indeed, the law of defamation is particularly important in regulating the “electronic marketplace of ideas” in an era in which global communication is unlimited and easily accessible.61 Targeted deepfakes are a natural yet abhorrent manifestation of the current milieu of online communication.

Limits to Defamation: Tort Applies to Words, Not Images or Videos

An initial problem with applying defamation to deepfakes is that the tort is highly centred on controlling words, and does not seem to extend to photographic or video materials. Indeed, the test itself mentions only “words” which are defamatory,62 and subsequent defamation decisions seem exclusively to mention false words or statements.63 Unless a deepfake includes written text with a defamatory sting,64 it is unlikely that such a video, on its own, would fit within the tort of defamation.

However, at least one American court has considered pictures as constituting defamatory material. In Kiesau v Bantz,65 the defendant photoshopped a picture of the plaintiff, a female police officer, standing in front of her vehicle, to make it look as though her breasts were exposed. Bantz repeatedly shared that picture with coworkers at the police department via email over a period of 10 months. The court stated that the alteration of that photograph was, in itself, defamatory. Applying this finding to a deepfake case might allow a plaintiff to submit that a deepfake video created of them, despite not being a written or verbal statement, was defamatory.

No Canadian court has yet widened the application of defamation to materials other than statements; however, the fact set in Trout Point Lodge v Handshoe comes close. In that case, the defendant made defamatory comments about the lodge, made derogatory and homophobic comments towards the two plaintiff owners, and posted doctored photographs depicting the plaintiff owners in a sexually explicit manner.66 Despite the court’s description of the doctored photographs as highly offensive and defamatory,67 the decision on damages does not once mention these photos, referring only to the verbal and written comments about the lodge and the plaintiffs.68 A firmer ruling on the doctored photographs may have helped deepfake fact sets fit neatly within Canadian defamation law, but, as it stands, defamation falls short.

C: Exposure of “Private” Facts

i) Public Disclosure of Private Facts

What is PDPF?

Public disclosure of private facts (PDPF) is one tort which tackles a defendant’s publication of the private affairs of another person. Most notably raised in Ontario with “Jane Doe 72511”,69 and in Nova Scotia with “Racki”,70 the PDPF test varies slightly between provinces. “Jane Doe 72511” requires that the defendant publicized an aspect of the plaintiff’s private life, that the plaintiff did not consent, that the matter publicized or the fact of its publication would be highly offensive to a reasonable person, and that the publication was not of legitimate public concern.71 “Racki” retains most of these elements, but instead of requiring that the plaintiff did not consent to the publication, it mandates that the facts published were ones over which there was a reasonable expectation of privacy.72

Deepfake cases would easily pass most steps of the “Jane Doe 72511” PDPF test. However, they would stumble on the requirement that the aspect of the plaintiff’s life publicized was a private one or, under the “Racki” test, that the plaintiff had a reasonable expectation of privacy over the matter published.

A number of PDPF cases have determined that matters of a sexual nature, and those dealing with highly personal facts, are ones over which a person ought to have a reasonable expectation of privacy. In “Jane Doe 72511”, “Jane Doe 464533”,73 and ES v Shillington,74 explicit sexual images and videos, taken with consent but distributed without it, were determined to be aspects of a person’s private life. In “Racki”, Mrs. Racki was found to have a reasonable expectation of privacy over the facts that she had a sleeping pill addiction and had twice attempted suicide.75

Limits to PDPF: Deepfakes Do Not Violate an Individual’s Private Life

But does a target of a deepfake face a violation of their private life, or of their reasonable expectation of privacy, when the images used in the deepfake were publicly available? An analysis of one’s reasonable expectation of privacy in a public setting is inherently normative and contextual, and must weigh competing societal interests against each other.76 In “Lebenfish”, an accused who took photos of naked women on a public, clothing-optional beach was acquitted because the court held that such beach-goers did not hold a reasonable expectation of privacy, given the clothing-optional nature of the beach and the fact that no by-laws prohibited photography there.77 In “Taylor”, the court clarified that beach-goers on public, non-clothing-optional beaches have a reasonable expectation that close-ups of their private areas will not be photographed.78 In “Jarvis”, the Supreme Court intimated that privacy is not an all-or-nothing concept: being in public does not negate all expectations of privacy regarding recording or observation, and privacy can be expected most fulsomely in traditionally private places, where one has chosen to exclude all others, like at home or in the washroom.79

By posting photos of themselves online, deepfake targets place themselves in a context where observation is not prohibited or abnormally intrusive: indeed, it is expected. “Lebenfish” suggests that one does not have a reasonable expectation of privacy against observation while in public. And while in “Taylor” the court expanded privacy to protect against the recording of individuals’ private areas while in public, deepfake targets do not have their private areas recorded, only their faces. While “Jarvis” suggests that one can retain an expectation of privacy in public contexts, such as by attending public school, it is hard to see how one can expect privacy when posting publicly on social media. Privacy can be expected where one has chosen to exclude all others, but on social media the poster is not trying to exclude the observation of others. Although some platforms, like Facebook, require users to accept or “friend” others, Instagram, which has recently overtaken Facebook in popularity, leaves that optional, and TikTok, a rapidly growing platform that was the most-downloaded app of 2021, sets videos to public by default.80 The trend across social media apps is that the purpose of posting is to be observed on a wide, if not unlimited, scale. While posters may not expect their images to be repurposed, making them publicly available precludes the notion that they are being recorded surreptitiously. This would prove a significant, if not determinative, barrier to deepfake victims utilizing PDPF.

Nevertheless, the modern approach to Canadian privacy law is decidedly normative.81 Privacy rights are considered as they ought to be, not merely as they are. Regardless of the outcome of a contemporary reasonable expectation of privacy analysis regarding online photos, shifting societal opinions and widening judicial interpretations could allow deepfake cases to be covered under PDPF in the future. A court might one day decide that people ought to have a reasonable expectation that their publicly available photos will not be used in a sexually explicit deepfake. Whether and how that happens remains to be seen.

ii) Non-Consensual Sharing of Intimate Images

What is NCSII?

Non-consensual sharing of intimate images (NCSII) is a statutory tort introduced in Nova Scotia with the 2017 Intimate Images and Cyber-protection Act. It creates a cause of action against defendants who distribute an intimate image of the plaintiff without consent.82 Successful plaintiffs can be awarded general, special, aggravated, or punitive damages, and can also ask that the intimate image be removed from the internet.83 An “intimate image” is defined as any visual recording of a person in which that person is depicted nude or engaged in explicit sexual activity, and that was recorded in circumstances giving rise to a reasonable expectation of privacy.84 So far, the only cases to utilize the Intimate Images Act have both dealt with cyberbullying,85 rather than the intimate images tort.

Limits to NCSII: Wording Precludes Deepfake Application

An immediate limitation of the Intimate Images Act in applying to deepfake cases is its wording. Section 3(f)(i) requires that “the person depicted” in an image must be nude or engaged in explicit sexual activity for it to be considered an intimate image. Taken ordinarily and grammatically, this seems to align less with deepfake-type cases and more with situations like revenge porn, where the target themselves is nude or engaged in explicit sexual activity. A court could infer that pornographic deepfakes do depict a target engaged in explicit sexual activity, when the body they are transposed onto is depicted in such a way, but such an interpretation seems to stretch the intended meaning of the Act’s words. A simple alteration that would allow deepfake victims to utilize the Act would be to change the definition of “intimate image” to require that the person is “depicted as” nude, rather than that the person depicted “is” nude. This would cover individuals targeted in deepfakes who, while not nude or engaged in sexual activity in the original images, are nevertheless depicted as nude or engaged in sexual activity.

Limits to NCSII: Reasonable Expectation of Privacy Analysis Unlikely to Cover Deepfakes

Additionally, section 3(f)(ii) of the Act requires the intimate image to have been recorded in circumstances that gave rise to a reasonable expectation of privacy. This is a familiar barrier from the private facts torts examined so far and, as discussed above, its effect will hinge on how future courts conduct the reasonable expectation of privacy analysis for public photos used in deepfakes.

D: Privacy Invasion as Wrong In Itself

The final category of torts recognizes privacy invasion as an actionable harm in itself. These torts are distinct in that they do not require the publication of impugned materials relating to the plaintiff, unlike copyright, which includes a personal-use exception, or the torts in the protection of reputation and private facts categories, both of which require a defendant to publish aspects of the plaintiff’s private life.

i) Intrusion Upon Seclusion

What is Intrusion Upon Seclusion?

Intrusion upon seclusion was introduced in 2012 with Jones v Tsige. In that case, the “private affairs” invaded were personal banking information, which the defendant illicitly accessed 174 times over four years; the breach resulted in a damage award of $10,000.86 The test requires that a defendant’s conduct was intentional or reckless, that the defendant invaded the plaintiff’s private affairs or concerns without lawful justification, and that the invasion, causing distress, humiliation, or anguish, would be seen as highly offensive to a reasonable person.87 Importantly, the plaintiff in that case suffered no actual monetary loss; rather, the tort introduced a way for the court to order damages as a “symbolic” vindication of the privacy breach.88 In “Nitsopoulos”, the plaintiffs suffered a privacy infringement because the defendant gained access to their home under the fraudulent pretense that she was working as a maid.89 In “Demme”, the intrusion claim resulted from Ms. Demme’s wrongful access to hospital patient files.90 At issue in “Powell” was the defendant’s unauthorized accessing of the plaintiff’s credit information.91

Intrusion upon seclusion may have application in a deepfake context: deepfake creators certainly act with intention; being depicted in a sexually explicit deepfake would likely cause distress, humiliation, or anguish to the plaintiff; and such a deepfake would be seen as highly offensive to a reasonable person.92

Limits to Intrusion Upon Seclusion: Deepfakes Do Not Intrude Upon Private Affairs

However, the target of a deepfake does not truly have their private affairs violated. The private affairs violated in “Jones”, “Nitsopoulos”, “Powell”, and “Demme” all related to private information – about finances, health, or the home. These are all elements of one’s life over which there is a reasonable expectation of privacy. Unfortunately, this brings us back to the problem that one’s publicly posted social media images are not private matters in any commonly understood sense, certainly not to the degree that private health information or intimate images are. Deepfakes are intrusive not because the plaintiff’s private affairs are infringed, but because their likeness is used in an inappropriate way. For this reason, a deepfake target would not be able to effectively utilize the intrusion upon seclusion tort.

ii) Intentional Infliction of Nervous Shock

What is IINS?

Intentional infliction of nervous shock (IINS) is another tort which focuses on how an invasion of privacy is a wrong in itself. The test requires a plaintiff to show that a defendant was engaged in flagrant or outrageous conduct which was calculated to produce harm, and resulted in a visible and provable illness in the plaintiff.93

Canadian cases identify flagrant and outrageous behaviour as that which is invasive, unusual, and which tends to humiliate the victim. Flagrant and outrageous behaviour may include sharing an intimate sexual video of the plaintiff on a public website,94 spreading a slew of false and salacious allegations about the plaintiffs and their family to their friends, community, and in public forums,95 or an employer’s continuous belittlement and humiliation of an employee in front of other co-workers.96

Creating a pornographic deepfake of somebody and sharing it on the internet seems inherently flagrant and outrageous. It is similar in character to the sharing of an intimate sexual video, although the target themselves is not portrayed explicitly. Perhaps deepfakes also spread falsehoods by identifying a target’s face with a body that is not theirs.97 They are also inherently humiliating for the target, whose likeness is associated with explicit sexual imagery.

It also seems likely that a targeted deepfake could result in a visible and provable illness. The scope of illnesses accepted by the courts for the purposes of this element is quite broad, ranging from depression and emotional fragility,98 to nightmares, mental stress, and hyper-vigilance,99 to abdominal pain, constipation, and hematemesis.100 Flagrant and outrageous behaviour conducted on the internet, specifically, seems to amplify the effects that victims suffer, because one can never ensure that media posted online is permanently deleted. Viewers of a blog post or video can make their own recording of it, download it, or reshare it to myriad other websites. The aggrieved mother in “Yenovkian” specifically stated she had continuing fears that her children would be able to search her name on the internet and read the obscene and salacious remarks that the father had made about them.101 Doubtless the same kinds of concerns would have affected the victim in “Jane Doe 464533”, whose intimate video was shared with tens of thousands of people online.

It seems highly likely that being the target of a deepfake would cause similar effects. “Yenovkian”, “Boucher”, and “Jane Doe 464533” all centre in some way on a victim’s loss of control. In those cases, the plaintiffs lost the ability to control, respectively, what kind of abhorrent information their children were exposed to, how they were treated and perceived at their place of employment, and explicit videos taken by a trusted partner.

Helen Mort, who unexpectedly found pornographic deepfakes of herself online despite never having taken or shared an intimate image, felt this same sense of powerlessness.102 Mort began to suspect people in her life of being the creator of the deepfakes, and the experience caused her to doubt her entire reality.103 One can easily imagine someone in Mort’s position suffering from depression, mental stress, or the kind of hyper-vigilance that she experienced. Although there is not yet any caselaw linking deepfakes with IINS, there is no question that being the target of a deepfake could result in many of the same illnesses that other kinds of flagrant online behaviour have led to.

Limits to IINS: Deepfakes Are Not Necessarily Made with the Calculation to Produce Harm

Where a deepfake fact set might fail the IINS test is in proving that the defendant’s conduct was calculated to produce harm. While the intent need not be malicious, a plaintiff must show that the defendant not only knew that harm would occur, but intended to produce harm or was almost certain that it would follow the flagrant or outrageous behaviour.104 This level of intent will obviously vary case by case, but one gets the sense that the primary purpose of pornographic deepfake creation is personal sexual fulfillment or morbid curiosity, rather than an outward attempt to harm the victim. It is probable that many deepfake targets never find out they have been featured in deepfakes, as Helen Mort would not have, had a friend not told her about the deepfakes she had seen.105 While a court could determine that the very act of making a deepfake imputes knowledge that doing so would almost certainly cause a visible and provable illness in the target, it seems unlikely that a court would make such a leap without any other caselaw touching on the matter. As is the case throughout, jurisprudence directly addressing a deepfake case would be highly useful in ascertaining the potential effectiveness of this tort.

iii) Internet Harassment

What is Internet Harassment?

The tort of internet harassment was recently adopted by Canadian courts in Caplan v Atas.106 It was developed as an addition to IINS, to provide an extra layer of protection to targets of outrageous behaviour. It is similar to IINS in that it requires outrageous conduct calculated to produce harm, although it adds a requirement of malice and does not require that the plaintiff suffered a provable illness.107 Atas’ behaviour was certainly malicious, and Justice Corbett held little back in describing her “sociopathic” lack of empathy for the plaintiffs, describing the behaviour she engaged in as so egregious it suggested “serious mental illness”.108 Atas engaged in “systematic campaigns of malicious falsehood to cause emotional and psychological harm” against around 150 victims, including her own lawyers and agents as well as their families.109 Requests for damages were withdrawn after Atas filed for bankruptcy, so there is no indication of the amount each of her victims could have recovered.110

Limits to Internet Harassment: Applies Only to Fringe Cases

A deepfake perpetrator could potentially meet all of the elements of internet harassment, but the test’s stringency and its requirement of malice suggest it would apply only to truly fringe cases. One can imagine a deepfake maker, perhaps suffering from mental health issues, who makes hundreds of deepfakes of a single target, or individual deepfakes of a plethora of classmates or coworkers, for misogynistic or hateful reasons. Yet although no research has established which demographics are most likely to make deepfakes, it is unlikely that they are made only by the deranged or mentally ill. That the tools for deepfake creation are so widespread and accessible hints that it is not a small minority engaged in making deepfakes – it seems more likely that a few are made by many, rather than many by a few. Consequently, the tort of internet harassment would have a highly specific and limited application. Even then, there is no sense of how much victims would be able to recover in damages.

5: Solutions

This paper has analyzed eight privacy torts and found that none of them are sufficiently applicable to potential deepfake fact sets. To conclude, this paper will offer some solutions to how the courts or legislature can fill the gap in remedies available to victims of targeted deepfakes.

Unfortunately, most torts addressed in this paper would have to go through significant and perhaps untenable upheavals to be workable in a deepfake context. Copyright would need to eliminate the personal-use and non-commercial user-generated content exceptions, which would have untold effects on the wider scope of intellectual property law. False light would need to read in that transposing one’s face onto another’s body is an inherent dissemination of a false “fact”, and defamation would need to read in that images and videos, not just statements, can constitute defamatory material.

A structural problem, too, is that until these torts become more accommodating of deepfake fact sets, victims will not be incentivized to bring suit in these areas. Paradoxically, the only way for these laws to change is for victims of deepfakes to bring suit under these privacy torts anyway.

A: Reinterpretation of Private Facts or Affairs

However, one solution is for the courts to reinterpret what constitutes private facts or affairs. Broadening the definition to include pictures of one’s face posted on social media would allow deepfake plaintiffs to utilize PDPF, intrusion upon seclusion, or NCSII with relative ease. The “public” nature of social media photos would unfortunately be highly dependent on the target’s social media privacy settings and the circumstances under which a perpetrator accessed them. Nevertheless, any stride in broadening the interpretation here could help an untold number of victims secure a monetary remedy.

B: Create a New Statutory Tort

Another solution is for governments to develop statutory torts, akin to Nova Scotia’s Intimate Images Act, that specifically target deepfakes. While the wording of that Act, as it stands, precludes deepfake victims from utilizing it, it would not take much for a government to pass a law targeting the specific elements inherent in a deepfake fact scenario. For example, it could read: “a plaintiff may sue for damages or statutory damages when a defendant creates, shares, or hosts a deepfake image or video of any person, where that deepfake is explicitly sexual or portrays nudity (considering the entire context of the video, not just whether the target is depicted as nude or in an explicitly sexual nature), and is made or shared without the plaintiff’s consent.” This would avoid the complexities involved in reasonable expectation of privacy analyses, or in expanding what “private affairs or facts” means. It would work around the definition of intimate images, which just narrowly misses deepfake application. It also would not require publication or dissemination, which would broaden its reach. The possibility of statutory damages, as in copyright, could also allow victims to receive monetary compensation even where they did not suffer any financial harm. In all, a law specifically targeted at prohibiting the creation and dissemination of deepfakes would be the most fulsome and flexible way to adequately address the deepfake problem in a civil context.

6: Conclusion

Deepfakes pose a significant threat to our privacy, a threat that will grow as algorithms get more powerful and social media becomes even more pervasive. As they are, Canadian privacy torts do not adequately address the unique harms posed by targeted deepfakes. This paper has suggested reinterpreting private facts and affairs to include an individual’s social media images, or creating a new statutory tort specifically targeting the unique elements of deepfakes, as two solutions to address the current gap in the law. Courts and legislatures across the country should be proactive in altering existing torts, or developing new ones, to ensure that victims of targeted deepfakes are not left unprotected in our increasingly hostile online world.

Bibliography

A: Secondary Sources

@deeptomcruise, “When jokes fly over your head” (April 2021), online: TikTok.

Ana Javornik et al, “‘What lies behind the filter?’ Uncovering the motivations for using augmented reality (AR) face filters on social media and their effect on well-being” (2022) 128 Computers in Human Behavior.

Andrea Hauser, “Deepfakes Analysis: Amount of Images, Lighting and Angles” (November 2018), online: SCIP.

Anita L Allen and Erin Mack, “How Privacy Got Its Gender” (1991) Faculty Scholarship at Penn Law.

BBC News, “Mother ‘used deepfake to frame cheerleading rivals’” (March 2021), online: BBC.

Catherine Kerner and Mathias Risse, “Beyond Porn and Discreditation: Epistemic Promises and Perils of Deepfake Technology in Digital Lifeworlds” (2020) 8:1 Moral Philosophy and Politics.

Chandell Gosse and Jacquelyn Burkell, “Politics and porn: how news media characterizes problems presented by deepfakes” (2020) 37:5 Critical Studies in Media Communication.

Chris DL Hunt, “Conceptualizing Privacy and Elucidating its Importance: Foundational Considerations for the Development of Canada’s Fledgling Privacy Tort” (2011) 37:1 Queen’s LJ.

Clayton Purdom, “Deep learning technology is now being used to put Nic Cage in every movie” (January 2018), online: AV Club.

Cristina Carmody Tilley, “Tort Law Inside Out” (2017) 126:5 The Yale Law Journal.

Deepnude, “See Anyone Nude” (nd), online: Deepnude.

Ellissa Bain, “How to Whiten Your Teeth on TikTok: New Filter Explained!” (2021), online: HITC.

Fraser Duncan, “Illuminating False Light: Assessing the Case for the False Light Tort in Canada” (2020), 43:2 Dalhousie LJ.

Henry Ajder, Giorgio Patrini, Francesco Cavalli, and Lauren Cullen, “The State of Deepfakes: Landscape, Threat, and Impact” (September 2019), online: Deeptrace.

James Vincent, “Watch Jordan Peele use AI to make Barack Obama deliver PSA about fake news” (April 2018), online: The Verge.

Jared A Mackey, “Privacy and the Canadian Media: Developing the New Tort of ‘Intrusion Upon Seclusion’ with Charter Values” (2012) 2:1 Western Journal of Legal Studies.

JB Weinstein and I Dewsbury, “Comment on the meaning of ‘proof beyond a reasonable doubt’” (2007) 5:2 Law, Probability and Risk.

Karen Hao, “Deepfake porn is ruining women’s lives. Now the law may finally ban it” (February 2021), online: Technology Review.

Marissa Muller, “Even Steve Buscemi Is Mystified by The Video of His Head on Jennifer Lawrence” (February 2019), online: WMagazine.

Mitchell Clark, “This TikTok Tom Cruise impersonator is using deepfake tech to impressive ends” (February 2021), online: The Verge.

Ryan Mullins, “R v Jarvis: An Argument for a Single Reasonable Expectation of Privacy Framework” (2018) 41:3 Manitoba LJ.

Samantha Cole, “People Are Using AI to Create Fake Porn of Their Friends and Classmates” (January 2018), online: Motherboard.

Simon Kemp, “TikTok Gains 8 New Users Every Second (And Other Mind-Blowing Stats)” (January 2022), online: Hootsuite.

TikTok, “teeth filter” (nd), online: TikTok.

B: Statutes

Copyright Act, (RSC, 1985, c C-42)

Criminal Code, (RSC, 1985, c C-46).

Intimate Images and Cyber-protection Act, (SNS 2017, c7).

C: Cases

Boucher v Wal-Mart Canada Corp, 2014 ONCA 419.

Candelora v Feser, 2019 NSSC 370.

Caplan v Atas, 2021 ONSC 670.

CCH Canadian Ltd v Law Society of Upper Canada, 2004 SCC 13.

Chartier v Bibeau, 2022 MBCA 5.

Crookes v Newton, 2011 SCC 47.

Demme v Healthcare Insurance Reciprocal of Canada, 2021 ONSC 2095.

ES v Shillington, 2021 ABQB 739.

Fraser v Crossman, 2022 NSSC 8.

Hill v Church of Scientology of Toronto, [1995] 2 SCR 1130, 24 OR (3d) 865.

Jane Doe 464533 v ND, 2016 ONSC 541.

Jane Doe 72511 v NM, 2018 ONSC 6607.

Jones v Tsige, 2012 ONCA 32.

Kiesau v Bantz, 686 NW 2d 164 (Iowa 2004).

Nitsopoulos v Wong, 169 ACWS (3d) 74, 298 DLR (4th) 265.

Papp v Stokes et al, 2017 ONSC 2357.

Powell v Shirley, 2016 ONSC 3577.

R v Butler, [2011] NJ No 17, 92 WCB (2d) 68.

R v Jarvis, 2019 SCC 10.

R v Lebenfish, 2014 ONCJ 130, 10 CR (7th) 374.

R v Taylor, 2015 ONCJ 449.

Racki v Racki, 2021 NSSC 46.

Rutman v Rabinowitz, 2016 ONSC 5864.

Trout Point Lodge v Handshoe, 2012 NSSC 245.

Trout Point Lodge v Handshoe, 2014 NSSC 62.

Yenovkian v Gulian, 2019 ONSC 7279.

Endnotes

1 Catherine Kerner and Mathias Risse, “Beyond Porn and Discreditation: Epistemic Promises and Perils of Deepfake Technology in Digital Lifeworlds” (2020) 8:1 Moral Philosophy and Politics, pp 81-82.
2 Ibid.
3 Henry Ajder, Giorgio Patrini, Francesco Cavalli, and Lauren Cullen, “The State of Deepfakes: Landscape, Threat, and Impact” (“Sensity Study”) (September 2019), online: Deeptrace, p 1.
4 Andrea Hauser, “Deepfakes Analysis: Amount of Images, Lighting and Angles” (November 2018), online: SCIP.
5 Clayton Purdom, “Deep learning technology is now being used to put Nic Cage in every movie” (January 2018), online: AV Club.
6 Marissa Muller, “Even Steve Buscemi Is Mystified by The Video of His Head on Jennifer Lawrence” (February 2019), online: WMagazine.
7 James Vincent, “Watch Jordan Peele use AI to make Barack Obama deliver PSA about fake news” (April 2018), online: The Verge.
8 Mitchell Clark, “This TikTok Tom Cruise impersonator is using deepfake tech to impressive ends” (February 2021), online: The Verge.
9 @deeptomcruise, “When jokes fly over your head” (April 2021), online: TikTok.
10 Sensity Study, p 2.
11 Ibid.
12 Ibid.
13 Samantha Cole, “People Are Using AI to Create Fake Porn of Their Friends and Classmates” (January 2018), online: Motherboard.
14 BBC News, “Mother ‘used deepfake to frame cheerleading rivals’” (March 2021), online: BBC.
15 Chandell Gosse and Jacquelyn Burkell, “Politics and porn: how news media characterizes problems presented by deepfakes” (2020) 37:5 Critical Studies in Media Communication.
16 Deepnude “See Anyone Nude” (nd), online: Deepnude.
17 Sensity Study, p 8.
18 Ibid.
19 Hauser supra note 4.
20 Ana Javornik et al, “‘What lies behind the filter?’ Uncovering the motivations for using augmented reality (AR) face filters on social media and their effect on well-being” (2022), 128 Computers in Human Behavior, p 1.
21 Ellissa Bain, “How to Whiten Your Teeth on TikTok: New Filter Explained!” (2021), online: HITC; TikTok, “teeth filter” (nd), online: TikTok.
22 Chris DL Hunt, “Conceptualizing Privacy and Elucidating its Importance: Foundational Considerations for the Development of Canada’s Fledgling Privacy Tort” (2011) 37:1 Queen’s LJ, p 179.
23 Ibid., pp 179-180.
24 Ibid., p 181.
25 Ibid., p 182.
26 Fraser Duncan, “Illuminating False Light: Assessing the Case for the False Light Tort in Canada” (2020), 43:2 Dalhousie Law Journal, p 20.
27 Ibid., p 22.
28 Hunt, supra note 22, p 203.
29 Ibid., p 204.
30 Anita L Allen and Erin Mack, “How Privacy Got Its Gender” (1991) Faculty Scholarship at Penn Law, p 444.
31 Ibid., p 447.
32 Ibid., pp 444-445.
33 Cristina Carmody Tilley, “Tort Law Inside Out” (2017) 126:5 The Yale Law Journal, p 1328.
34 Jane Doe 464533 v ND, 2016 ONSC 541 at para 45.
35 Allen and Mack, supra note 30, p 443.
36 Kerner and Risse, supra note 1, pp 105-106.
37 “Criminal Code” (RSC, 1985, c C-46), s 162.1(1).
38 Ibid., s 162.1(2).
39 JB Weinstein and I Dewsbury, “Comment on the meaning of ‘proof beyond a reasonable doubt’” (2007) 5:2 Law, Probability and Risk, pp 172-173. Of course, there is no set percentage of certainty inherent in BARD; these authors suggest somewhere in the 80s or 90s, depending on a number of factors affecting a jury (personal experience, confidence in the system, the presence of current media scares, etc.)
40 R v Butler, [2011] NJ No 17, 92 WCB (2d) 68 para 29.
41 Copyright Act (RSC, 1985, c C-42), ss 3(1), 5(1), 27(1).
42 Ibid., ss 34(1), 35(1), 38.1(1).
43 Trout Point Lodge v Handshoe, 2014 NSSC 62 at paras 18, 20, 26-28; Copyright Act, s 38.1(1)(a).
44 Ibid., s 5(1).
45 Ibid., ss 27(1), 3(1).
46 Ibid., s 29.22(1).
47 Ibid., s 29.21(1).
48 Ibid., s 5(1): “...in every “original”… work”.
49 CCH Canadian Ltd v Law Society of Upper Canada, 2004 SCC 13 at para 16.
50 Ibid.
51 Trout Point Lodge, supra note 43 at paras 18, 20, 26-28.
52 Copyright Act, s 38.1(1)(b).
53 Yenovkian v Gulian, 2019 ONSC 7279 at paras 22, 23, 175, 176.
54 Ibid., at para 170.
55 Jane Doe 464533, supra note 34 at para 47; ES v Shillington, 2021 ABQB 739 at para 72; Trout Point Lodge v Handshoe, 2012 NSSC 245 at para 77.
56 Duncan, supra note 26, p 26.
57 Yenovkian, supra note 53 at para 172.
58 Papp v Stokes et al, 2017 ONSC 2357 at para 64.
59 Chartier v Bibeau, 2022 MBCA 5 at para 48.
60 Hill v Church of Scientology of Toronto, [1995] 2 SCR 1130, 24 OR (3d) 865, at paras 134, 137.
61 Caplan v Atas, 2021 ONSC 670 at para 6.
62 Papp v Stokes et al, supra note 58 at para 64.
63 Rutman v Rabinowitz, 2016 ONSC 5864 at para 133; Hill v Church of Scientology, supra note 60 at para 110; Crookes v Newton, 2011 SCC 47 at para 11.
64 Rutman v Rabinowitz, supra note 63 at para 133.
65 Kiesau v Bantz, 686 NW 2d 164 (Iowa 2004).
66 Trout Point Lodge v Handshoe, supra note 55 at para 58.
67 Ibid., at para 77.
68 Ibid., at paras 81-91.
69 Jane Doe 72511 v NM, 2018 ONSC 6607.
70 Racki v Racki, 2021 NSSC 46.
71 Jane Doe 72511 v NM, supra note 69 at paras 81, 98, 99.
72 Racki, supra note 70 at para 26.
73 Jane Doe 464533, supra note 34 at para 47.
74 ES v Shillington, supra note 55 at para 72.
75 Racki, supra note 70 at para 34.
76 Ryan Mullins, “R v Jarvis: An Argument for a Single Reasonable Expectation of Privacy Framework” (2018) 41:3 Manitoba Law Journal, p 83; R v Jarvis, 2019 SCC 10 at para 68.
77 Ibid., p 95; R v Lebenfish, 2014 ONCJ 130, 10 CR (7th) 374.
78 Mullins, supra note 76, p 95; R v Taylor, 2015 ONCJ 449.
79 Jarvis, supra note 76 at paras 37, 40, 41.
80 Simon Kemp, “TikTok Gains 8 New Users Every Second (And Other Mind-Blowing Stats)” (January 2022), online: Hootsuite. TikTok had been gaining almost 8 new users every second as recently as early 2022.
81 Jarvis, supra note 76 at para 68.
82 Intimate Images and Cyber-protection Act, (SNS 2017, c7) s 6(1).
83 Ibid., ss 6(2)(b), 6(3).
84 Ibid., s 3(f).
85 Candelora v Feser, 2019 NSSC 370; Fraser v Crossman, 2022 NSSC 8.
86 Jones v Tsige, 2012 ONCA 32 at paras 2, 5, 42, 90.
87 Ibid., at para 71.
88 Jared A Mackey, “Privacy and the Canadian Media: Developing the New Tort of ‘Intrusion Upon Seclusion’ with Charter Values” (2012) 2:1 Western Journal of Legal Studies, p 6.
89 Trout Point Lodge, supra note 55 at para 60; Nitsopoulos v Wong, 169 ACWS (3d) 74, 298 DLR (4th) 265 at para 20.
90 Demme v Healthcare Insurance Reciprocal of Canada, 2021 ONSC 2095 at para 2.
91 Powell v Shirley, 2016 ONSC 3577 at para 26.
92 As it would under the false light or PDPF tests discussed above.
93 Boucher v Wal-Mart Canada Corp, 2014 ONCA 419 at para 41.
94 Jane Doe 464533, supra note 34 at para 28.
95 Yenovkian, supra note 53 at paras 176, 184.
96 Boucher, supra note 93 at para 50.
97 While this conflicts with my interpretation of falsehood in the false light analysis, I submit that it really is up for interpretation. From an everyday perspective, one could consider that a deepfake does spread false information. In the legal wording, in line with the false light caselaw, it is not so clear.
98 Jane Doe 464533, supra note 34 at para 32.
99 Yenovkian, supra note 53 at para 177.
100 Boucher, supra note 93 at para 52.
101 Yenovkian, supra note 53 at para 178.
102 Karen Hao, “Deepfake porn is ruining women’s lives. Now the law may finally ban it” (February 2021), online: Technology Review.
103 Ibid.
104 Boucher, supra note 93 at para 44; “Jane Doe 464533”, supra note 34 at para 27.
105 Hao, supra note 102.
106 Caplan v Atas, supra note 61.
107 Ibid., at para 171.
108 Ibid., at para 3.
109 Ibid., at para 7.
110 Ibid., at para 236.