AI – Deepfakes



By Estelle Chambers
26 April 2024

What is AI?

Artificial intelligence (commonly referred to as AI) is a machine’s ability to simulate human intelligence. In effect, it enables machines to perform functions and tasks that typically require human cognitive abilities. AI has developed at an exponential rate in recent years, and its capabilities now appear almost limitless. The forms of AI best known to non-expert users are programs such as ChatGPT and Google Gemini, both of which are easily accessible to the general public and can perform a multitude of functions.


Many users of such programs use AI for creative purposes. One particular function AI can perform is the creation of images: users input a request, which can be extremely detailed, and the AI generates an image depicting it. On the face of it this does not appear to be a problem, but the breadth of what AI can produce raises the question of how far a user may go when describing an image for AI to create.

What are deepfakes?

This leads the discussion on to ‘deepfakes’. Deepfakes are photos or videos of a person in which their face or body has been digitally altered so that they appear to be someone else. They are created by AI at the request of a user and are typically used maliciously or to spread false information. Deepfakes can be used for non-sexual purposes; however, many are of a pornographic nature, which is why the term ‘deepfake’ often carries negative connotations.


The adequacy of the law in relation to deepfakes has been a topic of conversation for some time, but it was brought to greater public attention earlier this year when deepfakes of Taylor Swift were created and published across social media. Those deepfakes were created by AI at the request of one particular user, yet they were seen by millions of people as they quickly spread across sites such as X (formerly Twitter) and Reddit.


Whilst deepfakes were by no means unheard of at the time of the Taylor Swift scandal, it certainly brought the issue to the minds of many who perhaps had no concerns about deepfakes before. The controversy has also prompted a more thorough discussion of how deepfakes affect the public in general. The use of deepfakes has been increasing in recent years: one website that virtually strips women naked received 38 million hits in the first eight months of 2021. These developments have in turn prompted a much wider debate as to whether the law is adequate to deal with deepfakes.

The current law in England & Wales

The law has recently acknowledged the illegality of deepfakes through an amendment to the Sexual Offences Act 2003. The Act was amended by the Online Safety Act 2023 to account for the sharing of deepfakes and now creates an offence of intentionally sharing, or threatening to share, a photograph or film which shows, or appears to show, another person in an intimate state.1 A person who shares such a photograph or film commits an offence if:

- they do so without the other person's consent (and with no reasonable belief in consent),

- they do so with intent to cause alarm, distress or humiliation to that person, or

- they do so for the purpose of obtaining sexual gratification (either for themselves or for another person).

The offence is triable either way and is punishable by up to two years’ imprisonment if sentenced in the Crown Court.

This amendment brings deepfakes within the Act through the phrase ‘or appears to show’ and marks a change in the way deepfakes are being addressed by the legal system. Of course, deepfakes only enter the criminal realm when they are of a sexual nature, as outlined in the Act. Deepfakes created for other purposes, such as political parodies, are not criminalised. The current focus of the law is on sharing, not creating, deepfakes; creating a deepfake without the intent to share it would not necessarily be illegal under the Sexual Offences Act 2003.


Is the current law adequate?


Whether the current law is adequate to deal with deepfakes is difficult to judge, given that the amendments are so new. Until cases under the newly amended legislation pass through the courts, it is hard to determine whether they are achieving their aims. One point on which comment can perhaps be made at present is the difficulty posed by the absence of any law preventing a person from creating deepfakes.


Issues such as these often require mechanisms which stop the act at its source. For example, it is a crime to share indecent images of children, but it is also a crime to create, possess and advertise such images. This reflects the seriousness of the offence and prohibits people from becoming involved in any way with images of that nature. This raises the argument as to whether the law should treat deepfakes in the same way. If so, it would be an offence not only to share a deepfake but also to create, possess or advertise one. Were the law amended in this way, the issues created by deepfakes would be tackled at their source, namely the creation, and every subsequent act in relation to the deepfake would also be criminal.


But this would create further difficulties which, frankly, technology may not yet be able to overcome. If the creation of deepfakes were made criminal, prosecuting such cases would be very difficult, because identifying where a deepfake originated is almost impossible, as the Taylor Swift deepfakes made apparent. Before the law could be amended, further research would likely be needed to determine how an offence of creating a deepfake could be proved. Those difficulties arguably do not arise in relation to possessing and advertising deepfakes; it seems plausible that the law could cover these elements in the same way it does for indecent images. For now, however, the law deals only with the sharing of deepfakes, limiting the issues which may arise in a case of this nature.

Further offences created by the Online Safety Act 2023

The inclusion of deepfakes in relation to the offence of sharing or threatening to share intimate photographs or films is not the only amendment made by the Online Safety Act 2023. Other offences contained within the Act are:

- Section 179: False Communications

- Section 183: Sending or showing flashing images electronically

- Section 184: Encouraging or assisting serious self-harm

- Section 187: Sending photographs or films of genitals (cyber-flashing)

A short summary of the new offences outlined above follows:

False Communications
This offence is not entirely ‘new’ but replaces similar offences under the Malicious Communications Act 1988 which have now been repealed. A person commits this offence if they send a message conveying information they know to be false with the intention of causing non-trivial psychological or physical harm. The offence requires the sender to know that the information conveyed is false and to be aware that sending the message is likely to inflict harm.

Sending or Showing Flashing Images Electronically
This section creates an offence of sending communications which include flashing images intended to cause harm to persons with epilepsy. The creation of this offence stems from recommendations of the Law Commission concerning the severity of the harm caused by so-called epilepsy trolling.

This offence is triable either way and is liable to a maximum sentence of five years imprisonment if sentenced by the Crown Court.

Encouraging or assisting serious self-harm
This new offence is similar to the existing offence of encouraging or assisting suicide under the Suicide Act 1961, but it covers acts encouraging or assisting self-harm amounting to grievous bodily harm. It is irrelevant whether such harm was in fact caused; the offender need only do an act capable of encouraging or assisting the serious self-harm of another person, with the required intent.

This offence is triable either way and is liable to a maximum sentence of five years imprisonment if sentenced by the Crown Court.

Sending photographs or films of genitals (cyber-flashing)
This new offence covers the sending of unsolicited sexual images. It applies where a person sends or gives a photograph or film of any person's genitals to another person, and the offence is made out if the sender:
- intends that the other person will see the genitals and be caused alarm, distress or humiliation; or
- sends the photograph/film for the purpose of obtaining sexual gratification and is reckless as to whether the other person will be caused alarm, distress or humiliation.

This offence is triable either way and is liable to a maximum sentence of two years imprisonment if sentenced by the Crown Court.

1. Section 66B, Sexual Offences Act 2003