Co-authored by: Grace Jolly
KEY TAKEAWAYS:
- The Taylor Swift scandal reiterates the difficulty Australian legislation faces in protecting individuals against AI-generated explicit content.
- “Deepfakes” have harmful effects on individuals.
- Australia needs to reform its legislative framework to ensure that “deepfakes” can be combatted in the future.
What is happening
As Taylor Swift mania sweeps the country, explicit AI-generated images of the well-known popstar have recently surfaced, causing a whirlwind of outrage from fans. This incident has brought to light the challenge of regulating “deepfakes” and their harmful distribution online.
Although very much fake, the exaggerated images of Taylor Swift at football games have gone viral, raking in millions of views. This has raised questions as to the law’s ability to combat AI-generated imagery. The social media platform “X”, formerly known as Twitter, tracked close to 47 million views of the images before the account that posted them was suspended.
To minimise the spread of the images, the site placed a temporary ban not only on the images themselves but also on searches for the word “Swift”. Users who attempted such a search were shown an error message saying “Something went wrong. Try reloading”. The platform later released a statement stating that the spread of non-consensual nudity on the site is “strictly prohibited”. Although the ban made the images more difficult to locate, they are unlikely ever to be entirely removed from the internet.
“Deepfakes”
At the time of writing, there is no uniform definition of what a “deepfake” actually is. It is commonly understood as fabricated material produced using AI technology, which is capable of generating highly realistic images of an individual engaging in activities they did not participate in, often sexual in nature.
Although the images of Taylor Swift may not be the first, or last, deepfakes on the internet, this scandal, and the outrage it caused, may finally prompt change. It has reiterated how unprepared social media platforms are to put effective safeguards in place against AI-generated content.
Australia’s attempt at regulating AI generated images
Questions have been raised about Australia’s legislative protections and capabilities when it comes to AI-generated content. While generating non-consensual explicit AI images is not directly illegal under any Australian law, existing legislation such as the Crimes Act 1900 (NSW) and the Criminal Code Act 1995 (Cth) provides helpful examples of attempts to regulate online harms. However, neither directly addresses the challenges posed by explicit deepfakes.
The first section that has the potential to criminalise this type of imagery is section 91P of the Crimes Act 1900 (NSW). This section makes it an offence to record an intimate image of an individual without their consent. Although it does not speak specifically to AI-generated content, this section was introduced to minimise the effects of “revenge porn”, which refers to the unauthorised sharing of intimate images. Although the provision has been successful in other circumstances, it further highlights the need for new reform to directly criminalise non-consensual explicit AI imagery.
A further example is section 91Q of the Crimes Act 1900 (NSW), which makes it an offence to intentionally distribute an intimate image of another person without their consent, knowing that they did not consent. If these three elements can ultimately be proven, there may, in specific circumstances, be scope to prosecute the distribution of AI-generated content.
Section 474.17 of the Criminal Code Act 1995 (Cth), which is Federal legislation, carries a maximum penalty of 5 years’ imprisonment for a person who uses a carriage service in a way that a reasonable person would regard as menacing, harassing or offensive. A carriage service in this context is defined as ‘a service for carrying communications by means of guided and/or unguided electromagnetic energy’. Given the broad nature of this provision, it has the ability to criminalise the use of AI-generated deepfakes of individuals in Australia.
Given the uncertainty surrounding AI-generated imagery and the rapidly changing pace of technology, it is important that Australian legislation provides a suitable path for regulation.
Where is Australia headed?
The Australian Government released an Interim Response to the 2023 “Safe and Responsible AI in Australia” consultation, which explored what governance mechanisms could be put in place to ensure AI is developed in a safe manner.
In its Interim Response, the Government acknowledged that current Australian laws do not adequately address the risks associated with AI-generated content; however, further information has not been released as to how the Government will regulate the use of AI. It has instead indicated that it will consult with industry and develop guidance based on principles such as a risk-based approach, aiming for a balanced and proportionate position.
ABOUT GRACE JOLLY:
Grace joined the Coutts team in August 2023 as a Paralegal working in the Criminal & Family Law teams, from our Camden office.
Grace is currently in her second year studying a Bachelor of Laws and Business at the University of Wollongong.
She is passionate about the law and looks forward to learning new things. At the completion of her degree, she looks forward to practising as a Lawyer at Coutts.
For further information please don’t hesitate to contact:
Grace Jolly
Paralegal
info@couttslegal.com.au
1300 268 887
Contact Coutts today.
This blog is merely general and non-specific information on the subject matter and is not and should not be considered or relied on as legal advice. Coutts is not responsible for any cost, expense, loss or liability whatsoever in relation to this blog, including any reliance on, or use or application of, this blog by you.