Criminal Code Amendment (Deepfake Sexual Material) Bill 2024 - 1 July 2024

The Status of Women Report Card released in March this year contains some sobering statistics, including that 57 per cent of women who recently experienced sexual harassment experienced it electronically, such as online or on a phone. The Criminal Code Amendment (Deepfake Sexual Material) Bill 2024 is an important bill. These amendments meet a need to strengthen existing offences dealing with the unlawful use of private sexual material. The amendments create offences that prohibit both the creation and the non-consensual sharing of sexually explicit material depicting adults online; children are dealt with under separate legislation. The bill goes further and also creates an offence in relation to images that have been created or altered with technology, including AI: what we refer to as deepfakes. It's currently not illegal to create a fake, AI-generated pornographic image.

Obviously, as technology advances at a rapid pace, it becomes easier and quicker to generate sexually explicit material and images. The technology is so sophisticated that it has become difficult to detect whether an image is real, and there are several issues here. AI enables an image to be manipulated so that people can be depicted in a highly convincing way, physically misrepresenting them and making it look like they're doing something they have not done. Critically, whether the images are real or false, the key issue is consent, or the lack of it, and the devastating impact this can have on the person depicted and on their personal and professional lives. The purpose of creating those images is almost always to deliberately cause harm: to intimidate, shame, bully and frighten people.

I'm very supportive of these amendments. This bill is the criminal justice response to the unlawful use of non-consensual sexual images, but it's only one part of a wider reform agenda for online safety. And that reform agenda is itself only one part of the national approach required to address the worrying issue of intimidation and violence, particularly against women and children. In the context of this bill, we should be very concerned about the broader, long-term cultural and social impact of image-based abuse in our communities, particularly on women. The sharing of intimate images without consent, whether deepfakes or otherwise, largely reflects a concerning culture of harassment and misogyny. AI and technology are allowing perpetrators to amplify that culture further, with devastating reach.

While this bill rightly focuses on the unlawful use of private sexual material, victims of deepfakes can also experience an intense sense of violation even when the deepfakes are not sexual in nature. Having your identity co-opted, being made to appear to say or do things you never would, can undermine your sense of self and change how others see you.

I've already discussed the pressing need to address deepfakes to preserve our democracy. Effective use of deepfakes by bad actors can create confusion and mistrust that will make it difficult for our democratic and economic institutions to function. Deepfakes present huge risks in relation to the spread of misinformation both in elections and, importantly, in scams.

There's a significant asymmetry between the ease and low cost with which misleading AI-generated content can be created and disseminated, and the time and expense of having it removed. That is true when the content is of a sexual nature, and it is especially true when it is not. This is a huge issue, wider than the scope of this bill, and I urge the government to address it urgently. We need to find creative ways to redress this imbalance between the ease of creation and the cost of removal.

Fundamentally, we require a massive cultural change that has many parts to it. Legislation plays an important role in defining acceptable norms, but cultural change will only happen if it's also addressed at an educational level with young people. I welcomed the announcement on 17 June of a new phase of the Stop it at the Start campaign, which aims to give parents and caregivers the tools to challenge the echo chamber of disrespect that children are often immersed in online and to help young people navigate harmful content. The online normalisation of harmful stereotypes and behaviours against girls and women is having a particularly alarming impact on adolescent boys. To achieve cultural change we must address this on the actual platforms, apps and devices where that material is readily available and where algorithms are targeting young men from an early age. I note the current Statutory Review of the Online Safety Act 2021, which is considering, among other things, a minimum age for access to social media and the prevention of online harms, including online and technology-facilitated abuse.

My electorate of Curtin has provided the minister with a community submission, based on an online safety survey completed by 600 constituents that tested their support for stronger regulation. Seventy-five per cent of respondents want the Online Safety Act to regulate technology-facilitated abuse, and 86 per cent want the act to address emerging technology risks, including placing responsibility and expectations on platforms to protect users. This is a recurring theme and challenge: how can service providers effectively detect and remove offensive content, including, in the context of this bill, deepfake content, and how do we effectively require them to do so?

We have some measures in place. The Basic Online Safety Expectations, the BOSE, are determined under the Online Safety Act and provide a framework of expectations for providers of social media services, mobile phone and other messaging services, and apps. Those providers are expected to ensure we can use their platforms and online services safely, minimising unlawful and harmful material and activity. The government amended the BOSE in May this year to include an expectation that providers proactively minimise 'the extent to which online services are used to produce or facilitate unlawful or harmful material, including deepfake non-consensual intimate images'. This is a good start, but testing and policing it is a challenge. Serving a notice on providers that requires them to explain how they're doing this is not enough. The BOSE fall short of requiring platforms and service providers to develop ways of independently detecting harmful content and deepfake images.

I look forward to the report of the Statutory Review of the Online Safety Act, as well as the findings of the joint select committee on the impact of social media in Australia. I'll be advocating for sharper instruments and stronger solutions.

In conclusion, this bill is a very welcome criminal justice response to the unlawful dissemination of non-consensual images, and I support these amendments. The distribution of AI-generated deepfake images is one part of the disturbing challenge we face in Australia in addressing harm, particularly against women and children. Cultural change will require many things, but, most certainly, we need to embed early, targeted education. We need to look for creative solutions to minimise harm. We also need to require service providers to detect and remove harmful material, including deepfake images. I commend this bill to the House.
