eSafety Commissioner Julie Inman Grant has disclosed that sexual deepfakes have been increasing more than fivefold annually on average since 2019. Deepfakes are manipulated videos and images in which a person's face or body has been digitally altered using software or AI to make them appear to be someone else.
During an inquiry hearing on July 23, Ms. Inman Grant stated, “There’s compelling and worrying data that explicit deepfakes have been growing on the internet by as much as 550 percent year on year since 2019. Pornographic videos account for 98 percent of the current online deepfake material, with 99 percent of that imagery depicting women and girls.”
The Commissioner highlighted the increase in deepfake image-based abuse, noting its prevalence and distressing impact on victim-survivors. She also discussed the challenges faced by law enforcement in addressing sexual deepfakes.
According to Ms. Inman Grant, AI-generated deepfakes are overwhelming investigators and support hotlines because the material can be created and shared faster than it can be reported, assessed, and analyzed.
Ms. Inman Grant’s comments coincide with the federal Labor government’s push for the passage of a new bill targeting the dissemination of non-consensual sexual deepfake material. The legislation proposes penalties of up to six years in prison for sharing such content, rising to a maximum of seven years for those who both create and share it.
However, the bill does not penalize the creation of sexual deepfakes if the content is not shared. The legislation has already cleared the lower house and awaits approval from the Senate to become law.
eSafety Supports Sexual Deepfake Legislation
At the hearing, Ms. Inman Grant expressed her support for the legislation, believing it would strengthen her agency’s efforts to combat abusive materials online. She emphasized the importance of criminalizing such actions as a deterrent and a reflection of societal disapproval towards such behavior.
Under the existing Online Safety Act, eSafety is empowered to compel tech companies to remove abusive content. With advancements in AI technology, the regulator has expanded its efforts to address synthetic materials and deepfakes across all its complaint schemes.
Ms. Inman Grant stated, “We have received deepfake reports across all our schemes except the adult cyber abuse scheme. Criminalizing deepfake offenses could serve as a deterrent and punishment for perpetrators, although the mechanics of enforcement with respect to companies promoting sexual deepfake apps remain uncertain.”