Deepfakes, Digital Trust, and the Urgent Case for Safety by Design

Last week’s ruling in Australia marks a first. Anthony Rotondo was fined $343,500 for creating and distributing deepfake pornography of prominent women, in the first case of its kind under the Online Safety Act. The ruling sends a clear message: image-based abuse is not a prank. It is harm.
But let’s be honest – the headlines weren’t written because of the fine alone. They were written because the women targeted are public figures. And while that visibility matters, it risks obscuring a harsher truth: hundreds of thousands of women, most of them not famous, are also victims of deepfake abuse. For them, the road to justice is almost non-existent. Complaints are ignored. Legal options are costly. Platforms move slowly, if at all. And yet the damage – to reputations, careers, mental health, and basic safety – is just as devastating.
If this ruling is to mean anything, it must be more than a one-off punishment. It must become a precedent that extends protection to every woman, not just those whose names make headlines.
Deepfakes and the Erosion of Trust
Deepfakes aren’t new, but their proliferation is accelerating. And they chip away at one of the most fragile commodities we have online: trust. When you can’t be sure whether an image or video is genuine, everything begins to wobble – from journalism to justice, from elections to everyday relationships.
Women are disproportionately targeted, especially through sexualised deepfakes. For public figures, this means humiliation on a global scale. For private citizens, it often means living in silence with images circulating through schools, workplaces, or local communities. Fame at least brings visibility; obscurity offers no protection at all.
At its core, this is a gendered abuse of technology. It isn’t about free expression. It’s about silencing, shaming, and undermining women’s ability to speak, work, or even feel safe in their own lives.
From Cybersecurity to Human Security
We usually talk about cybersecurity in terms of data: keeping systems patched, protecting personal details, ensuring confidentiality and integrity. But deepfake abuse reminds us that safety in the digital world can’t stop at data.
Cybersecurity and trust-and-safety work are overlapping fields. When someone’s identity is co-opted, manipulated, or distributed against their will, that is not simply “content.” It’s a direct attack on dignity. Protecting the bytes without protecting the person leaves us with a hollow definition of security.
We need to see deepfake abuse for what it is: part of the broader threat landscape that sits at the intersection of technology, safety, and human rights.
Why This Matters for Businesses and Policymakers
For regulators, this ruling is a chance to show that the Online Safety Act has teeth. But it can’t stop here. Laws must extend protections to every victim, not just the high-profile few. Otherwise we risk a two-tier system where only those with media attention see justice.
For platforms, the lesson is equally stark. Detection and takedown systems cannot be reserved for celebrities. Safety by design means every user should have access to meaningful reporting pathways, fast removal processes, and tools to protect themselves.
And for businesses, the risks are broader than many realise. Deepfakes are already being weaponised against executives, board members, and employees. They represent reputational risk, legal liability, and a direct attack on brand trust.
The Global Picture
Australia is not alone in grappling with this. The UK has begun criminalising non-consensual deepfake pornography. The EU is debating regulation of AI-generated media. In the US, state laws vary wildly, leaving a patchwork of inconsistent protections.
The problem, of course, is that deepfakes don’t respect borders. A video created in one country can be distributed globally in seconds. And if we can’t rely on the authenticity of digital evidence – in court, in news, in elections – then the very foundations of public trust start to crumble.
Fragmented regulation won’t cut it. This is a global problem demanding coordinated solutions.
Prevention, Not Just Punishment
The fine against Rotondo is significant. But fines are reactive. They punish after the harm is done. By then, reputations are shattered, trust is corroded, and victims are left picking up the pieces.
What’s needed is prevention by design. That means watermarking AI-generated content. It means building and deploying better detection tools. It means giving victims rapid takedown options that actually work. And it means placing ethical obligations on technologists and platforms, not just legal obligations on the victims to prove their harm.
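To make the “better detection tools” point concrete, here is a minimal, illustrative sketch of perceptual-hash matching, the kind of near-duplicate detection that underpins many image takedown pipelines. It uses the open-source imagehash library; the file names and the distance threshold are hypothetical, and this is a conceptual example rather than any platform’s actual system.

```python
# Illustrative sketch only: perceptual-hash matching for rapid takedown.
# Assumes the open-source "imagehash" and Pillow libraries; file names and
# the distance threshold below are hypothetical example values.
from PIL import Image
import imagehash

# Hash of an image a victim has reported and asked to have removed.
reported_hash = imagehash.phash(Image.open("reported_image.jpg"))

# Hash of a newly uploaded image the platform wants to screen.
upload_hash = imagehash.phash(Image.open("new_upload.jpg"))

# The difference between two perceptual hashes is a Hamming distance:
# small distances indicate near-duplicates, even after resizing or
# light re-encoding. The cut-off of 8 is an assumed example, not a standard.
if reported_hash - upload_hash <= 8:
    print("Likely match with a reported image: flag for rapid review and removal")
else:
    print("No match found")
```

The point of the sketch is not the specific library but the design choice: detection and matching can run at upload time, before an image spreads, rather than waiting for a victim to discover it and file a complaint.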
Punishment alone will never be enough. If we want to safeguard digital trust, prevention must be baked into the systems we build.
Choosing the Future of Digital Trust
Deepfakes aren’t going away. The technology has legitimate uses in creativity, satire, even education. But without robust safeguards, it will continue to be used as a weapon of abuse – one that disproportionately targets women and undermines the very possibility of trust online.
Australia’s ruling is a milestone. But milestones only matter if they lead somewhere. The true test will be whether this precedent expands into real protections for the countless women whose stories never make headlines.
Because the question isn’t whether we can fine our way out of this problem. The question is whether we can design a digital ecosystem where dignity and trust are non-negotiable.
About the Author

Kim Chandler McDonald is the Co-Founder and CEO of 3 Steps Data, driving data and digital governance solutions.
She is the Global VP of CyAN, an award-winning author, storyteller, and advocate for cybersecurity, digital sovereignty, compliance, governance, and end-user empowerment.