When “Just a Tool” Stops Being a Defence

Over the past three days, a pattern has emerged that is difficult to dismiss as coincidence, mischief, or edge-case misuse.

Investigations reported by reputable outlets, including The Guardian, reveal that Grok, an AI image generation tool, has been used to produce sexualised fake images of real women and girls. Some of those depicted are public figures. Others are private individuals. In all cases, the images are non-consensual, explicit, and harmful.

By Wednesday, Australia’s eSafety Commissioner, Julie Inman Grant, had confirmed that the issue was under investigation locally.

None of this is hypothetical. None of it is novel. And none of it should be surprising.

What is confronting is not that this technology exists. It is that we continue to pretend that the consequences are accidental.

This Is Not “Misuse”. It Is Design Failure.

When sexually explicit deepfakes of women and children can be generated easily, repeatedly, and at scale, the problem is not bad actors alone.

It is a system that permits, enables, and in some cases rewards the outcome.

The idea that generative AI tools are neutral, and that responsibility rests solely with end users, has been stretched beyond credibility. If a product reliably produces harmful outputs in predictable ways, that is no longer a question of user behaviour. It is a question of product governance.

We would not accept this logic elsewhere. A car manufacturer could not dismiss a fatal defect by pointing out that drivers chose to turn the key. A social media platform cannot credibly claim neutrality when its algorithms amplify abuse because outrage drives engagement.

AI should not be treated differently simply because it is complex, novel, or commercially valuable.

Sexualised Deepfakes Are a Form of Abuse

For the people targeted, these images are not “content”. They are violations.

Sexualised deepfakes are increasingly recognised as a form of technology-facilitated abuse. They strip individuals of agency, dignity, and safety. They are used to harass, coerce, humiliate, and silence. In the case of girls and young women, they intersect directly with child sexual abuse material risks, even when images are synthetic.

The harm does not require the image to be believed. It only needs to exist.

Once generated, these images are near-impossible to fully remove. They travel across platforms, jurisdictions, and private channels. The burden of response is placed almost entirely on the victim, who must prove harm, lodge complaints, and relive the violation repeatedly.

This asymmetry matters. It is why “after-the-fact moderation” is not a sufficient safeguard.

Guardrails Are Not Optional Extras

Claims that stronger safeguards would stifle innovation are both tired and misleading. We regulate cars, medicines, and financial systems not because we oppose progress, but because unmanaged risk causes real harm.

AI systems that can generate photorealistic sexualised imagery of real people require meaningful constraints by default. That includes robust training data governance, clear prohibitions on specific outputs, effective pre-generation checks, and friction that makes abuse harder rather than effortless.

Crucially, it also includes accountability when those measures fail.

Voluntary ethics statements and “trust us” assurances are no longer adequate. Nor is the strategy of releasing products quickly and apologising later, once harm has already occurred.

Australia Is Right to Investigate, But It Cannot Stop There

Australia has taken important steps in recognising and responding to online harm. The involvement of the eSafety Commissioner is appropriate and necessary. But investigations alone will not shift behaviour if they are not backed by consequences.

This moment should prompt a broader conversation about duty of care for AI providers operating in Australia, regardless of where they are headquartered. If a tool is accessible here, and harm occurs here, then responsibility should follow.

It should also sharpen the focus on gendered impacts of emerging technologies. The disproportionate targeting of women and girls is not incidental. It reflects longstanding patterns of abuse that technology has amplified, not invented.

This Is About Power, Not Novelty

It is tempting to frame this as a story about a particular tool, a particular company, or a particular personality. That framing is convenient, but incomplete.

The deeper issue is power. Who gets to build systems that shape reality. Who bears the cost when those systems cause harm. And who is expected to quietly absorb the damage in the name of progress.

If AI is to be integrated into society at scale, it must meet the same basic standard we expect of any infrastructure: it must not predictably harm people, and it must be accountable when it does.

Anything less is not innovation. It is abdication.


About the Author

Kim Chandler McDonald

Kim Chandler McDonald is the Co-Founder and CEO of 3 Steps Data, where she drives data and digital governance solutions.

She is the Global VP of CyAN and an award-winning author, storyteller, and advocate for cybersecurity, digital sovereignty, compliance, governance, and end-user empowerment.