Used, Not Consulted: When AI Trains on Our Work Without Consent

CyAN Context

At CyAN, we often talk about trust, governance, and transparency as
pillars of a secure digital future. But what happens when those
principles are ignored, not in a breach or a ransomware attack, but in
the slow, quiet erosion of creator rights?

As a cybersecurity professional and a published author, I’ve found
myself caught at the centre of a disturbing overlap. The very
technologies we celebrate for their potential are being built on
practices that, if applied to personal data, would be called negligent or
even unlawful. But because it is “just” books, creative work, it gets
waved through.

I’ve chosen to share the following reflection across CyAN, Medium,
and LinkedIn to encourage broader engagement across industries and
disciplines. This is a conversation that sits squarely within CyAN’s
mission, because digital safety does not stop at the firewall.

The timing is fitting. While writers are gathering to celebrate craft
and creativity, I’ve been forced to reckon with the quiet
disappearance of my own work into the machinery of AI. Like
many writers, I recently discovered that one of my books had been
used to train an AI model developed by Meta. No permission sought.
No credit given. No compensation offered.

And when I tried to read a news article about it? I hit a paywall.


The Double Standard

To be clear, I’m not bemoaning the paywall. Quality journalism is
worth paying for. And even in cases where a publication like The
Guardian doesn’t enforce one, I still support it, which is why I pay for
a subscription. But the double standard is hard to ignore. AI tools are
built to avoid infringing on paywalled content, yet when it comes to
books, essays, and other creative works, the same respect doesn’t
seem to apply.
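Part of the asymmetry is mechanical. On the web, publishers have at least a crude opt-out: a robots.txt file that well-behaved crawlers are expected to consult before fetching pages, and many sites now use it to block AI crawlers by name. Here is a minimal sketch of that check in Python, using the standard library's urllib.robotparser; the site URL is a placeholder, and GPTBot and CCBot are real crawler user-agents used purely as examples. A published book carries no equivalent machine-readable signal.

```python
from urllib import robotparser

# A well-behaved crawler consults robots.txt before fetching pages.
# example.com is a placeholder; point this at a real site to test.
rp = robotparser.RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")
rp.read()  # fetch and parse the robots.txt file

# GPTBot (OpenAI) and CCBot (Common Crawl) are crawler user-agents
# that many publishers now block explicitly.
for agent in ("GPTBot", "CCBot", "*"):
    ok = rp.can_fetch(agent, "https://www.example.com/some-article")
    print(f"{agent}: {'allowed' if ok else 'blocked'}")
```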


Imagine you’ve spent weeks preparing a client report or policy paper, pouring in your voice, your nuance, your unpaid refinement, and then it’s scraped, digested, and mimicked by a machine doing your job in seconds. No consent, no attribution. That’s what it feels like when creative work is quietly repurposed for AI training.

The Scope of the Issue

Most AI developers say they only use publicly accessible content and
avoid subscription sites. But that principle seems to vanish when it
comes to books. My work, carefully edited, published, priced, and
protected, has now been absorbed by a machine that will use it to
generate content for others. Probably for free.

The message is clear: your work is yours, right up until it’s useful to
someone else’s algorithm.

If this were customer data, there would be outrage. But because it’s
creative work, too many people wave it through, ignoring the same
principles of consent, provenance, and data control that are
fundamental to digital safety and trust.

This isn’t about being anti-AI. I’m a tech founder. I work at the
intersection of innovation, policy, and creativity. I use AI tools myself
to draft, organise, and visualise ideas. But I use them with intention,
and always with respect for where the raw material comes from.
That’s the line we all need to draw: not whether we use AI, but how.

Why It Matters to Everyone

You might be thinking, “I’m not an author, so why should I care?” Because this isn’t just about books. It’s about ownership. It’s about the value of what you create, whether that’s a blog post, a training manual, a school resource, or a workplace presentation. If we normalise scraping without consent, we’re handing over more than words; we’re giving up agency.


And this isn’t just about me. At a time when writers are gathering to
celebrate craft and storytelling at the Sydney Writers’ Festival, it’s
worth asking what future is left for our work if we’re not even told
when it’s being taken.


I searched a dataset of scraped books and found more than 200 titles
had been taken from just 23 members of my writers’ group, a group
that includes some of Australia’s most celebrated, bestselling, and
internationally recognised female authors.


These are women whose work has shaped classrooms, award lists,
bestseller charts, and the cultural conversation, and their words were
taken without permission, credit, or compensation.
That’s not an accident. That’s a system.

Global Backlash and Legal Challenges

Globally, the backlash is growing, but it is patchy. In the US, lawsuits from authors such as George R.R. Martin and Ta-Nehisi Coates, and from The New York Times, are now consolidated in federal court. Ziff Davis, which publishes IGN and PCMag, has also launched a case of its own. But
while litigation is gaining pace, there is still no federal legislation in
place to regulate how copyrighted works can be used to train
generative AI systems.


Just days after the US Copyright Office released a report questioning
whether large-scale AI training on copyrighted works qualifies as fair
use, the head of that office, Shira Perlmutter, was abruptly dismissed
by President Trump, alongside Librarian of Congress Dr Carla Hayden.
Both were widely respected for their independence and expertise.
Their removal sends a clear signal: even modest attempts to protect
creators can be swept aside when they challenge the interests of
powerful tech players.


In the UK, the government initially floated a controversial plan to
exempt AI developers from needing licences to mine copyrighted
works. It was shelved after intense pushback from author
organisations, publishers, and the creative community. A collective
licensing scheme is due to launch in 2025, but the long-term strategy
remains unclear. More recently, over 400 prominent British musicians, writers, and artists, including Elton John, Paul McCartney, and Dua Lipa, signed an open letter to Prime Minister Keir Starmer calling for
urgent reform of the UK’s copyright laws. Sir Elton went further in a
BBC interview, calling the government’s approach “thievery on a high
scale” and warning that it would “rob young people of their legacy
and their income.” Their message was unequivocal: the unchecked use
of creative work to train AI systems is a direct threat to the future of
Britain’s creative industries.


In Europe, the AI Act, passed in 2024, was a step forward. It requires AI
providers to disclose whether copyrighted material was used in
training, a win for transparency. But critics argue the Act still doesn’t
guarantee compensation or enforcement. It tells us what is being
used, but not whether it should be.

The Situation in Australia

And in Australia? There is silence. The Australian Society of Authors
has briefed policymakers and called for reform. Individual writers,
including Holden Sheppard and Tracey Spicer, have spoken out. But
the federal government has yet to propose a roadmap, initiate
consultation, or clarify whether it views scraping as acceptable.

A Call to Action

We can do better. We must. Innovation doesn’t need to come at the
cost of creators. We need systems that allow for transparency,
licensing, and consent, and policies that protect creative labour from
being reduced to invisible infrastructure for someone else’s product.
If you’re reading this and thinking, “Surely they didn’t use my work,”
don’t be so sure.
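
Checking is more tractable than it sounds. Where a dataset’s title and author listings have been made public, as with the one I searched, you can match them against a list of names. A minimal sketch, assuming a hypothetical metadata.csv with 'title' and 'author' columns; the file name, column names, and author names below are illustrative, not drawn from any specific dataset.

```python
import csv

# Names to search for; placeholders, not an actual author list.
AUTHORS = {"jane example", "another author"}

matches = []
with open("metadata.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):  # assumes 'title' and 'author' columns
        if row["author"].strip().lower() in AUTHORS:
            matches.append((row["author"], row["title"]))

print(f"{len(matches)} matched title(s)")
for author, title in sorted(matches):
    print(f"  {author}: {title}")
```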


Check. Ask. Speak up. And whether you’re a creator or not, care.
Because this digital world we’re building belongs to all of us. The
decisions we make now will shape how future generations share,
create, and connect.


Creators aren’t just content. We are the culture. And we deserve to be
at the table, not just on the training menu.

Final Note

This article is part of a broader campaign to highlight the need for
ethical, transparent, and rights-respecting AI development. You’ll also
find it shared via Medium and LinkedIn, where I welcome public
discussion from creatives, technologists, and policymakers alike.

If we truly believe in building a secure and inclusive digital ecosystem,
then consent, transparency, and respect must apply to all forms of
data, including the cultural and creative expressions that define who
we are.


About the Author:

Kim Chandler McDonald is the Co-Founder and CEO of 3 Steps Data, driving data/digital governance solutions.
She is the Global VP of CyAN, an award-winning author, storyteller, and advocate for cybersecurity, digital sovereignty, compliance, governance, and end-user empowerment.