
Cyber (In)Securities – Issue 151 – Snapshot Edition


You can download this edition using the download icon at the bottom. To enlarge the view, click the fullscreen icon on the bottom right. All article titles inside the flipbook are clickable links.

Cyber (In)Securities – Issue 150 – Snapshot Edition



Cyber (In)Securities – Issue 149


AI Can’t Fix What’s Fundamentally Broken by Michael McDonald


AI has potential. But it is not a magic wand.

Not a Good Look, AI: What Happens to Privacy When Glasses Get Smart?


They look just like a regular pair of Ray-Bans. But behind the dark lenses? Cameras. Microphones. AI-powered assistants. All quietly recording, analysing, and storing data, sometimes even in real time. And unless you’ve signed up for a starring role in someone else’s life-capture experiment, you probably

Used, Not Consulted: When AI Trains on Our Work Without Consent


CyAN Context

At CyAN, we often talk about trust, governance, and transparency as
pillars of a secure digital future. But what happens when those
principles are ignored, not in a breach or a ransomware attack, but in
the slow, quiet erosion of creator rights?

As a cybersecurity professional and a published author, I’ve found
myself caught at the centre of a disturbing overlap. The very
technologies we celebrate for their potential are being built on
practices that, if applied to personal data, would be called negligent or
even unlawful. But because it is “just” books and creative work, it gets
waved through.

I’ve chosen to share the following reflection across CyAN, Medium,
and LinkedIn to encourage broader engagement across industries and
disciplines. This is a conversation that sits squarely within CyAN’s
mission, because digital safety does not stop at the firewall.

The timing is fitting. While writers are gathering to celebrate craft
and creativity, I’ve been forced to reckon with the quiet
disappearance of my own work into the machinery of AI. Like
many writers, I recently discovered that one of my books had been
used to train an AI model developed by Meta. No permission sought.
No credit given. No compensation offered.

And when I tried to read a news article about it? I hit a paywall.

I don’t resent that. I pay for journalism. I support The Guardian, even
when it doesn’t require it. But the double standard is hard to ignore:
AI tools avoid scraping paywalled content, yet books, edited, priced,
and protected, are treated like free training data.

The Double Standard

AI tools are built to avoid infringing on paywalled content, yet when
it comes to books, essays, and other creative works, the same respect
doesn’t seem to apply.


Imagine you’ve spent weeks preparing a client report or policy paper,
your voice, your nuance, your unpaid refinement, and then it’s
scraped, digested, and mimicked by a machine doing your job in
seconds. No consent, no attribution. That’s what it feels like when
creative work is quietly repurposed for AI training.

The Scope of the Issue

Most AI developers say they only use publicly accessible content and
avoid subscription sites. But that principle seems to vanish when it
comes to books. My work, carefully edited, published, priced, and
protected, has now been absorbed by a machine that will use it to
generate content for others. Probably for free.

The message is clear: your work is yours, right up until it’s useful to
someone else’s algorithm.

If this were customer data, there would be outrage. But because it’s
creative work, too many people wave it through, ignoring the same
principles of consent, provenance, and data control that are
fundamental to digital safety and trust.

This isn’t about being anti-AI. I’m a tech founder. I work at the
intersection of innovation, policy, and creativity. I use AI tools myself
to draft, organise, and visualise ideas. But I use them with intention,
and always with respect for where the raw material comes from.
That’s the line we all need to draw, not whether we use AI, but how.

Why It Matters to Everyone

You might be thinking, “I’m not an author, why should I care?”
Because this isn’t just about books. It’s about ownership. It’s about the
value of what you create, whether that’s a blog post, a training
manual, a school resource, or a workplace presentation. If we
normalise scraping without consent, we’re handing over more than
words, we’re giving up agency.


And this isn’t just about me. At a time when writers are gathering to
celebrate craft and storytelling at the Sydney Writers’ Festival, it’s
worth asking what future is left for our work if we’re not even told
when it’s being taken.


I searched a dataset of scraped books and found more than 200 titles
had been taken from just 23 members of my writers’ group, a group
that includes some of Australia’s most celebrated, bestselling, and
internationally recognised female authors.


These are women whose work has shaped classrooms, award lists,
bestseller charts, and the cultural conversation, and their words were
taken without permission, credit, or compensation.
That’s not an accident. That’s a system.

Global Backlash and Legal Challenges

Globally, the backlash is growing, but patchy. In the US, lawsuits from
authors like George R.R. Martin, Ta-Nehisi Coates and The New York
Times are now consolidated in federal court. Ziff Davis, which
publishes IGN and PCMag, has also launched a case of its own. But
while litigation is gaining pace, there is still no federal legislation in
place to regulate how copyrighted works can be used to train
generative AI systems.


Just days after the US Copyright Office released a report questioning
whether large-scale AI training on copyrighted works qualifies as fair
use, the head of that office, Shira Perlmutter, was abruptly dismissed
by President Trump, alongside Librarian of Congress Dr Carla Hayden.
Both were widely respected for their independence and expertise.
Their removal sends a clear signal: even modest attempts to protect
creators can be swept aside when they challenge the interests of
powerful tech players.


In the UK, the government initially floated a controversial plan to
exempt AI developers from needing licences to mine copyrighted
works. It was shelved after intense pushback from author
organisations, publishers, and the creative community. A collective
licensing scheme is due to launch in 2025, but the long-term strategy
remains unclear. More recently, over 400 prominent British musicians,
writers, and artists, including Elton John, Paul McCartney, and Dua
Lipa, signed an open letter to Prime Minister Keir Starmer calling for
urgent reform of the UK’s copyright laws. Sir Elton went further in a
BBC interview, calling the government’s approach “thievery on a high
scale” and warning that it would “rob young people of their legacy
and their income.” Their message was unequivocal: the unchecked use
of creative work to train AI systems is a direct threat to the future of
Britain’s creative industries.


In Europe, the AI Act passed in 2024 was a step forward. It requires AI
providers to disclose whether copyrighted material was used in
training, a win for transparency. But critics argue the Act still doesn’t
guarantee compensation or enforcement. It tells us what is being
used, but not whether it should be.

The Situation in Australia

And in Australia? There is silence. The Australian Society of Authors
has briefed policymakers and called for reform. Individual writers,
including Holden Sheppard and Tracey Spicer, have spoken out. But
the federal government has yet to propose a roadmap, initiate
consultation, or clarify whether it views scraping as acceptable.

A Call to Action

We can do better. We must. Innovation doesn’t need to come at the
cost of creators. We need systems that allow for transparency,
licensing, and consent, and policies that protect creative labour from
being reduced to invisible infrastructure for someone else’s product.
If you’re reading this and thinking, “Surely they didn’t use my work,”
don’t be so sure.


Check. Ask. Speak up. And whether you’re a creator or not, care.
Because this digital world we’re building belongs to all of us. The
decisions we make now will shape how future generations share,
create, and connect.


Creators aren’t just content. We are the culture. And we deserve to be
at the table, not just on the training menu.

Final Note

This article is part of a broader campaign to highlight the need for
ethical, transparent, and rights-respecting AI development. You’ll also
find it shared via Medium and LinkedIn, where I welcome public
discussion from creatives, technologists, and policymakers alike.

If we truly believe in building a secure and inclusive digital ecosystem,
then consent, transparency, and respect must apply to all forms of
data, including the cultural and creative expressions that define who
we are.


About the Author:

Kim Chandler McDonald is the Co-Founder and CEO of 3 Steps Data, driving data/digital governance solutions.
She is the Global VP of CyAN, an award-winning author, storyteller, and advocate for cybersecurity, digital sovereignty, compliance, governance, and end-user empowerment.

Cyber (In)Securities – Issue 144

News

  1. Quantum computer threat spurring quiet overhaul of internet security
    Cyberscoop – Greg Otto
  2. Pro-Russia hacktivists bombard Dutch public orgs with DDoS attacks
    BleepingComputer – Bill Toulas
  3. Dems look to close the barn door after top DOGE dog has bolted
    The Register – Brandon Vigliarolo
  4. Canadian Electric Utility Hit by Cyberattack
    SecurityWeek – Eduard Kovacs
  5. Putin’s Cyberattacks on Ukraine Rise 70%, With Little Effect
    Dark Reading – Nate Nelson
  6. Claude AI Exploited to Operate 100+ Fake Political Personas
    The Hacker News – Ravie Lakshmanan
  7. HIVE0117 Group Targets Russian Firms with DarkWatchman Malware
    Security Affairs – Pierluigi Paganini
  8. Ex-NSA cyber-boss: AI will soon be a great exploit coder
    The Register – Jessica Lyons
  9. AI talent heads to EU defence startups
    InnovationAus – Supantha Mukherjee & Michael Kahn
  10. WordPress plugin disguised as security tool injects backdoor
    BleepingComputer – Bill Toulas
  11. Nebulous Mantis targets NATO-linked entities
    The Hacker News – Ravie Lakshmanan
  12. Tariffs could slow replacement of telecom networks
    Cyberscoop – Tim Starks
  13. Ex-CISA chief decries cuts as Trump demands loyalty
    The Register – Jessica Lyons
  14. FBI shares massive list of 42,000 LabHost phishing domains
    BleepingComputer – Bill Toulas
  15. Phishers exploit Iberian blackout in real-time scams
    Dark Reading – Elizabeth Montalbano
  16. DOGE is building a surveillance state
    New York Times – Julia Angwin
  17. Tech Giants propose EOL security disclosure standard
    SecurityWeek – Ryan Naraine
  18. DARPA’s AI Cyber Challenge could upend patching
    Cyberscoop – Greg Otto
  19. Indian court orders Proton Mail block over deepfake claims
    The Hacker News – Ravie Lakshmanan
  20. Pushback against US cyber coordination shake-up
    Cyberscoop – Derek B. Johnson
  21. Fuel tank monitoring systems vulnerable to disruption
    Dark Reading – Jai Vijayan
  22. Hackers ramp up scans for leaked Git secrets
    BleepingComputer – Bill Toulas
  23. France attributes 12 cyberattacks to APT28
    BleepingComputer – Sergiu Gatlan
  24. Reports uncover jailbreaks and insecure AI code
    The Hacker News – Ravie Lakshmanan
  25. Apple ‘AirBorne’ flaws allow zero-click RCE
    BleepingComputer – Sergiu Gatlan
  26. Enterprise tech dominates zero-day exploits
    The Register – Connor Jones
  27. US critical infrastructure still struggles with OT security
    Dark Reading – Becky Bracken
  28. US House criminalizes nonconsensual deepfakes
    Cyberscoop – Derek B. Johnson
  29. Chinese espionage campaign targets SentinelOne
    The Hacker News – Ravie Lakshmanan
  30. Europol creates ‘Violence-as-a-Service’ taskforce
    Infosecurity Magazine – Phil Muncaster
  31. 76% of Australian orgs faced high-impact cyber events
    itWire – Gordon Peters
  32. France says Russian hackers targeted Macron in 2017
    The Guardian – Angelique Chrisafis

Analysis

  1. A Cybersecurity Paradox: Even Resilient Organizations Are Blind to AI Threats
    Dark Reading – Arielle Waldman
  2. New Research Reveals: 95% of AppSec Fixes Don’t Reduce Risk
    The Hacker News
  3. Debunking Security ‘Myths’ to Address Common Gaps
    Dark Reading – Arielle Waldman
  4. World Password Day 2025: Rethinking Security in the Age of MFA and Passkeys
    IT Security Guru – The Gurus
  5. ‘Source of data’: are electric cars vulnerable to cyber spies and hackers?
    The Guardian – Dan Milmo

Member Spotlights

  1. CRD #21: Security Blind Spots and Board-Level Leadership
    CyAN – Henry Röigas
  2. Online Safety for Kids and Teens: Global Platform Shifts
    CyAN – Vaishnavi J

🗓️ Upcoming CyAN (and CyAN Partner) Global Events:

📍 Dubai, UAE
GISEC
May 6–8

📍 London, UK
Cyber OSPAs
May 8

📍 Dubai, UAE
CSG Awards 2025
May 7

📍 Dubai, UAE
World AI Technology Expo
May 14–15

🎉 Celebration
CyAN 10th Anniversary
(Details TBA)

📍 Berlin, Germany
GITEX Europe Messe
May 21–23

📍 Rabat, Morocco
MaTeCC
June 7–9

🌐 Online
CyAN Q2 Call (APAC + Gulf)
June 11 – 12:00 GST / 16:00 SGT / 18:00 AEST

🌐 Online
CyAN Q2 Call (EMEA + Americas)
June 11 – 20:00 GST / 18:00 CET / 17:00 UTC / 12:00 EDT


Cyber (In)Securities – Issue 141

News

  1. Former cyber official targeted by Trump quits company over move
    NBC News – Kevin Collier
  2. MITRE’s CVE program given last-minute reprieve
    itNews – Raphael Satter
  3. Whistle Blower: Russian Breach of US Data Through DOGE
    Narativ – Zev Shalev
  4. Midnight Blizzard deploys GrapeLoader malware
    BleepingComputer – Bill Toulas
  5. 4chan