Tag: Trust

Welcome New Member – Sapann Talwar from Australia

Please welcome our newest member from Australia, Sapann Talwar. Sapann is a seasoned cybersecurity and risk management practitioner with 26+ years of industry experience. He specializes in safeguarding data against evolving cyber threats and has a strong track record in developing and executing security strategies …

“What Happens to Heroes?” – Episode #5: The Unsung Heroes of the Digital World

The Psychological Impacts of Cyberattacks. This is the fifth episode in our ongoing series about the individuals who, in a matter of moments, transition from employees to rescuers in the aftermath of a destructive cyberattack. These are what I call the “Heroes.” Let’s Rewrite the …

AI Can’t Fix What’s Fundamentally Broken by Michael McDonald


Organisations today are being overwhelmed by the volume of AI-enabled tools entering the market. Promises of productivity gains, efficiency boosts and faster decision-making are everywhere. But history tells us this isn’t new. Every few years, technology arrives promising to be the next miracle cure.

AI has potential. But it is not a magic wand. If anything, it increases the need for clarity around systems, controls and accountability. If you are not approaching it with your eyes wide open, you are at risk — technically, legally and strategically.

Before rushing to integrate AI into critical workflows, take a step back and ask some foundational questions:
• Where is your data being processed?
• Is it being cached, and if so, where?
• Could it be used, now or in the future, to train someone else’s model?

If you cannot answer these questions with certainty, you are not in control.
And if your vendor cannot provide clear, auditable responses, then AI is not the next step. Fixing your foundations is.

Start with the Architecture
AI should not be treated as a shortcut to modernisation. It cannot fix fragmented systems or weak data governance. If your underlying architecture is not well understood or managed, layering AI on top may do more harm than good.

You need:
• Clear visibility over how data flows through your systems
• A governance model aligned to your legal, regulatory and ethical obligations
• Infrastructure that can be audited, tested and explained
• Defined rules around data retention, access and deletion (sketched in code below)

Without this, you cannot implement AI safely or responsibly.
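
To make the last requirement concrete, here is a minimal, hypothetical sketch of retention and access rules expressed as code rather than as a policy document. Everything in it is invented for illustration: the RetentionPolicy record, the customer_emails dataset and the support role are not drawn from any particular product or regulation. The point is that rules which live in code can be audited, tested and explained on demand.

    from dataclasses import dataclass
    from datetime import datetime, timedelta, timezone

    # Hypothetical policy record; field names are illustrative only.
    @dataclass(frozen=True)
    class RetentionPolicy:
        dataset: str                    # logical name of the data store
        max_age: timedelta              # how long records may be kept
        allowed_roles: frozenset[str]   # who may read the data
        delete_on_expiry: bool          # hard-delete rather than archive

    def is_access_allowed(policy: RetentionPolicy, role: str,
                          record_created: datetime) -> bool:
        """Deny by default: access requires an explicit role grant and a
        record that is still inside its retention window."""
        within_retention = (datetime.now(timezone.utc) - record_created
                            <= policy.max_age)
        return role in policy.allowed_roles and within_retention

    # Example: customer emails kept for two years, readable only by support.
    policy = RetentionPolicy(
        dataset="customer_emails",
        max_age=timedelta(days=730),
        allowed_roles=frozenset({"support"}),
        delete_on_expiry=True,
    )
    print(is_access_allowed(
        policy, "support", datetime(2025, 1, 1, tzinfo=timezone.utc)))

A rule encoded this way can be unit-tested and shown to an auditor, which is exactly the property the list above asks for.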

Know What You’re Signing Up For
Several vendors now include terms allowing customer data to be used for AI model training. These clauses are often buried in general language about “service improvement” or “performance optimisation.” But once your data is used for training, there is no reversing it. You’ve contributed to a model that is no longer fully within your control.

This is not theoretical. It is happening now. And unless you’re actively checking these details during procurement and implementation, your organisation is exposed.

Trust doesn’t come from a product label or a vendor webinar. It has to be built into the design, deployment and ongoing operation of every system you use.

Compliance is Still Your Responsibility
Introducing AI into your business does not reduce your legal obligations. It increases them.

Whether you’re operating under the GDPR, Australia’s Privacy Act, or other regulatory frameworks, the burden of proof remains on you. That includes knowing where data sits, who can access it, and how it’s being used — even when processed by third parties.

The regulatory landscape is also shifting rapidly. Governments are developing AI-specific legislation that focuses on transparency, accountability and risk management. Organisations that fail to build those principles into their systems now will find themselves playing catch-up under pressure.

Smarter, Not Shinier
If you are considering an AI-enabled tool, apply the same rigour you would to any other piece of critical infrastructure:
• Minimise data collection. If you don’t need it, don’t collect it.
• Use a zero-trust approach. Never assume access is safe without verification (a deny-by-default sketch follows below).
• Keep control boundaries tight. Know exactly who sees what, when and why.
• Design your exit plan. If you cannot leave a vendor without significant disruption, your system is not resilient.

None of this should be treated as optional. These principles form the foundation of good system design — with or without AI.
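
Because “zero trust” is often reduced to a slogan, here is a minimal sketch of what deny-by-default looks like in practice, assuming a toy in-memory grant table. The authorize function, the GRANTS table and the example principal are all hypothetical; a real deployment would verify signed tokens, consult a policy engine, and log every decision.

    # Explicit grants: (principal, resource) -> permitted actions.
    GRANTS: dict[tuple[str, str], set[str]] = {
        ("alice@example.com", "billing-db"): {"read"},
    }

    def authorize(principal: str, identity_verified: bool,
                  resource: str, action: str) -> bool:
        """Every request is checked; nothing is trusted by default."""
        if not identity_verified:   # identity must be proven on each call,
            return False            # never inferred from network location
        allowed = GRANTS.get((principal, resource), set())
        return action in allowed    # no recorded grant means no access

    # An unverified caller is refused even with a matching grant:
    assert not authorize("alice@example.com", False, "billing-db", "read")
    # A verified caller may only do what was explicitly granted:
    assert authorize("alice@example.com", True, "billing-db", "read")
    assert not authorize("alice@example.com", True, "billing-db", "write")

The design choice that matters is the final line of authorize: the absence of a grant is a refusal, so a forgotten configuration fails closed rather than open.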

Ask Better Questions
Before you approve a new AI capability, ask:
• Do we know exactly how and where our data will be processed?
• Are we confident that it won’t be retained, reused or repurposed without our knowledge?
• Can we prove that we are meeting our compliance and governance obligations?

If you’re not sure, press pause. Getting the basics right will serve you far better than being first to implement a tool you don’t fully understand.

AI is not a panacea. It is a powerful extension of your existing systems and processes. Used well, it can improve what you already do. Used poorly, it can embed risk deep into your infrastructure.

Don’t be blinded by promises. Be clear about the problems you’re solving, and the systems you’re solving them with. The organisations that benefit from AI won’t be the ones that moved fastest. They’ll be the ones that laid the groundwork.


About the Author:

Michael McDonald is a CTO and global expert in solution architecture, secure data flows, zero-trust design, privacy-preserving infrastructure, and cross-jurisdictional compliance.

Not a Good Look, AI: What Happens to Privacy When Glasses Get Smart?

They look just like a regular pair of Ray-Bans. But behind the dark lenses? Cameras. Microphones. AI-powered assistants. All quietly recording, analysing, and storing data, sometimes even in real time. And unless you’ve signed up for a starring role in someone else’s life-capture experiment, you probably …

Welcome New Member – Amna Almadhoob from Bahrain

Please welcome our newest member from Bahrain, Amna Almadhoob. As a leader in the cybersecurity field, specializing in the financial industry, Amna brings extensive experience in defining strategic direction to secure operations, assets, and products against evolving threats. She has a proven track record in …

Used, Not Consulted: When AI Trains on Our Work Without Consent

CyAN Context

At CyAN, we often talk about trust, governance, and transparency as pillars of a secure digital future. But what happens when those principles are ignored, not in a breach or a ransomware attack, but in the slow, quiet erosion of creator rights?

As a cybersecurity professional and a published author, I’ve found myself caught at the centre of a disturbing overlap. The very technologies we celebrate for their potential are being built on practices that, if applied to personal data, would be called negligent or even unlawful. But because it is “just” books and creative work, it gets waved through.

I’ve chosen to share the following reflection across CyAN, Medium, and LinkedIn to encourage broader engagement across industries and disciplines. This is a conversation that sits squarely within CyAN’s mission, because digital safety does not stop at the firewall.

The timing is fitting. While writers are gathering to celebrate craft and creativity, I’ve been forced to reckon with the quiet disappearance of my own work into the machinery of AI. Like many writers, I recently discovered that one of my books had been used to train an AI model developed by Meta. No permission sought. No credit given. No compensation offered.

And when I tried to read a news article about it? I hit a paywall.


The Double Standard

To be clear, I’m not bemoaning the paywall. Quality journalism is worth paying for. And even in cases where a publication like The Guardian doesn’t enforce one, I still support it, which is why I pay for a subscription. But the double standard is hard to ignore. AI tools are built to avoid infringing on paywalled content, yet when it comes to books, essays, and other creative works, the same respect doesn’t seem to apply.


Imagine you’ve spent weeks preparing a client report or policy paper, your voice, your nuance, your unpaid refinement, and then it’s scraped, digested, and mimicked by a machine doing your job in seconds. No consent, no attribution. That’s what it feels like when creative work is quietly repurposed for AI training.

The Scope of the Issue

Most AI developers say they only use publicly accessible content and avoid subscription sites. But that principle seems to vanish when it comes to books. My work, carefully edited, published, priced, and protected, has now been absorbed by a machine that will use it to generate content for others. Probably for free.

The message is clear: your work is yours, right up until it’s useful to someone else’s algorithm.

If this were customer data, there would be outrage. But because it’s creative work, too many people wave it through, ignoring the same principles of consent, provenance, and data control that are fundamental to digital safety and trust.

This isn’t about being anti-AI. I’m a tech founder. I work at the intersection of innovation, policy, and creativity. I use AI tools myself to draft, organise, and visualise ideas. But I use them with intention, and always with respect for where the raw material comes from. That’s the line we all need to draw: not whether we use AI, but how.

Why It Matters to Everyone

You might be thinking, “I’m not an author, why should I care?” Because this isn’t just about books. It’s about ownership. It’s about the value of what you create, whether that’s a blog post, a training manual, a school resource, or a workplace presentation. If we normalise scraping without consent, we’re handing over more than words; we’re giving up agency.


And this isn’t just about me. At a time when writers are gathering to celebrate craft and storytelling at the Sydney Writers’ Festival, it’s worth asking what future is left for our work if we’re not even told when it’s being taken.


I searched a dataset of scraped books and found that more than 200 titles had been taken from just 23 members of my writers’ group, a group that includes some of Australia’s most celebrated, bestselling, and internationally recognised female authors.


These are women whose work has shaped classrooms, award lists, bestseller charts, and the cultural conversation, and their words were taken without permission, credit, or compensation.

That’s not an accident. That’s a system.

Global Backlash and Legal Challenges

Globally, the backlash is growing, but patchy. In the US, lawsuits from authors like George R.R. Martin and Ta-Nehisi Coates, and from The New York Times, are now consolidated in federal court. Ziff Davis, which publishes IGN and PCMag, has also launched a case of its own. But while litigation is gaining pace, there is still no federal legislation in place to regulate how copyrighted works can be used to train generative AI systems.


Just days after the US Copyright Office released a report questioning whether large-scale AI training on copyrighted works qualifies as fair use, the head of that office, Shira Perlmutter, was abruptly dismissed by President Trump, alongside Librarian of Congress Dr Carla Hayden. Both were widely respected for their independence and expertise. Their removal sends a clear signal: even modest attempts to protect creators can be swept aside when they challenge the interests of powerful tech players.


In the UK, the government initially floated a controversial plan to exempt AI developers from needing licences to mine copyrighted works. It was shelved after intense pushback from author organisations, publishers, and the creative community. A collective licensing scheme is due to launch in 2025, but the long-term strategy remains unclear. More recently, over 400 prominent British musicians, writers, and artists, including Elton John, Paul McCartney, and Dua Lipa, signed an open letter to Prime Minister Keir Starmer calling for urgent reform of the UK’s copyright laws. Sir Elton went further in a BBC interview, calling the government’s approach “thievery on a high scale” and warning that it would “rob young people of their legacy and their income.” Their message was unequivocal: the unchecked use of creative work to train AI systems is a direct threat to the future of Britain’s creative industries.


In Europe, the AI Act, passed in 2024, was a step forward. It requires AI providers to disclose whether copyrighted material was used in training, a win for transparency. But critics argue the Act still doesn’t guarantee compensation or enforcement. It tells us what is being used, but not whether it should be.

The Situation in Australia

And in Australia? There is silence. The Australian Society of Authors has briefed policymakers and called for reform. Individual writers, including Holden Sheppard and Tracey Spicer, have spoken out. But the federal government has yet to propose a roadmap, initiate consultation, or clarify whether it views scraping as acceptable.

A Call to Action

We can do better. We must. Innovation doesn’t need to come at the cost of creators. We need systems that allow for transparency, licensing, and consent, and policies that protect creative labour from being reduced to invisible infrastructure for someone else’s product.

If you’re reading this and thinking, “Surely they didn’t use my work,” don’t be so sure.


Check. Ask. Speak up. And whether you’re a creator or not, care. Because this digital world we’re building belongs to all of us. The decisions we make now will shape how future generations share, create, and connect.


Creators aren’t just content. We are the culture. And we deserve to be at the table, not just on the training menu.

Final Note

This article is part of a broader campaign to highlight the need for ethical, transparent, and rights-respecting AI development. You’ll also find it shared via Medium and LinkedIn, where I welcome public discussion from creatives, technologists, and policymakers alike.

If we truly believe in building a secure and inclusive digital ecosystem, then consent, transparency, and respect must apply to all forms of data, including the cultural and creative expressions that define who we are.


About the Author:

Kim Chandler McDonald is the Co-Founder and CEO of 3 Steps Data, driving data/digital governance solutions.
She is the Global VP of CyAN, an award-winning author, storyteller, and advocate for cybersecurity, digital sovereignty, compliance, governance, and end-user empowerment.

Special Feature – 10th Anniversary

Editor-in-Chief: Kim Chandler McDonald, Co-Founder and CEO of 3 Steps Data, Global VP at CyAN. An award-winning author and advocate for cybersecurity, compliance, and digital sovereignty, Kim drives global conversations on data governance and user empowerment. Author: Saba Bagheri, PhD, Cyber Threat Intelligence Manager at …

Welcome New Member – Samira Marquaille from France

Please welcome our newest member from France, Samira Marquaille

Samira Marquaille is an IT Project Manager with more than 20 years of experience across both public and private sectors, with a strong focus on data privacy. She is skilled at uniting teams and fostering collaboration to manage projects involving new European regulations around data privacy, cybersecurity (DORA, NIS2, …) and AI (AI Act).

Beyond her project work, Samira actively contributes to professional associations, training initiatives, and public awareness efforts on data privacy. She also volunteers as a mentor, supporting young people and women entering the IT field.

Samira is widely recognised for her rigour, analytical skills, and deep commitment to the field.

It’s good to have you, Samira! We look forward to the expertise you bring and to supporting you here at CyAN. Don’t hesitate to reach out, and do explore Samira’s profile so you can grow your networks together.

Welcome New Member – Andrew Pedroso from Australia

Please welcome our newest member from Australia, Andrew Pedroso. Andrew has committed over a decade to business technology research, advisory, data, and consulting. Now he has returned to his passion for cybersecurity and zero-trust strategy. With deep expertise across key industries including BFSI, …