Tag: security

New Podcast – Return of the Bride of Terrorism, With Bjørn Ihler

Part III in our series on violent extremism and terrorism

Welcome New Member – Sapann Talwar from Australia

Please welcome our newest member from Australia, Sapann Talwar. Sapann is a seasoned cybersecurity and risk management practitioner with 26+ years of industry experience. He specializes in safeguarding data against evolving cyber threats and has a strong track record in developing and executing security strategies …

“What Happens to Heroes?” – Episode #5: The Unsung Heroes of the Digital World

The Psychological Impacts of Cyberattacks

This is the fifth episode in our ongoing series about the individuals who, in a matter of moments, transition from employees to rescuers in the aftermath of a destructive cyberattack.

These are what I call the “Heroes.”

Let’s Rewrite the Story of a Cyberattack

“With the support of the CIO, I can say that things got structured very quickly, so we were automatically well supported. After that, we quickly fell back into the ways of the crisis. Management would come back with priorities, and push for things to come back right away, when we hadn’t even finished putting the basic systems back together…”

Excerpt From the Interview

My book is dedicated to encouraging companies to consider the human aspect in the context of cyberattacks. Coaching, however, has only been part of my professional practice for the past four years; for more than 25 years, my career has centered on helping customers strengthen their data resilience. This scenario is freely inspired by one of my corporate clients …

In this episode, I will fictionalize a cyberattack, using what I call a non-winning scenario. A non-winning scenario is one in which a company does not consider security a strategic priority: no goals, no failures, until the incident happens.

Typical identification factor: “Zen attitude”

Once upon a time, there was a company living in complete ignorance of the risks of cyberattacks. While this scenario may seem similar to the previous one at first glance, the mindset is completely different; it is closer to that of a child living in a fantasy world.

This situation is a lose-lose for the company, which overlooks the importance of IT resilience, mistakenly believing cyberattacks are unlikely. The company has little reason to invest in training. As there is little oversight, best practices are rare or only exist thanks to a few individuals.  As a result, its IT systems become outdated due to inactivity and lack of engagement, with projects left unfinished. Although it may seem trivial, this scenario is dangerous – we’re facing a state of delusional complacency.

A non-winning scenario may be marked by frustration within teams and between management levels, caused by inconsistencies between stated policies and actual practices, creating ongoing tension around cybersecurity. Although the IT infrastructure may be effective and efficient, the company’s economic success relies on business going well, so the level of cyber resilience ultimately depends on the technical staff’s motivation. Some individuals may prioritize the protection of IT systems over their own well-being and relationships, creating an unhealthy work-life balance that will eventually need to be corrected.

In the event of a cyberattack, detection is unlikely unless there are obvious indicators, such as system-wide crashes or explicit warnings. The absence of a well-defined plan often leads to chaos, with leadership responding in fear and frustration. This reaction can be understood, considering their lack of strong alliances with experienced experts. A victim mentality may prevail, with sentiments like “What did I do to deserve this?” or “Why won’t anyone help me when I’m at rock bottom?” The potential consequences of such a scenario are dire, on par with playing Russian roulette with the company’s survival. The ability to recover lost data and the speed at which business applications can be restored will be key factors in determining the outcome.

Managers may suddenly become aware of their accountability and abruptly change their position. They will claim to have consistently advocated for security measures, blaming the technical team for not heeding or implementing their suggestions. The technical team, in turn, will point to improvement proposals that were never funded.

This results in a contradictory period, bordering on schizophrenia, where leaders, who were once held accountable, now adopt the role of saviors. Meanwhile, technicians feel guilty and are burdened with suspicion, potentially being suspected of complicity in the cyberattack. Despite their significant shortcomings and accompanying guilt, these heroes remain committed to their roles, some even developing a deep affection for their computer systems. This devotion pushes them to extraordinary lengths to surmount the crisis. This phase will be characterized by intense emotions, including crying, yelling, and insomnia due to exhaustion. There will also be impulsive actions, mental stress, and conflict within the family.

In the post-incident analysis, it will be stated that the crisis stemmed from a highly unlikely series of events, occurring despite management’s consistent encouragement of IT staff to adopt best practices. This is a completely unfounded claim that attempts to rewrite the narrative.

Our heroes will face a difficult time. The HR department, which serves as management’s enforcement arm, will strictly penalize those responsible. Those who keep their jobs should count themselves lucky. Any recognition of their efforts will be superficial and insincere. In the future, people will tend to forget about past incidents, but the consequences will persist for years, leading to many resignations and cases of burnout. Some people may suffer physical effects, which will create a sharp contrast between their lives before and after the trauma. They’ll have to cope with the consequences.

The fall of the Heroes!

THINGS TO REMEMBER

There are still many companies that neglect to prioritize cyber risk within their strategy. Living carefree is pleasant, but the fall is all the harder for those affected. This is the worst-case scenario.

Stay tuned for the next episode.


About the Author

Didier Annet is an Operational & Data Resilience Specialist and a Certified Professional Coach dedicated to empowering individuals and teams to navigate the complexities of an ever-changing digital landscape.

Find him on LinkedIn: Didier Annet

Learn more in his book:
📖 Guide de survie aux cyberattaques en entreprise et à leurs conséquences psychologiques: Que fait-on des Héros ? (French Edition) – Available on Amazon

English version:
“Survival Guide – The Human Impact of Cyberattacks and the Untold Story of Those Who Respond”
“What Happens to Heroes?”
Available on Amazon

Implicit Privacy is Dead – A Counterpoint (Sort Of)

A rebuttal: camera sunglasses aren’t the unique adversary you might think they are.

New Podcast – Some More Terrorism, With Bjørn Ihler

Bjørn and John return to discuss additional aspects of terrorism and extremism

AI Can’t Fix What’s Fundamentally Broken by Michael McDonald


Organisations today are being overwhelmed by the volume of AI-enabled tools entering the market. Promises of productivity gains, efficiency boosts and faster decision-making are everywhere. But history tells us this isn’t new. Every few years, technology arrives promising to be the next miracle cure.

AI has potential. But it is not a magic wand. If anything, it increases the need for clarity around systems, controls and accountability. If you are not approaching it with your eyes wide open, you are at risk — technically, legally and strategically.

Before rushing to integrate AI into critical workflows, take a step back and ask some foundational questions:
• Where is your data being processed?
• Is it being cached, and if so, where?
• Could it be used, now or in the future, to train someone else’s model?

If you cannot answer these questions with certainty, you are not in control.
And if your vendor cannot provide clear, auditable responses, then AI is not the next step. Fixing your foundations is.

Start with the Architecture
AI should not be treated as a shortcut to modernisation. It cannot fix fragmented systems or weak data governance. If your underlying architecture is not well understood or managed, layering AI on top may do more harm than good.

You need:
• Clear visibility over how data flows through your systems
• A governance model aligned to your legal, regulatory and ethical obligations
• Infrastructure that can be audited, tested and explained
• Defined rules around data retention, access and deletion

Without this, you cannot implement AI safely or responsibly.
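
One way to keep the last requirement above, defined rules around data retention, access and deletion, from staying theoretical is to write the rules in a form that can be checked automatically. The sketch below, in Python, shows a minimal version of that idea; the data classes, roles, and retention periods are invented for illustration, not a recommended baseline.

from datetime import date, timedelta

# Illustrative policy: retention period and permitted roles per data class.
# All names and values here are assumptions, not a prescribed standard.
POLICY = {
    "customer_pii": {"retention_days": 365, "allowed_roles": {"dpo", "support_lead"}},
    "telemetry":    {"retention_days": 90,  "allowed_roles": {"sre", "security"}},
}

def access_allowed(data_class: str, role: str) -> bool:
    """Permit access only if the role is explicitly listed for this data class."""
    rule = POLICY.get(data_class)
    return bool(rule) and role in rule["allowed_roles"]

def past_retention(data_class: str, created: date, today: date | None = None) -> bool:
    """Return True if a record of this class should already have been deleted."""
    today = today or date.today()
    rule = POLICY.get(data_class)
    return bool(rule) and (today - created) > timedelta(days=rule["retention_days"])

print(access_allowed("customer_pii", "sre"))          # False: not an allowed role
print(past_retention("telemetry", date(2024, 1, 1)))  # True once 90 days have elapsed

The point is not the code itself but the discipline: if a rule cannot be expressed and tested this plainly, it is unlikely to survive contact with an AI rollout.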

Know What You’re Signing Up For
Several vendors now include terms allowing customer data to be used for AI model training. These clauses are often buried in general language about “service improvement” or “performance optimisation.” But once your data is used for training, there is no reversing it. You’ve contributed to a model that is no longer fully within your control.

This is not theoretical. It is happening now. And unless you’re actively checking these details during procurement and implementation, your organisation is exposed.

Trust doesn’t come from a product label or a vendor webinar. It has to be built into the design, deployment and ongoing operation of every system you use.

Compliance is Still Your Responsibility
Introducing AI into your business does not reduce your legal obligations. It increases them.

Whether you’re operating under the GDPR, Australia’s Privacy Act, or other regulatory frameworks, the burden of proof remains on you. That includes knowing where data sits, who can access it, and how it’s being used — even when processed by third parties.

The regulatory landscape is also shifting rapidly. Governments are developing AI-specific legislation that focuses on transparency, accountability and risk management. Organisations that fail to build those principles into their systems now will find themselves playing catch-up under pressure.

Smarter, Not Shinier
If you are considering an AI-enabled tool, apply the same rigour you would to any other piece of critical infrastructure:
• Minimise data collection. If you don’t need it, don’t collect it.
• Use a zero-trust approach. Never assume access is safe without verification.
• Keep control boundaries tight. Know exactly who sees what, when and why.
• Design your exit plan. If you cannot leave a vendor without significant disruption, your system is not resilient.

None of this should be treated as optional. These principles form the foundation of good system design — with or without AI.
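
To make the first two principles concrete, here is a minimal sketch, again in Python, of a data-minimisation gate sitting in front of an external AI service. The allow-listed fields and the send_to_ai_vendor stand-in are hypothetical; the idea is simply that nothing leaves your boundary unless it has been explicitly approved.

# Illustrative only: allow-list the fields an external AI service may see,
# and silently drop everything else before the payload leaves your boundary.
ALLOWED_FIELDS = {"ticket_id", "category", "description"}  # assumed schema

def minimise(record: dict) -> dict:
    """Keep only the fields explicitly approved for the vendor."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

def send_to_ai_vendor(payload: dict) -> None:
    # Stand-in for a hypothetical vendor call; in practice, verify the contract
    # terms on caching, retention, and training use before any data is sent.
    print("outbound payload:", payload)

ticket = {
    "ticket_id": "INC-1042",
    "category": "access request",
    "description": "User cannot reach the payroll portal",
    "employee_email": "jane.doe@example.com",  # not needed, so it never leaves
}

send_to_ai_vendor(minimise(ticket))  # only the three approved fields go out

A gate like this is also where a zero-trust posture shows up in practice: access is denied by default, and every exception is a deliberate, reviewable decision.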

Ask Better Questions
Before you approve a new AI capability, ask:
• Do we know exactly how and where our data will be processed?
• Are we confident that it won’t be retained, reused or repurposed without our knowledge?
• Can we prove that we are meeting our compliance and governance obligations?

If you’re not sure, press pause. Getting the basics right will serve you far better than being first to implement a tool you don’t fully understand.

AI is not a panacea. It is a powerful extension of your existing systems and processes. Used well, it can improve what you already do. Used poorly, it can embed risk deep into your infrastructure.

Don’t be blinded by promises. Be clear about the problems you’re solving, and the systems you’re solving them with. The organisations that benefit from AI won’t be the ones that moved fastest. They’ll be the ones that laid the groundwork.


About the Author:

Michael McDonald is a CTO and global expert in solution architecture, secure data flows, zero-trust design, privacy-preserving infrastructure, and cross-jurisdictional compliance.

Welcome New Member – Amna Almadhoob from Bahrain

Please welcome our newest member from Bahrain, Amna Almadhoob. As a leader in the cybersecurity field, specializing in the financial industry, Amna brings extensive experience in defining strategic direction to secure operations, assets, and products against evolving threats. She has a proven track record in …

The Human Factor in OT Security Incidents: Understanding Insider Threats and Social Engineering in Critical Infrastructure by Rupesh Shirke

Introduction: The human factor is an essential but often overlooked security component in Operational Technology (OT) systems within critical infrastructure. Although many technological defenses have improved, insider threats and social engineering remain serious risks because of inherent vulnerabilities in human behavior and organizational culture. Operators of OT …

Used, Not Consulted: When AI Trains on Our Work Without Consent

CyAN Context

At CyAN, we often talk about trust, governance, and transparency as
pillars of a secure digital future. But what happens when those
principles are ignored, not in a breach or a ransomware attack, but in
the slow, quiet erosion of creator rights?

As a cybersecurity professional and a published author, I’ve found
myself caught at the centre of a disturbing overlap. The very
technologies we celebrate for their potential are being built on
practices that, if applied to personal data, would be called negligent or
even unlawful. But because it is “just” books, creative work, it gets
waved through.

I’ve chosen to share the following reflection across CyAN, Medium,
and LinkedIn to encourage broader engagement across industries and
disciplines. This is a conversation that sits squarely within CyAN’s
mission, because digital safety does not stop at the firewall.

The timing is fitting. While writers are gathering to celebrate craft
and creativity, I’ve been forced to reckon with the quiet disappearance
of my own work into the machinery of AI. Like many writers, I recently
discovered that one of my books had been used to train an AI model
developed by Meta. No permission sought. No credit given. No
compensation offered.

And when I tried to read a news article about it? I hit a paywall.

I don’t resent that. I pay for journalism. I support The Guardian, even
when it doesn’t require it. But the double standard is hard to ignore:
AI tools avoid scraping paywalled content, yet books, edited, priced,
and protected, are treated like free training data.

The Double Standard

To be clear, I’m not bemoaning the paywall. Quality journalism is
worth paying for. And even in cases where a publication like The
Guardian doesn’t enforce one, I still support it, which is why I pay for
a subscription. But the double standard is hard to ignore. AI tools are
built to avoid infringing on paywalled content, yet when it comes to
books, essays, and other creative works, the same respect doesn’t
seem to apply.


Imagine you’ve spent weeks preparing a client report or policy paper,
your voice, your nuance, your unpaid refinement, and then it’s
scraped, digested, and mimicked by a machine doing your job in
seconds. No consent, no attribution. That’s what it feels like when
creative work is quietly repurposed for AI training.

The Scope of the Issue

Most AI developers say they only use publicly accessible content and
avoid subscription sites. But that principle seems to vanish when it
comes to books. My work, carefully edited, published, priced, and
protected, has now been absorbed by a machine that will use it to
generate content for others. Probably for free.

The message is clear: your work is yours, right up until it’s useful to
someone else’s algorithm.

If this were customer data, there would be outrage. But because it’s
creative work, too many people wave it through, ignoring the same
principles of consent, provenance, and data control that are
fundamental to digital safety and trust.

This isn’t about being anti-AI. I’m a tech founder. I work at the
intersection of innovation, policy, and creativity. I use AI tools myself
to draft, organise, and visualise ideas. But I use them with intention,
and always with respect for where the raw material comes from.
That’s the line we all need to draw, not whether we use AI, but how.

Why It Matters to Everyone

You might be thinking, “I’m not an author, why should I care?”
Because this isn’t just about books. It’s about ownership. It’s about the
value of what you create, whether that’s a blog post, a training
manual, a school resource, or a workplace presentation. If we
normalise scraping without consent, we’re handing over more than
words, we’re giving up agency.


And this isn’t just about me. At a time when writers are gathering to
celebrate craft and storytelling at the Sydney Writers’ Festival, it’s
worth asking what future is left for our work if we’re not even told
when it’s being taken.


I searched a dataset of scraped books and found more than 200 titles
had been taken from just 23 members of my writers’ group, a group
that includes some of Australia’s most celebrated, bestselling, and
internationally recognised female authors.


These are women whose work has shaped classrooms, award lists,
bestseller charts, and the cultural conversation, and their words were
taken without permission, credit, or compensation.
That’s not an accident. That’s a system.

Global Backlash and Legal Challenges

Globally, the backlash is growing, but patchy. In the US, lawsuits from
authors like George R.R. Martin, Ta-Nehisi Coates and The New York
Times are now consolidated in federal court. Ziff Davis, which
publishes IGN and PCMag, has also launched a case of its own. But
while litigation is gaining pace, there is still no federal legislation in
place to regulate how copyrighted works can be used to train
generative AI systems.


Just days after the US Copyright Office released a report questioning
whether large-scale AI training on copyrighted works qualifies as fair
use, the head of that office, Shira Perlmutter, was abruptly dismissed
by President Trump, alongside Librarian of Congress Dr Carla Hayden.
Both were widely respected for their independence and expertise.
Their removal sends a clear signal: even modest attempts to protect
creators can be swept aside when they challenge the interests of
powerful tech players.


In the UK, the government initially floated a controversial plan to
exempt AI developers from needing licences to mine copyrighted
works. It was shelved after intense pushback from author
organisations, publishers, and the creative community. A collective
licensing scheme is due to launch in 2025, but the long-term strategy
remains unclear. More recently, over 400 prominent British musicians,
writers, and artists including Elton John, Paul McCartney and Dua
Lipa signed an open letter to Prime Minister Keir Starmer calling for
urgent reform of the UK’s copyright laws. Sir Elton went further in a
BBC interview, calling the government’s approach “thievery on a high
scale” and warning that it would “rob young people of their legacy
and their income.” Their message was unequivocal: the unchecked use
of creative work to train AI systems is a direct threat to the future of
Britain’s creative industries.


In Europe, the AI Act passed in 2024 was a step forward. It requires AI
providers to disclose whether copyrighted material was used in
training, a win for transparency. But critics argue the Act still doesn’t
guarantee compensation or enforcement. It tells us what is being
used, but not whether it should be.

The Situation in Australia

And in Australia? There is silence. The Australian Society of Authors
has briefed policymakers and called for reform. Individual writers,
including Holden Sheppard and Tracey Spicer, have spoken out. But
the federal government has yet to propose a roadmap, initiate
consultation, or clarify whether it views scraping as acceptable.

A Call to Action

We can do better. We must. Innovation doesn’t need to come at the
cost of creators. We need systems that allow for transparency,
licensing, and consent, and policies that protect creative labour from
being reduced to invisible infrastructure for someone else’s product.
If you’re reading this and thinking, “Surely they didn’t use my work,”
don’t be so sure.


Check. Ask. Speak up. And whether you’re a creator or not, care.
Because this digital world we’re building belongs to all of us. The
decisions we make now will shape how future generations share,
create, and connect.


Creators aren’t just content. We are the culture. And we deserve to be
at the table, not just on the training menu.

Final Note

This article is part of a broader campaign to highlight the need for
ethical, transparent, and rights-respecting AI development. You’ll also
find it shared via Medium and LinkedIn, where I welcome public
discussion from creatives, technologists, and policymakers alike.

If we truly believe in building a secure and inclusive digital ecosystem,
then consent, transparency, and respect must apply to all forms of
data, including the cultural and creative expressions that define who
we are.


About the Author:

Kim Chandler McDonald is the Co-Founder and CEO of 3 Steps Data, driving data/digital governance solutions.
She is the Global VP of CyAN, an award-winning author, storyteller, and advocate for cybersecurity, digital sovereignty, compliance, governance, and end-user empowerment.

New Podcast – Let’s Talk Terrorism, With Bjørn Ihler

Bjørn Ihler joins us on the latest installment of our Secure-in-Mind series to share his experience and insights around terrorism, both online and offline.