Tag: regulation
Welcome New Member – Sapann Talwar from Australia
Please welcome our newest member from Australia, Sapann Talwar. Sapann is a seasoned Cybersecurity and Risk Management practitioner with 26+ years of industry experience. He specializes in safeguarding data against evolving cyber threats and has a strong track record in developing and executing security strategies …
“What Happens to Heroes?” – Episode #5: The Unsung Heroes of the Digital World

The Psychological Impacts of Cyberattacks
This is the fifth episode in our ongoing series about the individuals who, in a matter of moments, transition from employees to rescuers in the aftermath of a destructive cyberattack.
These are what I call the “Heroes.”
Let’s Rewrite the Story of a Cyberattack
“With the support of the CIO, I can say that things got structured very quickly, so we were automatically well supported. After that, we quickly fell back into crisis mode. Management would come back with priorities and push for things to be restored right away, when we hadn’t even finished putting the basic systems back together…”
Excerpt From the Interview
My book is dedicated to encouraging companies to consider the human aspect of cyberattacks. But coaching has only been part of my professional practice for the past four years. For over 25 years now, my career has been centered on helping customers strengthen their data resilience. This scenario is freely inspired by one of my corporate clients …
In this episode, I will fictionalize a cyberattack, using what I call a non-winning scenario. A non-winning scenario is when a company does not consider security a strategic priority. With no goals, there can be no failure, until the incident happens.
Typical identification factor: “Zen attitude”
Once upon a time, there was a company living in complete ignorance of the risks of cyberattacks. While this scenario may seem like the previous one at first glance, the mindset is completely different: it is closer to that of a child living in a fantasy world.
This situation is a lose-lose for the company, which overlooks the importance of IT resilience, mistakenly believing cyberattacks are unlikely. The company has little reason to invest in training. As there is little oversight, best practices are rare or only exist thanks to a few individuals. As a result, its IT systems become outdated due to inactivity and lack of engagement, with projects left unfinished. Although it may seem trivial, this scenario is dangerous – we’re facing a state of delusional complacency.
A non-winning scenario may be marked by frustration among teams and between management levels, due to inconsistencies between stated policies and actual practices. This can create ongoing tension around cybersecurity. Although the IT infrastructure may be effective and efficient, the company’s economic success relies on business coming easily. Thus, the level of cyber resilience ultimately depends on the technical staff’s motivation. Some individuals may prioritize the protection of IT systems over their own well-being and relationships, creating an unhealthy work-life balance.
In the event of a cyberattack, detection is unlikely unless there are obvious indicators, such as system-wide crashes or explicit warnings. The absence of a well-defined plan often leads to chaos, with leadership responding in fear and frustration. This reaction can be understood, considering their lack of strong alliances with experienced experts. A victim mentality may prevail, with sentiments like “What did I do to deserve this?” or “Why won’t anyone help me when I’m at rock bottom?” The potential consequences of such a scenario are dire, on par with playing Russian roulette with the company’s survival. The ability to recover lost data and the speed at which business applications can be restored will be key factors in determining the outcome.
Managers may suddenly acknowledge their accountability and abruptly alter their position. They will claim to have consistently advocated for security measures, blaming the technical team for not heeding or implementing their suggestions. The technical team, in turn, will point to the improvement proposals it had submitted, arguing that they were never funded.
This results in a contradictory period, bordering on schizophrenia, where leaders, who were once held accountable, now adopt the role of saviors. Meanwhile, technicians feel guilty and are burdened with suspicion, potentially being suspected of complicity in the cyberattack. Despite their significant shortcomings and accompanying guilt, these heroes remain committed to their roles, some even developing a deep affection for their computer systems. This devotion pushes them to extraordinary lengths to surmount the crisis. This phase will be characterized by intense emotions, including crying, yelling, and insomnia due to exhaustion. There will also be impulsive actions, mental stress, and conflict within the family.
In the post-incident analysis, it will be stated that the crisis stemmed from a highly unlikely series of events, occurring despite management’s consistent encouragement of IT staff to adopt best practices. This is a completely unfounded claim that attempts to rewrite the narrative.
Our heroes will face a difficult time. The HR department, which serves as management’s enforcement arm, will strictly penalize those responsible. Those who keep their jobs should count themselves lucky. Any recognition of their efforts will be superficial and insincere. In the future, people will tend to forget about past incidents, but the consequences will persist for years, leading to many resignations and cases of burnout. Some people may suffer physical effects, which will create a sharp contrast between their lives before and after the trauma. They’ll have to cope with the consequences.
The fall of the Heroes!
THINGS TO REMEMBER
Many companies still neglect to prioritize cyber risk within their strategy. Living carefree is pleasant, but the fall will be all the harder for those affected. This is the worst-case scenario.
Stay tuned for the next episode.
About the Author
Didier Annet is an Operational & Data Resilience Specialist and a Certified Professional Coach dedicated to empowering individuals and teams to navigate the complexities of an ever-changing digital landscape.
Find him on LinkedIn: Didier Annet
Learn more in his book:
📖 Guide de survie aux cyberattaques en entreprise et à leurs conséquences psychologiques: Que fait-on des Héros ? (French Edition) – Available on Amazon
English version:
“Survival Guide – The Human Impact of Cyberattacks and the Untold Story of Those Who Respond”
“What Happens to Heroes?”
Available on Amazon
Not a Good Look, AI: What Happens to Privacy When Glasses Get Smart?

They look just like a regular pair of Ray-Bans. But behind the dark lenses?
Cameras. Microphones. AI-powered assistants. All quietly recording, analysing, and storing data, sometimes even in real-time. And unless you’ve signed up for a starring role in someone else’s life capture experiment, you probably didn’t give your consent.
Welcome to the era of AI smart glasses. From Meta’s Ray-Ban collaboration to Apple’s rumoured 2027 “N50” model, these wearable devices are being marketed as the next great leap in tech-fuelled convenience. But let’s be clear: the privacy and safety implications are vast, and the current framing of “innovation” isn’t just tone-deaf, it’s incompatible with the legal, ethical, and social expectations many of us still cling to.
The Convenience Trap
Tech companies are fond of telling us that the world is our canvas. But when wearable cameras are normalised, the line between public space and personal privacy starts to blur or vanish entirely. A casual conversation in a park, a tired school run, or a fleeting moment of vulnerability can now be captured, stored, uploaded, analysed, and replayed… without your knowledge.
Under the EU’s GDPR, that’s a problem. If you’re identifiable in an image or video, and that data is processed in any meaningful way — boom — you’ve entered the realm of personal data. Consent, or a clearly lawful basis, is required. But a faint LED on someone’s glasses isn’t meaningful notice, let alone consent. And unless you’re willing to interrogate every stranger’s eyewear, your privacy becomes an afterthought.
Whose Safety Are We Prioritising?
The bigger concern here isn’t just the footage or the transcription, it’s who is in control. These aren’t passive devices; they’re active collectors. And if someone uses them to stalk, harass, or surveil? There’s very little in the design of these products — or their policies — to stop them.
For women, children, and vulnerable communities, this isn’t about convenience. It’s about control. About power. About whether you can walk through the world without being turned into content. And as someone deeply committed to trust, safety, and ethical tech, I’m not interested in waiting for the inevitable harms before we act.
Apple’s Coming Glasses: Better by Design?
Apple’s brand is built on privacy. No backdoors. Local processing. “What happens on your iPhone stays on your iPhone.” (For transparency’s sake, let me be clear that that’s one of the reasons I’ve been a loyal Apple user for so long.) So it’s worth noting that while Apple’s smart glasses are reportedly on the way, early leaks suggest they may not include cameras — a deliberate choice that would set them apart from Meta’s model.
Is it a privacy-conscious decision? Or a limitation of current hardware? We can’t know for sure. But it hints at an important truth: these choices are design decisions. Companies can — and should — choose to build safety in from the start.
Normalising Surveillance by Stealth
Let’s not pretend this is inevitable. This is a deliberate normalisation of low-grade, always-on surveillance — often justified as “cool”, “hands-free”, or “the future”. But that future increasingly looks like one where people are filmed and analysed without permission, where opt-out isn’t possible, and where companies quietly collect context-rich data from passers-by, not just users.
And here’s the kicker: when your data is captured by someone else’s glasses, you have no visibility, no access rights, and no ability to delete it. It’s surveillance with plausible deniability. And it sets a chilling precedent.
So What Do We Do?
If we want a future where trust and safety aren’t sacrificed on the altar of “innovation”, we need to draw a line — and soon.
We need:
- Stronger enforcement of privacy laws when it comes to wearable tech.
- Design-led accountability, not disclaimers buried in T&Cs.
- A digital culture that centres consent — not just for the user, but for everyone in frame.
This isn’t a fight against progress. It’s a demand for thoughtful, human-centred design. Because if the only people who get privacy are the ones holding the camera, then what we’re building isn’t the future — it’s a panopticon in designer frames.
And frankly? That’s not a look I’m ready to normalise.
About the Author:
Kim Chandler McDonald is the Co-Founder and CEO of 3 Steps Data, driving data/digital governance solutions.
She is the Global VP of CyAN, an award-winning author, storyteller, and advocate for cybersecurity, digital sovereignty, compliance, governance, and end-user empowerment.
The Human Factor in OT Security Incidents: Understanding Insider Threats and Social Engineering in Critical Infrastructure by Rupesh Shirke
Introduction The human factor is an essential but often overlooked security component in Operational Technology (OT) systems within critical infrastructure. Although many technological defenses have improved, insider threats and social engineering remain serious risks due to inherent vulnerabilities in human behavior and organizational culture. Operators of OT …
Used, Not Consulted: When AI Trains on Our Work Without Consent
CyAN Context At CyAN, we often talk about trust, governance, and transparency as pillars of a secure digital future. But what happens when those principles are ignored, not in a breach or a ransomware attack, but in the slow, quiet erosion of creator rights? As a cybersecurity professional …
Special Feature – 10th Anniversary
Editor-in-Chief: Kim Chandler McDonald, Co-Founder and CEO of 3 Steps Data, Global VP at CyAN. An award-winning author and advocate for cybersecurity, compliance, and digital sovereignty, Kim drives global conversations on data governance and user empowerment. Author: Saba Bagheri, PhD, Cyber Threat Intelligence Manager at …