Recent Posts

Week 21 – Multiple high-severity vulnerabilities in VMware Cloud Foundation

19 – 25 May 2025: Multiple high-severity vulnerabilities were responsibly disclosed in VCF by Gustavo Bonito of the NATO Cyber Security Centre. Among these, our #CVEOfTheWeek is CVE-2025-41229, a directory traversal vulnerability which might allow a malicious actor with network access to …

Cyber (In)Securities – Issue 150 – Snapshot Edition

You can download this edition using the download icon at the bottom. To enlarge the view, click the fullscreen icon on the bottom right. All article titles inside the flipbook are clickable links.

Cybersec Forum 2025

Our friends at Cyber Made in Poland are holding the 2025 Cybersec Expo & Forum in Krakow, Poland from June 11-12.

Join 2,000 participants and meet over 100 partners and exhibitors at Tauron Arena Kraków.

Check out this great event for hands-on product demos, workshops and training sessions, data protection, expert talks, panel debates, and much more! Register now with the code CS35OFF and get EUR 35 off your ticket.

📍 Where? Tauron Arena Kraków
📅 When? June 11–12, 2025
👉 https://2025.cybersecforum.eu

Welcome New Member – Sapann Talwar from Australia

Please welcome our newest member from Australia, Sapann Talwar. Sapann is a seasoned cybersecurity and risk management practitioner with 26+ years of industry experience. He specializes in safeguarding data against evolving cyber threats and has a strong track record in developing and executing security strategies …

“What Happens to Heroes?” – Episode #5: The Unsung Heroes of the Digital World

The Psychological Impacts of Cyberattacks – This is the fifth episode in our ongoing series about the individuals who, in a matter of moments, transition from employees to rescuers in the aftermath of a destructive cyberattack. These are what I call the “Heroes.” Let’s Rewrite the …

Implicit Privacy is Dead – A Counterpoint (Sort Of)

My CyAN colleague Kim Chandler McDonald recently posted an article about smart glasses and privacy – it is worth a read.

I actually own a pair of the Meta/Ray-Ban glasses discussed. No, I did not (and would not) pay for them; they were a gift from a member of the responsible team at a recent event. I use them regularly, and while I fundamentally agree with many of Kim’s points, I believe the topic is worth discussing in a bit more detail, with additional context.

Facebook – Grr

First, I do not like Facebook/Meta. I believe that, like many (and not only US-based) Big Tech firms, it is on the whole an awful organisation, with a history of abuse, manipulation, and cynical, self-serving attitudes towards not only its users but entire societies, their institutions, and their norms. As not only the Facebook Papers and other leaks but also many public and observable behaviours both before and since Frances Haugen’s disclosures show, the firm and many of its employees and managers have engaged in activities that have been negligent at best, and actively destructive at worst, to many aspects of liberal, pluralistic democracy. The fact that the “Criticism of Facebook” Wikipedia article has almost as many words as the notoriously long-winded article about Pokémon is a highly unscientific but telling measure of the controversies the company has faced in its ~20 years of existence.

“People just submitted it. I don’t know why. They ‘trust me’. Dumb fucks.” -Mark Zuckerberg

Without making excuses for Meta: they are by far not the only company guilty of destructive, unethical, even illegal behaviour. Nor is this a new phenomenon, unique to tech companies. History is littered with examples of corporations doing everything from massive fraud, catastrophic pollution, harassment and persecution of individuals, and bribery, to active involvement in the overthrow of sovereign governments, outright murder, and even genocide.

Maybe It’s Just Capitalism

More recently, within the tech space, numerous firms have engaged in obnoxious activities. Google agreeing to censor content in China that the PRC’s government considered objectionable, Microsoft recently shutting off the email address of the ICC’s chief prosecutor (Karim Khan has stepped back due to sexual misconduct allegations, although I cannot find any link between Microsoft’s action and the accusations against Mr. Khan), and NSO Group’s sale of its Pegasus spyware to any number of authoritarian governments are only a few examples among IT product and service providers of all sizes and geographies.

There are few (!) moral absolutes, and companies can sometimes be at least partially excused for making certain compromises – although it’s up to us as consumers, shareholders, employees, and citizens to hold them to account for these as much as is reasonably possible. What cannot be forgiven, though, is when firms knowingly, sometimes even intentionally, do things that violate the average reasonable individual’s expectations of privacy.

That brings us back to abuse of privacy. Tech firms (and many governments) have frequently treated information and data as some sort of help-yourself candy bucket. Regulation has often lagged; while the US’ HIPAA was a powerful milestone in how certain personally identifiable information (PII) was to be protected, the European Union’s GDPR is currently one of the most powerful and clear tools for fighting abuses of privacy.

Nonetheless, laws are only as strong as how consistently they are respected and enforced. A fine that doesn’t hurt is a fee, and while the GDPR allows fines of up to 4% of an offending firm’s global turnover in the previous year, both judicial precedent and supervisory enforcement of the GDPR and other EU consumer protection rules are still developing. That said, the GDPR has resulted in some significant, often eye-watering punishments, such as Ireland’s EUR 1.2bn fine against Meta in 2023, in addition to several other penalties for violations of data protection and security requirements.
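
To put that 4% ceiling in perspective, here is a minimal sketch in Python of how the GDPR Article 83(5) maximum works out – the greater of EUR 20 million or 4% of prior-year worldwide turnover. The turnover figure below is purely hypothetical, not any real firm’s revenue:

```python
# Minimal sketch: GDPR Article 83(5) caps fines at the greater of
# EUR 20 million or 4% of the prior year's worldwide annual turnover.

def max_gdpr_fine(global_turnover_eur: float) -> float:
    """Theoretical upper bound on a GDPR fine for a given turnover."""
    return max(20_000_000.0, 0.04 * global_turnover_eur)

# Hypothetical firm with EUR 100bn in global turnover:
print(f"EUR {max_gdpr_fine(100e9):,.0f}")  # EUR 4,000,000,000
```

Measured against that ceiling, even an eye-watering EUR 1.2bn fine sits well below the theoretical maximum for a firm of that scale.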

Following the enthronement of Donald Trump as president in January of 2025, the United States took an increasingly hostile tone towards the EU and many of its laws. US tech CEOs, including Jeff Bezos (Amazon), Mark Zuckerberg (Facebook), Tim Cook (Apple), and Elon Musk (Twitter), bent the knee, not least at Trump’s inauguration, one assumes in search of US government support against European regulations. European rules have often caused American firms to scream absolute bloody murder, ranging from the mandate to use USB-C or similar “standard” connectors for phone chargers (wah wah, why won’t everyone just use Lightning? Because it sucks, for one…) to the Digital Services Act package and other regulations and directives.

The Guilty Glasses

We were talking about sunglasses, weren’t we? I like these.

  • they’re comfortable and well built. Ray-Ban owner EssilorLuxottica is not a great company, but it does make some good (if overpriced) sunglasses. Again, I got these for free; I do not advocate anyone paying for a pair.
  • the audio quality, both of the open-ear speakers and of the microphone, is outstanding. I do not own a pair of Bluetooth earpieces that match these glasses’ quality for listening to podcasts and taking calls.
  • the camera quality, both for photos and video, is also excellent. They are a phenomenal hands-free recording tool.

The glasses require a Facebook account, but I just created a throwaway one with a disposable email address, not least because when I tried to delete my old “real” account years ago after not having used it for at least 5 years, it was just about impossible. I have no idea how the AI feature works, and I don’t care – I don’t need it, and I don’t feel like using it in any case. The only thing that could possibly motivate me to use any “intelligence” would be some sort of augmented reality projection on the insides of the lenses themselves, which is a long way off.

And while I’m probably showing my age, I’m also showing off the fact that I’m not completely cringe – you will not catch me dead prompting the thing with “Okay Meta”, any more than I will talk to Siri, Alexa, Copilot, eBongo, StalinAI, Horseboy, or any other abominable brain fart of the tech world. The moment someone hacks these glasses to work with a mobile app that replaces Meta’s crappy, unstable software (which is currently only really good for importing images/video), I’m installing it.

What makes these distinct from, say, Google Glass? That is an absolutely legitimate question. My personal opinion is that Google Glass, while technologically very interesting, suffered from the types of people who used it more than the tech per se – there’s a reason for the term “Glassholes”. Also, Glass was specifically designed as an explicit tech accessory, while these are actually pretty good sunglasses.

Lastly, and most importantly, Glass was created in a regulatory and social vacuum. In 2012, GDPR was a long way off, and public consciousness of tech firms’ abuses of data and privacy was still relatively unformed and growing. Unease at public access to small, omnipresent, surveillance-capable technology was completely understandable. Plus, they were ugly as sin.

I think there is a place for devices like Google Glass, and it’s a shame they appeared when and how they did. For example, researchers, technicians, doctors, even bus drivers can benefit enormously from the kind of hands-free augmented reality technology the product offered.

Apple – A Paragon of iVirtue?

Kim’s article speaks highly of Apple, and expresses hope for the privacy features of Apple’s announced alternative to the Meta glasses. I own several iPhones, iPads, and Apple laptops. They are well-made, high-quality pieces of hardware – until something doesn’t work, at which point Apple’s attitude to customer support is basically, “bite us”. Nor is the Apple fan club any better – any design deficiency of Apple kit, such as the removal of the headphone jack, is often addressed with “you are stupid for not liking what Apple does”. Still, the OS and user experience are generally thoughtfully designed for a high degree of usability. The published security and trust model also appeals to me, even if Apple’s app store restrictions purely coincidentally allow the company much greater control over app revenues.

I don’t trust Apple, but I do distrust them significantly less than other device/software makers. Apple seems to understand that at least part of its attractiveness and ability to command premium prices – more so now that many rival smartphone makers have entered similarly stratospheric price ranges – stems from the perception that user data is safe. They’ve fought back against invasions of privacy, such as the UK’s demand for an encryption backdoor in Apple’s cloud service (see CyAN’s position on this topic) and the US FBI’s push for Apple to enable access to locked smartphones as part of an investigation. Apple’s statement on backdoors is unambiguous.

Meanwhile, the first page of a quick Google search for “Apple privacy issues” yields a long list of unflattering headlines – not so much of a privacy panacea now, is it?

An even more basic issue is that, as I pointed out above, iPhones are exorbitantly expensive. Even the more “affordably” priced 16e costs many times more than a basic Android model. I live in a reasonably prosperous, but still relatively low-income country, in a region where many people simply don’t have a lot of money to spare. There’s a peculiar arrogance to expecting a house cleaner or farm worker to invest such sums in branded, “secure” tech simply because it supposedly respects their privacy more.

My ultimate point is: I don’t trust any tech companies. Not Meta, not Microsoft, not Apple, not US firms, not EU firms, none. Some tech firms have been demonstrably less bad than others in terms of respecting users’ privacy as well as applicable laws. While I am willing to give Apple credit for its clearly stated stance on backdoors, and firmly believe that the company sees rational economic value in being viewed as favouring security and privacy, I also understand that companies are soulless, amoral constructs that will do whatever they can legally get away with in pursuit of shareholder value. Kudos to the rare leader who takes a stand.

Are You Safe? Here’s Some Paranoia

So, if you have (justified) problems with Meta glasses and the privacy issues they pose, at least to their user, here are a few questions for you:

  • Do you own a mobile phone? I do. Congratulations, you have an always-traceable, always-on piece of recording equipment in your pocket. Even if you trust the hardware and the OS it runs on (why?), there is always the likelihood that enabling location services for your map app, or the camera/microphone for your chat tool, allows the publisher to record information.
  • Do you use WhatsApp? I do. I do not like it, I don’t like that it is owned by Meta, I do not like that it incorporates Meta’s terrible “AI assistant”, and I do not trust it. But if you live in much of the world, it is inescapable for communicating with neighbours, family, even companies.
  • Do you use social media? I do. I’m on LinkedIn and Bluesky. LinkedIn sucks, I hate it, and it’s become progressively more of a spammy, enshittified, unusable walled garden since I joined around 2003.
  • Do you use any other mobile apps? Do you know what their publishers are sending via the “legitimate” channels needed for anything using some sort of connectivity? Sure, you can limit outgoing connections to scammy sites and resources, but if you need to connect somewhere for authentication purposes, to retrieve profile data, anything networked, you have no control over what’s being sent.
  • Do you visit websites? I do. Great. Do you conscientiously always deny tracking cookies, use privacy mode, browse via a VPN, use trusted browsers and plugins? I do what I reasonably can, without impacting functionality beyond what I can sacrifice and still get anything done.
  • Do you have or use a Windows PC? I do. I may get around to upgrading to Windows 11 some day, albeit with any number of the de-shittification scripts that purport to make the operating system usable. Do you know what the OS is sending to Microsoft in the background?
  • Do you have IoT devices in your home? I do. They’re all on a separate, firewalled network, with no obvious video recording devices directed towards any part of our home’s interior except for the webcam on my PC – and that’s pointed at the ceiling when I remember to move it. Do you trust that your Roomba isn’t sending your home layout to iRobot? That your webcam or home assistant from AliExpress isn’t sending information to China? Funny enough, I tried out a cheap Chinese home automation hub for my networked smoke detector – and it wouldn’t work unless you set the regulatory domain to “mainland China”, which speaks of a sort-of-kind-of respect for EU privacy regulation, while nonetheless totally missing the point. I also have a Ring doorbell (Amazon), which I trust exactly zero percent more than the Chinese kit.
  • Do you use cloud services or SaaS? Amazon, Salesforce, Google, Canva? I do (except for Canva, which I find unusable). These are all tech firms with a profit motive and the ability and motivation to track at least some of your activities.

So…Privacy Is No More?

Privacy is a laudable goal. It will always be subject to different social norms; a good example is photography. While in countries like the US photography in public is almost unlimited, Germany maintains the “right to your own image” (Recht am eigenen Bild) – with exceptions for image and video recording where persons are merely an incidental part of a background scene – which extends even to the right to have images of your house’s exterior blurred on public resources like Google Streetview. Most countries are somewhere in between. Which is “better” is irrelevant, and beyond the scope of this post – the societies that gave rise to the norms behind these rules are simply different.

However, progress is, for better or worse, unstoppable. Kim laments that “[…] a faint LED on someone’s glasses isn’t meaningful notice, let alone consent.” First off, the light isn’t all that faint. Second, while I absolutely concede that it’s not consent, it’s at least something. I can currently zoom in and covertly record someone with my phone camera from pretty far away. I own a Canon 70-200 f/2.8 lens with a 2x teleconverter. Big and bulky, but good luck spotting me if I really want to record you from the safety of my car. I can also buy, very cheaply, a tiny, easily hidden camera that has no such niceties as alerting someone, however subtly, that there’s recording going on. If you’re a fan of cyberpunk classics, the Zeiss Ikon artificial eyes that appear in William Gibson’s Sprawl series are something I predict will be on the consumer market within the decade.

For the purposes of this post, there are two kinds of privacy – your own, and that of others. I realise that much of what I’ve written addresses the safety, integrity, security, and privacy of your own information, but what about that of those around you? After all, the topic that led to these articles was the challenge posed by wearable recording technology, particularly when supplied by a firm that’s notorious for its awful attitude to privacy. The main difference between your own and others’ privacy is that when you are the one recording, it’s no longer only your own decision to make. Because of the predominance of surveillance technology, the multitude of companies selling it, and the sheer diversity of organisations and people using it, the discussion comes down to the exact same issues and arguments, with the sole difference that when it’s only about your own information, you (sometimes) have agency.

Do you know who is recording you, when, and why? Do you trust that they’re using their recordings responsibly? I do not. I also don’t really differentiate between some random person I meet on the street with a camera, and a generic company with CCTV pointed at a public place, whether a small mom & pop grocery store or a large megacorporation. Where do we then draw the line? Dashcams? I own one, and while the legal restrictions on their use are often unclear and even contradictory, I firmly assert that they are an increasingly necessary tool.

From the post I’m responding to: “And here’s the kicker: when your data is captured by someone else’s glasses, you have no visibility, no access rights, and no ability to delete it. It’s surveillance with plausible deniability. And it sets a chilling precedent.” 100%. The same goes for all of the above. That doesn’t make it OK, but singling out one product, even one made by a company with a ridiculously poor, even evil, record on privacy, is an illusory fix.

How Do We Fix This?

In short, we can’t, and we won’t. Keep reading.

I agree with all three of Kim’s calls for how to deal with this:

  • Stronger enforcement of privacy laws when it comes to wearable tech.
  • Design-led accountability, not disclaimers buried in T&Cs.
  • A digital culture that centres consent — not just for the user, but for everyone in frame.

However, I will add some caveats.

First, “design-led accountability” – absolutely necessary, when you can enforce it. This includes privacy certification not only of device design and production, but also of their sale. As mentioned, the glasses we’re talking about at least have a notification light. Is that sufficient? I don’t know; I’m open to arguments either way. But if someone sells a device that isn’t certified because it doesn’t meet whatever rules we agree on, they should be subject to fines. This is also a mechanism that will inevitably have a ton of holes – would you fine me if I import a buttonhole camera from a Chinese marketplace that doesn’t give a crap about European privacy rules? That isn’t to say such a mechanism shouldn’t be tried – but it must be accompanied by a very strong understanding of its limitations.

Second, a “digital culture that centres consent” – sure. Absolutely. I am very happy to contribute to this, and to tell people to stop filming me in public. I think we should all support stronger norms, and call out others who are behaving like antisocial arseholes. Laws only go so far; public pressure and shaming must always have a place.

Lastly, laws – this is the big one. Companies don’t care about norms. They care about regulations. Regulations must be principles-based, clear, and consistently enforced. And as I wrote above, fines have to hurt in order to matter. When someone actively breaks the law, whether with camera glasses or a telephoto lens, they should be treated no differently from a company that abuses others’ rights and protections.

It’s All About Risk

None of these will “fix” the “problem”. I put those words in quotes because the problem is only a problem if we make it so. Circling back to my questions above about what you use: one of my recurring gripes is that almost no average person today understands risk, let alone the value of their data and how it can be abused. Ask any person on the street whether they’re okay with some random fool recording them with glasses, and they’ll probably say “no”. However, that’s a loaded question – would they be okay with wearable tech that makes hands-free recording easier? Maybe. At the risk of some sophistry and moral relativism, it’s all a matter of how you frame the issue.

I’m not advocating a social panopticon, far from it. I’m calling for people and institutions not only to be more conscious of others, but to be more aware of what really matters – and where the real risks lie. It’s utter nonsense to call for an absolute restriction on the recording and use of your private data, image, whatever, when we already make almost necessary evil compromises (WhatsApp, anyone?) in order to function in a modern society, and when our own phones and the apps that run on them are often privacy nightmares.

It’s important to understand that you as an individual don’t matter to a company that wants to collect your information. You are a nameless statistical data point, and you exist to be sold to, or to serve as part of market research. In the worst case, you may end up the victim of stalking, harassment, fraud, or other abuse by an unscrupulous crazy or criminal. With very few exceptions, this is not the result of them explicitly seeking you out, but simply of them finding something about you among a whole bunch of people that makes you appear attractive or vulnerable as a target – similar to a scammer blasting out millions of phishing emails and only investing time in the vanishingly small percentage of people who actually reply. Do what you can to gum up the works, whether it’s lying on surveys or disabling cookies. If you are a particularly bloody-minded child, as I am, sites like 419eater.com are a wonderful way to waste scammers’ time.

We are surrounded by surveillance and data collection. That does not mean we should be OK with it – but we must understand which of our data really matters, and pick our fights. Someone being a jerk, or even violating data privacy laws, with their Meta glasses is not OK and should not be tolerated. But understand that this is far from the only, let alone the worst, way in which your privacy is being invaded on a daily basis – and don’t make the mistake of assuming some other product is going to somehow fix the issue.

In short, inform yourself and stay vigilant. Easy, huh?

New Podcast – Some More Terrorism, With Bjørn Ihler

Bjørn and John return to discuss additional aspects of terrorism and extremism

AI Can’t Fix What’s Fundamentally Broken by Michael McDonald

Organisations today are being overwhelmed by the volume of AI-enabled tools entering the market. Promises of productivity gains, efficiency boosts and faster decision-making are everywhere. But history tells us this isn’t new. Every few years, technology arrives promising to be the next miracle cure.

AI has potential. But it is not a magic wand. If anything, it increases the need for clarity around systems, controls and accountability. If you are not approaching it with your eyes wide open, you are at risk — technically, legally and strategically.

Before rushing to integrate AI into critical workflows, take a step back and ask some foundational questions:
• Where is your data being processed?
• Is it being cached, and if so, where?
• Could it be used, now or in the future, to train someone else’s model?

If you cannot answer these questions with certainty, you are not in control.
And if your vendor cannot provide clear, auditable responses, then AI is not the next step. Fixing your foundations is.

Start with the Architecture
AI should not be treated as a shortcut to modernisation. It cannot fix fragmented systems or weak data governance. If your underlying architecture is not well understood or managed, layering AI on top may do more harm than good.

You need:
• Clear visibility over how data flows through your systems
• A governance model aligned to your legal, regulatory and ethical obligations
• Infrastructure that can be audited, tested and explained
• Defined rules around data retention, access and deletion

Without this, you cannot implement AI safely or responsibly.
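
As a purely illustrative sketch of what “visibility over data flows” can mean in practice, here is a toy record-of-processing structure, with a check that flags flows missing retention or legal-basis metadata. All system names and fields are invented for the example; real registers (e.g. GDPR Article 30 records) are far richer:

```python
# Toy data-flow register: each entry records where data goes and under
# what rules. Entries with missing governance metadata get flagged.

flows = [
    {"system": "crm", "destination": "eu-west datacentre",
     "data": "customer contact details", "retention_days": 365,
     "legal_basis": "contract"},
    {"system": "analytics", "destination": "third-party SaaS",
     "data": "usage telemetry", "retention_days": None,
     "legal_basis": None},
]

def audit(flows):
    """Print every flow that lacks retention or legal-basis metadata."""
    for f in flows:
        missing = [k for k in ("retention_days", "legal_basis") if f[k] is None]
        if missing:
            print(f"{f['system']} -> {f['destination']}: missing {', '.join(missing)}")

audit(flows)  # flags the analytics flow
```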

Know What You’re Signing Up For
Several vendors now include terms allowing customer data to be used for AI model training. These clauses are often buried in general language about “service improvement” or “performance optimisation.” But once your data is used for training, there is no reversing it. You’ve contributed to a model that is no longer fully within your control.

This is not theoretical. It is happening now. And unless you’re actively checking these details during procurement and implementation, your organisation is exposed.
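
Actively checking those details can start with something as crude as a keyword scan of a vendor’s terms during procurement – a toy first pass that surfaces clauses for human review, never a substitute for legal advice. The phrases and the filename below are illustrative assumptions:

```python
# Crude red-flag scan of vendor terms. A hit means "a lawyer should
# read this clause", not "this vendor is guilty".

RED_FLAGS = [
    "model training", "train our models", "service improvement",
    "performance optimisation", "aggregated data",
]

def scan_terms(text: str) -> list[str]:
    """Return the red-flag phrases present in the vendor's terms."""
    lowered = text.lower()
    return [phrase for phrase in RED_FLAGS if phrase in lowered]

# Hypothetical file containing the vendor's terms and conditions:
with open("vendor_terms.txt", encoding="utf-8") as fh:
    for hit in scan_terms(fh.read()):
        print(f"Review the clause containing: '{hit}'")
```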

Trust doesn’t come from a product label or a vendor webinar. It has to be built into the design, deployment and ongoing operation of every system you use.

Compliance is Still Your Responsibility
Introducing AI into your business does not reduce your legal obligations. It increases them.

Whether you’re operating under the GDPR, Australia’s Privacy Act, or other regulatory frameworks, the burden of proof remains on you. That includes knowing where data sits, who can access it, and how it’s being used — even when processed by third parties.

The regulatory landscape is also shifting rapidly. Governments are developing AI-specific legislation that focuses on transparency, accountability and risk management. Organisations that fail to build those principles into their systems now will find themselves playing catch-up under pressure.

Smarter, Not Shinier
If you are considering an AI-enabled tool, apply the same rigour you would to any other piece of critical infrastructure:
• Minimise data collection. If you don’t need it, don’t collect it (see the sketch after this list).
• Use a zero-trust approach. Never assume access is safe without verification.
• Keep control boundaries tight. Know exactly who sees what, when and why.
• Design your exit plan. If you cannot leave a vendor without significant disruption, your system is not resilient.
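
To make “minimise data collection” concrete, here is a minimal, hypothetical sketch of a deny-by-default boundary: only explicitly allow-listed fields ever reach the vendor, and everything else is dropped. The field names and allow-list are invented for illustration:

```python
# Deny by default: only explicitly allow-listed fields leave the boundary.

ALLOWED_FIELDS = {"ticket_id", "category", "description"}  # hypothetical

def minimise(record: dict) -> dict:
    """Return a copy of the record containing only allow-listed fields."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

raw = {
    "ticket_id": 4711,
    "category": "billing",
    "description": "Invoice total looks wrong",
    "customer_email": "jane@example.com",  # the model never needs this
    "date_of_birth": "1980-01-01",         # nor this
}

payload = minimise(raw)
assert "customer_email" not in payload  # verified before anything is sent
```

The allow-list is the point: the question is never “what must we redact?” but “what do we positively need to send?”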

None of this should be treated as optional. These principles form the foundation of good system design — with or without AI.

Ask Better Questions
Before you approve a new AI capability, ask:
• Do we know exactly how and where our data will be processed?
• Are we confident that it won’t be retained, reused or repurposed without our knowledge?
• Can we prove that we are meeting our compliance and governance obligations?

If you’re not sure, press pause. Getting the basics right will serve you far better than being first to implement a tool you don’t fully understand.

AI is not a panacea. It is a powerful extension of your existing systems and processes. Used well, it can improve what you already do. Used poorly, it can embed risk deep into your infrastructure.

Don’t be blinded by promises. Be clear about the problems you’re solving, and the systems you’re solving them with. The organisations that benefit from AI won’t be the ones that moved fastest. They’ll be the ones that laid the groundwork.


About the Author:

Michael McDonald is a CTO and global expert in solution architecture, secure data flows, zero-trust design, privacy-preserving infrastructure, and cross-jurisdictional compliance.

Not a Good Look, AI: What Happens to Privacy When Glasses Get Smart?

They look just like a regular pair of Ray-Bans. But behind the dark lenses? Cameras. Microphones. AI-powered assistants. All quietly recording, analysing, and storing data, sometimes even in real time. And unless you’ve signed up for a starring role in someone else’s life-capture experiment, you probably …