Knock Knock…Who’s There? AI. AI Who? AIm Taking Yer Jerbs.

“Here’s the thing nobody outside of tech quite understands yet: the reason so many people in the industry are sounding the alarm right now is because this already happened to us. We’re not making predictions. We’re telling you what already occurred in our own jobs, and warning you that you’re next.”

A colleague who’s spending a lot of time playing with multiple LLMs as part of their governance and board training recently sent me an article titled “Something Big Is Happening” by Matt Shumer about the ongoing and growing impact of AI on jobs.

Aside from the fact that this reads very much like it was at least partially written by AI (the staccato, overly dramatic short-paragraph structure is a pretty good tell), it’s an interesting take. It’s highly relevant to our industry, not least because of AI’s increasing use in areas like:

  • the plethora of AI-enabled, -enhanced, or -focused security tools – threat intel analytics, vulnerability scanning, log parsing, you name it
  • code and configuration development and management
  • even strategy and organisational planning

The problem is not that LLMs are doing veteran professionals’ jobs better than us – they’re not, and they never will. AGI is a myth, and the very term “AI” is marketing nonsense.

The first massive issue is that the uninformed bystander – not only Joe Schmoe Averageslob, but also your CEO, CIO, and most importantly, your CFO – can’t tell the difference. They will inevitably conflate the kind of routine grunt work that AI is good at supporting with “AI can do ALL THE THINGS”, and all they see is the cost delta between paying expensive, knowledgeable people and paying for less expensive (in the short term) LLMs. Tech companies, particularly AI shops, certainly aren’t going to disabuse them of that notion.

The second, and in the long term more damaging, challenge is that, precisely by taking on the kinds of entry-level routine tasks they’re good at, LLMs remove an important learning tool for the next generation of professionals. Whether companies no longer want to pay and train entry-level career aspirants, or the kids themselves no longer have any motivation to do the grunt work that’s part of acquiring foundational skills, I predict that within the next 5-10 years we’re going to see a massive skills shortage (including a decline in critical thinking abilities) not only across the cybersecurity sector, but across most areas of economic activity.

In case you haven’t figured it out yet, this is really, really bad – because until we have true AGI (again, not anytime soon) the machine’s always going to make mistakes. The only way to use these tools effectively – and tools are all they are – is for a knowledgeable expert to understand not only how to sanity-check what they vomit out, but also how to prompt them effectively in the first place. I often estimate that it takes at least 5, more likely 10, years to create a “good” security professional – the sort of cross-domain and contextual knowledge, inter-personal relationships, and grasp of factors ranging from psychology to geopolitics and finance that the role demands is not something you acquire through a training course. It requires curiosity, encouragement, and an environment conducive to getting your hands dirty playing with new toys and concepts. What happens when we take away that opportunity?

Also, the risk manager in me gets all itchy at the thought of handing over this much operational control to a third party – especially when it involves the kind of “trust me, bro” attitude often evinced by the likes of Sam Altman of OpenAI and other tech oligarchs. Remember, the cloud is just someone else’s computer. So is your LLM provider. Even if you’re running everything in-house (unlikely), do you truly have the kind of transparency you need to be sure that you’re not introducing anything from critical single points of failure to outright malicious code?

I am an avid LLM user for a wide range of tasks. Whether it’s helping me research and summarise regulatory frameworks, providing me with ideas for presentation and document structures, creating straw man strategy papers, or reviewing my work just in case I missed something obvious, AI is a great tool and timesaver. That’s all it is, though – a tool, and like any tool, it has its place and a right way of being used.

This is where I like the article’s conclusions – they apply as nicely to the information security and risk management industry as they do to any other area where AI services are making inroads. AI is already here, it’s already throwing people under the bus, and it will continue to do so. There are jobs that absolutely need to disappear – something that’s been true for every industrial innovation from the wheel to the printing press to the mechanical loom and onward. It’s up to us as a society and as responsible professionals to understand how to both cushion the impact of AI-as-innovation and mitigate the damage that can be done by cynical tech firms that don’t care about the outcome of uncontrolled AI adoption. But the best way for anyone to deal with this is to learn what’s involved and adapt accordingly – when someone hands you a hammer, learn how to use it.