AI Can’t Fix What’s Fundamentally Broken
by Michael McDonald

Organisations today are being overwhelmed by the volume of AI-enabled tools entering the market. Promises of productivity gains, efficiency boosts and faster decision-making are everywhere. But history tells us this isn’t new. Every few years, a new technology arrives promising to be the next miracle cure.
AI has potential. But it is not a magic wand. If anything, it increases the need for clarity around systems, controls and accountability. If you are not approaching it with your eyes wide open, you are at risk — technically, legally and strategically.
Before rushing to integrate AI into critical workflows, take a step back and ask some foundational questions:
• Where is your data being processed?
• Is it being cached, and if so, where?
• Could it be used, now or in the future, to train someone else’s model?
If you cannot answer these questions with certainty, you are not in control.
And if your vendor cannot provide clear, auditable responses, then AI is not the next step. Fixing your foundations is.
Start with the Architecture
AI should not be treated as a shortcut to modernisation. It cannot fix fragmented systems or weak data governance. If your underlying architecture is not well understood or managed, layering AI on top may do more harm than good.
You need:
• Clear visibility over how data flows through your systems
• A governance model aligned to your legal, regulatory and ethical obligations
• Infrastructure that can be audited, tested and explained
• Defined rules around data retention, access and deletion
Without this, you cannot implement AI safely or responsibly.
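One way to make the last of those requirements concrete is to encode retention and access rules as data that can be audited and unit-tested, rather than leaving them as tribal knowledge. A minimal sketch in Python — the dataset names, roles and retention windows here are hypothetical, purely for illustration:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical example: express retention and access rules as explicit,
# testable policy objects instead of undocumented convention.
@dataclass(frozen=True)
class RetentionRule:
    dataset: str
    max_age_days: int            # records older than this must be deleted
    allowed_roles: frozenset     # roles permitted to read the dataset

RULES = {
    "support_tickets": RetentionRule("support_tickets", 365,
                                     frozenset({"support", "audit"})),
    "payment_logs": RetentionRule("payment_logs", 90,
                                  frozenset({"finance"})),
}

def may_access(dataset: str, role: str) -> bool:
    """Deny by default: unknown datasets or unlisted roles get no access."""
    rule = RULES.get(dataset)
    return rule is not None and role in rule.allowed_roles

def is_expired(dataset: str, created: date, today: date) -> bool:
    """True once a record has outlived its retention window."""
    rule = RULES[dataset]
    return today - created > timedelta(days=rule.max_age_days)
```

Because the rules are plain data, they can be reviewed in an audit, diffed in version control, and asserted against in tests — exactly the auditability the architecture checklist calls for.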
Know What You’re Signing Up For
Several vendors now include terms allowing customer data to be used for AI model training. These clauses are often buried in general language about “service improvement” or “performance optimisation.” But once your data is used for training, there is no reversing it. You’ve contributed to a model that is no longer fully within your control.
This is not theoretical. It is happening now. And unless you’re actively checking these details during procurement and implementation, your organisation is exposed.
Trust doesn’t come from a product label or a vendor webinar. It has to be built into the design, deployment and ongoing operation of every system you use.
Compliance Is Still Your Responsibility
Introducing AI into your business does not reduce your legal obligations. It increases them.
Whether you’re operating under the GDPR, Australia’s Privacy Act, or other regulatory frameworks, the burden of proof remains on you. That includes knowing where data sits, who can access it, and how it’s being used — even when processed by third parties.
The regulatory landscape is also shifting rapidly. Governments are developing AI-specific legislation that focuses on transparency, accountability and risk management. Organisations that fail to build those principles into their systems now will find themselves playing catch-up under pressure.
Smarter, Not Shinier
If you are considering an AI-enabled tool, apply the same rigour you would to any other piece of critical infrastructure:
• Minimise data collection. If you don’t need it, don’t collect it.
• Use a zero-trust approach. Never assume access is safe without verification.
• Keep control boundaries tight. Know exactly who sees what, when and why.
• Design your exit plan. If you cannot leave a vendor without significant disruption, your system is not resilient.
None of this should be treated as optional. These principles form the foundation of good system design — with or without AI.
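The first principle — minimise data collection — can be enforced in code at the boundary where data leaves your control. A hedged sketch, assuming a record is about to be sent to an external AI service; the field names and allow-list are illustrative, not a real schema:

```python
# Hypothetical illustration of data minimisation: only an explicit
# allow-list of fields ever leaves your boundary; everything else,
# including any field added to the record later, is dropped by default.
ALLOWED_FIELDS = {"ticket_id", "category", "summary"}

def minimise(record: dict) -> dict:
    """Return only the fields explicitly approved for external processing."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

# Usage: sensitive fields never reach the vendor, even if present upstream.
outbound = minimise({
    "ticket_id": 1042,
    "category": "billing",
    "summary": "Customer disputes a charge",
    "email": "customer@example.com",     # stripped before sending
    "card_number": "4111111111111111",   # stripped before sending
})
```

The design choice matters: an allow-list fails closed, whereas a deny-list of "known sensitive" fields fails open the moment a new field appears.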
Ask Better Questions
Before you approve a new AI capability, ask:
• Do we know exactly how and where our data will be processed?
• Are we confident that it won’t be retained, reused or repurposed without our knowledge?
• Can we prove that we are meeting our compliance and governance obligations?
If you’re not sure, press pause. Getting the basics right will serve you far better than being first to implement a tool you don’t fully understand.
AI is not a panacea. It is a powerful extension of your existing systems and processes. Used well, it can improve what you already do. Used poorly, it can embed risk deep into your infrastructure.
Don’t be blinded by promises. Be clear about the problems you’re solving, and the systems you’re solving them with. The organisations that benefit from AI won’t be the ones that moved fastest. They’ll be the ones that laid the groundwork.
About the Author:
Michael McDonald is a CTO and global expert in solution architecture, secure data flows, zero-trust design, privacy-preserving infrastructure, and cross-jurisdictional compliance.