Most People Think They Understand Their Rights. They Don’t.

Recent conversations around intelligence powers and surveillance frameworks in Australia, particularly proposed changes to ASIO’s compulsory questioning powers, have sparked predictable reactions. Some are alarmed, others are dismissive, and most fall somewhere in between, unsure what is real, what is exaggerated, and what it all means for them. Stepping back from the noise, however, reveals something more important than the immediate debate.
The gap nobody talks about
There is a quiet gap that sits beneath almost every discussion about technology, privacy, and governance. It is not about what systems can do, but about what people believe those systems can do. Right now, that gap is wider than most Australians realise, and it is becoming more significant as legal frameworks and technical capabilities continue to evolve in parallel but not always in alignment.
When the temporary becomes permanent
We are watching the slow normalisation of exceptional powers. This is not a new phenomenon. In the early 2000s, governments around the world introduced expanded intelligence capabilities in response to a rapidly changing threat landscape. Australia was no different, and these powers were introduced with safeguards, including sunset clauses designed to force reconsideration over time. That detail matters more than it might initially appear.
A sunset clause is not simply a legislative mechanism. It is an acknowledgement that a power sits outside the ordinary boundaries of a democratic system. It creates a built-in pause and an opportunity for collective reflection. What we are seeing now is not just the continuation of those powers, but their quiet transition into permanence, and that shift deserves far more attention than it is currently receiving.
Why permanence changes behaviour
The real question is not whether these powers exist, nor whether they have been used frequently or sparingly. The more important question is what happens when something designed to be temporary becomes part of the permanent architecture. Permanence changes behaviour, even when the powers themselves are rarely exercised.
The existence of these capabilities shapes how people think, how organisations operate, and how trust is formed. It introduces a layer of uncertainty that is difficult to quantify but impossible to ignore. People do not need to expect to be targeted for that shift to occur. It is enough that the boundaries of what could happen become less clear over time.
Where this meets cybersecurity and trust and safety
This is where the conversation intersects directly with cybersecurity and trust and safety. Cybersecurity is not simply about preventing breaches or responding to incidents. At its core, it is about creating environments where boundaries around access, control, and accountability are clearly defined and consistently understood. Trust and safety extends that idea further by focusing on whether systems behave in ways that are predictable, explainable, and proportionate to the risks they are designed to address.
When those boundaries are unclear, trust does not disappear overnight. It erodes gradually, often without a single triggering event. Importantly, that erosion can occur even when nothing goes wrong, because uncertainty itself becomes a factor in how people engage with systems and institutions.
The problem with the usual framing
This is where the broader discussion often drifts off course. It becomes framed as a binary choice between security and freedom, or between safety and individual rights. That framing may be emotionally compelling, but it is too simplistic to be useful in practice. The more relevant question is how systems are designed and how that design shapes the need for trust.
What happens when the systems we rely on require trust in order to function, but provide limited visibility into how that trust is exercised? This is where tension begins to build, and it is also where many existing frameworks begin to show their age.
Risk no longer sits in one place
We are now operating in an environment where data moves quickly, relationships are distributed, and individuals are often connected to events not by intent but by proximity. The idea that only clearly defined “targets” are affected by systems is increasingly outdated. In practice, the impact of these frameworks is often felt by those adjacent to an issue, not just those at the centre of it.
That dynamic will feel familiar to anyone working in cybersecurity. Supply chains, third-party risk, identity systems, and data-sharing models all operate on the same principle. Risk is rarely isolated; it propagates through connections, often in ways that are not immediately visible.
This is a systems question, not a panic moment
This is not a flaw in the system. It is a characteristic of how modern systems operate. However, it does mean that our understanding of risk, accountability, and control needs to evolve alongside those systems. Without that evolution, the gap between perception and reality will continue to widen.
None of this is an argument against the role of intelligence agencies or preventative capability. Governments have a responsibility to act in the interest of national security, and that has always been the case. The question is not whether these powers should exist, but how they are bounded, how they are overseen, and how clearly they are understood by the people they ultimately affect.
Where the real risk sits
The real risk is not always found in the use of a power. It often lies in the gradual shift of what becomes accepted without question. Over time, that shift can reshape expectations, behaviours, and the level of trust people place in the systems around them.
Closing the gap between perception and reality does not require outrage. It requires clarity. It requires a willingness to move beyond surface-level debates and engage with the structure of the systems themselves, including how they are designed, how they evolve, and where accountability genuinely sits.
Designing for trust, not assuming it
In cybersecurity and in trust and safety, one principle continues to hold true. Trust should not be assumed. Where possible, reliance on it should be designed out of the system, supported instead by clear controls, transparency, and verifiable accountability.
Most people assume that their rights are fixed, well defined, and widely understood. The reality is more complex, and that complexity is not always visible until something shifts.
If there is one thing worth paying attention to right now, it is not just what is changing, but how quietly those changes are taking place.
About the Author
Kim Chandler McDonald is the Co-Founder and CEO of 3 Steps Data, a company delivering data and digital governance solutions.
She is the Global VP of CyAN, an award-winning author, storyteller, and advocate for cybersecurity, digital sovereignty, compliance, governance, and end-user empowerment.