Reflections on Day Two of the 2025 Neurotechnology Summit: Capability, Governance and the Questions We Must Answer

If Day One explored what neurotechnology means for individual minds, Day Two confronted what it will mean for societies, states and the systems that govern them. The conversations shifted from personal rights, clinical ethics and emerging applications to national capability, infrastructure, defence, public policy and commercial pathways. And yet, the undercurrent remained the same: if neurotechnology is to be trusted, it must be shaped by people who understand risk, governance and how fragile public confidence can be.

Walking into the second day on behalf of CyAN, I felt the same energy I sensed on Day One: an instinctive recognition that cybersecurity, trust-and-safety and governance thinking cannot sit at the edges of neurotechnology. They must be part of its foundation.

Below are the tensions that defined Day Two.

1. National Ambition vs Societal Trust

Anchored in “Neurotechnology and Public Policy”.

Day Two opened with a shift in scale. If Day One asked us to consider the ethics of individual minds, Day Two asked us to consider the governance of entire populations. The public policy session made it clear that Australia is beginning to articulate a national ambition around neurotechnology, but it has not yet earned the societal trust that ambition requires.

Much of the discussion revolved around governments treating neurotechnology as future infrastructure: something that could underpin healthcare, workforce transformation, national security, education and even aspects of the digital economy. But ambition alone cannot carry a sector as sensitive as this one. Neurotechnology touches identity, cognition, autonomy and vulnerability in ways no other technology does. Trust is not an accessory. It is load-bearing.

The room kept circling the same problem: policy desire is racing ahead of public confidence. People will not accept neurotechnology that is introduced quietly, opaquely or without safeguards. Nor will they accept policy frameworks that treat neurotechnology as a simple extension of AI governance. It isn’t. It requires its own rights language, regulatory architecture and ethical imagination.

Cybersecurity, governance and trust-and-safety professionals understand how quickly trust erodes and how hard it is to rebuild. If Australia wishes to lead in neurotechnology, it must earn trust at the same speed it develops capability. That begins with frameworks that protect cognitive liberty, neuroprivacy and consent before deployment — not after.

2. Clinical Promise vs Clinical Boundaries

Anchored in “Neurotechnology in Healthcare” and “Neuroethics and Emerging Challenges”.

The healthcare and neuroethics sessions traced a delicate line between therapeutic possibility and the boundaries clinical practice must hold. Neurotechnology is moving from research environments into medical settings, and with that comes a shift in what “care” means, who delivers it and how responsibility is shared.

The potential is extraordinary: improved communication for people with paralysis, new treatments for neurological conditions, enhanced quality of life and clinical breakthroughs once thought impossible. For clinicians, this work is often profoundly hopeful.

But clinical promise casts a long ethical shadow. Neurotechnologies do not behave like traditional implants. They generate data that can reveal emotional states, cognitive patterns and behavioural changes long before a patient understands what is being inferred. Consent becomes more complex. Withdrawal becomes fraught, and the line between help and intrusion grows thin.

The neuroethics discussion brought this into focus. How do we explain inference to vulnerable patients? How do we ensure withdrawal of consent remains meaningful when an implant or BCI is integrated into daily function? What happens when neural data reveals information unrelated to treatment but impossible to ignore?

The tension is not between optimism and caution; it is the space where both coexist. Neurotechnology can transform lives, but only if the people whose lives are being transformed are protected through frameworks that match the pace and depth of innovation. Clinical optimism must sit alongside humility, clarity and safeguards that recognise just how exposed patients become once their neural signals enter a system.

3. Data Ownership vs Capability Hunger

Anchored in “The Future of Neurotechnology” and “Investment and Commercialisation”.

If earlier sessions focused on care and ethics, the future-focused panels exposed a deeper tension: Australia wants to build capability rapidly, but it has not yet resolved who owns, or should own, the data that will underpin that capability.

This surfaced during a discussion on national neurotechnology pathways, where the value of large-scale access to medical imaging, such as MRIs, was raised as an example of how discovery and diagnostics could be accelerated. The suggestion itself was not controversial; it reflected genuine enthusiasm for what becomes possible with richer datasets. But it revealed a deeper structural gap.

Australia does not have a settled framework for neural or neuro-adjacent data ownership. Not in healthcare, research, commercial settings or national infrastructure. If we cannot clearly articulate who should access something as established as medical imaging, how will we govern the next generation of neural signals that reveal far more intimate information?

Investment conversations amplified this tension. Founders want certainty. Policymakers want capability. Investors want clarity. Researchers want infrastructure. But capability without governance is not capability; it is exposure. Neural data is not simply “another dataset”. It is a cognitive fingerprint, revealing internal states people may never have meant to express.

If these questions feel unresolved in healthcare, research and commercial settings, they become unavoidable when neurotechnology enters national security. What feels ambiguous in civilian life takes on far greater weight once command structures, duty of care and national responsibility are involved. The same uncertainties around data ownership, sharing, inference and consent do not disappear — they intensify.

4. National Security Advancements vs Ethical Oversight

Anchored in “Defence and National Security”.

The defence session introduced another layer of complexity. Neurotechnology in national security is no longer speculative — it is already shaping research programs, training environments and strategic planning. The conversation focused on grounded capabilities rather than science-fiction tropes: cognitive load monitoring, fatigue detection, decision-support tools and enhanced human-machine teaming.

But there were moments when the discussion edged into frontier territory. Concepts like neural data poisoning, cognitive manipulation or altering emotional states such as aggression were raised not as fantasies but as foreseeable capabilities, prompting a visible pause in the room. Not because they were implausible, but because their ethical implications ripple far beyond defence.

If neurotechnology influences emotional regulation in high-risk environments, what responsibility exists once personnel return to civilian life? Consider someone conditioned for heightened vigilance or controlled aggression in combat environments who carries those patterns back into everyday family and community settings. Who bears responsibility, and liability, for unintended shifts in behaviour? What duty of care is owed to individuals whose neural patterns have been shaped, even unintentionally, by a system they did not fully understand?

These questions touch families, communities, healthcare systems and public trust. Defence technologies do not stay within defence — their consequences travel.

The tension was not innovation versus fear, but innovation versus governance. Defence requires capability, but it also requires legitimacy, ethical oversight, human rights compliance and transparent accountability. Responsibility does not sit with defence alone. Technologists, clinicians, policymakers, ethicists and cybersecurity experts all share in defining what responsible deployment means before systems are fielded, not after.

5. Investment Optimism vs Real-World Constraints

Anchored in “Investment and Commercialisation” and the Closing Keynote.

The commercialisation conversation carried real enthusiasm. Australia has talent, ideas and an appetite to build sovereign neurotechnology capability. Investors see opportunity. Founders see momentum. Policymakers see strategic advantage.

But neurotechnology is not a software sector. It sits at the intersection of clinical trials, manufacturing capacity, regulatory maturity, long research timelines and exceptionally sensitive data governance. The mismatch between aspiration and infrastructure was hard to ignore. Vision alone cannot turn prototypes into scalable, ethical, commercially viable products. That takes coordinated investment in capability, regulation, ethics, clinical pathways and the cybersecurity foundations that global partners now expect as baseline.

The final keynote brought this into sharp relief: Australia does not suffer from a shortage of imagination, only a shortage of scale. To lead in neurotechnology, we must invest not just in technology but in the ecosystem that surrounds it.

Optimism is fuel. It is not, on its own, a framework.

Closing Reflection

What stayed with me after Day Two was the seriousness of the intent in the room. These discussions were not about abstract futures but about the infrastructures, rights, risks and responsibilities that neurotechnology will reshape. It also struck me how open many people were to working across disciplines. Not in a performative way, but in a practical one. There was a quiet acknowledgement that no single organisation, sector or profession has the full picture, and that the only way to navigate this well is together.

That openness surfaced again at the close of the Summit, with an open call to contribute to ongoing research and thinking at Cortical Labs, linked to recent peer-reviewed work exploring the future direction of neurotechnology. It felt like a fitting place to end. The questions raised over two days are too complex to be “owned” by any one group, and meaningful progress will come from shared effort rather than isolated breakthroughs.

For those interested in contributing, a link to the collaboration survey can be found in the comments below.

I was grateful to be in the room representing CyAN. Across both days of the Summit, people repeatedly emphasised that cyber expertise is not the last step in this journey; it is the starting point. For once, cyber was not positioned as the “fix it later” function. It was understood as a pillar of public confidence, ethical design and national capability.

Australia sits at a moment of opportunity. The question is whether we will build neurotechnology with the same care, clarity and courage we expect from any system that touches identity, autonomy and the fabric of society.

These reflections follow a Chatham House–style approach. The insights draw on the themes and atmosphere of the day rather than verbatim quotations.


About the Author

Kim Chandler McDonald

Kim Chandler McDonald is the Co-Founder and CEO of 3 Steps Data, driving data and digital governance solutions.

She is the Global VP of CyAN and an award-winning author, storyteller and advocate for cybersecurity, digital sovereignty, compliance, governance and end-user empowerment.