The Compliance Theatre: When Red Tape Meets Cybersecurity by Nick Kelly

The Suffocating Embrace of Accumulated Law
The Government (I speak of the US Government in this article, although the principal argument serves equally well as a blueprint for many other governments globally) has developed a peculiar affliction over the past half-century: the inability to throw anything away. Rather like a hoarder whose home has become impassable due to accumulated newspapers and defunct appliances, modern government has layered law upon law, regulation upon regulation, until the original floor is no longer visible and movement has become nearly impossible. The Interstate Highway Act of 1956 ran to 29 pages and delivered the entire system in roughly 15 years. The Affordable Care Act of 2010 sprawled across 2,700 pages and its implementation remains contentious more than a decade later. One might observe a certain inverse relationship between page count and efficacy.
Philip K. Howard, lawyer and founder of Common Good, has spent decades documenting this phenomenon with the enthusiasm of a forensic archaeologist examining societal decay. In a recent appearance on The Economist’s podcast, Howard articulated the fundamental problem with characteristic clarity: government requires spring cleaning. Not the superficial tidying that involves moving problems from one cupboard to another, nor the ‘chainsaw approach’ of the short-lived DOGE (an utter catastrophe in this author’s opinion, with dire societal consequences – see the uprooting of USAID, leading to likely thousands of deaths, or the culling of staff at the Cybersecurity and Infrastructure Security Agency, weakening a key national security agency in the States), but a strategic decluttering that requires acknowledging that most of what we’ve accumulated no longer serves any useful purpose and should be consigned to the skip.
The mechanism of dysfunction is straightforward. Each crisis, each scandal, each failure prompts the addition of new requirements designed to prevent that specific failure from recurring. No one removes the old requirements, which were themselves responses to previous failures. And whilst each new law and requirement is introduced in good faith, the cumulative result is what Howard describes as:
“all these countless requirements that don’t leave room for officials to make trade-off judgements and so when you have a mandatory system of regulation so it’s just about compliance rather than judgement, whatever’s left will almost always stop the project”.
This is not theoretical dysfunction. It is the reason that building a single offshore wind farm near Cape Cod required a decade of study by 17 agencies, with 18 lawsuits pending to delay it another decade. It is why federal law and regulation now comprise approximately 150 million words, compared to the 7,500 words of the U.S. Constitution. It is why government has become, in Howard’s words, a “legal quicksand” where movement in any direction requires navigating a labyrinth of procedural requirements, each of which was doubtless sensible in isolation but which collectively form an impenetrable thicket.
The insight that compliance-based systems inevitably prioritise process over outcome should not be revolutionary, yet it apparently is. When every action must be justified by reference to specific regulatory authority, and every decision must be documented to demonstrate compliance with enumerated requirements, the rational bureaucrat learns to focus on the documentation rather than the outcome. Success becomes defined not as “did we achieve the goal” but as “did we follow the correct procedure”. The procedure becomes the goal, a transformation that would be merely amusing if it weren’t so consequential.
The Theatre of Cybersecurity Compliance
This same dynamic has metastasised into the world of cybersecurity with predictable results. Consider the contemporary ritual of cybersecurity certification. Organisations pursuing government contracts or handling sensitive data must navigate a bewildering array of frameworks: FedRAMP for U.S. federal systems, GDPR for European data protection, SOC 2 for service organisations, ISO 27001 for information security management, and countless others depending on industry and jurisdiction. Each framework requires extensive documentation, regular audits, and continuous monitoring to maintain compliance.
The process is exhaustive and exhausting. To achieve FedRAMP authorisation, for instance, organisations must document their compliance with hundreds of controls across multiple families: access control, awareness and training, audit and accountability, security assessment, system and communications protection, and on it goes. The Security Assessment Report alone typically runs to thousands of pages. The documentation requirements are so extensive that a cottage industry of compliance consultants has emerged, specialists whose entire professional existence revolves around helping organisations navigate the procedural maze.
And what does all this achieve? In too many cases, it achieves compliance without security. The third-party risk questionnaire exemplifies this perfectly. When Organisation A wishes to engage Organisation B’s services, Organisation A dispatches a questionnaire comprising dozens (sometimes hundreds) of questions about Organisation B’s security practices. Does Organisation B encrypt data in transit? Does Organisation B conduct regular penetration testing? Does Organisation B have an incident response plan? Organisation B’s security team dutifully completes the questionnaire, providing the required assurances, and the contract proceeds.
This is theatre. Pure, unadulterated theatre. The questionnaire does not assess whether Organisation B’s encryption implementation is competent or whether their incident response plan has ever been tested under realistic conditions. It assesses whether Organisation B knows what answers to provide to satisfy the questionnaire. A sophisticated attacker could compromise Organisation B’s systems whilst Organisation B maintains perfect compliance with every questionnaire requirement. Indeed, several high-profile breaches have involved organisations that were, at the time of compromise, certified compliant with relevant security frameworks.
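To make the gap concrete, consider a deliberately naive sketch of how such a questionnaire gets scored. The questions and logic below are illustrative only, not drawn from any real framework:

```python
# A deliberately naive sketch of third-party questionnaire "assessment".
# Questions and scoring logic are illustrative, not any real framework's schema.

QUESTIONNAIRE = [
    "Do you encrypt data in transit?",
    "Do you conduct regular penetration testing?",
    "Do you have an incident response plan?",
]

def vendor_passes(answers: dict[str, bool]) -> bool:
    """A vendor 'passes' if every answer is 'yes'.

    Note what is never checked: whether the TLS configuration is
    competent, whether pentest findings were remediated, or whether
    the incident response plan has ever been exercised.
    """
    return all(answers.get(question, False) for question in QUESTIONNAIRE)

# A vendor that knows the right answers passes, regardless of the
# quality of any underlying implementation.
assert vendor_passes({question: True for question in QUESTIONNAIRE})
```

The scoring function consumes self-attested booleans and nothing else; that is the whole theatre in three lines of logic.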
The fundamental problem mirrors Howard’s observation about government regulation: we have created mandatory systems focused on compliance rather than judgement. The security professional who identifies a genuine risk that falls outside the enumerated controls faces a dilemma. Addressing the risk requires time and resources that could be spent on compliance activities. Compliance activities are measurable, auditable, and career-safe. Addressing novel risks requires judgement, involves uncertainty, and offers no protection if the judgement proves incorrect. The rational actor focuses on compliance.
This is not to suggest that the individuals operating within these systems are foolish or malicious. They are responding rationally to perverse incentives. The CISO who achieves and maintains FedRAMP authorisation has a tangible accomplishment to report to the board. The CISO who identified and mitigated three novel attack vectors that never materialised into actual breaches has nothing to show for it except the absence of incidents, which is indistinguishable from having been lucky. Our systems reward the visible and measurable whilst remaining indifferent to the actual provision of security.
The Necessary Tension: Judgement Versus Checklists
At this point, the astute reader might object that this argument proves too much. Surely there are domains where rigid procedural compliance is not merely bureaucratic theatre but essential to safety and security. And this objection is correct. The question is not whether checklists and mandatory procedures have value, but rather where and how they provide value, and when they become obstacles to the very goals they purport to serve.
Consider aviation, a field where procedural rigour is legendary and largely justified. Before every takeoff, pilots execute a preflight checklist covering dozens of items. They verify fuel quantities, control surfaces, instrument functionality, and countless other parameters. This is not bureaucratic excess; it is engineering rigour applied to high-stakes operations. The checklist exists because human memory is fallible and because the consequences of forgetting to verify a single critical parameter can be catastrophic.
Yet even in aviation, where procedural compliance is perhaps more justified than in any other civilian domain, judgement remains essential. On 15 January 2009, US Airways Flight 1549 struck a flock of geese shortly after takeoff from LaGuardia Airport, disabling both engines. Captain Chesley “Sully” Sullenberger and First Officer Jeffrey Skiles had approximately 208 seconds to assess the situation, evaluate options, and execute a decision. The aircraft’s quick reference handbook did not contain a checklist for “dual engine failure at low altitude over a densely populated urban area with no suitable airport within gliding range”. (If you haven’t heard the cockpit recording of this incident, do yourself a favour and listen here – no words except that the pilots, the crew, and the first responders are simply incredible – legends all.)
Sullenberger’s decision to ditch in the Hudson River was an exercise in judgement, not compliance. It was informed by decades of experience, thorough knowledge of the aircraft’s capabilities, and rapid assessment of the available alternatives. The successful outcome (all 155 people aboard survived) vindicated his judgement, but the decision itself was made under conditions of profound uncertainty with no procedural guidance to validate it. Had the ditching gone catastrophically wrong, the same decision would have been subject to extensive criticism. This is the nature of judgement: it involves personal accountability for decisions made under uncertainty.
The lesson is not that checklists are unnecessary but rather that they are insufficient. The preflight checklist serves a crucial function in verifying that the aircraft is in proper condition before flight. It does not, and cannot, prepare the pilot for every contingency that might arise during flight. What prepares pilots for novel emergencies is training, experience, and the cultivation of sound judgement. The most rigorous checklist is useless if the operator lacks the judgement to recognise when circumstances have moved beyond the checklist’s scope.
Similarly, in engineering domains, methodical checks serve essential functions. The structural engineer who fails to verify load calculations is not exercising judgement; they are committing malpractice. The software engineer who fails to validate input is not being innovative; they are creating vulnerabilities. Certain checks are non-negotiable because the consequences of error are severe and the correct approach is well-established. These are not the problematic compliance requirements. The problematic requirements are those that mandate specific methods rather than outcomes, that multiply checks beyond the point of marginal utility, and that transform reasonable verification into baroque ritual.
The distinction is between checklists that verify critical parameters (necessary and valuable) and compliance regimes that mandate specific approaches to problems where multiple valid approaches exist (often counterproductive). The aviation preflight checklist specifies what must be verified, not how the pilot must think about flying. It provides a safety net against forgetting critical items whilst leaving the pilot’s judgement intact for the decisions that matter. By contrast, compliance regimes often specify not merely what must be achieved but precisely how it must be achieved, eliminating the discretion necessary for effective operation.
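The contrast can be made concrete in code. Below is a minimal sketch – the hostname and documentation field names are placeholders I have invented for illustration – contrasting a check that probes an outcome against one that merely reads paperwork:

```python
# Sketch: verifying an outcome versus auditing a mandated method.
# The host and the documentation field names are placeholders.
import socket
import ssl

def outcome_check(host: str, port: int = 443) -> bool:
    """Outcome: connections to the service are actually encrypted with
    a modern protocol. How the operator achieved this is not our concern."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as raw:
        with context.wrap_socket(raw, server_hostname=host) as tls:
            return tls.version() in ("TLSv1.2", "TLSv1.3")

def method_audit(documentation: dict) -> bool:
    """Method: the operator has filed the prescribed paperwork naming the
    prescribed technology. Nothing here ever touches a running system."""
    return (documentation.get("encryption_standard") == "approved-standard"
            and documentation.get("policy_reviewed_annually") is True)
```

The first test leaves the operator free to choose any implementation that achieves the outcome; the second can be satisfied in full by an organisation whose actual deployment is broken.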
FedRAMP, GDPR, and the Compliance Industrial Complex
Return now to cybersecurity, armed with this distinction. FedRAMP, the Federal Risk and Authorization Management Program, provides a standardised approach to security assessment for cloud services used by U.S. federal agencies. Its stated purpose is to promote the adoption of secure cloud solutions by providing a “do once, use many times” framework. Rather than each federal agency conducting its own security assessment of potential cloud providers, FedRAMP provides a centralised authorisation that any agency can accept.
In principle, this is sensible. Redundant security assessments waste resources and provide minimal additional assurance. A centralised framework that establishes baseline security requirements and verifies compliance through independent assessment should reduce costs and improve security. In practice, the framework has become notorious for its complexity, expense, and glacial pace. Achieving FedRAMP authorisation typically requires 18 to 24 months and costs between $2 million and $5 million for initial authorisation, with ongoing annual costs of $500,000 to $1 million for continuous monitoring. Many cloud service providers, particularly smaller organisations, find these barriers insurmountable and simply decline to pursue federal contracts.
More troubling is what FedRAMP measures. The framework comprises hundreds of security controls covering access control, incident response, contingency planning, and myriad other security domains. Organisations must document their implementation of each control, provide evidence of effectiveness, and submit to regular audits to maintain authorisation. This process measures the presence of security controls and the quality of documentation. It does not directly measure security.
An organisation can achieve FedRAMP authorisation whilst maintaining substandard security if it excels at documentation and control implementation. Conversely, an organisation might have genuinely robust security that does not align perfectly with FedRAMP’s enumerated controls. The framework rewards compliance with its specific requirements rather than the achievement of actual security. This is the same dynamic Howard identifies in government more broadly: the substitution of procedural compliance for substantive achievement. (Whilst not specifically FedRAMP-related, the ‘Telemessage’ debacle highlights in brutal fashion how adherence to compliance – which can help bag numerous federal government departments as customers – does not equate to security.)
GDPR, the European Union’s General Data Protection Regulation, presents a different but related problem. GDPR establishes requirements for the processing of personal data, including provisions for data subject rights, breach notification, data protection impact assessments, and the appointment of data protection officers for certain organisations. Unlike FedRAMP, which specifies detailed controls, GDPR is largely principles-based, establishing what organisations must achieve whilst leaving considerable discretion regarding how they achieve it.
This should, in theory, provide the balance between accountability and judgement that Howard advocates. In practice, GDPR’s enforcement has often focused on procedural compliance rather than substantive data protection. Organisations have invested heavily in obtaining consent for cookie placement on websites whilst continuing practices that involve extensive data sharing with third parties. The regulation has spawned an industry of consent management platforms, privacy policies, and compliance consultants. It has also triggered a cascade of consent requests that users routinely accept without reading, training them to click “Accept” reflexively to access desired content.
The result is compliance theatre of a different sort. Organisations can demonstrate that they obtained consent, conducted assessments, and appointed officers, but the regulation has not prevented major data breaches or curbed the data collection practices of large platforms. The sophisticated operators have learned to structure their practices to fit within GDPR’s requirements whilst continuing largely as before. Meanwhile, smaller organisations struggle with compliance costs and uncertainty about requirements.
The pattern repeats across cybersecurity compliance frameworks. ISO 27001, SOC 2, PCI DSS (for payment card data), HIPAA (for healthcare information), and countless others each establish requirements that organisations must satisfy to demonstrate security. Each framework requires substantial investment in documentation, assessment, and ongoing maintenance. The cumulative effect is what might be termed the Compliance Industrial Complex: a self-sustaining ecosystem of auditors, consultants, frameworks, and certifications that has become largely decoupled from the provision of actual security.
Consider the organisation that holds multiple certifications: FedRAMP, ISO 27001, SOC 2, and PCI DSS compliance. This organisation has invested millions in achieving these certifications and employs a team whose primary function is maintaining them. Does this organisation have better security than a comparable organisation that has invested the same resources directly in security capabilities rather than compliance? Perhaps, even probably; but possibly not. And how could one tell from a completed questionnaire? The certifications provide no answer. They verify that the organisation has implemented the controls required by each framework and maintained the required documentation. Whether those controls are effective, whether they address the organisation’s actual risk profile, whether the investment yielded security commensurate with cost: these questions remain unanswered.
The Uncomfortable Conclusion
We arrive at an uncomfortable conclusion: the compliance frameworks are simultaneously necessary and overwhelming, valuable and counterproductive. They are necessary because organisations left to their own devices often fail to implement even basic security measures, because some standardisation facilitates communication and assessment, and because accountability requires measurable standards. They are overwhelming because the proliferation of frameworks with overlapping and sometimes conflicting requirements consumes resources that could be directed to actual security improvements. They are valuable when they establish minimum standards and verify implementation of essential controls. They are counterproductive when they mandate specific approaches, multiply requirements beyond marginal utility, and create incentives to prioritise compliance over security.
What’s left is an ocean of acronyms in which organisations are drowning, submerged in compliance requirements whilst breaches continue unabated. Security professionals spend more time documenting their security programme than improving it. The frameworks proliferate because each new requirement seems reasonable in isolation (surely we should verify that access controls are implemented properly; surely we should document our incident response procedures – no contention here), whilst no mechanism exists for removing requirements that have become obsolete or for questioning whether the cumulative burden has exceeded the value provided.
What Howard proposes for government (spring cleaning that carefully discards accumulated requirements, simplification that focuses on goals rather than methods, and trust in officials to exercise judgement within an accountability framework) seems equally applicable to cybersecurity compliance. We need frameworks that establish clear security outcomes rather than mandating specific controls. We need assessment methods that measure actual security posture rather than documentation quality. We need to create space for security professionals to exercise judgement about their organisation’s risk profile and appropriate countermeasures, whilst holding them accountable for outcomes. We need, above all, to foster the deep expertise that underpins sound judgement: understanding topics to the nth degree through training.
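What might an outcome-oriented requirement even look like on paper? A minimal sketch follows; every goal and evidence item is illustrative only, and no real framework defines these fields:

```python
# A sketch of outcomes-based requirements, as opposed to enumerated
# controls. All goals and evidence items below are illustrative only.
from dataclasses import dataclass

@dataclass
class SecurityOutcome:
    goal: str            # what must be achieved
    evidence: list[str]  # how achievement is demonstrated, by measurement
    # Deliberately no 'required_controls' field: the 'how' is left to
    # the organisation's judgement, for which it remains accountable.

OUTCOMES = [
    SecurityOutcome(
        goal="Intrusions are detected in hours, not months",
        evidence=["red-team exercise results", "measured mean time to detect"],
    ),
    SecurityOutcome(
        goal="A stolen credential alone cannot reach production data",
        evidence=["access-path review", "simulated credential-theft exercise"],
    ),
]
```

Writing such outcomes down is the easy part; assessing them honestly is where the difficulty lies.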
This is, admittedly, more difficult than it sounds. Outcomes-based regulation requires subjective assessment, which is more difficult to audit and more vulnerable to gaming than compliance with enumerated requirements. Trusting professionals to exercise judgement requires that we accept some failures (the judgement will sometimes be wrong) and distinguish between reasonable judgements that happened to be incorrect and negligent failures to apply basic competence. Our current frameworks emerged partly because outcomes-based approaches were perceived as inadequate. The question is whether the cure has become worse than the disease.
The alternative to the current morass is not the abandonment of standards or oversight. It is, rather, a substantial rethinking of how we structure accountability. Howard’s formulation bears repeating: all these countless requirements don’t leave room for officials to make trade-off judgements, and when you have a mandatory system of regulation focused on compliance rather than judgement, whatever’s left will almost always stop the project. In cybersecurity, “stopping the project” manifests as organisations that achieve perfect compliance whilst maintaining inadequate security, or organisations that decline to pursue opportunities because the compliance burden is prohibitive, or security professionals who spend their careers documenting controls rather than implementing them.
We need frameworks, yes. We need standards, certainly. We need verification that organisations are implementing basic security measures, absolutely. But we also need to acknowledge that security outcomes are heavily influenced by judgement: understanding one’s risk profile, identifying and prioritising threats, making informed decisions about resource allocation, and adapting to evolving circumstances – these are judgement calls. No compliance framework can substitute for this judgement, and frameworks that attempt to do so succeed only in creating the illusion of security whilst consuming resources that could be directed to its provision.
The pilot’s preflight checklist does not teach the pilot how to fly, nor does it prepare the pilot for every contingency. It verifies that critical parameters are within acceptable ranges before flight begins. Our cybersecurity compliance frameworks should do something similar: verify that essential security controls are present whilst leaving practitioners the judgement necessary to address their specific circumstances. That we have instead created frameworks that specify in minute detail how security must be implemented and documented reflects the same pathology that has paralysed government more broadly. We have substituted compliance for competence, procedure for judgement, and documentation for achievement.
Frameworks and standards are both necessary. Both, as currently constituted, are overwhelming. Until we develop mechanisms for pruning accumulated requirements and refocusing on outcomes rather than methods, we will continue to invest enormous resources in compliance theatre whilst breaches continue apace. The question is whether we possess the institutional capacity to recognise that the problem is systemic rather than simply requiring one more framework, one more requirement, one more certification to finally achieve the security that has thus far proven elusive.
Postscript: The Judgement Problem
If compliance frameworks are inadequate and judgement is necessary, we face an immediate problem: how do we assess judgement? The appeal of compliance-based systems is precisely that they provide objective, measurable criteria. Either the organisation has implemented the required controls or it has not. Either the documentation is complete or it is not. Judgement, by contrast, is inherently subjective and can be evaluated only retrospectively, once outcomes are known.
This suggests an unconventional proposal, one that sounds absurd until one considers the stakes involved. For critical systems (those whose compromise could threaten life, critical infrastructure, or national security), perhaps we need assessment methods that focus on evaluating the judgement and character of those responsible for security decisions rather than merely documenting the controls they have implemented.
Imagine a certification process that included not just technical assessment but also something resembling a personality evaluation or, more provocatively, a polygraph examination focused specifically on judgement in complex scenarios. The assessment would present a series of ethically and technically intricate situations involving trade-offs between security and operational requirements, or between different security approaches with different risk profiles. The critical question is not whether the candidate selects the “correct” answer (there often isn’t one) but rather how they reason through the trade-offs, what factors they consider, and whether they demonstrate the capacity for sound judgement under uncertainty.
Such an assessment might evaluate (a minimal rubric sketch follows this list):
- Whether the individual can articulate the trade-offs involved in security decisions rather than retreating to simplistic formulations
- Whether they demonstrate awareness of their own uncertainty and the limits of their knowledge
- Whether they can identify when circumstances have moved beyond standard procedures and novel judgement is required
- Whether they show evidence of learning from previous errors rather than repeating them
- Whether they demonstrate accountability (willingness to defend decisions and accept responsibility for outcomes) rather than hiding behind compliance
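Here is what such a rubric might look like in skeletal form. The dimensions mirror the list above; everything else (the scenario text, the scoring scale, the pass floor) is hypothetical, not a validated instrument:

```python
# A sketch of scenario-based judgement assessment. The rubric dimensions
# mirror the bullet list above; the scenario, scale, and floor are
# hypothetical, not a validated instrument.
from dataclasses import dataclass

@dataclass
class Scenario:
    prompt: str
    # Deliberately no 'correct_answer' field: what is assessed is the
    # candidate's reasoning about trade-offs, not their selection.

@dataclass
class RubricScore:
    articulates_tradeoffs: int        # 1-5, per examiner judgement
    acknowledges_uncertainty: int
    recognises_procedural_limits: int
    learns_from_prior_error: int
    accepts_accountability: int

def passes(score: RubricScore, floor: int = 3) -> bool:
    """No dimension may fall below the floor: strong trade-off reasoning
    cannot compensate for, say, a refusal to accept accountability."""
    return min(vars(score).values()) >= floor

example = Scenario(
    prompt="A critical patch voids vendor support for a life-safety system. "
           "Patch now, defer, or mitigate another way? Reason through it.",
)
```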
This sounds simultaneously promising and terrifying. Promising because it would represent a fundamental shift from assessing what controls exist to assessing who is making decisions about those controls. Terrifying because it involves subjective evaluation that is vulnerable to bias, because it places enormous weight on individual assessment, and because it implies that some individuals would be deemed unsuitable for security responsibility in critical systems despite possessing relevant technical credentials.
The aviation industry provides a partial model. Pilots undergo not just technical training but also psychological assessment, simulator training that evaluates judgement under various emergency scenarios, and regular check rides where their decision-making is evaluated by experienced examiners. The system is not perfect (poor judgement sometimes remains undetected until disaster strikes), but it represents an attempt to assess not just whether pilots know the procedures but whether they possess the judgement to handle situations that fall outside procedural guidance.
Could something similar work for cybersecurity in critical systems? Perhaps. The challenges are substantial. Unlike aviation, where simulator training can replicate many emergency scenarios with high fidelity, cybersecurity scenarios are more diverse and harder to simulate realistically. Unlike piloting, where poor judgement often has immediate and catastrophic consequences that validate or invalidate the decision, security judgements may take months or years to prove correct or incorrect, and even then attribution is uncertain (was the organisation compromised because of a poor security decision, or despite a reasonable one?).
Moreover, there is something deeply uncomfortable about empowering some assessment body to make determinations about individuals’ suitability for security roles based on subjective evaluation of judgement. Who evaluates the evaluators? What prevents such a system from becoming yet another compliance regime, where individuals learn to provide the responses that the assessment body wants to hear rather than demonstrating genuine judgement? How do we prevent bias from infecting the assessment?
These are not rhetorical questions. They are genuine challenges that would need to be addressed before any such system could be implemented. Yet the alternative is to continue with frameworks that assess the presence of controls rather than the judgement of those implementing them, frameworks that have demonstrably failed to prevent major breaches even in organisations with multiple compliance certifications.
Perhaps the answer lies somewhere between pure compliance-based assessment and pure judgement-based assessment. Critical systems might require both: verification that essential controls are present (the compliance component) and assessment that those responsible for security decisions demonstrate sound judgement (the character component). The compliance component ensures baseline competence; the judgement component ensures the capacity to handle novel situations and make appropriate trade-offs.
This would still leave the question of who evaluates judgement and according to what criteria. But it would at least acknowledge that security in critical systems depends not just on the presence of controls but on the quality of the people responsible for implementing and managing those controls. Our current frameworks treat this as irrelevant or assume that compliance with controls is sufficient. The evidence suggests otherwise. Perhaps it is time to acknowledge that in security, as in aviation, as in government, judgement matters, and that systems which eliminate judgement in favour of compliance inevitably fail at precisely the moment when judgement is most needed.
Whether we possess the courage and wisdom to implement such a system remains, of course, an open question. But the current approach is failing, the breaches continue, and the compliance burden grows ever more onerous. At some point, continuing the current path becomes less defensible than experimenting with alternatives, however uncomfortable they may be. That point may be closer than we think.