In 2021, the OECD Working Party on Security in the Digital Economy published a report for policy makers on encouraging vulnerability treatment. Among other things, the report provides information on digital vulnerabilities and how they tie into product security. It also issues recommendations not only for the establishment of coordinated vulnerability disclosure (CVD) programs, but also for how to create a constructive environment where ethical hackers can search for, report, and help remediate software security bugs without fear of legal retaliation – civil or criminal.
This is a big deal; the OECD had already recognized the need for better secure-by-design principles in parallel reports mentioned in the paper. In the meantime, the European Union’s Cyber Resilience Act (CRA) places major requirements on software publishers, importers, and distributors for enhancing the security of digital products – both during the design and aftermarket phases. See my quick and dirty overview of this law here. Together with increased cybersecurity and resilience requirements, such as the significant supply chain risk management obligations placed on critical economic sectors by the NIS2 Directive and other recent rules, the EU has signaled a major growth in attention to, and understanding of, digital security and the need to protect society and the economy by making software more robust.
Policies that protect ethical hackers – and thus ensure timely information about new security vulnerabilities before the bad guys can find and abuse them – have also seen a slow shift towards more pragmatic legislation. The Good Faith Cybersecurity Researchers Coalition (GFCRC), a not-for-profit industry initiative supported by CyAN that coordinates industry action and education to better shield ethical hackers, has been tracking numerous moves by governments around the world in this direction.
Legal protections, from both criminal (arrest, prosecution) and civil (harassment, lawsuits) jeopardy, go hand in hand with constructive resources and guidance for ethical researchers. These consist primarily of two major elements:
- Bug bounty platforms and other formal CVD mechanisms
- Officially sanctioned, actionable, and clear guidance and resources from public sector entities such as national cybersecurity centres, national CERTs, law enforcement agencies, responsible ministries, and other bodies empowered to represent legal policy
A wealth of material and services in both categories exists around the world, as Nick Kelly and I point out in our 2023 article on this topic, “Protecting Responsible Cybersecurity Vulnerability Research”, in the European Cybersecurity Journal (ECJ Volume 9 (2023) Issue 1). However, there are some significant gaps.
Specifically, the European Union Agency for Cybersecurity (ENISA) could, and should, play a key role in shaping EU policy – both Europe-wide legislation and national laws – to be more accommodating towards good faith vulnerability research.
ENISA has two great strengths that build on its good reputation in the industry: it has a strong track record in issuing good practices guidance (my favorite example of this is the excellent ISAC in a Box toolkit), and it is the best-placed organization in Europe to coordinate and encourage public-private cooperation to help secure European digital society and institutions. It also occasionally publishes original vulnerability research. The agency has a reasonable track record of supporting and working with private sector initiatives such as the EU ISACs, but like many EU institutions (speaking as a committed Europhile), it could do more to strengthen and share proven techniques and initiatives.
CVD tools and materials are a great example of where I believe ENISA could and should provide much more active leadership, as well as mature and disseminate good practices. Some existing materials provide a decent start, but all are in need of updates.
The Good Practice Guide on Vulnerability Disclosure dates all the way back to 2015 – especially given ENISA’s recently announced and highly welcome closer cooperation with CISA, it would make sense to revisit this guide and ensure it’s up to date. The State of Vulnerabilities report dates from 2018/19, and while the methodologies and recommendations described in the paper remain valid, it does not take into account the massive spate of supply chain vulnerabilities and attacks experienced globally in 2020/21, such as SolarWinds, Accellion, the four zero-day CVEs that lay at the root of the 2021 Microsoft Exchange Server breach, and others. The overview of Coordinated Vulnerability Disclosure Policies in the EU is reasonably complete and up to date, as of April 2022. However, it does not mention, for example, the Belgian cybersecurity legal reform of February 2023, a major milestone for EU member countries.
None of these papers mention the OECD digital economy working party’s recommendations; this is a major gap, considering that three key EU member countries plus the EU itself are members of the G-20. Furthermore, given the need for ethical researchers and firms alike to know which laws apply to them right now, it would make sense for an entity like ENISA to maintain regularly updated guidance on national laws, similar to Global Legal Group’s list of national cybersecurity laws.
In my view, the biggest issue is the lack of a direct, easily accessible path to the correct vulnerability disclosure policy or process. Security bug hunting is a highly technical process, requiring a great deal of skill, time, and dedication. ENISA, like so many other organizations providing (mostly) correct and thorough information, falls into the trap of “all the information is there”. Yes, it is. However, as with many national cybersecurity agencies’ good practices offerings for small businesses, there has to be more of a balance between correctness/thoroughness and usability/accessibility – especially for non-subject matter experts, people less experienced or familiar with process documentation, and non-native English speakers.
ENISA is not an operational body: even though it conducts some original technical research, it does not perform incident response or vulnerability management functions. It is closely connected to CERT-EU, with which it collaborates on the publication of some cyber threat and vulnerability information, but that body’s scope is limited to EU institutions and agencies. As a result, it is unfair to expect ENISA to provide a Europe-wide CVD process…or is it?
In my anecdotal experience, there is a significant gap in European institutional cybersecurity between the admittedly excellent guidance that agencies like ENISA provide and the often lackluster operational capabilities of member states and national agencies. I am fully aware of the challenges of European multistakeholder politics, and of the need to strictly respect boundaries established by European rules such as the European Cybersecurity Act. However, given the fast-moving nature of cybersecurity attacks and the ongoing risks from critical vulnerabilities, bureaucratic niceties should not limit industry and society’s ability to respond quickly to evolving threats.
The inflexible nature of this rules-based, formal approach to collaborative cybersecurity was perfectly illustrated in an industry working group discussion of mandatory incident notification under the NIS2 Directive, chaired by ENISA representatives a few years ago. The EU approach to critical incident reporting is very tidy, incorporating national competent authorities and some to-be-defined central repository. Yet nobody was able to satisfactorily answer questions about what would actually be done with such incident reports (e.g. would they be used to create playbooks for TIBER-EU, or exercise scenarios so others can learn from them?) or how they would be securely stored.
Especially given the lack of CVD policies or ethical hacking-friendly laws in many countries (see again the Coordinated Vulnerability Disclosure Policies paper mentioned earlier), it would make enormous sense for the agency to
a) provide an easy way for a researcher to find and navigate to the correct way to report a new, critical flaw, and
b) if such a channel does not exist (for example due to lack of a national capability), to provide it, and to funnel the information to the right place.
Furthermore, ENISA already chairs the EU CSIRTs Network. Unlike FIRST.org, this group pointedly excludes all but “official” CSIRTs and CERTs appointed by member states, plus CERT-EU. In the absence of national CVD reporting mechanisms and rules in many countries, it would be beneficial for ENISA to take a more flexible approach to non-member state CERTs/CSIRTs and other trusted operational cybersecurity entities. There is already precedent for this, in the form of ENISA’s engagement with the EU ISACs community, even though its rules about whether or not a member ISAC must be “European” are somewhat confusing. In this way, the agency would gain even more reach to route reported vulnerability information to the correct body.
EU institutions move slowly, and unless multiple critical industry actors (such as large firms in the many economic sectors defined by the NIS2 Directive) collectively exert pressure via ENISA-chaired working groups or via their national authorities, the situation is unlikely to change soon. We can only hope that, as more governments adopt legal reforms to protect ethical hackers and cybersecurity agencies develop and implement CVD policies and processes, European industry itself can work together to ensure fast, unbureaucratic access to emerging vulnerability information, so we can fix bugs as quickly as possible.