Recent Posts
🔍 Exploring the Nexus: NIST Framework vs. DORA Regulation in the Financial Sector 🌐💼
In the ever-evolving landscape of cybersecurity and compliance, it’s crucial for professionals in the financial sector to navigate the intricacies of frameworks and regulations.
Today, let’s delve into the intriguing parallels and distinctions between the NIST Framework and the DORA Regulation.
🌐 Common Ground: Both the NIST Cybersecurity Framework (published by the U.S. National Institute of Standards and Technology) and DORA (the EU's Digital Operational Resilience Act) share the overarching goal of fortifying the cybersecurity posture of financial institutions. They act as guideposts, offering a structured approach to managing risks and bolstering the resilience of digital systems.
💡 Key Similarities:
- Risk Management Emphasis: Both frameworks underscore the significance of a robust risk management strategy, urging organizations to identify, assess, and mitigate potential threats to their digital infrastructure.
- Holistic Approach: NIST and DORA adopt a comprehensive perspective, acknowledging that cybersecurity isn’t merely a technological challenge but a multifaceted issue that demands attention to people, processes, and technology.
- Continuous Improvement: Continuous monitoring and improvement are pivotal components. Regular assessments, feedback loops, and adaptability are endorsed to keep pace with the dynamic nature of cyber threats; a minimal illustrative sketch of this loop follows this list.
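To make the shared identify-assess-mitigate loop concrete, here is a minimal sketch in Python. It is purely illustrative: the scoring scale, treatment threshold, and example risks are assumptions of mine, not values prescribed by either NIST or DORA.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) to 5 (almost certain)
    impact: int      # 1 (negligible) to 5 (severe)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

def needs_treatment(register: list[Risk], threshold: int = 12) -> list[Risk]:
    """Flag risks whose score meets or exceeds the treatment threshold."""
    return [r for r in register if r.score >= threshold]

# One pass of the loop: identify risks, assess them, flag those to mitigate,
# then repeat the whole cycle on a regular schedule.
register = [
    Risk("Third-party provider outage", likelihood=3, impact=5),
    Risk("Phishing of back-office staff", likelihood=4, impact=3),
    Risk("Legacy batch-job failure", likelihood=2, impact=2),
]
for risk in needs_treatment(register):
    print(f"Treat: {risk.name} (score {risk.score})")
```

The point of the sketch is the cycle, not the arithmetic: both frameworks expect this assessment to be rerun regularly rather than performed once.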
🔄 Points of Divergence:
- Geographical Focus: One of the notable distinctions lies in geographical scope. The NIST framework is U.S.-born voluntary guidance (though widely adopted internationally), while DORA is EU law, binding on financial institutions operating within the European Union.
- Regulatory Specificity: DORA, being a regulation, carries a more prescriptive nature compared to the voluntary guidance offered by NIST. Financial entities under DORA are obliged to adhere to specific requirements, adding a layer of regulatory compliance.
- Incident Reporting: DORA will introduce a standardized incident reporting mechanism, ensuring a unified approach across the EU. NIST, by contrast, provides guidelines and leaves implementation to the discretion of each organization; an illustrative sketch of what such a standardized report might contain follows this list.
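Purely as an illustration of what a standardized report could look like, here is a hedged sketch in Python. Every field name is a hypothetical assumption made for this example; DORA's actual reporting templates are defined by the regulation and its technical standards, not by this sketch.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class IncidentReport:
    # All field names are hypothetical; DORA's official templates may differ.
    entity_id: str                # e.g. the reporting institution's LEI
    detected_at: str              # ISO 8601 timestamp
    classification: str           # e.g. "major" vs. "non-major"
    affected_services: list[str]
    clients_impacted: int
    summary: str

report = IncidentReport(
    entity_id="529900EXAMPLE00000000",
    detected_at=datetime.now(timezone.utc).isoformat(),
    classification="major",
    affected_services=["retail payments"],
    clients_impacted=1200,
    summary="Degraded payment processing caused by an upstream provider outage.",
)
print(json.dumps(asdict(report), indent=2))
```

The value of standardization is that every supervisor across the EU receives the same fields in the same structure, which is exactly what a voluntary framework like NIST's cannot guarantee.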
🚀 Strategic Synergy: To navigate this intricate terrain effectively, financial institutions might find value in integrating the strengths of both frameworks. By combining the flexibility of NIST with the regulatory clarity of DORA, organizations can build a resilient cybersecurity strategy tailored to their unique operational landscape.
In conclusion, understanding the nuanced interplay between the NIST Framework and DORA Regulation is pivotal for financial sector professionals. It’s not merely a compliance exercise but a strategic imperative to safeguard digital assets and uphold the trust of stakeholders in an increasingly interconnected world.
Let’s continue the dialogue on #Cybersecurity and #FinancialResilience 💻🌐 #NIST #DORA #CyberRiskManagement #FinanceTech #Compliance
Resilience Building Regulations and the Financial Sector
The financial sector handles sensitive data and transactions that affect our economy and society. It is a critical sector and is vulnerable to cyberattacks. The SolarWinds, Colonial Pipeline, and Kaseya attacks, to name a few, have exposed the weaknesses and gaps in our cybersecurity practices …
Video/Podcast – The Paradoxes of Personalization, Regulation, and Trust
Kojo Osei Amoyaw-Osei Presents his Thesis
Kojo Osei Amoyaw-Osei is a master’s candidate at EM-Lyon Business School. He joins us today to discuss his thesis project for the MSc programme in Cybersecurity and Defence Management.
Businesses face a growing set of challenges when building their information security maturity – specifically, Kojo has identified three core paradoxes in his research:
1) Personalisation – delivering personalised experiences while respecting privacy preferences
2) Regulation – balancing regulatory compliance with data-driven strategies and innovation
3) Trust – earning and maintaining trust by adopting transparent data practices, implementing robust data security measures, and demonstrating responsible data use

This episode of the CyAN Secure-in-Mind video and podcast series turns our usual format around, as Kojo interviews John Salomon, the usual host of these sessions, based on his extensive experience in the industry, as part of his thesis research.
EM Lyon MSc in Cybersecurity and Defence Management: https://em-lyon.com/en/news/who-will-you-learn-msc-cybersecurity-defense-management-program
Kojo on LinkedIn: https://www.linkedin.com/in/kojooseiamoyawosei/
Check out the rest of CyAN’s media channels on https://cybersecurityadvisors.network/media – and visit us at https://cybersecurityadvisors.network
Intro/outro music courtesy of Studio Kolomna via Pixabay: https://pixabay.com/users/studiokolomna-2073170/
Disinformation and AI – a Growing Challenge
I recently had the pleasure of joining Dr. Egor Zakharov of the AIT Lab at ETH Zurich (the Swiss Federal Institute of Technology in Zurich) for a fireside chat at the ITBN conference in Budapest, Hungary. Egor is an accomplished researcher and author on the topic of AI-generated …
New Secure-in-Mind Videos/Podcasts!
We’ve added a number of great new episodes to our Secure-in-Mind podcast/video interview series: Florian Hantke, PhD candidate at CISPA Helmholtz in Germany and CyAN mentorship programme participant, on pen testing and vulnerability research; Remy Bertot, founder & CTO at Passbolt, on privacy, encryption, and …
The Tale of Two Approaches to Artificial Intelligence – EU AI Act & U.S. Executive Order on Safe, Secure, and Trustworthy AI
Artificial Intelligence (AI) is one of the most powerful and transformative technologies of our time, and it also poses significant challenges and risks for safety, security, human rights, and democracy. How can governments regulate AI to protect the public interest and its values while fostering trust and innovation? I will briefly compare these two policy initiatives, with a focus on their implications for AI startups and companies, AI innovation, and AI implementation. I will then comment on their cybersecurity implications for the U.S. and the EU, and conclude with a provocative open-ended question about regulating potential threats while creating an environment for technology and innovation.
The EU AI Act was proposed by the European Commission in April 2021 and has since moved through the legislative process: the Council of the European Union adopted its “common position” in December 2022, and the European Parliament adopted its “negotiating position” in June 2023. It will become law once the Parliament and the Council agree on a common version. The EU AI Act is part of a broader package of digital regulations whose goal is to create a harmonized legal framework for AI across the EU, covering all sectors except the military. It establishes a governance structure for AI oversight, enforcement, and coordination at the EU and national levels, and it introduces a risk-based approach to regulation, where different levels of obligations apply depending on the potential impact of the AI system on fundamental rights, safety, and security. The Act’s cornerstone is its classification system, which determines the level of risk an AI system could pose to a person’s health, safety, or fundamental rights.
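The proposal's four risk tiers (unacceptable, high, limited, minimal) can be sketched as a simple triage, shown below. This is an illustrative approximation only: the example use cases and the triage logic are my own assumptions, and the Act's real legal test, set out in its annexes, is far more detailed.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright (e.g. social scoring by public authorities)"
    HIGH = "allowed, subject to conformity assessment and ongoing obligations"
    LIMITED = "allowed, subject to transparency duties (e.g. disclosing a chatbot)"
    MINIMAL = "allowed with no additional obligations"

# Illustrative set only; the Act enumerates high-risk areas in its annexes.
HIGH_RISK_AREAS = {"credit scoring", "recruitment", "biometric identification"}

def triage(use_case: str) -> RiskTier:
    """Map a use case to a risk tier (a toy approximation of the Act's logic)."""
    if use_case == "social scoring":
        return RiskTier.UNACCEPTABLE
    if use_case in HIGH_RISK_AREAS:
        return RiskTier.HIGH
    if use_case == "chatbot":
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(triage("credit scoring"))  # RiskTier.HIGH
```

The design point is that obligations attach to the tier, not to the technology: the same model can be minimal-risk in one deployment and high-risk in another.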
The October 2023 U.S. Executive Order on Safe, Secure, and Trustworthy AI is a policy directive that regulates AI by establishing standards for AI safety and security and by requiring that the most powerful AI systems be extensively tested by third parties to reduce the chance of unintended consequences. It directs new standards for AI safety and security, protection of Americans’ privacy, advancement of equity and civil rights, protection of consumers and workers, promotion of innovation and competition, and advancement of American leadership.
The EU and U.S. approaches share common goals and principles: promoting responsible and trustworthy AI, protecting human rights and safety, fostering innovation and competition, and advancing global leadership and cooperation. They differ in the following ways:
- Scope: The U.S. executive order covers a wider range of AI applications and issues, while the EU AI Act focuses on specific categories of AI systems that are classified as high-risk or prohibited.
- Specificity and details: The U.S. executive order sets broad principles and goals for AI development and use, while the EU AI Act provides more detailed and prescriptive requirements and rules for AI providers and users.
- Enforceability and legal authority: The U.S. executive order is an administrative action that can be modified or revoked by future administrations, while the EU AI Act is a legislative proposal that becomes legally binding once approved by the European Parliament and the Council of the European Union.
- Certification and the role of standards: The U.S. executive order directs federal agencies to develop standards, tools, and tests for AI safety and security but does not mandate compliance or certification for AI systems. The EU AI Act requires high-risk AI systems to undergo conformity assessments and obtain certificates before being placed on the market or put into service.
- International partners and stakeholder involvement: The U.S. executive order encourages public participation and consultation with stakeholders and experts on AI issues, as well as collaboration with allies and partners on global AI governance. The EU AI Act envisages consultation and cooperation mechanisms with stakeholders and third countries, but also emphasizes the need to protect the EU’s values and interests in AI.
These differences are a manifestation of our different legal systems, political cultures, and strategic priorities. They also have different effects on AI startups and companies, AI innovation, and AI implementation in both the EU and the U.S.
- For AI startups and companies, the U.S. executive order could create more opportunities to access resources and markets, as it encourages public-private partnerships and international cooperation on AI. The broad and evolving principles and standards set by the government could, however, be a challenge for AI startups and companies. Across the Atlantic, the EU AI Act could create more barriers for AI startups and companies to enter and compete in the EU market due to its strict and costly requirements for high-risk AI systems. On the other hand, it could also create more incentives to innovate and to differentiate by offering trustworthy and ethical AI solutions.
- For AI innovation, the U.S. executive order could foster more innovation by promoting a fair, open, and competitive ecosystem and by supporting AI research and development. It could, however, stifle innovation by limiting the scope and scale of Large Language Models (LLMs), which are key drivers of AI breakthroughs. The EU AI Act could stifle innovation by creating a complex and fragmented regulatory environment and by discouraging experimentation and risk-taking. It could, however, foster innovation by creating a harmonized and predictable legal framework and by encouraging human-centric, value-based design in AI.
- For AI implementation, the U.S. executive order could facilitate adoption by enhancing the safety, security, and trustworthiness of AI systems and by supporting workers and consumers affected by AI. However, it could also complicate implementation by creating uncertainty and inconsistency in the enforcement and oversight of AI regulations. The EU AI Act could complicate implementation by imposing high compliance costs and liabilities for high-risk AI systems and by restricting the use of certain data and functionalities. Conversely, it could also facilitate implementation by enhancing the transparency, accountability, and quality of AI systems and by protecting the rights and interests of users and affected parties.
Artificial intelligence systems can pose threats to the security and privacy of data, systems, and users, which makes cybersecurity an important consideration for any AI regulation. What are the cybersecurity implications of these two similar yet different approaches?
In Europe, the EU AI Act could enhance the cybersecurity of high-risk AI systems by imposing strict, harmonized rules and standards across the EU. But it could also make it harder for European companies and users to adopt innovative or beneficial AI systems that do not meet the EU’s criteria or are prohibited outright. Additionally, it could increase the administrative burden and compliance costs for European providers and users of high-risk AI systems.
The U.S. executive order could foster more innovation and flexibility in the development and use of AI systems in the United States by promoting a voluntary and collaborative approach to cybersecurity. However, it could also leave U.S. companies and users exposed to cyberattacks or breaches from malicious actors and adversaries that exploit vulnerabilities or loopholes in AI systems, and it could reduce the trustworthiness and accountability of U.S. providers and users of AI systems.
These differences are neither exhaustive nor definitive, as they depend on how the U.S. executive order and the EU AI Act are interpreted and implemented. Nor are they mutually exclusive or contradictory: they reflect different trade-offs and balances between competing objectives and values in regulating AI, as well as different mindsets about the future of AI.
In closing, artificial intelligence is a dynamic, developing field of technologies that brings benefits and opportunities while demanding caution and responsibility. Is regulating a technology’s potential threats compatible with creating an innovative environment in which AI can flourish in the EU and the United States? How do we strike a balance between regulation and innovation, and between protection and promotion?
Enhancing Resilience: The Role of DORA in Business Continuity and Operational Resilience
In today’s regulatory landscape, navigating various regulations related to risk management can be a daunting challenge for financial institutions. However, the Digital Operational Resilience Act (DORA) offers a unique perspective. DORA not only aligns with existing best practices and regulations but also presents opportunities for …