Recent Posts

New Video/Podcasts – The State of (Cyber) War

Join James Briscoe and John Salomon for our new conversation series on “cyber warfare” and all it entails. In our new playlist, part of CyAN’s Secure-in-Mind media series, we address many of the challenging and fascinating issues surrounding disinformation, national and regional cyberdefence policy, threat…

Cybersecurity Year in Review 2023: Key Events, Learnings, and Takeaways

As 2023 comes to a close, it’s essential to look back at the major cybersecurity events of the year and extract crucial learnings and takeaways. This year has been marked by significant incidents that have reshaped our understanding of digital security, privacy, and cyber resilience. 

Striking a Balance between Values and Laws, Innovation and Regulation – Artificial Intelligence

The blog “The Tale of Two Approaches to Artificial Intelligence – EU AI Act & U.S. Executive Order on Safe, Secure, and Trustworthy AI” was a balanced look at the similarities and differences between the two approaches to AI. The divergence is a manifestation of our different legal systems, political cultures, and strategic priorities. This opinion piece is an extension of that blog, focusing on the EU AI Act. How might we strike a balance between innovation and regulation for both big tech and small and medium-sized enterprises (SMEs)? There is indeed a race to govern AI. Are we focusing on the real transformative power of this technology?

The European Union’s AI Act highlights its commitment to setting a global standard for the deployment of ethical AI. The delay is a natural by-product of our democratic values and character. It is a complex and impactful piece of legislation that attempts a delicate balance: ensuring the safety of AI systems that respect our values and laws, while avoiding curbs on the innovation that is critical to our economic vitality and technological progress.

The trilogue talks will commence, and from a policy perspective, a more nuanced set of AI application categories that allows for a tiered approach to regulation may be the way forward. Not all AI applications would need to be subjected to the same level of scrutiny, which would reduce the burden on less risky AI activities. Even so, the Act still leans towards a protective regulatory regime, and the compliance costs associated with it could disproportionately impact SMEs. The core challenge is ensuring the regulatory framework is robust enough to protect citizens and their rights without placing an undue burden on smaller industry players.

To ensure that SMEs are not hindered by the Act, exemptions or tiered compliance requirements based on enterprise size or the scope of the AI application would be prudent. Government-funded programs or incentives for compliance could also relieve some of the financial weight on SMEs, alongside a clear and accessible compliance framework delivered through online portals or dedicated support teams that help SMEs navigate the regulatory landscape efficiently and effectively.

Innovation is not solely the purview of big tech firms; in fact, the engines of breakthroughs and novel applications are often the SMEs. Policy should therefore be shaped in a manner that nurtures the innovative spirit inherent in these smaller enterprises.

Reflecting on the potential impact of the EU AI Act on the global stage, it is clear that the way we govern AI today will have acute implications for the competitive dynamics of tomorrow.

How could we address concerns about the Act’s potential impact on SMEs? The notion of big tech – big responsibility, which echoes the principles of proportionality and fairness by recognizing that the giants of the tech industry have the resources to bear a greater share of the regulatory burden, is compelling and interesting to explore. It is an approach that could help foster a more equitable innovation ecosystem in which SMEs can thrive without the overshadowing burden of compliance costs. The stakes are high with regard to foundation models and their applications, which have immense potential to alter industries and societies. It is incumbent upon the larger players, who have the capacity to develop and deploy AI at scale, to ensure their innovations do not cause negative societal impacts. They should be primarily responsible for the rigorous testing, robust quality management, and ethical considerations that come with AI deployment.

For SMEs whose AI applications are specialized and limited in scope, a big tech – big responsibility model would allow them to continue to innovate within their niche without a disproportionate compliance burden. I am not advocating that SMEs be exempt from regulation; rather, the regulatory framework should be scalable and adaptable, reflecting the size of the company and the potential impact of the AI tool. A regulatory environment that is responsive to the scale and scope of the AI application encourages innovation across the board.

Artificial Intelligence is indeed a transformative technology. The transformation I am referring to is a paradigm shift from a culture of proprietary dominance to one of collaborative stewardship. Collaboration is not without its challenges, but it is key. Monetization is a significant hurdle. Big tech companies are beholden to their shareholders and operate within an economic model that rewards intellectual property and competitive advantage. Their willingness to share foundational models and tools is contingent upon a business model that can reconcile the open dissemination of technology with the need to generate profits.

This is a challenge that requires exploring novel business models that incentivize collaboration without compromising the financial sustainability of big tech firms. A tiered access model, or a revenue-sharing agreement in which SMEs contribute to the development and refinement of AI models in exchange for access to the technology, could be one way. It is a complex issue that needs a multifaceted approach, including policy incentives, industry standards and, perhaps most importantly, a cultural shift within the tech industry towards a more cooperative and socially responsible ethos.

Technology that can impact society in such a profound way must evolve in a manner that prioritizes not only innovation and market dominance but also social responsibility and ethical considerations. This is a pivotal cultural shift that requires a significant realignment of values and incentives, encouraging big tech to view its role through a lens of stewardship and societal benefit rather than solely through the lens of profit maximization. Practically, this compels a rethinking of corporate governance structures to reward long-term, socially responsible innovation. Success metrics would need to be recalibrated, moving away from short-term financial gain to include long-term impacts on society and the environment.

This is not just about the willingness of big tech to share but also the mechanism by which they might do so in a manner that promotes sustainable, inclusive growth. Licensing agreements that allow SMEs to use AI technologies at a reduced cost, or collaborative research initiatives that pool resources and share findings, could also be transformative.

Government and international bodies also have a role in fostering this cultural shift, through policies that incentivize ethical practices, such as tax breaks for companies that engage in responsible AI development or grants for collaborative projects between big tech and SMEs.

What novel business models can you think of that incentivize collaboration without compromising financial sustainability?

Do big tech companies have a larger share of the moral and social obligation to ensure AI systems are ethical, fair, accountable, and transparent?

The Power of Fully Homomorphic Encryption in the Fight Against Ransomware

A repost of an article one of our members wrote for a client, on the use of fully homomorphic encryption as a safeguard against ransomware-borne data exfiltration and various forms of extortion.
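The reposted article goes into detail; as a rough illustration of the underlying idea (data can be processed while it remains encrypted, so exfiltrated ciphertext is of little use to an extortionist), here is a minimal toy sketch in Python of an additively homomorphic scheme, Paillier. It is a simplification rather than the fully homomorphic approach the article covers, and it is not production cryptography.

```python
# Toy Paillier cryptosystem: additively homomorphic, illustration only.
# Tiny demo primes; real deployments use ~2048-bit primes and vetted libraries.
import math
import secrets

def keygen(p=1789, q=2003):
    n = p * q
    n2 = n * n
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)          # valid because we use g = n + 1
    return (n, n2), (lam, mu)

def encrypt(pub, m):
    n, n2 = pub
    r = secrets.randbelow(n - 1) + 1
    while math.gcd(r, n) != 1:    # r must be coprime to n
        r = secrets.randbelow(n - 1) + 1
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(pub, priv, c):
    n, n2 = pub
    lam, mu = priv
    return ((pow(c, lam, n2) - 1) // n) * mu % n

pub, priv = keygen()
c1, c2 = encrypt(pub, 12), encrypt(pub, 30)
c_sum = (c1 * c2) % pub[1]        # the "computation" happens on ciphertexts only
print("decrypted sum:", decrypt(pub, priv, c_sum))   # -> 42
```

Fully homomorphic schemes extend this property to arbitrary additions and multiplications on ciphertexts, which is what makes it possible to process sensitive data without ever exposing the plaintext.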

CyAN Mentorship Pilot Wrap-Up

CyAN just completed a pilot of its new mentorship programme, comprising 6 candidates from various universities around the world, and 5 mentors from the CyAN community.

Exploring Current Cyber Threat Trends and How to Protect Against Them

Étienne Bryan Botog is a candidate in the Master’s programme in Cybersecurity at École High-Tech in Rabat, Morocco. As part of his participation in the CyAN mentorship programme, Étienne has published an analysis of the main current trends in the cyber threat landscape, along with protection and prevention measures.

Here is a summary of his work; the full article is attached to this post as a PDF. We are proud to welcome Étienne as a colleague and member of the global CyAN community!

Our recent deep dive into the complex cybersecurity landscape revealed crucial trends shaping the world of cyber threats. At the heart of this analysis are key points that every user and cybersecurity professional should understand.

Current Threats: We identified a range of threats, from sophisticated attacks that exploit emotional intelligence to the emergence of evasive malware. These dangers demand heightened vigilance and adaptive defence strategies.

Emerging Techniques: Cybercriminals never stop innovating, using techniques such as quishing (phishing via QR codes) to deceive users. Understanding these methods is essential to strengthening our defences.

Awareness: Beyond security tools, awareness remains our best asset. The article explores the crucial importance of educating users about risks and good practices in order to build a safer cyber community.

Prevention and Protection: Finally, prevention and protection measures are discussed. Practical advice is provided for countering these threats, from verifying sources to using dedicated security applications.

This article offers an in-depth overview, but for a comprehensive understanding we invite you to explore the full document. Our goal is to arm the cyber community with the knowledge needed to defend itself in this constantly evolving landscape.

CyAN Mentorship Programme Report – Nils Eiling

CyAN mentorship pilot member Nils Eiling shares his experiences on how his collaboration with mentor and CyAN member Boris Taratine contributed to his research and academic development.

The EU Cyber Resilience Act – A Brief-ish and Sloppy Overview

The EU’s Cyber Resilience Act (CRA) recently gained political agreement and is in the process of being adopted by the European Parliament. This expansive regulation will deeply affect how ICT products are designed, sold, and maintained throughout the EU, with the aim of making them more secure.

🔍 Exploring the Nexus: NIST Framework vs. DORA Regulation in the Financial Sector 🌐💼

In the ever-evolving landscape of cybersecurity and compliance, it’s crucial for professionals in the financial sector to navigate the intricacies of frameworks and regulations.

Today, let’s delve into the intriguing parallels and distinctions between the NIST Framework and the DORA Regulation.

🌐 Common Ground: Both NIST (National Institute of Standards and Technology) and DORA (Digital Operational Resilience Act) share the overarching goal of fortifying the cybersecurity posture of financial institutions. They act as guideposts, offering a structured approach to managing risks and bolstering the resilience of digital systems.

💡 Key Similarities:

  1. Risk Management Emphasis: Both frameworks underscore the significance of a robust risk management strategy, urging organizations to identify, assess, and mitigate potential threats to their digital infrastructure.
  2. Holistic Approach: NIST and DORA adopt a comprehensive perspective, acknowledging that cybersecurity isn’t merely a technological challenge but a multifaceted issue that demands attention to people, processes, and technology.
  3. Continuous Improvement: Continuous monitoring and improvement are pivotal components. Regular assessments, feedback loops, and adaptability are endorsed to keep pace with the dynamic nature of cyber threats.

🔄 Points of Divergence:

  1. Geographical Focus: One of the notable distinctions lies in the geographical scope. While NIST is a U.S.-centric framework, DORA has a broader jurisdiction, impacting financial institutions operating within the European Union.
  2. Regulatory Specificity: DORA, being a regulation, carries a more prescriptive nature compared to the voluntary guidance offered by NIST. Financial entities under DORA are obliged to adhere to specific requirements, adding a layer of regulatory compliance.
  3. Incident Reporting: DORA will introduce a standardized incident reporting mechanism, ensuring a unified approach across the EU. NIST, on the other hand, provides guidelines, leaving the implementation to the discretion of organizations.

🚀 Strategic Synergy: To navigate this intricate terrain effectively, financial institutions might find value in integrating the strengths of both frameworks. By amalgamating the flexibility of NIST with the regulatory clarity of DORA, organizations can sculpt a resilient cybersecurity strategy tailored to their unique operational landscape.
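As a rough starting point for that kind of integration, the sketch below pairs the five NIST Cybersecurity Framework core functions with the DORA pillars they most naturally support. The pairing is an illustrative assumption for discussion, not an official crosswalk, and the gap-report helper is hypothetical.

```python
# Illustrative mapping of NIST CSF core functions to DORA pillars.
# The pairings are an assumption for discussion, not an official crosswalk.
NIST_TO_DORA = {
    "Identify": ["ICT risk management", "ICT third-party risk management"],
    "Protect":  ["ICT risk management"],
    "Detect":   ["ICT risk management", "Digital operational resilience testing"],
    "Respond":  ["ICT-related incident reporting"],
    "Recover":  ["ICT risk management", "Information sharing"],
}

def coverage_report(implemented_functions):
    """Hypothetical helper: which DORA pillars are touched by the NIST functions
    an organisation already covers, and which still look like gaps."""
    covered = {p for f in implemented_functions for p in NIST_TO_DORA.get(f, [])}
    all_pillars = {p for pillars in NIST_TO_DORA.values() for p in pillars}
    return sorted(covered), sorted(all_pillars - covered)

covered, gaps = coverage_report(["Identify", "Protect", "Detect"])
print("Covered pillars:", covered)
print("Potential gaps :", gaps)
```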

In conclusion, understanding the nuanced interplay between the NIST Framework and DORA Regulation is pivotal for financial sector professionals. It’s not merely a compliance exercise but a strategic imperative to safeguard digital assets and uphold the trust of stakeholders in an increasingly interconnected world.

Let’s continue the dialogue on #Cybersecurity and #FinancialResilience 💻🌐 #NIST #DORA #CyberRiskManagement #FinanceTech #Compliance

The Growing Threat of Quantum Supremacy in The Era Of Digital Civilization

Aliasgar Eranpurwala, a graduate of the CyAN mentorship programme pilot, writes about his work on post-quantum cryptography and quantum key distribution to secure satellite communications.
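As a conceptual aside (not drawn from Aliasgar’s report): quantum key distribution protocols such as BB84 let two parties derive a shared secret key over a quantum channel. The toy Python sketch below simulates only the basis-sifting step, with no channel noise or eavesdropper, purely to illustrate the idea.

```python
# Toy BB84 basis-sifting simulation: no noise, no eavesdropper, illustration only.
import secrets

def random_bits(n):
    return [secrets.randbelow(2) for _ in range(n)]

def bb84_sifted_key(n=32):
    alice_bits  = random_bits(n)   # Alice's raw key bits
    alice_bases = random_bits(n)   # 0 = rectilinear basis, 1 = diagonal basis
    bob_bases   = random_bits(n)   # Bob measures each photon in a random basis
    # If Bob's basis matches Alice's, he reads her bit; otherwise his result is random.
    bob_results = [bit if ab == bb else secrets.randbelow(2)
                   for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases)]
    # Bases are compared publicly; only positions where they agree are kept (sifting).
    alice_key = [bit for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]
    bob_key   = [res for res, ab, bb in zip(bob_results, alice_bases, bob_bases) if ab == bb]
    assert alice_key == bob_key    # holds here because there is no noise or eavesdropper
    return alice_key

print("sifted key bits:", bb84_sifted_key())
```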