
Weekly Digest Issue #85 – July 25, 2024

CyAN’s weekly digest of cybersecurity news from around the globe. Find the links to the full articles below. LinkedIn version and discussion available here. If there is a topic you would like to see more of, do not hesitate to get in touch! Cybersecurity News 1. CrowdStrike:

Weekly Digest Issue #75 – May 16, 2024

CyAN’s weekly digest of cybersecurity news from around the globe. Find the links to the full articles below. LinkedIn version and discussion available here. If there is a topic you would like to see more of, do not hesitate to get in touch! Part 1: Cybersecurity News. Part 2: Analysis

Cybersecurity Year in Review 2023: Key Events, Learnings, and Takeaways

As 2023 comes to a close, it’s essential to look back at the major cybersecurity events of the year and extract crucial learnings and takeaways. This year has been marked by significant incidents that have reshaped our understanding of digital security, privacy, and cyber resilience.

Major Cybersecurity Incidents of 2023

Some statistics for reference:

Number of incidents in 2023: 1,404*

Number of breached records in 2023: 5,951,612,884*

(*) as of this writing.

  1. Global Ransomware Surge

   The year saw a dramatic increase in ransomware attacks targeting both private and public sector organisations. Notable among them was the attack on MGM Resorts, which resulted in substantial financial losses and highlighted the need for better ransomware preparedness and response strategies.

  2. Data Breaches and Privacy Concerns

   Numerous data breaches occurred, exposing the personal information of millions. The 23andMe breach was particularly alarming due to the sensitivity of the data involved. This event underscored the ongoing challenges in protecting personal information in the digital age.

  3. State-Sponsored Cyber Attacks

   Geopolitical tensions led to an uptick in state-sponsored cyber activities. Kyivstar, Ukraine’s largest mobile network operator, suffered a cyber-attack, one of the highest-impact disruptive cyber-attacks on Ukrainian networks since the start of Russia’s full-scale invasion. The attack also reportedly disrupted air raid sirens, some banks, ATMs, and point-of-sale terminals, signalling a new era of digital warfare.

  4. AI and Deepfake Misuse

   The misuse of AI technologies, especially deepfakes, posed new threats. The relative ease of using deepfakes as a social engineering tool raises concerns about the potential use of AI for misinformation and manipulation, especially ahead of the 2024 US Presidential Election.

Learnings and Takeaways

Enhancing Cyber Resilience

   The events of 2023 have shown that cyber resilience is not just about preventing attacks but also about having robust recovery and response plans. Organisations need to invest in both preventive measures and recovery strategies.

The Importance of Cyber Hygiene

   Basic cyber hygiene practices, like regular software updates, strong passwords, and multi-factor authentication, remain vital. Many of the year’s breaches could have been mitigated or avoided with better hygiene practices.
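To make one of these hygiene practices concrete, here is a minimal sketch of a baseline password-policy check. The policy thresholds are illustrative assumptions, not a recommendation from any standard; in practice, length plus multi-factor authentication and breached-password screening matter more than composition rules alone.

```python
import re

def meets_basic_policy(password: str, min_length: int = 12) -> bool:
    """Return True if the password satisfies a simple illustrative
    baseline: minimum length plus some character variety."""
    checks = [
        len(password) >= min_length,     # sufficient length
        re.search(r"[a-z]", password),   # at least one lowercase letter
        re.search(r"[A-Z]", password),   # at least one uppercase letter
        re.search(r"\d", password),      # at least one digit
    ]
    return all(bool(c) for c in checks)

print(meets_basic_policy("correct-Horse-battery-9"))  # True
print(meets_basic_policy("password"))                 # False
```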

Need for Greater Collaboration

   Cybersecurity is no longer a solitary endeavour. The year highlighted the importance of collaboration between private companies, government agencies, and international bodies to combat cyber threats effectively.

AI and Cybersecurity

   With the rise of AI-powered threats, there’s an urgent need for AI-centric security solutions. Organisations must understand and prepare for the unique challenges posed by AI in the cybersecurity domain.

Privacy and Data Protection Laws

   The data breaches of 2023 have prompted calls for stronger privacy and data protection laws. There is a growing need for legislation that keeps pace with the evolving digital landscape.

Focusing on Human Factors

   Human error continues to be a significant factor in cybersecurity incidents. Training and awareness programs are crucial in mitigating this risk.

Looking Ahead

As we move into 2024, the lessons learned in 2023 will undoubtedly shape our approach to cybersecurity. The key is to adapt and evolve continuously in the face of emerging threats and challenges. Building a cyber-resilient future requires vigilance, innovation, and collective effort.

Striking a Balance between Values and Laws, Innovation and Regulation – Artificial Intelligence

The blog “The Tale of Two Approaches to Artificial Intelligence – EU AI Act & U.S. Executive Order on Safe, Secure, and Trustworthy AI” was a balanced look at the similarities and differences in approaches to AI. The divergence of approach is a manifestation of 

Resilience Building Regulations and the Financial Sector

The financial sector handles sensitive data and transactions that affect our economy and society. It is a critical sector and is vulnerable to cyberattacks. The SolarWinds, Colonial Pipeline, and Kaseya attacks, to name a few, have exposed the weaknesses and gaps in our cybersecurity practices 

The Tale of Two Approaches to Artificial Intelligence – EU AI Act & U.S. Executive Order on Safe, Secure, and Trustworthy AI

Artificial Intelligence (AI) is one of the most powerful and transformative technologies of our time, and it also poses significant challenges and risks for safety, security, human rights, and democracy. How can governments regulate AI to protect the public interest and values while fostering trust and innovation? I will briefly compare these two policy initiatives, with a focus on their implications for AI startups and companies, AI innovation, and AI implementation. I will then comment on their cybersecurity implications for the U.S. and the EU and conclude with a provocative open-ended question on balancing potential threats with environments that encourage technology and innovation.

The EU AI Act was proposed by the European Commission in April 2021 and has since moved through the legislative process: the Council of the European Union adopted its “common position” in December 2022, and the European Parliament adopted its “negotiating position” in June 2023. It will become law once the Parliament and the Council agree on a common version. The EU AI Act is a legislative proposal and is part of a broader package of digital regulations whose goal is to create a harmonized legal framework for AI across the EU, covering all sectors except the military. It establishes a governance structure for AI oversight, enforcement, and coordination at the EU and national levels, and it introduces a risk-based approach to AI regulation, where different levels of obligations apply depending on the potential impact of the AI system on fundamental rights, safety, and security. The EU AI Act’s cornerstone is its classification system, which determines the level of risk an AI system could pose to the health, safety, or fundamental rights of a person.
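The Act’s risk-based classification can be pictured as a simple tiered lookup. The four tier names below follow the Act’s framework; the example systems and one-line obligations are simplified illustrations, not legal text.

```python
# Illustrative sketch of the EU AI Act's four risk tiers as a lookup table.
# Tier names follow the Act; examples and obligations are simplified.
RISK_TIERS = {
    "unacceptable": {
        "examples": ["social scoring by public authorities"],
        "obligation": "prohibited from the EU market",
    },
    "high": {
        "examples": ["AI in recruitment", "credit scoring"],
        "obligation": "conformity assessment before market placement",
    },
    "limited": {
        "examples": ["chatbots"],
        "obligation": "transparency duties (disclose AI interaction)",
    },
    "minimal": {
        "examples": ["spam filters"],
        "obligation": "no mandatory requirements",
    },
}

def obligation_for(tier: str) -> str:
    """Return the headline obligation for a given risk tier."""
    return RISK_TIERS[tier]["obligation"]

print(obligation_for("high"))  # conformity assessment before market placement
```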

The October 2023 U.S. Executive Order on Safe, Secure, and Trustworthy AI is a policy directive to regulate AI by establishing standards for AI safety and security, and by requiring that the most powerful AI systems be extensively tested by third parties to reduce the chance of unintended consequences. It directs new standards for AI safety and security, protection of Americans’ privacy, advancement of equity and civil rights, protection of consumers and workers, promotion of innovation and competition, and the advancement of American leadership.

The EU and U.S. approaches are similar in that they both share some common goals and principles, such as promoting responsible and trustworthy AI, protecting human rights and safety, fostering innovation and competition, and advancing global leadership and cooperation. They are different in the following ways:

  • Scope: The U.S. executive order covers a wider range of AI applications and issues, while the EU AI Act focuses on specific categories of AI systems that are classified as high-risk or prohibited.
  • Specificity and details: The U.S. executive order sets broad principles and goals for AI development and use, while the EU AI Act provides more detailed and prescriptive requirements and rules for AI providers and users.
  • Enforceability and legal authority: The U.S. executive order is an administrative action that can be modified or revoked by future administrations, while the EU AI Act is a legislative proposal that, once approved by the European Parliament and the Council of the European Union, becomes legally binding.
  • Certification and the role of standards: The U.S. executive order directs federal agencies to develop standards, tools, and tests for AI safety and security but does not mandate compliance or certification for AI systems. The EU AI Act requires high-risk AI systems to undergo conformity assessments and obtain certificates before being placed on the market or put into service.
  • International partners and stakeholder involvement: The U.S. executive order encourages public participation and consultation with stakeholders and experts on AI issues, as well as collaboration with allies and partners on global AI governance. The EU AI Act envisages consultation and cooperation mechanisms with stakeholders and third countries, but also emphasizes the need to protect the EU’s values and interests in AI.

These differences are a manifestation of our different legal systems, political cultures, and strategic priorities.  They also have different effects in terms of AI startups/companies, AI innovation and implementation in both EU and the U.S.

  • For AI startups and companies, the U.S. executive order could create more opportunities to access resources and markets, as it encourages public-private partnerships and international cooperation on AI. The broad and evolving principles and standards set by the government could, however, be a challenge for AI startups and companies. Across the Atlantic, the EU AI Act could create more barriers for AI startups and companies to enter and compete in the EU market due to its strict and costly requirements for high-risk AI systems. On the other hand, it could also create more incentives for AI startups and companies to innovate and differentiate themselves by offering trustworthy and ethical AI solutions.
  • For AI innovation, the U.S. executive order could foster more innovation in AI by promoting a fair, open, and competitive ecosystem, as well as supporting research and development in AI. It could, however, stifle innovation in AI by limiting the scope and scale of Large Language Models (LLMs), which are key drivers of AI breakthroughs. The EU AI Act could stifle innovation in AI by creating a complex and fragmented regulatory environment, as well as discouraging experimentation and risk-taking in AI. It could, however, foster innovation in AI by creating a harmonized and predictable legal framework, as well as encouraging human-centric and value-based design in AI.
  • For AI implementation, the U.S. executive order could facilitate more implementation of AI by enhancing the safety, security, and trustworthiness of AI systems, as well as supporting workers and consumers affected by AI. However, it could also complicate the implementation of AI by creating uncertainty and inconsistency in the enforcement and oversight of AI regulations. The EU AI Act could complicate the implementation of AI by imposing high compliance costs and liabilities for high-risk AI systems, as well as restricting the use of certain data and functionalities in AI. Conversely, it could also facilitate the implementation of AI by enhancing the transparency, accountability, and quality of AI systems, as well as protecting the rights and interests of users and affected parties.

Artificial Intelligence systems can pose threats to the security and privacy of data, systems, and users. This makes cybersecurity an important consideration for any AI regulation. What are the cybersecurity implications of these two similar yet different approaches to AI regulation?

In Europe, the EU AI Act could enhance the cybersecurity of high-risk AI systems by imposing strict and harmonized rules and standards across the EU. But it could also make it harder for European companies and users to accept or adopt innovative or beneficial AI systems that do not meet the EU criteria or are prohibited by the EU. Additionally, it could increase the administrative burden and compliance costs for European providers and users of high-risk AI systems.

The U.S. executive order could foster more innovation and flexibility in the development and use of AI systems in the United States by promoting a voluntary and collaborative approach to cybersecurity. However, it could also leave U.S. companies and users exposed to cyberattacks or breaches from malicious actors or adversaries that exploit vulnerabilities or loopholes in AI systems. It could also reduce the trustworthiness or accountability of U.S. providers and users of AI systems.

These differences are not exhaustive or definitive, as they depend on how the U.S. executive order and the EU AI Act are interpreted and implemented. Nor are they mutually exclusive or contradictory: they reflect different trade-offs and balances between competing objectives and values in regulating AI, as well as different mindsets towards the future of AI.

In closing, artificial intelligence is a dynamic, developing field of technologies that brings benefits and opportunities while requiring caution and responsibility. Is regulating a technology’s potential threats the right way to create an innovative environment for AI to flourish in the EU and in the United States? How do we strike a balance between regulation and innovation, and between protection and promotion?

Enhancing Resilience: The Role of DORA in Business Continuity and Operational Resilience

In today’s regulatory landscape, navigating various regulations related to risk management can be a daunting challenge for financial institutions. However, the Digital Operational Resilience Act (DORA) offers a unique perspective. DORA not only aligns with existing best practices and regulations but also presents opportunities for 

SolarWinds of Change – How the SEC Ruling Affects the Future of InfoSec Officers

Cybersecurity is more than a technical issue, as it has legal and financial implications for companies and investors. The recent U.S. Securities and Exchange Commission (SEC) charges levied against SolarWinds Corporation and its chief information security officer illustrate the serious consequences of failing to disclose 

Streamlining Operations: The Efficiency Gains from Cybersecurity

In the previous parts of our series, “Cybersecurity: The Unsung Hero of Revenue Protection,” we’ve looked at cybersecurity as a strategic business asset, the financial implications of cyber threats, and its crucial role in fostering customer trust. This fourth instalment examines another crucial aspect of cybersecurity—its ability to enhance operational efficiency.

As we navigate our globally networked, data-rich environment, cybersecurity tools do more than just secure data; they play a significant role in improving operational efficiency.

Cybersecurity Tools: Enhancing Operational Efficiency

Modern cybersecurity tools are increasingly intelligent, agile, and capable of seamlessly integrating with business operations. These tools can monitor network traffic, detect anomalies, perform routine security checks, and respond to threats in real time. By automating these processes, businesses can focus on their core activities, reduce downtime, and enhance productivity.
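To make the anomaly-detection idea concrete, here is a minimal sketch: learn a baseline from known-normal traffic, then flag readings that deviate sharply. The traffic figures and the three-sigma threshold are illustrative assumptions; real tools use far richer signals, but the core idea is the same.

```python
from statistics import mean, stdev

def is_anomalous(baseline, value, threshold=3.0):
    """Flag `value` if it lies more than `threshold` standard
    deviations from the mean of the known-normal baseline."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return value != mu  # any deviation from a flat baseline is anomalous
    return abs(value - mu) / sigma > threshold

# Invented sample: requests per minute during normal operation.
normal_traffic = [120, 115, 130, 118, 122, 119, 121]

print(is_anomalous(normal_traffic, 2500))  # True  (sudden spike)
print(is_anomalous(normal_traffic, 125))   # False (within normal variation)
```

Learning the baseline only from known-good data keeps a single extreme spike from inflating the standard deviation and masking itself.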

Moreover, effective cybersecurity measures can prevent disruptions caused by cyber incidents, such as data breaches or malware attacks. Such disruptions can lead to significant operational and financial setbacks. By proactively avoiding these incidents, companies can ensure smooth operations and business continuity.

The Power of Automation in Cybersecurity

Automation is becoming increasingly central to cybersecurity. By automating routine security tasks, such as patch management and threat detection, companies can significantly reduce the manual workload of their cybersecurity teams. This, in turn, allows their cybersecurity personnel to focus on more strategic, higher-value tasks, enhancing their productivity and effectiveness.

Furthermore, automated cybersecurity processes are less prone to human error, which is a significant cause of security vulnerabilities. This can result in fewer security incidents and a more secure operational environment.
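One of the routine tasks mentioned above, patch management, can be partially automated by comparing an inventory of installed software against minimum safe versions. The package names, versions, and data sources in this sketch are invented for illustration; a real pipeline would pull from an asset inventory and a vulnerability feed such as the NVD.

```python
def version_tuple(v: str) -> tuple:
    """Convert '3.0.12' into (3, 0, 12) for correct numeric comparison."""
    return tuple(int(part) for part in v.split("."))

def patch_gaps(installed: dict, minimum_safe: dict) -> list:
    """List packages running below their minimum safe version."""
    return [
        pkg for pkg, ver in installed.items()
        if pkg in minimum_safe
        and version_tuple(ver) < version_tuple(minimum_safe[pkg])
    ]

# Invented inventory and advisory data for illustration.
installed = {"openssl": "3.0.7", "nginx": "1.24.0", "curl": "7.88.0"}
minimum_safe = {"openssl": "3.0.12", "nginx": "1.24.0", "curl": "8.4.0"}

print(patch_gaps(installed, minimum_safe))  # ['openssl', 'curl']
```

Comparing version tuples rather than raw strings avoids the classic bug where "3.0.7" sorts after "3.0.12" lexicographically.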

Case Study: Achieving Efficiency Through Cybersecurity

To truly grasp the impact of cybersecurity on operational efficiency, let’s consider the example of a global logistics company.

Facing an increasingly complex threat landscape, this company decided to overhaul its cybersecurity practices. They implemented advanced cybersecurity tools that automated routine tasks and provided real-time threat intelligence.

These new tools helped the company reduce the time spent on manual security tasks, freeing up their IT staff to focus on strategic initiatives. Additionally, the real-time threat intelligence enabled them to quickly identify and address potential security threats, reducing the risk of disruptive cyber incidents.

The result? Improved operational efficiency, fewer disruptions, and a more secure environment. The company was able to redirect resources to strategic initiatives, leading to innovation and growth. This case underscores the significant efficiency gains that can come from investing in modern, automated cybersecurity tools.

In conclusion, cybersecurity is more than a protective measure—it’s a key driver of operational efficiency. By investing in modern, automated cybersecurity tools, businesses can streamline their operations, reduce disruptions, and free up resources for strategic initiatives.

Stay tuned for our final instalment in this series, where we’ll explore how cybersecurity facilitates and safeguards business innovation.

New Secure-in-Mind Episodes

We have published a number of new videos/podcasts in our Secure-in-Mind series, featuring a wide range of distinguished and exciting guests. Whether you’re interested in fraud/cybercrime, education, incident response, policy, diversity, cyber risk insurance – the CyAN Secure-in-Mind channel is a great place for informed