Artificial intelligence (AI) is one of the most powerful and transformative technologies of our time, and it also poses significant challenges and risks for safety, security, human rights, and democracy. How can governments regulate AI to protect the public interest and public values while fostering trust and innovation? In this piece I briefly compare two policy initiatives, the EU AI Act and the October 2023 U.S. Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, with a focus on their implications for AI startups and companies, AI innovation, and AI implementation. I then comment on their cybersecurity implications for the U.S. and the EU, and conclude with a provocative open-ended question about regulating a technology's potential threats while creating an environment in which innovation can flourish.
The EU AI Act was proposed by the European Commission in April 2021 and has since moved through the EU legislative process: the Council of the European Union adopted its "common position" in December 2022, and the European Parliament adopted its "negotiating position" in June 2023. It will become law once the Parliament and the Council agree on a common version. The EU AI Act is a legislative proposal and part of a broader package of digital regulations whose goal is to create a harmonized legal framework for AI across the EU, covering all sectors except the military. It establishes a governance structure for AI oversight, enforcement, and coordination at the EU and national levels, and it introduces a risk-based approach to regulation, in which different levels of obligations apply depending on an AI system's potential impact on fundamental rights, safety, and security. The Act's cornerstone is its classification system, which determines the level of risk an AI system could pose to the health, safety, or fundamental rights of a person.
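To make the tiered structure concrete, here is a minimal sketch in Python of how a risk-based classification might map an AI system's tier to its headline obligations. The four tier names reflect the Act's publicly described scheme (unacceptable, high, limited, minimal risk), but the mapping and function below are hypothetical simplifications for illustration, not the legal text.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative four-tier scheme described in the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # e.g., social scoring by public authorities
    HIGH = "high"                  # e.g., AI in hiring, credit scoring, medical devices
    LIMITED = "limited"            # e.g., chatbots, which carry transparency duties
    MINIMAL = "minimal"            # e.g., spam filters, AI in video games

# Hypothetical mapping of tiers to headline obligations; a simplification
# of the Act's actual requirements, for illustration only.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "Prohibited from the EU market",
    RiskTier.HIGH: "Conformity assessment, risk management, human oversight, logging",
    RiskTier.LIMITED: "Transparency duties (e.g., disclose that users interact with AI)",
    RiskTier.MINIMAL: "No additional obligations beyond existing law",
}

def obligations_for(tier: RiskTier) -> str:
    """Return the headline obligation for a given risk tier."""
    return OBLIGATIONS[tier]

if __name__ == "__main__":
    for tier in RiskTier:
        print(f"{tier.value:>12}: {obligations_for(tier)}")
```

The point of the sketch is simply that obligations scale with risk: the higher the tier, the heavier the compliance burden, up to outright prohibition.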
The October 2023 U.S. Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence is a policy directive that regulates AI by establishing standards for AI safety and security and by requiring that the most powerful AI systems be extensively tested, including by third parties, to reduce the chance of unintended consequences. It directs new standards for AI safety and security, protection of Americans' privacy, advancement of equity and civil rights, protection of consumers and workers, promotion of innovation and competition, and advancement of American leadership.
The EU and U.S. approaches are similar in that they share common goals and principles: promoting responsible and trustworthy AI, protecting human rights and safety, fostering innovation and competition, and advancing global leadership and cooperation. They differ in the following ways:
- Scope: The U.S. executive order covers a wider range of AI applications and issues, while the EU AI Act focuses on specific categories of AI systems that are classified as high-risk or prohibited.
- Specificity and detail: The U.S. executive order sets broad principles and goals for AI development and use, while the EU AI Act provides more detailed and prescriptive requirements and rules for AI providers and users.
- Enforceability and legal authority: The U.S. executive order is an administrative action that can be modified or revoked by future administrations, while the EU AI Act is a legislative proposal that becomes legally binding once approved by the European Parliament and the Council of the European Union.
- Certification and the role of standards: The U.S. executive order directs federal agencies to develop standards, tools, and tests for AI safety and security but does not mandate compliance or certification for AI systems. The EU AI Act requires high-risk AI systems to undergo conformity assessments and obtain certificates before being placed on the market or put into service.
- International partners and stakeholder involvement: The U.S. executive order encourages public participation and consultation with stakeholders and experts on AI issues, as well as collaboration with allies and partners on global AI governance. The EU AI Act envisages consultation and cooperation mechanisms with stakeholders and third countries, but also emphasizes the need to protect the EU’s values and interests in AI.
These differences are a manifestation of the two jurisdictions' different legal systems, political cultures, and strategic priorities. They also have different effects on AI startups and companies, AI innovation, and AI implementation in the EU and the U.S.
- For AI startups and companies, the U.S. executive order could create more opportunities to access resources and markets, as it encourages public-private partnerships and international cooperation on AI. The broad and evolving principles and standards set by the government could, however, be a challenge for AI startups and companies. Across the Atlantic, the EU AI Act could create more barriers for AI startups and companies entering and competing in the EU market because of its strict and costly requirements for high-risk AI systems. On the other hand, it could also create stronger incentives for startups and companies to innovate and differentiate themselves by offering trustworthy and ethical AI solutions.
- For AI innovation, the U.S. executive order could foster more innovation by promoting a fair, open, and competitive ecosystem and by supporting AI research and development. It could, however, stifle innovation by limiting the scope and scale of large language models (LLMs), which are key drivers of AI breakthroughs. The EU AI Act could stifle innovation by creating a complex and fragmented regulatory environment and by discouraging experimentation and risk-taking. It could, however, foster innovation by creating a harmonized and predictable legal framework and by encouraging human-centric, value-based design.
- For AI implementation, the U.S. executive order could facilitate adoption by enhancing the safety, security, and trustworthiness of AI systems and by supporting workers and consumers affected by AI. However, it could also complicate implementation by creating uncertainty and inconsistency in the enforcement and oversight of AI regulations. The EU AI Act could complicate implementation by imposing high compliance costs and liabilities for high-risk AI systems and by restricting the use of certain data and functionalities. Conversely, it could also facilitate implementation by enhancing the transparency, accountability, and quality of AI systems and by protecting the rights and interests of users and affected parties.
Artificial intelligence systems can pose threats to the security and privacy of data, systems, and users, which makes cybersecurity an important consideration for any AI regulation. What are the cybersecurity implications of these two similar yet different approaches?
The EU AI Act could enhance the cybersecurity of high-risk AI systems by imposing strict, harmonized rules and standards across the EU. But it could also make it harder for European companies and users to adopt innovative or beneficial AI systems that do not meet the EU criteria or are prohibited outright. Additionally, it could increase the administrative burden and compliance costs for European providers and users of high-risk AI systems.
The U.S. executive order could foster more innovation and flexibility in the development and use of AI systems in the United States by promoting a voluntary and collaborative approach to cybersecurity. However, it could also leave U.S. companies and users exposed to cyberattacks or breaches by malicious actors and adversaries who exploit vulnerabilities or loopholes in AI systems, and it could reduce the trustworthiness and accountability of U.S. providers and users of AI systems.
These differences are neither exhaustive nor definitive, since they depend on how the U.S. executive order and the EU AI Act are interpreted and implemented. Nor are they mutually exclusive or contradictory: they reflect different trade-offs and balances between competing objectives and values in regulating AI, as well as different mindsets toward the future of AI.
In closing, artificial intelligence is a dynamic, developing family of technologies that brings benefits and opportunities and demands caution and responsibility. Can regulating a technology's potential threats also create an innovative environment in which AI flourishes in the EU and the United States? How do we strike a balance between regulation and innovation, and between protection and promotion?