The blog “The Tale of Two Approaches to Artificial Intelligence – EU AI Act & U.S. Executive Order on Safe, Secure, and Trustworthy AI” was a balanced look at the similarities and differences in approaches to AI. The divergence of approach is a manifestation of our different legal systems, political cultures, and strategic priorities. This opinion piece is an extension of that blog, focusing on the EU AI Act. How might we strike a balance between innovation and regulation for both big tech and Small and Medium-Sized Enterprises (SMEs)? There is indeed a race to govern AI. But are we focusing on the real transformative power of this technology?
The European Union’s AI Act highlights its commitment to setting a global standard for the deployment of ethical AI. The delay in finalizing it is a natural by-product of our democratic values and character. It is a complex and impactful piece of legislation that attempts a delicate balance: ensuring the safety of AI systems and respect for our values and laws, while avoiding curbs on the innovation that is critical to our economic vitality and technological progress.
The trilogue talks will commence, and from a policy perspective, creating more nuanced categories of AI applications that allow for a tiered approach to regulation is perhaps the way forward. Not all AI applications would need to be subjected to the same level of scrutiny, which would reduce the burden on less risky AI activities. Even so, the Act still leans towards a protective regulatory regime, and the compliance costs associated with it could disproportionately impact SMEs. The core of the challenge is ensuring the regulatory framework is robust enough to protect citizens and their rights without placing an undue burden on the smaller industry players.
To ensure that SMEs are not hindered by the Act, exemptions or tiered compliance requirements based on enterprise size or AI application scope would be prudent. Government-funded programs or incentives for compliance could also relieve some of the financial weight on SMEs, alongside a clear and accessible compliance framework, delivered through online portals or dedicated support teams, to help SMEs navigate the regulatory landscape efficiently and effectively.
Innovation is not only the purview of big tech firms; in fact, the engines of breakthroughs and novel applications are often the SMEs. Therefore, policy should be shaped in a manner that nurtures the innovative spirit inherent in these smaller enterprises.
Reflecting on the potential impact of the EU AI Act on the global stage, it is clear that the way we govern AI today will have acute implications for the competitive dynamics of tomorrow.
How could we address concerns about the potential impact of the Act on SMEs? The notion of big tech – big responsibility, which echoes the principles of proportionality and fairness by recognizing that the giants of the tech industry have the resources to bear a greater share of the regulatory burden, is compelling and interesting to explore. It is an approach that could help foster a more equitable innovation ecosystem where SMEs can thrive without the overshadowing burden of compliance costs. The stakes are high when it comes to foundation models and their applications, which have immense potential to alter industries and societies. It is incumbent upon the larger players, who have the capacity to develop and deploy AI at scale, to ensure their innovations do not cause negative societal impacts. They should be the ones primarily responsible for the rigorous testing, robust quality management, and the ethical considerations that come with AI deployment.
For SMEs whose AI applications are specialized and limited in scope, a big tech – big responsibility model would allow them to continue to innovate within their niche without a disproportionate compliance burden. I am not advocating that SMEs should be exempt from regulation; rather, the regulatory framework should be scalable and adaptable, reflecting the size of the company and the potential impact of the AI tool. A regulatory environment that is responsive to the scale and scope of the AI application encourages innovation across the board.
Artificial Intelligence is indeed a transformative technology. The transformation I am referring to is a paradigm shift from a culture of proprietary dominance to one of collaborative stewardship. Collaboration is not without its challenges, but it is key. Monetization is a significant hurdle. Big tech companies are beholden to their shareholders and operate within an economic model that rewards intellectual property and competitive advantage. Their willingness to share foundational models and tools is contingent upon a business model that can reconcile the open dissemination of technology with the need to generate profits.
This is a challenge that requires the exploration of novel business models that incentivize collaboration without compromising the financial sustainability of big tech firms. A tiered access model, or a form of revenue-sharing agreement in which SMEs contribute to the development and refinement of AI models in exchange for access to the technology, could be one way. It is a complex issue that needs a multifaceted approach, including policy incentives, industry standards and, most importantly perhaps, a cultural shift within the tech industry towards a more cooperative and socially responsible ethos.
The evolution of a technology that can impact society so profoundly must prioritize not only innovation and market dominance but also social responsibility and ethical considerations. This is a pivotal cultural shift that requires a significant realignment of values and incentives, encouraging big tech to view their role through a lens of stewardship and societal benefit, rather than solely through the lens of profit maximization. Practically, this compels a rethinking of corporate governance structures to reward long-term, socially responsible innovation. Success metrics would need to be recalibrated, moving away from short-term financial gain to include long-term impacts on society and the environment.
This is not just about the willingness of big tech to share but also the mechanisms by which they might do so in a manner that promotes sustainable, inclusive growth. Licensing agreements that allow SMEs to use AI technologies at reduced cost, or collaborative research initiatives that pool resources and share findings, could also be transformative.
Governments and international bodies also have a role to play in facilitating this cultural shift through policies that incentivize ethical practices, such as tax breaks for companies that engage in responsible AI development, or grants for collaborative projects between big tech and SMEs.
What novel business models can you think of that would incentivize collaboration without compromising financial sustainability?
Do big tech companies have a larger share of the moral and social obligation to ensure AI systems are ethical, fair, accountable and transparent?