This article discusses the ongoing efforts to regulate AI across the world, trends in AI legislation in the EU and the U.S., and the challenges ahead.
Since the commercialization of AI, the need to regulate the technology has become apparent. The democratization of a product or service is a key trigger for governments to identify potential dangers in a sector and develop a framework to mitigate risk: the more people exposed to the product, the more urgent the need. As of November 2023, ChatGPT had 100 million weekly users.
Ever since ChatGPT’s public release in November 2022, governments around the world have been trying to find an approach that regulates the technology without hindering its innovative capabilities. An avalanche of regulatory activity soon followed, ranging from guidelines and soft law to proposals and formal legislative initiatives. At this point, it is worth taking a step back to review the approaches that different countries and organizations have taken, the main regulatory texts that have moved forward and their impact on an industry poised to dominate global markets.
The U.S. Approach
For anyone who has followed the American federal government’s legislative pattern over the past decades, it is safe to say that regulating industries has been a last resort, as the fear of hindering economic growth has taken precedence. This is especially true of the tech sector: the United States has long been a hub of innovation, governed by a techno-libertarian approach built on the dogma of “innovation first, regulation later.” That fear of stifling innovation has kept regulators at bay on many issues where, admittedly and in retrospect, far more government oversight was needed. AI has not followed the same pattern. While it has stopped short of anything like the EU AI Act, the Biden Administration has understood the importance of entering the public policy debate early and playing a role in shaping the global discussion.
U.S. Regulatory Texts and Industry Implications
AI Labeling Act of 2023 (In committee) & Advancing American AI Act (Introduced)
The AI Labeling Act was introduced by Sens. Brian Schatz (D-HI) and John Kennedy (R-LA). It requires AI-generated content to be clearly labeled and obliges AI developers and third-party licensees to take steps to prevent the systematic publication of content without disclosures. The Advancing American AI Act, by contrast, requires specified federal agencies to take steps to promote artificial intelligence (AI) while aligning with U.S. values, such as the protection of privacy, civil rights and civil liberties. For example, the Department of Homeland Security (DHS) must outline policies and procedures for the acquisition and use of AI and for weighing the risks and impacts of AI-enabled systems.
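To make the labeling obligation concrete, the sketch below shows what a disclosure gate might look like in a developer’s publishing pipeline. It is a minimal illustration, not language from the bill: the `LabeledContent` structure, its field names and the `publish` check are all hypothetical.

```python
# Hypothetical sketch of a disclosure pipeline in the spirit of the
# AI Labeling Act: generated content carries a clear label, and nothing
# is published without one. All names and fields are illustrative.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class LabeledContent:
    body: str         # the AI-generated content itself
    disclosure: str   # the human-readable notice attached to it
    generator: str    # which model or system produced the content
    created_at: str   # timestamp, useful for audit trails

def label_output(body: str, generator: str) -> LabeledContent:
    """Attach a disclosure to freshly generated content."""
    return LabeledContent(
        body=body,
        disclosure="This content was generated by artificial intelligence.",
        generator=generator,
        created_at=datetime.now(timezone.utc).isoformat(),
    )

def publish(content: LabeledContent) -> None:
    """Refuse to publish anything lacking a disclosure, mirroring the
    bill's bar on systematic publication without disclosures."""
    if not content.disclosure:
        raise ValueError("AI-generated content must carry a disclosure label")
    print(f"[{content.disclosure}]\n{content.body}")

publish(label_output("A short AI-written summary.", generator="example-model"))
```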
President Biden’s Executive Order on AI
The Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, issued by President Joe Biden, represents a significant step toward outlining a potential regulatory framework for various sectors. The order places particular emphasis on national security, national economic security and national public health and safety. While it also covers a range of areas, including civil rights, education and healthcare, it concentrates notably on the challenges and opportunities AI presents in the national security context. Above all, it emphasizes the importance of safeguarding civil liberties and ensuring transparency in AI-generated content.
This emphasis suggests a strategic approach by the Biden Administration in examining AI and prioritizing security and safety concerns. As a result, it provides valuable insights into the potential direction of future legislation in this area. It also signals a recognition of the critical role AI plays in national security and underscores the need for proactive measures to ensure its responsible development and deployment.
Moving forward, legislative efforts are likely to keep prioritizing the security implications of AI while also addressing broader societal and ethical considerations. This could involve regulatory frameworks that balance innovation with safeguards against the risks AI technologies pose. As the Biden Administration and Congress delve further into the issue, we can expect ongoing discussions and initiatives aimed at a comprehensive approach to AI governance. While the Executive Order does not prescribe a specific regulatory framework, it serves as a foundational document shaping the direction of responsible AI use.
Through its directives to actors across the federal government, the Executive Order lays the groundwork for collaborative efforts to address the complexities of AI governance. By engaging stakeholders including industry experts, civil society and academia, it seeks to foster a holistic approach to AI regulation that prioritizes safety, transparency and accountability. The Administration has since released a document tracking how the order’s guidelines have been adopted by the respective federal agencies.
The EU Approach and the EU AI Act
The comparison between the U.S. and EU approaches to regulating AI highlights contrasting philosophies and strategies toward governing emerging technologies.
The EU has taken a proactive stance with its AI Act, which aims to establish a comprehensive regulatory framework for AI. This legislation adopts a risk-based approach, categorizing AI systems based on their potential impact and implementing corresponding levels of oversight. The EU’s approach recognizes the multifaceted nature of AI applications.
The EU’s risk-based approach categorizes AI systems into four levels; a short code sketch of the taxonomy follows the list below.
- Unacceptable risk: Uses of AI considered a threat to fundamental human rights, which will be banned within the EU. These include biometric categorization systems that use sensitive characteristics (e.g. political, religious or philosophical beliefs, sexual orientation, race), untargeted scraping of facial images from the Internet or CCTV footage to create facial recognition databases, emotion recognition in the workplace and educational institutions, and social scoring based on social behavior or personal characteristics, among others. The only exemption for biometric identification pertains to use by law enforcement, and it comes with a clear and restrictive framework.
- High risk: The second level covers AI systems with significant potential to harm health, safety, fundamental rights, the environment, democracy or the rule of law. It includes critical infrastructure, medical devices and certain systems used in law enforcement, the administration of justice and democratic processes. These systems face a stringent set of mandatory compliance obligations.
- Limited risk: This category includes chatbots and certain emotion recognition and biometric categorization systems, which are subject to light transparency obligations, such as informing users that they are interacting with AI.
- Minimal/No risk: All other AI systems, such as spam filters or AI-enabled recommender systems, which face only voluntary codes of conduct.
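Read as a data structure, the Act’s taxonomy maps each tier to a set of obligations. The sketch below captures that shape; the tier names mirror the Act, while the obligation strings and example classifications are condensed, illustrative paraphrases rather than legal text.

```python
# A minimal sketch of the EU AI Act's four-tier, risk-based taxonomy.
# Obligations and example classifications are illustrative paraphrases.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned within the EU"
    HIGH = "mandatory compliance obligations and conformity assessment"
    LIMITED = "transparency obligations (e.g. disclose the use of AI)"
    MINIMAL = "voluntary codes of conduct only"

# Illustrative mapping of example systems to tiers.
EXAMPLE_CLASSIFICATIONS = {
    "social scoring based on personal characteristics": RiskTier.UNACCEPTABLE,
    "untargeted scraping of facial images for recognition databases": RiskTier.UNACCEPTABLE,
    "AI component of a medical device": RiskTier.HIGH,
    "AI used in the administration of justice": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLE_CLASSIFICATIONS.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```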
Moreover, the EU’s approach includes provisions for exemptions and the establishment of AI “sandboxes” to facilitate experimentation and innovation while maintaining regulatory oversight. This agile and risk-based approach represents a departure from previous regulatory strategies, aiming to strike a balance between fostering innovation and safeguarding against potential risks.
Global Asymmetry and the Effort of Harmonization – A Global Licensing Authority
Despite regulatory efforts in both the EU and the U.S., numerous considerations remain unaddressed. Without comprehensive harmonization and widely accepted standards for regulating AI, the specter of asymmetric competition looms. Nations with scant regard for human rights and a history of infringing upon civil liberties could exploit AI for nefarious purposes, including targeting political adversaries, launching attacks on other states, undermining democratic processes worldwide and fostering tech-authoritarianism.
The U.S. has taken the lead in pushing for a resolution through the UN General Assembly, garnering support from all 193 member states. This initiative underscores the perils of unregulated AI. The resolution seeks to commit all countries to deploying AI in alignment with the principles enshrined in the UN’s foundational documents and the Universal Declaration of Human Rights. However, a significant obstacle remains: China’s reluctance to incorporate human rights considerations into AI development and deployment.
In the absence of a global regulatory body dedicated to upholding democratic values in AI development, the idea of establishing an independent global licensing authority merits consideration. This entity would be tasked with issuing licenses for AI usage to countries and companies, contingent upon rigorous testing across diverse scenarios, comprehensive impact assessments covering human rights and civil liberties, and a trial period during which the agency would conduct ongoing oversight to ensure compliance with UN resolutions.
Undoubtedly, such a proposal would impose constraints on innovation. Yet, given the uncharted nature of AI’s evolution and the myriad potential risks it poses, including the possibility of AI falling into the wrong hands or being wielded for malicious purposes such as mass radicalization by terrorist groups, caution is paramount. By adhering to the adage of hoping for the best while preparing for the worst, it is imperative to ensure that this emerging technology serves to safeguard our shared global public interests.