Amended EU AI Act Takes Aim at American Open-Source AI Models and API Access

Samuel Oba
Coinmonks


The European Union’s amended AI Act, voted out of committee on Thursday, 11th May 2023, is set to have significant implications for American companies such as OpenAI, Amazon, Google, and IBM. If passed, the act would ban these companies from providing API access to generative AI models in the EU, and would sanction American open-source developers and software distributors, such as GitHub, if unlicensed generative models became available in Europe.

The amended act includes open-source exceptions for traditional machine learning models but expressly forbids safe-harbor provisions for open-source generative systems. Any model made available in the EU without first passing extensive and expensive licensing would expose companies to fines of the greater of €20,000,000 or 4% of worldwide annual revenue. Open-source developers, and hosting services such as GitHub, would be liable for making unlicensed models available. Because only large companies could realistically afford the licensing process, the act could effectively force American small businesses out of the market, and it threatens to sanction important parts of the American tech ecosystem.

Enforcement of the AI Act would be taken out of the hands of individual EU member states: under the act, third parties could sue national governments to compel them to levy fines. Combined with the act’s extraterritorial jurisdiction, this means a European government could be forced by third parties into enforcement actions against American developers and businesses.

The amended AI Act claims very broad jurisdiction: it covers providers and deployers of AI systems that have their place of establishment in, or are located in, a third country, where either Member State law applies by virtue of public international law or the output produced by the system is intended to be used in the Union. Any non-EU company or individual providing AI services or systems in the EU, including open-source developers, would therefore need to comply with the act’s provisions.

One of the act’s primary provisions requires companies to register their “high-risk” AI projects or foundational models with the government. Projects must register the anticipated functionality of their systems, and systems that exceed this functionality may be subject to recall, which will be a problem for many of the more anarchic open-source projects. Registration would also require disclosure of data sources, computing resources (including time spent training), performance benchmarks, and red-teaming results.

Moreover, companies would need to undergo expensive risk testing mandated by the individual EU states, with third-party assessments carried out in each country on a sliding scale of fees depending on the size of the applying company. The tests would run against benchmarks that have yet to be created, and post-release monitoring would be required, presumably by the government. Recertification would be required if a model showed unexpected abilities, and again after any substantial retraining.

The amended AI Act’s list of risks includes harms to such things as the environment, democracy, and the rule of law. These risks are vaguely defined, however, raising the question of what, exactly, constitutes a risk to democracy; some critics have even argued that the act itself could be one.

Open-source foundational models are not exempt from the AI Act: the programmers and distributors of the software bear legal liability. For other forms of open-source AI software, liability shifts to the group deploying the software or bringing it to market. The act also essentially bans APIs, which allow third parties to use an AI model without running it on their own hardware; tools built on such APIs include AutoGPT and LangChain. Under these rules, if a third party using an API figures out how to get a model to do something new, that third party must get the new functionality certified.
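For concreteness, “API access” here means calling a model hosted on someone else’s infrastructure. Below is a minimal sketch of what that looks like in practice, using the OpenAI Python client; the model name and prompt are illustrative, and an OPENAI_API_KEY environment variable is assumed:

```python
# Minimal sketch of third-party API access to a hosted generative model.
# The model runs on the provider's hardware; the caller only sends requests.
# Assumes the pre-1.0 `openai` package and an OPENAI_API_KEY env variable.
import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[{"role": "user", "content": "Summarize the EU AI Act in one sentence."}],
)
print(response["choices"][0]["message"]["content"])
```

It is precisely this pattern, and the tools layered on top of it such as AutoGPT and LangChain, that the act’s certification requirements would reach.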

If an American open-source developer published a model, or code calling such an API, on GitHub, and the code became available in the EU, the developer would be liable for releasing an unlicensed model.

In summary: high-risk AI projects and foundational models must be registered with the government and put through expensive risk testing; the risks are vaguely defined; open-source LLMs are not exempt; APIs are essentially banned; open-source developers are liable; and fine-tuning techniques such as LoRA are essentially banned (see the sketch below). If enacted, enforcement would be out of the hands of EU member states, and third parties could sue national governments to compel fines.
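For readers unfamiliar with it, LoRA (Low-Rank Adaptation) lets individuals cheaply fine-tune a large model by training small adapter matrices instead of the full weights, exactly the kind of post-release “substantial training” that would trigger recertification under the act. Here is a minimal sketch using Hugging Face’s peft library; the base model and hyperparameters are placeholders, not recommendations:

```python
# Illustrative LoRA fine-tuning setup with Hugging Face's peft library.
# Base model and hyperparameters are placeholders, not recommendations.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # any causal LM

config = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling factor applied to the adapters
    target_modules=["c_attn"],  # GPT-2's fused attention projection layer
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)  # freezes the base, adds adapters
model.print_trainable_parameters()    # typically well under 1% of weights
```

Because the trainable adapter weights are a tiny fraction of the model, anyone with a consumer GPU can produce a meaningfully modified model, which is what makes per-modification recertification so burdensome for open-source developers.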

You can read more about the EU AI Act in the European Parliament’s official press release here: https://www.europarl.europa.eu/news/en/press-room/20230505IPR84904/ai-act-a-step-closer-to-the-first-rules-on-artificial-intelligence
