Artificial intelligence: pioneering role or brake on innovation? The EU AI Act in a global context
Author: Peter Kuhn, Senior Ecosystem Manager – MedTech & AI, IT and Project Management Expert @ 5-HT Chemistry & Health
Introduction
Artificial intelligence (AI) has the potential to change many aspects of our lives, from the way we work to the diagnosis and treatment of diseases. Given the enormous potential of this technology, the European Union (EU) created the world's first comprehensive legal framework for the use of artificial intelligence: the AI Act, adopted by the European Parliament on March 13, 2024, and approved by the EU Council on May 21, 2024. The law places numerous requirements on manufacturers, but one question remains: is the AI Act a global role model or a brake on innovation?
The global situation
Several studies show that the USA is still considered a pioneer when it comes to artificial intelligence. The annual AI Index Report published by Stanford University confirms this impression: 61 of the machine learning models classified as "notable" in 2023 were developed in the USA, followed by the EU with 21 and China with 15 models[1].
Fig. 1: Number of notable machine learning models by select geographic area, 2003-23
With regard to the topic of regulation, which the report also covers, it is clear that even the USA, as a pioneer in the field of AI, is not acting without rules: its framework was recently expanded with the "Executive Order on Safe, Secure, and Trustworthy AI".
Fig. 2: Number of AI-related regulations in the European Union by approach, 2017-23
Fig. 3: Number of AI-related regulations in the United States by approach, 2016-23
Contrary to what the raw numbers in the AI Index Report might suggest, however, the EU AI Act is widely regarded as the far more restrictive of the two regimes.
Both aim to ensure the responsible and trustworthy use of AI without neglecting safety and human rights. However, while the Executive Order sets out principles and objectives for the development and use of AI, the AI Act imposes strict requirements and rules on providers and users. Furthermore, the Executive Order does not require a declaration of conformity or certification, as the AI Act does for high-risk systems[2][3].
The EU AI Act
The AI Act categorizes AI applications into four risk classes: unacceptable, high, limited and low/minimal risk. An AI system that poses a clear threat to the safety, livelihoods or rights of people is considered an unacceptable risk and falls under the prohibited AI practices of Article 5 of the AI Act.
AI systems that interact with people, such as chatbots, are classified by the EU as limited risk. Transparency obligations apply here to ensure that such systems are recognizable as AI.
Systems that pose neither an unacceptable, a high nor a limited risk fall into the low/minimal-risk category and are not subject to any further requirements; a typical example is a spam filter. The high-risk class, which carries most of the Act's obligations, is examined in the next section.
Fig. 4: Risk classes in the EU AI Act
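To make this four-tier logic tangible, here is a minimal, purely illustrative Python sketch of how a provider might triage a system into the Act's risk classes. The example systems and the classify helper are hypothetical simplifications and no substitute for a legal assessment against the Act itself.

```python
from enum import Enum

class RiskClass(Enum):
    UNACCEPTABLE = "prohibited practice (Art. 5)"
    HIGH = "high risk: conformity assessment required"
    LIMITED = "limited risk: transparency obligations"
    MINIMAL = "low/minimal risk: no further requirements"

def classify(is_prohibited_practice: bool,
             is_regulated_product_or_annex_iii: bool,
             interacts_with_humans: bool) -> RiskClass:
    """Simplified triage mirroring the AI Act's four risk classes.

    A real assessment works through Art. 5, the annexes and the
    transparency rules in detail; this only mirrors the order of checks.
    """
    if is_prohibited_practice:             # e.g. social scoring
        return RiskClass.UNACCEPTABLE
    if is_regulated_product_or_annex_iii:  # e.g. AI in a class IIa+ medical device
        return RiskClass.HIGH
    if interacts_with_humans:              # e.g. a chatbot
        return RiskClass.LIMITED
    return RiskClass.MINIMAL               # e.g. a spam filter

print(classify(False, True, False))   # RiskClass.HIGH
print(classify(False, False, False))  # RiskClass.MINIMAL
```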
Requirements for high-risk systems
High-risk AI products must meet special requirements in the areas of risk and quality management as well as data quality. Medical devices in classes IIa, IIb and III under the EU Medical Device Regulation (MDR) fall into this category, as they already require third-party conformity assessment. The requirements include:
Risk management system: Must be established and maintained throughout the entire life cycle of the system.
Data and data governance: Training, validation and test data must meet appropriate quality criteria.
Technical documentation: Must demonstrate that the requirements for high-risk systems are met.
Record-keeping: Automatic logs for traceability throughout the life cycle (see the sketch after this list).
Transparency and information: Users must be able to interpret and use the results; clear instructions for use are required.
Human oversight: Systems must be designed so that they can be effectively supervised by humans.
Accuracy, robustness and cybersecurity: Protection against errors, malfunctions and attacks must be guaranteed. This applies in particular to AI-specific vulnerabilities such as manipulation of training data (data poisoning) or model evasion.
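For the record-keeping requirement in particular, it helps to picture what "automatic logs for traceability" can mean in practice. The following is a minimal sketch, assuming a hypothetical predict function and a log schema of our own invention; the field names are illustrative and not prescribed by the AI Act.

```python
import json
import logging
from datetime import datetime, timezone
from hashlib import sha256

logger = logging.getLogger("ai_audit_trail")
logging.basicConfig(filename="audit.log", level=logging.INFO)

MODEL_VERSION = "1.4.2"  # hypothetical version identifier

def predict(features: dict) -> str:
    """Placeholder for the actual model inference."""
    return "low_risk_finding"

def predict_with_audit_log(features: dict) -> str:
    """Run inference and write a structured, traceable log record."""
    result = predict(features)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": MODEL_VERSION,
        # Hash instead of raw input: traceability without storing
        # personal health data in the log itself.
        "input_hash": sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "output": result,
    }
    logger.info(json.dumps(record))
    return result

predict_with_audit_log({"age": 54, "crp_mg_l": 12.3})
```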
There are, however, many parallels to the EU MDR, particularly in the area of medical devices; more on this later.
Implementation and effects
The regulation will be implemented gradually. It enters into force 20 days after publication in the Official Journal of the EU and is fully applicable 24 months after entry into force. There are exceptions:
Ban on AI applications with unacceptable risk: 6 months after entry into force
Codes of conduct: 9 months after entry into force
General-purpose AI systems: 12 months after entry into force
Obligations for high-risk systems: 36 months after entry into force[4]
Fig. 5: Timeline of the implementation of the EU AI Act Regulation
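Since all deadlines are defined relative to entry into force, the concrete dates can be derived mechanically once that date is fixed. A small sketch, assuming a hypothetical entry-into-force date chosen purely for illustration:

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date by whole months (assumes the day exists in the target month)."""
    total = d.year * 12 + (d.month - 1) + months
    return date(total // 12, total % 12 + 1, d.day)

# Hypothetical entry-into-force date, for illustration only.
entry_into_force = date(2024, 8, 1)

milestones = {
    "Ban on unacceptable-risk AI": 6,
    "Codes of conduct": 9,
    "General-purpose AI systems": 12,
    "Full applicability": 24,
    "Obligations for high-risk systems": 36,
}

for name, months in milestones.items():
    print(f"{name}: {add_months(entry_into_force, months)}")
```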
Are data protection and regulation only for healthy people?
"A boy saw 17 doctors for 3 years for chronic pain. ChatGPT found the diagnosis," was the headline of the article on Today.com on September 11, 2023 [5].
This is a very concrete example of careless handling of health data in connection with artificial intelligence, but it also raises the question of what risks people are willing to take when they themselves or close relatives are seriously ill. Am I prepared to entrust my patient file to an AI, or to the company behind it, if that is what it takes to get help? Would I choose an algorithm that is not compliant with the EU AI Act or the EU GDPR over a compliant one if it meant living longer?
At first glance, this example may read as a plea against data protection and regulation, because a sick person will always opt for the algorithm that helps them. But this is precisely where the problem lies: an algorithm that gives me a plausible diagnosis feels more "helpful" than one that does not, regardless of whether that diagnosis is actually correct.
However, various studies show that diagnoses from AI systems should be treated with caution. While an August 2023 study by Mass General Brigham attested ChatGPT a diagnostic accuracy of just under 72% [6], a January 2024 study by the Cohen Children's Medical Center in New York, which explicitly examined pediatric cases, found that only 17% of them were diagnosed correctly [7].
These figures make clear how important it is that AI systems involved in decisions about the treatment of patients are rigorously validated and regulated.
EU AI Act and medical devices
The AI Act is likely to hit manufacturers of medical devices less hard than providers of other high-risk systems, as its requirements often overlap with those of the EU MDR. Nevertheless, manufacturers should check whether extensions and adjustments are necessary, particularly to ensure the protection of fundamental rights.
Overlaps can be found, for example, in the articles on:
Risk management system (Art. 9)
Technical documentation (Art. 11)
Transparency and provision of information (Art. 13)
Accuracy, robustness and cybersecurity (Art. 15)
Quality management system (Art. 17)
Duty to provide information and cooperation with competent authorities (Art. 20, 21)
Conformity assessment (Art. 47)
Post-market surveillance (Art. 72)
System for reporting serious incidents (Art. 73)
The above points are already fully or partially covered by the MDR. Nevertheless, medical device manufacturers should critically re-examine them in light of the AI Act, as extensions and adjustments may be necessary: while the MDR mainly addresses safety and performance requirements, the AI Act additionally incorporates aspects such as the protection of fundamental rights [8][9].
Fig. 6: Overlap of requirements between the MDR and the AI Act
Opportunities and risks
Strengthening the internal market?
The MDR was initially criticized as over-regulation, but it also had positive effects on the EU internal market, as many manufacturers, Asian ones among them, withdrew from the market because of the increased requirements. Even if the EU AI Act does not serve as a model for global regulation, a similar effect could occur here, too.
Global role model?
Initiatives such as the International Code of Conduct for Advanced AI Systems in the Hiroshima Process or the OECD Principles for Artificial Intelligence show a global interest in regulating artificial intelligence.
Several aspects of the AI Act could make it a potential global model. First, as the world's first comprehensive legal framework for AI, it places a strong focus on the ethical dimension of the technology, including the protection of fundamental rights and the avoidance of discrimination. Second, it promotes transparency and accountability of AI systems, which could help to increase public trust in these technologies. Third, as an EU law, the AI Act applies in the third-strongest AI region after the USA and China and therefore already carries considerable weight: according to the AI Index Report, almost 20% of notable models fall under the AI Act.
A brake on innovation?
There are concerns that the AI Act could put the brakes on innovation. Strict regulatory requirements could lead to overregulation, or be perceived as such, and inhibit the development of new AI technologies. A legal text peppered with prohibitions and penalties can fuel fears, especially among small companies and start-ups.
Another risk that could make the AI Act a brake on innovation lies in bureaucratic hurdles: if the member states fail to create a uniform EU-wide regulatory framework, implementation of the AI Act could end up as a patchwork of national rules.
To date, no national supervisory authorities have been designated to review the implementation of the requirements set out in the AI Act. Before the Act's obligations take effect, there is an urgent need to create bodies with sufficient capacity and expertise to handle the coming flood of applications. Here, too, parallels can be drawn with the MDR, whose introduction led to massive bottlenecks at the notified bodies.
Conclusion
In conclusion, the EU AI Act represents a significant development in the regulation of artificial intelligence. Its introduction marks an important step towards the responsible use of this technology and the safeguarding of fundamental rights and ethical standards. Whether it will serve as a global role model or rather be perceived as a brake on innovation remains to be seen and will largely depend on its implementation and its impact on AI development and the market. It is crucial that the EU AI Act helps to strengthen trust in AI systems without hindering the industry's innovative strength. Ultimately, it is up to the players in politics, business and society to seize the opportunities and minimize the risks in order to unleash the full potential of artificial intelligence in line with the fundamental values of our society.
Sources
[1] https://aiindex.stanford.edu/wp-content/uploads/2024/04/HAI_2024_AI-Index-Report.pdf
[2] https://cybersecurityadvisors.network/2023/11/08/the-tale-of-two-approaches-ai/
[3] https://kpmg.com/xx/en/home/insights/2024/05/setting-the-ground-rules-the-eu-ai-act.html
[5] https://www.today.com/health/mom-chatgpt-diagnosis-pain-rcna101843
[6] https://www.jmir.org/2023/1/e48659/
[7] https://jamanetwork.com/journals/jamapediatrics/article-abstract/2813283
[9] https://de-mdr-ivdr.tuvsud.com/EU-Medizinprodukteverordnung.html