AI in the EU: a closer look at WorkTech and EdTech AI in light of the AI Act
Rubio’s Ilonka Jankovich (supported by Paul de Ruijter) explored the recently approved European Union AI Act and its impact on WorkTech and EdTech startups. They outlined the general framework of the AI Act and, to illustrate its practical implications, applied it to a real-world example from our portfolio: SkillLab.
A Closer Look at the AI Act
The goal of the AI Act is to ensure that AI applications are developed and used in a way that is safe, ethical, and respects fundamental human rights. The legislation covers all AI systems and categorizes them based on the potential risks they pose to individuals and society.
Under the EU AI Act, AI systems are classified into four risk categories:
(1) Unacceptable risk
(2) High-risk
(3) Limited risk
(4) Minimal risk
Systems recognized as posing an ‘unacceptable risk’, such as real-time biometric identification, social scoring systems, or other systems that exploit people’s vulnerabilities, are banned outright. The Act focuses primarily on ‘high-risk’ AI systems. These include AI technologies used in critical sectors such as healthcare, finance, transportation, education, and employment. AI applications that pose a ‘limited’ or ‘minimal’ risk are subject to fewer regulatory requirements. While these applications are considered low-risk, the Act mandates that they are developed and used in a way that maintains user trust and safety. For example, users must be notified when they are interacting with an AI system. ‘Minimal risk’ systems perform a simple task without any risk of manipulation, such as a spam filter. General-purpose AI models, such as ChatGPT, are subject to specific obligations regarding copyright and training data.
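The transparency duty for ‘limited risk’ systems mentioned above can be made concrete with a minimal sketch. This is purely illustrative and not from any real product or the Act itself: the function name, disclosure wording, and session logic are our own assumptions about one way a chat-style product might surface the required notice.

```python
# Illustrative sketch of the 'limited risk' transparency duty:
# users must be told they are interacting with an AI system.
# Function name and disclosure text are hypothetical examples.

def wrap_ai_response(reply: str, first_turn: bool) -> str:
    """Prefix the first AI reply in a session with a disclosure."""
    disclosure = "Note: you are interacting with an AI system. "
    return (disclosure + reply) if first_turn else reply

print(wrap_ai_response("How can I help you today?", first_turn=True))
# Note: you are interacting with an AI system. How can I help you today?
```

In practice the notice might live in the UI rather than the message text; the point is simply that the disclosure happens before, or at the start of, the interaction.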
Influence of AI Act on Startups in EdTech and WorkTech Sectors
The introduction of the Act will have a significant impact on startups, including those operating in the EdTech and WorkTech sectors. These startups will need to navigate the new regulatory landscape and understand how to adapt their AI strategies and development accordingly.
EdTech startups using AI to enhance learning experiences must consider the AI Act’s regulations. If classified as “high-risk,” their AI systems need to ensure data quality, transparency, and human oversight. However, depending on usage, some AI systems might be deemed “low-risk,” requiring compliance with only parts of the regulation. For instance, an AI system used for student assessment must be transparent in its decision-making process and avoid biases, ensuring fair treatment for all students.
WorkTech startups can leverage AI to streamline work processes, enhance productivity, and improve workplace experiences. These startups will need to ensure their AI systems, especially those used for critical functions like employee assessment or recruitment, comply with the Act. This means developing systems that respect worker autonomy, prevent harm, and maintain transparency.
If your company creates, distributes, or uses AI systems in the EU, follow these concrete steps to prepare for compliance with the AI Act:
(1) assess the risks associated with your AI systems according to the EU AI Act
(2) raise awareness in your organization about legislation and its implications
(3) design ethical systems from the start to support future compliance
(4) assign responsibilities within your organization
(5) stay up-to-date regarding updates to legislation and the AI landscape
(6) establish formal governance within your organization
By taking these steps now, your organization will be ready for compliance with the AI Act and equipped to navigate the evolving regulatory landscape with confidence!
Let’s Look at an Example from Rubio’s Portfolio: SkillLab
Let’s take a closer look at SkillLab, one of Rubio’s portfolio companies, and how they determined an appropriate strategy for their AI system in light of the AI Act. SkillLab is active in the WorkTech sector, on a mission to empower people to turn their skills into careers. They are leveling the playing field by providing equal opportunity to every person, focusing on their skills instead of their job titles and educational credentials.
- First, SkillLab assessed whether their system is defined as an AI system, per the definition provided in the AI Act.
- As SkillLab is based in the EU and aims to deploy their system in the Union, they fall within the scope of the AI Act and must comply with it.
- Then, SkillLab assessed their AI system from a risk perspective, working top-down, starting with the ‘unacceptable risk’ category linked to prohibited AI practices. They concluded that their AI system does not pose an ‘unacceptable risk’ and is thus not prohibited.
- Next, SkillLab assessed whether their AI system falls under the ‘high-risk’ category. Although many AI systems in the employment sector are categorized as ‘high-risk’, the system developed and used by SkillLab does not make decisions that influence access to employment, nor does it screen or select natural persons. SkillLab aims to ‘advise, not decide’. All definitions of high-risk AI systems can be found in Annex III of the AI Act.
- As the system is not a general-purpose AI model as defined in Article 51 of the Act, SkillLab concluded that their AI system falls under the ‘limited risk’ category. As such, it is subject to relatively few requirements, but it does need to ensure traceability and explainability, and be transparent to end users when they are interacting with an AI system.
- SkillLab aims to adhere to requirements for high-risk systems, even when they are not mandated by law. Additionally, they strive to comply with voluntary codes of conduct provided by the EU. This will be a sign of good governance, in line with their user-first approach.
- Within their organization, SkillLab has designated individuals who are responsible for ensuring compliance with the AI Act and staying up-to-date with developments and the regulatory landscape. Their strategy requires multiple teams to work together on AI governance, from UX design to cybersecurity.
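The top-down assessment SkillLab walked through can be sketched as a simple decision procedure. This is an illustrative simplification, not legal guidance: the function name, the three boolean inputs, and the mapping to tiers are our own assumptions, and a real assessment requires legal review of the Act’s actual definitions rather than three flags.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable risk (prohibited)"
    HIGH = "high-risk"
    LIMITED = "limited risk"
    MINIMAL = "minimal risk"

def classify_ai_system(
    uses_prohibited_practice: bool,
    matches_annex_iii_use_case: bool,
    interacts_with_end_users: bool,
) -> RiskTier:
    """Top-down triage mirroring the order of the walkthrough above.

    The inputs are deliberate simplifications of questions that, in
    reality, each require careful legal analysis.
    """
    if uses_prohibited_practice:        # e.g. social scoring
        return RiskTier.UNACCEPTABLE
    if matches_annex_iii_use_case:      # e.g. screening job applicants
        return RiskTier.HIGH
    if interacts_with_end_users:        # transparency duties apply
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# SkillLab's outcome: the system advises rather than decides, so it is
# not an Annex III use case, but end users do interact with it directly.
print(classify_ai_system(False, False, True))  # RiskTier.LIMITED
```

The ordering matters: the categories are checked from most to least restrictive, which is why the walkthrough starts at ‘unacceptable risk’ and only settles on ‘limited risk’ after ruling out the stricter tiers.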
Looking Ahead: Future Implications of the AI Act
The AI Act sets a new standard for AI development and deployment, and its influence will undoubtedly extend beyond the EU. As the first legal framework of its kind, it is expected to inspire similar legislation in other regions. For startups, it is an opportunity to align their AI strategies with ethical and responsible practices, setting the stage for sustainable success in the future. By proactively embracing the principles outlined in the Act, startups can set themselves apart and gain a competitive edge in the market.
Curious to find out how the EU AI Act will affect you? Use the EU AI Act Compliance Checker, or browse the Act yourself, to find out exactly what it means for your situation. These tools were created by the Future of Life Institute.
Are you as excited about the latest AI developments as we are? We would love to hear your opinions on the implications of the AI Act, or just have a chat. Feel free to reach out to members of the team!