The EU AI Act and its Potential Impact on Enterprises Harnessing the Power of AI

Artificial intelligence (AI) is becoming increasingly present in our daily lives. From personalized recommendations on Netflix to virtual assistants on our smartphones, AI is changing the way we interact with technology. Our growing reliance on AI raises important questions about how it should be regulated and used. In this blog, we’ll take a look at the EU AI Act in particular and explore what it means for enterprises and consumers.

What is the EU AI Act?

The European Union is often at the forefront of regulation when it comes to new technologies.

The EU AI Act is a proposed regulation by the European Commission to govern the development and deployment of artificial intelligence (AI) systems across the European Union (EU). It’s the first-ever law on AI by a major regulator anywhere, and it positions Europe to be a leader in the artificial intelligence space. The regulation is currently under review and could come into effect as early as 2023.

The Act proposes strict requirements around transparency, accountability, and human oversight for organizations that develop and use AI. It emphasizes keeping humans in the loop so that human values are taken into consideration. The proposal also requires providers and users of high-risk AI systems to comply with rules on:

1. data and data governance

2. documentation and record-keeping

3. transparency and provision of information to users

4. human oversight

5. robustness, accuracy, and security

The regulation proposes a risk-based approach to classifying AI systems. It defines four levels of risk: minimal, limited, high, and unacceptable. Complying with the regulatory requirements attached to each of these levels will require comprehensive risk management frameworks.
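To make the four tiers a little more concrete, here is a minimal sketch (in Python) of how an organization might tag an internal inventory of AI systems against them. The system names and their mappings are hypothetical examples; the legal classification of any real system is determined by the Act’s own criteria and annexes, not by code like this.

```python
from enum import Enum


class RiskLevel(Enum):
    """The four risk tiers proposed by the EU AI Act."""
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"


# Hypothetical internal inventory: each AI system is tagged with the tier
# the organization believes applies to it. The actual classification depends
# on the Act's criteria, not on this mapping.
system_inventory = {
    "spam-filter": RiskLevel.MINIMAL,
    "customer-chatbot": RiskLevel.LIMITED,            # transparency obligations
    "cv-screening-model": RiskLevel.HIGH,             # employment use case
    "social-scoring-engine": RiskLevel.UNACCEPTABLE,  # prohibited practice
}

# High-risk systems carry the full compliance workload: data governance,
# documentation, transparency, human oversight, robustness.
high_risk_systems = [
    name for name, level in system_inventory.items() if level is RiskLevel.HIGH
]
print(high_risk_systems)
```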

Why are AI rules needed?

Setting universal regulatory frameworks is no easy task, and the speed at which the technology is evolving makes it even more difficult to regulate.

As AI continues to evolve and advance, it’s important to establish a set of guidelines to ensure its ethical and responsible use. Setting rules for the technology can help prevent it from being used in ways that harm individuals or society as a whole. The rules can also ensure that AI is developed in a transparent manner, which will ultimately minimize risk and promote trust in the technology.

What would happen if companies are not prepared or don’t comply?

Companies that fail to comply could face heavy consequences like fines, reputational damage, and the suspension or withdrawal of the right to operate an AI system. This could result in loss of trust from stakeholders and customers.

There is a three-tier structure of fines for organizations that don’t comply, depending on the nature of the violation. The maximum fine of €30 million, or 6% of global annual turnover (whichever is higher), would apply to those violating the prohibition on specific AI systems.

Given the potential risks associated with non-compliance, it is essential for companies to closely monitor the progress of the EU AI Act and prepare for its implementation well in advance.

What does the EU AI Act mean for Seldon Core users?

Even though Seldon Core is an open-source platform, the EU AI Act will mean a number of changes for its users. First, Seldon Core users will need to make sure their AI systems comply with EU AI Act requirements. This is particularly important for systems that are considered high-risk. 

Risk assessments will need to be carried out so that steps can be taken to mitigate any risks identified. Secondly, Seldon Core users will need to be able to demonstrate that their AI systems are being used in a way that aligns with the ethical principles set out in the EU AI Act. AI systems will need to be fair, transparent, and accountable.

Core users will be required to build out supporting features to comply with the EU AI Act. Some of these features, such as role-based access control, are already built and included in Seldon Enterprise Platform.

What can Seldon Customers do to prepare for the EU AI Act?

The EU AI Act represents a significant shift in the regulatory landscape of AI in the European Union. Seldon users should start planning now to make sure that their AI systems comply with the requirements set by the EU AI Act.

Here’s what Seldon users can do right now to prepare to comply with the EU AI Act:

  1. Carry out a risk assessment: Identify your AI system’s purpose and scope and evaluate its technical performance. Once potential risks are identified, strategies can be developed to mitigate those risks (a minimal sketch of what such an assessment might record follows this list).
  2. Evaluate how ethical your AI systems are: Consider whether there is any potential for bias or discrimination in your AI system’s decision-making. Make sure your AI systems are fair, transparent, and accountable so that they align with the ethical principles in the Act.
  3. Familiarize yourself with the EU AI Act requirements: The Act is a complex piece of legislation, so it is important to understand the requirements that will apply to your AI systems. Find the requirements in more detail here.
  4. Work with your legal team: Whether you have an internal team or partner with external legal and compliance experts, they can help you understand the requirements of the Act. Then they can help you develop a plan for regulatory compliance and liability protections. 
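As a lightweight illustration of step 1, the sketch below shows one possible way to capture a risk assessment as a structured, reviewable record. The field names and example values are hypothetical and are not drawn from the Act or from any Seldon tooling.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class RiskAssessment:
    """Illustrative record of a per-system EU AI Act risk assessment."""
    system_name: str
    purpose: str                    # what the system is for
    scope: str                      # where and on whom it is used
    risk_level: str                 # e.g. "minimal", "limited", "high"
    identified_risks: List[str] = field(default_factory=list)
    mitigations: List[str] = field(default_factory=list)
    human_oversight: str = ""       # how a human can review or override


# Hypothetical example entry for a high-risk system.
assessment = RiskAssessment(
    system_name="cv-screening-model",
    purpose="Rank job applications for recruiter review",
    scope="EU applicants to engineering roles",
    risk_level="high",
    identified_risks=["potential bias against under-represented groups"],
    mitigations=["bias audit on holdout data", "recruiter makes final decision"],
    human_oversight="Recruiter reviews every ranking before candidates are contacted",
)
```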

Seldon’s features that help with EU AI Act compliance

Seldon Enterprise Platform is designed to support compliance with the EU AI Act and other regulatory frameworks. Here are a number of Seldon features that will help organizations stay within the regulations when operating limited or high-risk applications:

– Audit logs of what model is deployed, when, and by whom (see the sketch after this list)

– The ability to create custom policies and rules for model deployment and usage

– Full history of model versioning and predictions to/from those models

– Model catalog with support for extended metadata, meaning the lineage of a prediction can be tracked all the way back to the training data that was used to build the model (dependent on other tooling providing it)
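To make the audit-log and lineage features above more concrete, here is a minimal sketch of the kind of record such logging could produce. This is not Seldon Enterprise Platform’s actual API or schema; the structure, field names, and values are illustrative assumptions only.

```python
import json
from datetime import datetime, timezone

# Hypothetical audit entry for a model deployment event. The platform would
# capture this kind of information automatically; the schema here is
# illustrative only.
audit_entry = {
    "event": "model_deployed",
    "model_name": "credit-scoring",
    "model_version": "v3.2.1",
    "deployed_by": "alice@example.com",
    "timestamp": datetime.now(timezone.utc).isoformat(),
    # Lineage back to training data, if upstream tooling provides it.
    "training_data_ref": "s3://example-bucket/datasets/credit/2023-01",
}

# Appending entries to a write-once log gives the "who deployed what, when"
# trail that the Act's record-keeping requirements point towards.
with open("deployment_audit.log", "a") as log_file:
    log_file.write(json.dumps(audit_entry) + "\n")
```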

Moving Forward with Compliance

The EU AI Act will undoubtedly have a major impact on the use of AI in the EU. This will affect both organizations based in the EU and global organizations that operate there. Enterprises that are harnessing the power of AI and machine learning should start planning now to ensure that their AI systems comply with the EU AI Act requirements.

By taking steps to ensure (near) future regulatory compliance, enterprises can be confident that their AI systems are used in a safe, ethical, and responsible way. Are you worried about staying compliant?

Seldon can help! Our comprehensive suite of tools and services can help you assess your AI systems for risk, mitigate potential risk, and make sure your AI systems align with the EU AI Act’s ethical principles. 

Book a demo with us today to learn more about how Seldon can help you stay compliant with the EU AI Act.
