Open Access Government: Ensuring artificial intelligence is humane and regulated

Alejandro Saucedo, Member of the Reserve List of the European Commission’s High-Level Expert Group on AI and Engineering Director at Seldon, argues that to ensure artificial intelligence is humane, it must be regulated

Artificial intelligence (AI) is set to bring about both transformational change and cost savings in virtually every sector. The productivity and innovation gains it unlocks are driving a “Fourth Industrial Revolution” that PwC estimates could contribute up to $15.7 trillion to the global economy by 2030, including a boost of up to 9.9% to Northern Europe’s GDP.

However, if our society doesn’t implement this technology properly, AI threatens to unleash a plethora of social injustices. The EU currently has no specific standards or legislative instruments to regulate the development and deployment of AI, and so no means of preventing it from becoming a source of injustice. In the coming years, I believe we need to see legislation similar in scope to the GDPR come into force in this field.

The risks of AI

Despite the massive potential upsides, there are reasonable and legitimate fears about the widespread adoption of AI. Its use by bad-faith actors raises a host of risks for personal and organisational cybersecurity, and its use in predicting individual decisions raises a swathe of privacy concerns. The job displacement AI will bring to some industries may threaten livelihoods, to say nothing of the less-discussed mental health challenges it will create.

More insidiously, improper practices can lead to AI perpetuating undesirable biases in society. The data that data scientists and developers use to “train” AI systems directly shapes the findings and decisions those systems output, so feeding a model biased data can lead it to make socially unjust decisions. For example, several predictive policing systems across the U.S. have been “trained” on falsified or skewed police records, biasing their decision-making against ethnic minorities. This is highly problematic, and it highlights the importance of training AI models on carefully assessed, representative data.
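To make “carefully assessed” concrete, the sketch below shows the kind of basic representativeness check a team might run before training. The table, column names, reference shares and tolerance are all hypothetical placeholders, not a prescribed method.

```python
import pandas as pd

# Hypothetical training data with a protected attribute; the column
# names, values and reference shares below are illustrative only.
train = pd.DataFrame({
    "ethnicity": ["A", "A", "A", "B", "A", "B", "A", "A"],
    "arrested":  [1,   0,   1,   1,   0,   1,   0,   1],
})

# Reference proportions for the population the model will affect,
# e.g. taken from census data for the policed area.
reference = pd.Series({"A": 0.6, "B": 0.4})

# Share of each group actually present in the training data.
observed = train["ethnicity"].value_counts(normalize=True)

# Flag any group whose share deviates from the reference by more
# than an (arbitrary) 10-percentage-point tolerance.
deviation = (observed - reference).abs()
print(deviation[deviation > 0.10])
```

A check this simple will not catch every form of bias, but it forces the question of whom the training data actually represents before a model is ever fit.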

Further, there is the risk that individuals and organisations delegate decision-making entirely to AI systems, which reduces accountability and increases ambiguity. This is dangerous, as AI models can influence major outcomes, such as whether a person receives a loan, gets hired for a job, or comes under suspicion from law enforcement.

To illustrate the problem, consider a person wrongfully arrested on the basis of an AI model’s conclusions. Who is responsible? The individual police officer who acted on the AI’s predictions? The whole department, for not having a human-in-the-loop structure to vet its AI properly? The compliance officer who signed the model off? The developer who first deployed it? If handled improperly, the complexity of AI opens up a bureaucratic nightmare rigged against those seeking restitution or justice.

AI experts have the tools to tackle the risks

However well we automate processes with AI, any accountable structure will ultimately require humans at its core. For that reason, developers, data scientists and AI specialists carry a professional responsibility throughout the development, deployment and operation of AI systems. Whether it is mitigating undesired biases, keeping AI processes accountable, or guaranteeing privacy by design in machine learning systems, there is much that those who develop and operate AI models can do.

To mitigate bias, we can be more rigorous with our training data, leveraging data science best practices together with domain knowledge to ensure it properly represents the real world, thereby reducing the risk of our models making unjust and discriminatory decisions. We can also bring in subject-matter experts from the domains where models are applied, and have them rigorously assess a model’s decision-making in their field throughout development.
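As one illustration of the kind of assessment such experts might apply, the sketch below computes the rate of positive decisions per group on a held-out evaluation set, a simple demographic-parity style check. The dataset and column names are hypothetical stand-ins.

```python
import pandas as pd

# Hypothetical held-out evaluation set: model decisions recorded
# alongside a protected attribute. All names here are stand-ins.
evaluation = pd.DataFrame({
    "group":    ["A", "A", "B", "B", "A", "B", "A", "B"],
    "approved": [1,   0,   1,   1,   0,   1,   1,   1],
})

# Rate of positive decisions per group (a demographic parity check).
rates = evaluation.groupby("group")["approved"].mean()
print(rates)

# A large gap between groups is a prompt for domain experts to review
# the model's decision-making before deployment.
print("Parity gap:", rates.max() - rates.min())
```

The number itself is not a verdict; it is a trigger for the domain experts described above to investigate why the model treats groups differently.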

In turn, we can take steps to make models more interpretable. We can invest time and resources in explainability tools and processes so that models and AI systems become understandable to laypeople. That way, the teams using a model know why it reached a given decision and can judge whether its logic is valid. We can also keep humans in the loop of AI decision-making, so there is always a human agent who shoulders responsibility if something goes wrong.
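As a minimal sketch of what such tooling looks like in practice, the example below uses the open-source Alibi library (one of several explainability toolkits) to produce a human-readable “anchor” rule for a single prediction. The classifier and dataset are stand-ins, and this is an illustration under those assumptions, not a prescribed stack.

```python
from alibi.explainers import AnchorTabular
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# A simple classifier standing in for a production model.
data = load_iris()
clf = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# Anchor explanations express a prediction as an if-then rule that
# a layperson can read and a reviewer can challenge.
explainer = AnchorTabular(clf.predict, feature_names=data.feature_names)
explainer.fit(data.data)

explanation = explainer.explain(data.data[0], threshold=0.95)
print("Prediction: ", data.target_names[clf.predict(data.data[:1])[0]])
print("Anchor rule:", " AND ".join(explanation.anchor))
print("Precision:  ", explanation.precision)
```

In a human-in-the-loop setup, a rule like this gives the person reviewing the decision something concrete to accept, question or override.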

To make AI humane, regulation is essential

However, we cannot realistically expect data scientists and developers to implement all of the above alone. Complex ethical decisions that affect people’s lives cannot rest on the shoulders of a single developer or data scientist. Companies often favour short-run profitability over decisions that would benefit society in the long run, and making AI humane means putting additional time and resources into an activity that doesn’t immediately generate revenue.

That’s why we need a regulatory environment for AI, and both policymakers and developers must be proactive and engage in interdisciplinary collaboration to create it effectively. Done well, regulation that compels organisations to adopt these best practices for AI development and deployment can be a catalyst for innovation: the added confidence and stability in the industry will encourage investment and create new roles across the field.
