How Can the Financial Services Industry Trust AI?

How can financial services teams scale AI/ML in a way that builds confidence, increases automation and delivers financial returns? And what does AI in finance look like?

The level of maturity and the speed of adoption vary across territories globally, largely because of the different talent pools, operating models and regulations that financial services organizations have to contend with. More needs to be done to build trust in AI in the financial sector, and this article explores how and why FS organizations can do it.

How does ML adoption vary across the Financial Services Industry? 

First, it’s important to note that the financial services industry is incredibly diverse, and use cases for AI in finance vary dramatically. Underwriters at insurance companies like Covea may use AI to approve or reject claims, which requires something very different from a hedge fund analyzing company portfolios.

However, there are commonalities in how AI and machine learning are being adopted in finance. Until recently, ML adoption was largely confined to discrete functions such as finance or risk departments within an organization. Now, larger companies are looking to leverage that talent and those capabilities at scale across the whole organization.

Business units often don’t want to wait for ML platforms to be operationalized because they are working toward their own KPIs, which can mean that controls and governance are sacrificed in the pursuit of agility. Most organizations are therefore turning to MLOps to scale machine learning with trust and governance built in, enabling ‘hub and spoke’ models where compliance is enforced centrally while teams pursue their own automation and ML targets independently.

Every major player either is investing, or soon will be investing, heavily in this technology, regardless of their current maturity. The ‘AI bank of the future’ and the ‘digital insurer’ are omnipresent visions, and many organizations are getting there by adopting this centralized MLOps approach.

How has this changed over time? 

Five or six years ago, there was a clamor to hire data scientists in the financial services industry. They were viewed as the solution to technical inefficiencies and a route to innovation. Companies frantically declared their need to ‘do AI’ so as not to be left behind, often without much understanding of what the technology was or what they would do with it.

These data scientists were hired in droves. They spent months, if not years, wrestling with poor data and legacy systems to get the information they needed. Eventually, most teams managed to build a handful of machine learning use cases, but largely only in sandbox environments, for example using anonymized data. They had trained models that could solve their intended problems fairly accurately, be it forecasting sales, predicting customer churn, or routing a chatbot complaint to the right department.

AI in Finance Today

This brings us to where many organizations are today: looking back on a few years of investment in these data science teams and wanting to understand where the value to the business is.

This has resulted, more recently, in a push to put machine learning models into production: to operationalize them and let the business generate value from them. A model sitting on a data scientist’s laptop has about as much value to the business as no model at all.

Organizations need a way to run those models automatically within the traditional applications that power the business. As a result, the hiring priority has shifted to machine learning engineers and MLOps engineers (often people from the DevOps world with skills in the machine learning space), who are now in high demand. This reflects the focus on getting models into production in order to gain value from them.
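
To make this concrete, one common pattern is to expose a trained model behind a REST endpoint that existing applications can call. The sketch below is a minimal, hypothetical example using FastAPI and a pickled scikit-learn model; the model file name and feature schema are placeholder assumptions, and a production MLOps platform would add authentication, logging and monitoring on top.

```python
# A minimal, hypothetical sketch of serving a trained model over HTTP.
# The model file ("churn_model.pkl") and feature schema are placeholders.
import pickle

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# Load a previously trained model from disk.
with open("churn_model.pkl", "rb") as f:
    model = pickle.load(f)

class Features(BaseModel):
    values: list[float]  # a single row of model inputs

@app.post("/predict")
def predict(features: Features):
    # Score one record and return the prediction to the calling application.
    prediction = model.predict([features.values])[0]
    return {"prediction": int(prediction)}
```

Served with an ASGI server such as uvicorn, any existing business application can then request predictions over plain HTTP rather than depending on a model sitting on someone’s laptop.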

Why People and Processes are Essential for Mitigating Risk 

Embedding transparency and compliance into these systems via people and processes is just as important as the tooling. No matter how good the technology is, the right people and systems are crucial to ensure that the organization follows best practices and complies with regulations.

Several organizations are getting ahead of anticipated legislation and compliance restrictions by implementing explainers alongside their critical models. Internal validation techniques can ensure organizations are prepared for the day regulation is extended to models that currently require no explanation.
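
As an illustration of what an explainer can look like in practice, here is a minimal sketch using the anchor method from Seldon’s open-source Alibi library (covered later in this article). The credit-decision model, feature names and data are hypothetical placeholders.

```python
# Minimal sketch: attaching an anchor explainer to a hypothetical
# credit-decision model with Seldon's open-source Alibi library.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from alibi.explainers import AnchorTabular

# Placeholder training data: applicant features and approve/reject labels.
feature_names = ["income", "debt_ratio", "credit_history_years", "num_defaults"]
rng = np.random.default_rng(0)
X_train = rng.random((1000, 4))
y_train = (X_train[:, 0] > X_train[:, 1]).astype(int)

model = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)

# Anchor explanations return human-readable rules that 'anchor' a decision.
explainer = AnchorTabular(model.predict, feature_names)
explainer.fit(X_train, disc_perc=(25, 50, 75))

applicant = X_train[0]
explanation = explainer.explain(applicant, threshold=0.95)
print("Decision:", "approve" if model.predict(applicant.reshape(1, -1))[0] else "reject")
print("Because:", " AND ".join(explanation.anchor))
print("Rule precision:", explanation.precision)
```

The result is a plain-language rule that underwriters and risk officers can review and document, rather than an opaque score.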

Structurally, several big digital banks have embedded risk officers at earlier stages of the model lifecycle to help teams navigate internal policies and processes. Adapting both the people and the processes around this new technology can break down silos, save a tremendous amount of time, and shift the team’s focus onto more valuable work instead of manual maintenance.

The Three Critical Components of Robust Risk Management

When it comes to AI in finance, effective risk management is critical for protecting the organization and its customers from potential losses. There are three key ways in which people and processes are essential for mitigating risk:

  1. Knowledgeable and trained employees: Employees must be well-trained and knowledgeable about the various risks the organization faces. This includes understanding the importance of following established procedures and protocols, and being aware of the potential consequences of not following them.
  2. Robust processes and procedures: Clear, well-defined processes and procedures can include policies for handling sensitive customer data, guidelines for managing financial transactions, and protocols for reporting suspicious activity. By following these processes, organizations can ensure that risks are identified and managed consistently and effectively.
  3. Regular reviews and audits: Organizations should regularly review and audit their processes and procedures. This can include internal audits to identify potential weaknesses or areas for improvement, while external auditors can provide an independent assessment of the organization’s risk management practices.

By investing in people and processes, organizations can better protect themselves and their customers from potential risk.

The Role of Regulation in Financial Services

The financial services sector is heavily regulated, yet there is very little regulation in any jurisdiction specific to AI and machine learning. This is despite a significant body of guidelines and regulation around the transmission and storage of data and other relatively modern concepts. The gap comes down to a number of factors, including the fact that AI is a relatively new technology that is evolving very quickly.

Everyone is familiar with GDPR. It contains a clause stating that organizations deploying sophisticated algorithms to make decisions about individuals must be able to provide explanations to those end users. There are arguments about whether this pertains to AI, whether it is legally binding, and how it could be implemented. What matters, however, is that a regulation we have been living with for several years already includes clauses that pave the way for the regulation that is likely to come.

The Proposed AI Act

This idea that algorithmic decisions must be explainable is a real step toward the proposed regulations coming from the UK AI Ethics board and from the EU. The UK board has a 10-year AI plan that begins with guidelines and will eventually lead to regulation; its outline includes suggestions as part of an invitation for industry regulators to create their own guidelines and decide how they are implemented in each industry.

In contrast, the EU has published its proposed AI Act. If the act is voted in, it will become law across all industries, requiring machine learning implementations to follow certain rules. Some of the wording is still open to interpretation, but the broad outline is as follows.

The act divides machine learning systems into three risk levels:

  • ‘Unacceptable risk’ systems, such as the use of facial recognition by law enforcement, will be prohibited
  • ‘High risk’ implementations, such as the automated approval of a mortgage, are those where the impact on an individual can be severe. These will be subject to guidelines and safety standards
  • Lastly, ‘low risk’ implementations will be left unregulated

What’s next for AI in finance?

Industry practitioners need to start thinking about how to implement explainability so they are ready for when laws are enforced.

Even if concrete regulation takes 5-10 years to come to fruition, the level of rigor required means that planning for these changes should begin now. Teams may need to completely redesign the way they build their algorithms, add additional tooling steps, or hire people with the right skill sets in explainable AI or drift detection. The downstream impact is massive.
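
To make one of those tooling steps concrete, below is a minimal drift-detection sketch using Seldon’s open-source alibi-detect library; the reference and production data are synthetic stand-ins.

```python
# Minimal sketch: flagging feature drift between training-time data and a
# production batch using alibi-detect's Kolmogorov-Smirnov detector.
import numpy as np
from alibi_detect.cd import KSDrift

rng = np.random.default_rng(0)

# Reference window: the feature distribution the model was validated against.
X_ref = rng.normal(loc=0.0, scale=1.0, size=(1000, 4))
detector = KSDrift(X_ref, p_val=0.05)

# Simulated production batch in which the first feature has shifted.
X_prod = rng.normal(loc=0.0, scale=1.0, size=(500, 4))
X_prod[:, 0] += 1.5

result = detector.predict(X_prod)
print("Drift detected:", bool(result["data"]["is_drift"]))
print("Per-feature p-values:", result["data"]["p_val"])
```

Wired into a deployment pipeline, a check like this can trigger retraining or a human review before a drifted model quietly degrades.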

Proactive companies are already taking these proposed regulations and thinking about how to get ahead of them. Is your financial services organization taking steps to stay ahead of the curve?

Build trust with Seldon’s Explainability tools

With Seldon, you can drive deeper insights into model behavior with productized explainable AI (XAI) workflows. Our Alibi Explain framework empowers you to generate explanations across a range of data modalities, including tabular, text, and image data.

If you’re interested in seeing how explainability can make a difference to your FS organization, book a demo.

About the author

Richard Jarvis helps our financial services clients across the globe understand and benchmark their ML maturity. He empowers clients to scale ML and AI in finance in a manner that delivers confidence, automation and returns.