ITProPortal: The place of AI in Fraud Detection

An article by Alejandro Saucedo, originally published on ITProPortal.

Amid Covid, AI is more important than ever for fraud detection.

Fraud detection is a substantial challenge. Fraudulent transactions only ever represent a tiny fraction of financial activity, which makes finding them akin to finding a needle in a haystack. Using rules-based systems to detect fraud is very difficult, as it’s a phenomenal challenge to write a rule that encompasses every anomalous transaction. Fraud detection instead relies on an understanding of what’s “normal” and the ability to detect deviations from standard activity.

To combat this, machine learning (ML) systems have long been recognised as a key technology for fraud prevention; they can process a large quantity of data very quickly, and identify the typical qualities of fraudulent and non-fraudulent transactions. By their very design, ML models are intended to discern patterns in data sets and spot outliers and anomalies.

In addition, ML models are adaptable, and so can swiftly respond to sophisticated organised crime, whose methods often change quickly. By using anomaly detection techniques, AI models are well positioned to observe and respond to the changing patterns which indicate fraud. For all these reasons, it should be no surprise that there’s been an underlying trend among financial institutions, auditors, and governments towards adopting ML as part of their fraud detection infrastructure.
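To make this concrete, the sketch below shows one common anomaly detection approach – an isolation forest – trained on simulated transaction data. The feature names, thresholds, and library choice (scikit-learn) are illustrative assumptions rather than a prescription; the point is that the model learns what “normal” looks like and surfaces deviations without hand-written rules.

```python
# Minimal sketch of anomaly-based fraud screening with an isolation forest.
# Features and parameters are illustrative assumptions, not real data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated transaction features: amount, hour of day, merchant risk score.
normal = rng.normal(loc=[50, 14, 0.2], scale=[20, 4, 0.1], size=(10_000, 3))
fraud = rng.normal(loc=[900, 3, 0.8], scale=[300, 2, 0.1], size=(20, 3))
transactions = np.vstack([normal, fraud])

# The model learns what "normal" activity looks like and scores deviations,
# with no hand-written rule per fraud pattern.
model = IsolationForest(contamination=0.002, random_state=0)
labels = model.fit_predict(transactions)  # -1 = anomalous, 1 = normal

print(f"Flagged {(labels == -1).sum()} of {len(transactions)} transactions for review")
```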

The last year has seen this trend markedly accelerate, culminating in Amazon’s fraud detection platform recently being made generally available. While one might view the upsurge in fraud detection technology as an inevitability, recent rapid developments have in fact been spurred in no small part by the challenges created by the Covid-19 pandemic.

An uptick in fraud, a reduction in capacity

Between 2020 and 2024, losses from digital money fraud are expected to increase by 130 percent, with the value of fraudulent transactions anticipated to reach $10 billion by 2024. The pandemic has only accelerated this trend, with its first phase from March to May seeing a 6 percent rise in digital fraud against businesses. Fraudsters have worked to exploit the sudden transition businesses and employees made earlier this year, with all the business and communication disruption that came with it.

At the same time, many of the teams responsible for monitoring fraud had to switch rapidly to remote working earlier this year, and many others were placed on furlough. So while fraud was rising – which would test teams even in normal business conditions – anti-fraud teams found themselves short-staffed and operating in an unfamiliar environment.

This made the pandemic a perfect time for many organizations to accelerate the implementation of AI platforms for fraud detection. The sheer potential AI has for fraud detection processes meant that greater uptake of AI models was inevitable, but the pandemic has accelerated this trend by creating a short-term impetus for companies to automate and adopt AI.

Challenges facing AI for fraud detection

Deploying and scaling AI among anti-fraud teams does throw up some novel challenges, which technologists and teams have to consider thoroughly. Beyond the purely technical challenge of deploying AI models among teams, AI also raises a range of regulatory, compliance, and ethical problems.

One such problem is that of explainability. When it comes to detecting and proving a fraudulent transaction, it’s important for teams to be able to explain which features of a transaction appear to be fraudulent. However, if improperly implemented, AI can jeopardize explainability. This is because more advanced deep learning models, such as neural networks, can consist of such highly complex mathematical representations that they are hard or almost impossible to interpret by inspecting their internals, making them a “black box”.

An explainability gap of any sort opens up scope for an AI to flag false positives, with human operators unable to scrutinize why it did so. This could ultimately hurt ordinary customers simply looking to make an above-board purchase, which will in turn drive them away from the business.

Another problem is that, without proper development and operation, an AI can make predictions based on undesired biases. If left unchecked, these biases can veer into outright discrimination on the basis of traits such as race, sex, or other protected characteristics. This is because the decision-making process of an AI is ultimately shaped by the data it is “trained” with – if this training data is unrepresentative or biased in some way, then the AI will inherit those biases and make judgements on that basis.
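As a simple illustration of what checking for such bias can look like in practice, the hedged sketch below compares false positive rates across a protected group. The column names and data are hypothetical; in a real system this check would run on held-out production data.

```python
# Hedged sketch of one basic bias check: comparing false positive rates
# across a protected group. Data and column names are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "group":     ["A", "A", "A", "A", "B", "B", "B", "B"],
    "is_fraud":  [0,    0,   0,   1,   0,   0,   0,   1],
    "predicted": [0,    0,   1,   1,   1,   1,   0,   1],
})

def false_positive_rate(frame: pd.DataFrame) -> float:
    """Share of legitimate transactions that were flagged as fraud."""
    legit = frame[frame["is_fraud"] == 0]
    return (legit["predicted"] == 1).mean()

# If the rates diverge sharply, the model may be inheriting bias from its
# training data and flagging one group's legitimate customers more often.
for group, frame in df.groupby("group"):
    print(group, round(false_positive_rate(frame), 2))
```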

Bringing humans into the loop

The challenges of explainability and bias will likely prove difficult and pressing for many financial service providers, especially given that many will have had to scale up their AI fraud detection platforms so quickly owing to the crisis. Thankfully, there exist tools and best practices to help teams ensure their models remain explainable and avoid bias, such as explainability techniques and libraries which help explain and codify exactly what is happening inside the AI’s “black box”.
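One such model-agnostic technique is permutation importance: shuffle each input feature in turn and measure how much the model’s performance degrades, which reveals which features the model actually relies on. The sketch below uses scikit-learn on toy transaction data; the feature names and labelling rule are illustrative assumptions, and dedicated libraries such as SHAP or Alibi provide richer, per-transaction explanations.

```python
# Hedged sketch of a model-agnostic explainability check via permutation
# importance. Features and the toy labelling rule are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000
X = np.column_stack([
    rng.exponential(80, n),      # transaction amount
    rng.integers(0, 24, n),      # hour of day
    rng.random(n),               # merchant risk score
])
# Toy label: large amounts in the early hours are treated as fraudulent.
y = ((X[:, 0] > 150) & (X[:, 1] < 8)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Shuffling each feature in turn and measuring the drop in accuracy shows
# which features the model actually relies on when flagging a transaction.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in zip(["amount", "hour", "merchant_risk"], result.importances_mean):
    print(f"{name}: {score:.3f}")
```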

Another practice that’s well positioned to tackle these problems is the “human-in-the-loop” model of testing and deployment. This involves humans carefully curating the data used to train the AI, regularly tuning the model to refine its predictions, and regularly testing its performance. When implemented, a human-in-the-loop model ensures that a responsible agent is never far from the decision-making of an AI, monitoring its decisions to ensure they are explainable and ethically sound. Human-in-the-loop also ensures there is someone accountable for the decisions made by AI platforms deployed in the context of fraud detection.
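In practice, one common building block of human-in-the-loop is a routing rule that auto-clears confident negatives, auto-blocks confident positives, and queues everything in between for analyst review. The sketch below is a minimal illustration under assumed thresholds and a hypothetical review queue, not a production design.

```python
# Hedged sketch of human-in-the-loop routing: uncertain predictions go to a
# human analyst. Thresholds and the ReviewQueue class are illustrative.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ReviewQueue:
    items: List[dict] = field(default_factory=list)

    def submit(self, transaction_id: str, score: float) -> None:
        self.items.append({"id": transaction_id, "score": score})

def route(transaction_id: str, fraud_probability: float, queue: ReviewQueue) -> str:
    """Auto-approve confident negatives, auto-block confident positives,
    and send everything in between to a human analyst."""
    if fraud_probability < 0.05:
        return "approve"
    if fraud_probability > 0.95:
        return "block"
    queue.submit(transaction_id, fraud_probability)
    return "human_review"

queue = ReviewQueue()
for tx_id, p in [("tx-001", 0.01), ("tx-002", 0.62), ("tx-003", 0.99)]:
    print(tx_id, route(tx_id, p, queue))
print("Queued for review:", [item["id"] for item in queue.items])
```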

The crisis has proved to be a pivotal period for the adoption of AI for fraud detection. This is not a temporary change, but a lasting transition that has only been accelerated by economic necessity. By adopting good practices such as human-in-the-loop, financial service providers can ensure that their AI platforms have minimal teething problems and prove to be not just a salve in the short run, but a game-changer for their businesses in the long term.