ITProPortal: The top three risks posed by AI, and how to safeguard against them

Seldon’s Engineering Director Alejandro Saucedo writes about how to mitigate risk in AI in this piece for ITProPortal.

AI poses risks, but there are various ways for companies to safeguard against them.

Many companies have embraced the use of AI to great benefit, realizing new efficiencies, improving profitability and boosting overall business results. However, with this enhanced power and productivity comes greater responsibility, and organizations must mitigate the risks posed.

Businesses are increasingly aware of the need to apply a responsible approach to artificial intelligence and machine learning techniques, ensuring values and ethical principles are prioritized. Research into the ethical implications of AI is being incorporated into governmental thinking, helping engineers and policy-makers address and measure the immediate effects of AI on society. As it stands, there are three key ‘risk areas’ that must be assessed when implementing AI:

Mitigating bias

Deep learning systems are only as beneficial as the information they are given. It’s critical they make fair and informed recommendations based on the data they receive, rather than applying overt discrimination. Many datasets contain inherent bias from the real world, which can result in gender, racial or ideological biases that not only impact end users but can also threaten legal compliance and a company’s reputation. There are numerous real-life examples of AI systems amplifying prejudices and stereotypes that already exist in society. Policing is one instance, where training AI algorithms on public data without absorbing the negative traits of humanity remains a challenge.

Even the biggest names in tech, with access to vast amounts of data, have fallen foul of these issues. A recruitment tool by online giant Amazon came under fire after being trained with data predominantly provided by men. This meant the algorithm highlighted and prioritized words like “executed” and “captured” that are more likely to be found in male CVs. As a result, the recruitment tool ultimately favored men and exposed the organization to criticisms of sexism.

Machine learning models make decisions that affect everyday lives, so predictions need to be reliable and accurate. It’s imperative that the datasets these systems are trained on are thoroughly examined, so their recommendations can be easily explained. Bias should be detectable across an organization, not just by the data science team that designed the original algorithm. Those closer to the end user will be able to pick up on data drift far more quickly, making it essential for non-experts to understand the model and its decision-making. The “black box” of AI refers to an impenetrable system whose inner workings aren’t visible to users. Opening up the “black box” through bias detection technology and explainable AI creates transparency and trust around machine learning deployment, enabling it to scale across organizations.
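To make “detectable bias” concrete, here is a minimal sketch (not from the article; the data and column names are hypothetical) of a demographic parity check, which compares the rate of positive outcomes a model produces across groups. A check like this can be run on live predictions by anyone in the organization, not just the original data science team.

```python
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Largest difference in positive-outcome rates between any two groups."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical recruitment decisions: 1 = shortlisted, 0 = rejected
decisions = pd.DataFrame({
    "gender":      ["male", "male", "female", "female", "female", "male"],
    "shortlisted": [1,      1,      0,        1,        0,        1],
})

gap = demographic_parity_gap(decisions, group_col="gender", outcome_col="shortlisted")
print(f"Demographic parity gap: {gap:.2f}")  # male rate 1.00 vs female rate 0.33 -> gap 0.67
```

A large gap does not prove discrimination on its own, but it flags the model for closer review before its recommendations reach end users.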

For example, the use of AI in healthcare has advanced practices in x-ray analysis, diagnosis and surgery, but it raises questions over the recommendations of evidence-based medicine versus machine learning-informed medicine. The Hippocratic Oath, one of the world’s oldest binding documents, obliges practitioners to uphold professional ethical standards. Experts Brad Smith and Harry Shum argue that the integration of artificial intelligence brings new risks, giving the Hippocratic Oath a new meaning. When making life-changing decisions with the assistance of AI, practitioners should vow to adhere only to strict evidence-based guidelines.

The issue of explainability

As machine learning systems are trained with large amounts of data and can automatically learn and improve from experience, their decision-making processes are far more complex. Outcomes are often formed without explicit programming, meaning it’s difficult to provide an objective explanation for them.

Explainability involves identifying and justifying the reasons behind decisions made by machine learning algorithms. Explainable AI assists organizations in maintaining regulatory compliance, and is especially necessary given the rising requirements around handling sensitive data and making decisions based on it. Many organizations face legal requirements to justify the decisions made, and citing a ‘black box’ exposes corporations to potential fines and restricts their ability to scale ML capabilities across business areas. When an employee makes a mistake due to human biases and prejudices, we can hold them negligent; an algorithm can’t be held to account in the same way.

Machine learning systems must be deployed so that the process is transparent and explainable, with enough human oversight and monitoring. One way of achieving explainable AI is to use machine learning algorithms that are inherently explainable through the traceability and transparency of their decision-making. This enables humans to trace a recommendation back to its source and control the system’s tasks whenever an issue arises. As AI becomes increasingly powerful and complex, so must the accompanying technology that monitors and explains it.
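As an illustration of an inherently explainable model, the sketch below (a minimal example using scikit-learn and a public dataset, not drawn from the article) trains a shallow decision tree, prints its full rule set, and traces the exact rules behind a single prediction, the kind of traceability that lets a human track a recommendation back when an issue arises.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

# A shallow tree is inherently interpretable: its whole decision logic fits on a page
data = load_breast_cancer()
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(data.data, data.target)

# Print the complete set of human-readable rules the model uses
print(export_text(model, feature_names=list(data.feature_names)))

# Trace the exact sequence of decision nodes behind one individual prediction
sample = data.data[:1]
path = model.decision_path(sample)
print("Nodes visited for this prediction:", path.indices.tolist())
print("Predicted class:", model.predict(sample)[0])
```

Post-hoc explanation tools can provide similar traces for more complex models, but the principle is the same: the reasoning behind each recommendation should be inspectable by a human.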

Holding humans accountable

There is a requirement for AI to be able to explain and justify its reasoning to users, otherwise known as accountability. These decisions are used to guide understanding and form explanations, and are often placed in a broader context alongside societal norms, morals and legal values. Accountability can be challenging to achieve because these technologies tend to spread moral responsibility across different operators. A machine learning pipeline often has a number of people involved and can lend itself to a siloed way of working that causes disruption.

Algorithms can predict whether someone goes to jail, decide whether someone should receive a loan, or advise on who should get hired, but AI will have to be more accountable and respectful of society’s values to thrive. If someone is hit by a self-driving car, who should be held responsible? The original designer of the person-recognizing algorithm? The developer who deployed that algorithm live? The compliance officer who signed it off? The legislator who allowed that car to drive on the road? Because standards differ across roles and contexts, they are domain specific, which makes accountability more difficult to regulate. These issues can often place a stranglehold on the deployment of beneficial ML capabilities in organizations, limiting the effectiveness of their data science teams and the overall efficiency and innovation of their business.

To truly understand risk, even with the relatively low-hanging fruit of back-office automation, accountability must be established before the technology is implemented. The human element of safeguarding and decision-making must be present in all ML pipelines, from even the most basic of processes. Companies must ensure this standard is kept, especially when the stakes are high. A chain of accountability that holds every stakeholder responsible for the decisions of the algorithm will enable AI to power businesses and mitigate risk.

Balancing innovation and ethics

Steps are being taken to put regulatory frameworks in place that safeguard systems and algorithms, ensuring they’re carefully designed, explained and audited. Community-led and self-policing approaches like open source can spearhead innovation, enable collaboration and avoid a ‘Big Tech’ monopoly on innovation.

All AI applications require a standardized approach that is widely accepted. Regulations are put in place to protect and standardize, and to keep bad actors from exploiting loopholes. During the industrial revolution, people were exploited in the workplace; with the rise of unions, standards were set and efficiency amongst workforces improved. The same goes for the regulation of AI – the correct regulations will be a catalyst for innovation. Now is the time for organizations and governments to work together to encourage the discussion on ethical principles and the responsible handling of data. The faster they cooperate, the quicker we can begin implementing safeguards and guidelines to mitigate the threats of AI. This will ultimately lead to greater adoption fueled by minimized risk.