As the adoption of artificial intelligence (AI) continues apace across industries, there is increasing awareness of the risks it can pose. Recent high-profile examples have highlighted the risk of unjust racial and gender bias, such as that found in some law enforcement and recruitment algorithms. Others have highlighted the reputational risk of poorly communicated AI use cases, such as an online insurer’s recent claims of using facial emotion recognition to detect fraud. Perhaps most damagingly, AI models seem to have failed to meet expectations when it comes to mitigating one of humanity’s biggest challenges, the COVID-19 pandemic.
Not surprisingly, regulators have become increasingly vocal. Earlier this year, the European Commission published a draft of its proposed AI law, which prohibits certain uses of AI and defines several other high-risk AI use cases. The Cyberspace Administration of China has just proposed far-reaching rules on the use of algorithmic recommendation engines, including a requirement to ensure gig workers are not mistreated by AI "work schedulers." In the U.S., federal banking regulators completed a comprehensive industry consultation exercise around AI risks in the sector earlier this year. The Securities and Exchange Commission has recently initiated a similar consultation on the use of behavioral algorithms and other digital engagement practices in retail investment (brokerage) platforms. And in April, the Federal Trade Commission warned companies to “hold yourself accountable – or be ready for the FTC to do it for you.”
Risk management professionals could claim that (a) these types of risks are highly technical and require specialist knowledge; and (b) AI/data science teams and their business stakeholders should have primary responsibility for managing them. They would be right on both counts. However, they should not underestimate their own enabling role in this space.
Managing the risks from poor-quality AI is too important to be left purely to the specialists. Such risks must be viewed from a holistic, organization-wide perspective rather than through a narrow technical lens. Risk management professionals should embrace this mandate -- both as a way of supporting the digital transformation of their employers and as a means of continuing their own professional growth.
So how can they go about it?
First, they must invest in learning more about AI, its potential and limitations, and the ways in which the latter can be addressed. Not everyone has to become a data scientist, but the ability to ask the right questions will be critical. In particular, they should keep in mind that:
- The workings of many AI models are far more opaque than those of traditional models. The most common class of AI algorithms, machine learning, creates models from the data used to train them. As a result, even the data scientist’s understanding of how the model actually arrives at its conclusions can be limited. This poses a challenge in convincing stakeholders – business line owners, risk and compliance teams, auditors, regulators and customers – of the algorithms’ suitability for large-scale use.
- AI models’ dependence on the training data can make them prone to particular weaknesses. Compared with traditional models, AI models are more likely to “overfit” or exaggerate historical trends (illustrated in the sketch below). They may lose their predictive accuracy more easily in the face of changes in input data, such as those triggered by the pandemic. Finally, they can exacerbate existing biases present in the training data, such as biases regarding gender or race.
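As a minimal sketch of that overfitting weakness, using scikit-learn on synthetic data (the dataset, model and parameters are illustrative assumptions): an unconstrained decision tree memorizes its noisy training data, so its accuracy on held-out data lags far behind its training accuracy.

```python
# Minimal sketch: detecting overfitting by comparing training accuracy
# with held-out accuracy. Dataset and model choices are illustrative.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# A small, noisy dataset -- easy for a deep tree to memorize
X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                           flip_y=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

# An unconstrained tree fits the training data, noise included, almost perfectly
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print(f"train accuracy: {model.score(X_train, y_train):.2f}")
print(f"test accuracy:  {model.score(X_test, y_test):.2f}")
# A wide gap (e.g., 1.00 vs. ~0.75) signals that the model has exaggerated
# historical patterns rather than learned a generalizable relationship.
```

In practice, teams would track this gap, along with out-of-time performance, as a standing control rather than a one-off check.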
Second, risk management professionals must connect the dots between these narrow data and algorithmic risks and mainstream business risks. This requires a systematic and comprehensive mapping of AI risks to the broader risk landscape of the industry. For example, in banking, the most obvious risks related to large-scale AI use may already be covered by the specialist review of model risk and data risk. Model risk answers questions like, “Is the AI model reliable?” or “Is it working as intended?” Data risk answers questions like, “Is the data used to train the model accurate and representative of the target population?” or “Is the AI model using or uncovering protected personal data elements inappropriately?”
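To make the representativeness question concrete, here is a minimal sketch (the feature, the synthetic distributions and the significance threshold are illustrative assumptions): it uses a two-sample Kolmogorov-Smirnov test to compare the training sample against the population the model will actually score.

```python
# Minimal sketch: is the training data representative of the target
# population? Compares one feature with a two-sample Kolmogorov-Smirnov
# test. Feature name, distributions and threshold are assumptions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
# Stand-ins for, say, applicant income in the two samples
training_income = rng.lognormal(mean=10.5, sigma=0.5, size=5_000)
population_income = rng.lognormal(mean=10.8, sigma=0.6, size=5_000)

stat, p_value = ks_2samp(training_income, population_income)
print(f"KS statistic: {stat:.3f}, p-value: {p_value:.4f}")

ALPHA = 0.01  # illustrative significance threshold
if p_value < ALPHA:
    print("Training sample differs from the target population; investigate "
          "before relying on the model's predictions.")
```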
However, risk teams must go further and assess whether the use of AI accentuates one or more other existing risks, such as:
- The risk of treating a customer or staff member unfairly -- for example, by discriminating against certain groups when making lending or hiring decisions
- The risk of causing market instability or collusion due to malfunctioning algorithms
- The risk of “mis-selling” to a customer due to an algorithm that does not generate investment advice suited to the customer’s profile
- Business continuity risk due to a lack of fallback plans in case of AI failure
- The risk of intellectual property theft or fraud due to adversarial attacks on the AI system
Third, and perhaps most importantly, risk management professionals must work with their business, data and technology colleagues to create mechanisms to manage such risks in a systematic manner. Left to themselves, individual data scientists and their business sponsors might well manage these risks in an ad hoc, case-by-case manner. Risk management professionals can help define risk appetites, standards and controls that enable such risks to be managed consistently and effectively.
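As a rough illustration of what codified standards could look like, the sketch below expresses a hypothetical risk appetite as machine-checkable thresholds applied uniformly to every model; all metric names and limits are assumptions, not an established standard.

```python
# Minimal sketch: a risk appetite expressed as explicit, reviewable
# thresholds, checked the same way for every model. All metric names
# and limits below are illustrative assumptions.
RISK_APPETITE = {
    "min_test_accuracy": 0.80,               # model risk: quality floor
    "max_train_test_gap": 0.05,              # model risk: overfitting tolerance
    "min_group_selection_ratio": 0.80,       # fairness: adverse-impact floor
    "max_population_stability_index": 0.25,  # data risk: drift tolerance
}

def check_against_appetite(metrics: dict) -> list[str]:
    """Return the list of appetite breaches for a model's measured metrics."""
    breaches = []
    if metrics["test_accuracy"] < RISK_APPETITE["min_test_accuracy"]:
        breaches.append("accuracy below floor")
    if (metrics["train_accuracy"] - metrics["test_accuracy"]
            > RISK_APPETITE["max_train_test_gap"]):
        breaches.append("overfitting beyond tolerance")
    if metrics["group_selection_ratio"] < RISK_APPETITE["min_group_selection_ratio"]:
        breaches.append("potential adverse impact on a protected group")
    if metrics["psi"] > RISK_APPETITE["max_population_stability_index"]:
        breaches.append("input data drift beyond tolerance")
    return breaches

# Hypothetical metrics reported by a model's monitoring job
print(check_against_appetite({"test_accuracy": 0.83, "train_accuracy": 0.86,
                              "group_selection_ratio": 0.75, "psi": 0.10}))
# -> ['potential adverse impact on a protected group']
```

The value lies in consistency: the same thresholds, agreed with the risk function, are applied to every model, instead of each team improvising its own checks.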
In this, they can call upon a growing body of academic research and commercial tools to analyze AI models, explain the underlying drivers of model outputs, and monitor and troubleshoot model performance on an ongoing basis. For example, such tools can allow organizations to:
- Create transparency around the key drivers of the model’s predictions/decisions (“Why did this radiology report not flag cancer risk?”)
- Assess potential biases in model predictions and their root causes (“Do female applicants have a higher probability of being short-listed for a particular job than their male counterparts? If so, is that justified?”), as sketched after this list
- Monitor model and data stability over time, trigger alerts when they breach pre-defined thresholds and identify the root causes of such instability (“Is our supply chain management model causing a higher number of parts shortages this month?”)
- Identify segments of the population for which the model may be unreliable (“Are the model’s predictions for white-collar workers over 60 based on too few data points?”)
- Identify potential changes in data quality that may affect the predictive accuracy of the model (“Can the bank’s lending model survive the massive changes in the economy due to COVID-19?”)
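To give a flavor of the bias-assessment and stability-monitoring checks above, here is a minimal sketch; the group labels, the synthetic data and the commonly cited PSI alert threshold of 0.25 are illustrative assumptions.

```python
# Minimal sketch of a bias check (selection rates by group) and a drift
# alert (population stability index). All data, labels and the 0.25 PSI
# threshold are illustrative assumptions.
import numpy as np

def selection_rates(groups: np.ndarray, shortlisted: np.ndarray) -> dict:
    """Share of candidates short-listed, per group (bias assessment)."""
    return {g: float(shortlisted[groups == g].mean()) for g in np.unique(groups)}

def population_stability_index(baseline: np.ndarray, current: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between a baseline sample and a current one (drift monitoring)."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    b = np.histogram(baseline, bins=edges)[0] / len(baseline)
    c = np.histogram(current, bins=edges)[0] / len(current)
    b, c = np.clip(b, 1e-6, None), np.clip(c, 1e-6, None)  # avoid log(0)
    return float(np.sum((c - b) * np.log(c / b)))

rng = np.random.default_rng(0)
groups = rng.choice(["female", "male"], size=1_000)
# Synthetic short-listing outcomes with a built-in disparity
shortlisted = rng.random(1_000) < np.where(groups == "female", 0.15, 0.25)
print(selection_rates(groups, shortlisted))  # a gap worth investigating

baseline = rng.normal(0.0, 1.0, 5_000)
current = rng.normal(0.3, 1.2, 5_000)  # the input feature has shifted
psi = population_stability_index(baseline, current)
alert = "  ALERT: drift beyond threshold" if psi > 0.25 else ""
print(f"PSI: {psi:.2f}{alert}")
```

Commercial monitoring tools typically wrap checks like these in dashboards, alerting and root-cause analysis.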
***
Increased transparency and control over AI are allowing organizations to become more sophisticated in how they use it. The ability to manage these risks effectively can become a source of competitive advantage.