FCA is partnering with the Alan Turing Institute to explore the transparency and explainability of AI in the financial sector
Posted: 16 July 2019 | Source: Finextra
The UK's Financial Conduct Authority is partnering with the Alan Turing Institute to explore the transparency and explainability of AI in the financial sector.
The project aims to address one of the biggest challenges in customer-facing applications of artificial intelligence - how to explain to a consumer why a machine has denied them a product or service.
Introducing the programme, Christopher Woolard, executive director of strategy and competition at the FCA, points to the example of an AI agent turning down an application for a mortgage or life insurance policy.
"Algorithmic decision-making needs to be ‘explainable’," he says. "But what level does that explainability need to be? Explainable to an informed expert, to the CEO of the firm or to the consumer themselves?
"It’s possible to ‘build in’ an explanation by using a more interpretable algorithm in the first place, but this may dull the predictive edge of the technology. So what takes precedence - the accuracy of the prediction or the ability to explain it?"
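Woolard's point can be illustrated with a toy sketch. Everything below is hypothetical - the data, the approval rule, and the threshold are invented for illustration and have no connection to any FCA or Alan Turing Institute methodology. It shows what an "interpretable algorithm" buys you: a model whose structure is a single human-readable rule can hand the consumer a direct reason for a denial, but that simplicity can cost accuracy where the true pattern is more complex.

```python
# Hypothetical applicant data: (income_k, debt_k) -> approved?
applicants = [
    ((55, 5), True), ((60, 20), True), ((30, 2), False),
    ((80, 40), True), ((25, 10), False), ((45, 35), False),
]

def rule_model(features):
    """Interpretable single-rule model (invented for illustration):
    approve when income exceeds debt by more than 20k."""
    income, debt = features
    return income - debt > 20

def explain(features):
    """Because the model is one rule, the explanation falls out directly."""
    income, debt = features
    if income - debt > 20:
        return "approved: income exceeds debt by more than 20k"
    return "denied: income does not exceed debt by more than 20k"

# The rule misclassifies one applicant - the accuracy cost of simplicity.
accuracy = sum(rule_model(f) == label for f, label in applicants) / len(applicants)
print(explain((30, 15)))
print(f"rule accuracy on toy data: {accuracy:.0%}")
```

A more flexible model could fit all six toy applicants, but its decision for any one of them would not reduce to a sentence a consumer can act on - which is exactly the tension the FCA is asking about.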
By working with the Alan Turing Institute, the watchdog wants to move the debate on - from the high-level discussion of principles towards a better understanding of the practical challenges on the ground that machine learning presents. The research will culminate in the publication of a joint paper around these issues and a workshop planned for early next year.
"AI and its application in financial services is causing us to ask big questions - and the answers we arrive at have the potential to fundamentally alter society and the established order," he says. "We can’t arrive at these answers on our own - the ramifications are too wide reaching - but as the regulator of one of the world’s biggest financial centres, we believe we have a key role to play."
Woolard also reports on a joint survey conducted with the Bank of England on the current use of machine learning tools among regulated firms. This found that the technology is typically employed for back-office functions, with customer-facing applications largely at the exploration stage.
"The picture varies depending on the firm in question," he says. "Some larger, more established firms are displaying particular cautiousness. Some newer market entrants can be less risk averse. Some firms haven’t done any thinking around these questions at all - which is obviously a concern."