How your bank is becoming RoboCop

File picture: Jason Lee

Published Apr 10, 2016

London - Banking isn't what it once was. Branch managers with personal relationships and hunch-based decisions are out. And in their place are automated algorithms relying on credit checks and other customer data to make decisions on lending and customer spending.

Nowhere is that more apparent than in fraud detection. Ask anyone who's had their card blocked during a stag do in Tallinn or a girls' trip to Vegas.

And now the effect of blanket rules that leave thousands of banking customers in the lurch is prompting a major drive to develop artificial intelligence (AI) that can identify fraud effectively and, potentially, transform consumer banking.

But while AI may sound like yet another step away from the days of the trusty bank manager, proponents argue that the ability to identify unique spending behaviours and make rapid decisions could mean the future of banking is more personal, not less.

Meanwhile, with “robo advice” already tipped as a development that could transform how people access products in the future, it now seems RoboCop could be coming for fraudsters.

“Let's imagine you go on holiday every year to Morocco”, suggests Martina King, the chief executive of Featurespace, a company born from Cambridge University research that is pioneering the use of AI to understand individual behaviour in real time and predict future actions. “Our systems can use that prior piece of information where existing bank systems can't. So we know that you usually go to Morocco, it's an expected pattern and your card wouldn't be blocked.”

But Featurespace's software doesn't just analyse existing data more deeply to cut back on the number of false fraud alarms; it can also make predictions about expected behaviour.

“Because we're monitoring data in real time, we can predict what we'd expect the next action to be”, Ms King explains. “So if we saw a transaction in Gatwick after Morocco then we'd expect the next transaction to be in the UK.”
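A minimal sketch of that idea in Python, assuming a toy transaction history rather than anything Featurespace actually uses: a simple second-order Markov model keyed on the last two locations learns that "Morocco then Gatwick" is usually followed by spending back in the UK.

```python
from collections import defaultdict

# Toy transaction-location history for one customer (illustrative data only).
history = ["UK", "UK", "Gatwick", "Morocco", "Gatwick", "UK", "UK",
           "Gatwick", "Morocco", "Gatwick", "UK", "UK"]

# Count how often each pair of consecutive locations is followed by a third.
transitions = defaultdict(lambda: defaultdict(int))
for a, b, c in zip(history, history[1:], history[2:]):
    transitions[(a, b)][c] += 1

def predict_next(prev, current):
    """Most frequently observed next location after this two-step context."""
    options = transitions[(prev, current)]
    return max(options, key=options.get) if options else None

print(predict_next("Morocco", "Gatwick"))  # -> 'UK': the flight home
print(predict_next("UK", "Gatwick"))       # -> 'Morocco': the outbound trip
```

The two-location context matters here: Gatwick on its own is ambiguous, because the customer passes through it both outbound and inbound.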

What is particularly clever about their AI system is that it can learn from new types of fraud, allowing it, the designers argue, to keep up with “the arms race” against innovative criminals. In tests with UK banks, the company's AI has resulted in a 70 per cent reduction in the number of genuine customers who find themselves blocked, because it doesn't rely on blanket rules to make decisions.
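To make the contrast with blanket rules concrete, here is a minimal, hypothetical sketch of per-customer anomaly scoring: no transaction is flagged because a rule says "foreign payment = suspicious"; it only scores highly if it deviates from that individual's own learned pattern. The weights, decay and figures are all assumptions for illustration, not Featurespace's method.

```python
import math
from collections import defaultdict

class BehaviourProfile:
    """Per-customer spending profile: which countries the customer
    transacts in and how much they typically spend."""

    def __init__(self):
        self.country_counts = defaultdict(int)
        self.amounts = []

    def update(self, country, amount):
        self.country_counts[country] += 1
        self.amounts.append(amount)

    def anomaly_score(self, country, amount):
        """0..1 score: higher means less like this customer's own history."""
        if not self.amounts:
            return 0.5  # no history yet: neutral score
        # Location: the more often this customer has paid in this country,
        # the less anomalous it is (exponential decay in the count).
        location_score = math.exp(-self.country_counts[country])
        # Amount: distance from the customer's own mean, in standard
        # deviations, capped at 4 sigma.
        mean = sum(self.amounts) / len(self.amounts)
        var = sum((a - mean) ** 2 for a in self.amounts) / len(self.amounts)
        std = math.sqrt(var) or 1.0
        amount_score = min(abs(amount - mean) / std / 4, 1.0)
        return 0.7 * location_score + 0.3 * amount_score

# The customer from the article who visits Morocco every year:
profile = BehaviourProfile()
for _ in range(40):
    profile.update("GB", 35.0)   # everyday UK spending
for _ in range(8):
    profile.update("MA", 60.0)   # the annual Morocco trip

print(profile.anomaly_score("MA", 55.0))   # expected pattern -> low score
print(profile.anomaly_score("BR", 900.0))  # new country, unusual amount -> high score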

But fraud detection could be just the tip of the iceberg when it comes to AI and the consumer banking experience.

“Banks have all the data they need to really know us, and through the application of intelligent strategies which emulate the decisions of a personal relationship manager, they can deliver a response that's good for me and them,” says Robin Collyer, of software developer Pegasystems.

In fact, AI systems could even be used to determine whether human operatives are doing their jobs properly, reducing the risk of human error. Samantha Regan, managing director of Accenture Finance & Risk Services, explains: “We may see more advances in AI in the area of surveillance, where AI could look at sales practices and make sure that bankers and advisers are offering the appropriate sales products, advice and guidance to their customers - and ensuring that these processes are aligned from a regulatory perspective.”

Some customers may find the idea of software watching their behaviours and predicting their actions slightly creepy, but Ms King says most people are annoyed their banks do not already analyse their data effectively.

“We are finding there's a consumer appetite already ahead of where the bank's technology is. They get frustrated when banks can't recognise patterns in their behaviours or when they don't use their data effectively.

“Our systems are looking for patterns; we're not making a judgement on what people are doing. The AI is searching for things that are different, anomalistic, things that stand out as unusual. We wouldn't investigate them, it would be the bank itself that does that.”

However, even if these developments are widely welcomed by consumers, it doesn't mean they are foolproof. Harry Armstrong, senior researcher at innovation foundation Nesta, says there can be serious unintended consequences.

“While algorithms are driven by data and numbers, they are no less subject to bias than a human and are in no way completely objective machines. Data determines any of the decisions that are made, so if that information does not accurately reflect real life, or if it only represents a subsection of the population, the end result will likely favour or discriminate against certain groups or individuals.

“Another worry is that if the data science behind the process is flawed from the get-go, then the effect could be very bad for a lot of people.”
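Armstrong's point about unrepresentative data can be illustrated with a small hypothetical simulation: a “suspicious amount” threshold is calibrated on one well-represented group and then applied to a group whose spending habits were absent from the training data. Every figure below is invented for illustration.

```python
import random

random.seed(0)

# Group A is well represented in the training data; group B (say,
# cash-heavy spenders with larger typical transactions) is missing.
group_a = [random.gauss(40, 10) for _ in range(1000)]
group_b = [random.gauss(90, 25) for _ in range(1000)]

# Threshold set from group A alone: flag anything above its 99th percentile.
threshold = sorted(group_a)[int(0.99 * len(group_a))]

false_alarms_a = sum(x > threshold for x in group_a) / len(group_a)
false_alarms_b = sum(x > threshold for x in group_b) / len(group_b)
print(f"group A flagged: {false_alarms_a:.1%}")  # ~1%, as designed
print(f"group B flagged: {false_alarms_b:.1%}")  # far higher false-alarm rate
```

The model behaves exactly as designed for the group it was trained on, yet routinely flags the other group's ordinary spending: the discrimination arises from the data, not from any malicious rule.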

THE INDEPENDENT
