Regulation and a risk-averse culture are preventing banks and financial institutions from making progress on artificial intelligence (AI) and experiments with automation, according to a panel at the Advanced Analytics & Artificial Intelligence Forum in London.
Technical challenges are only half the battle. Sometimes it can be a year-long process of taking people on that journey, even your own colleagues; you need to give people time to get their heads around new things. It also differs between industries: what takes a year in one sector might take a month in another. There is always a tendency to be cautious with new things.
Moving from proof of concept to a full-fledged AI deployment comes down to setting expectations. Terms such as data science, AI and machine learning are buzzwords that barely existed a few years ago. You need to deal with where you are right now, not where you want to be, and there is a need to educate management as much as possible.
It’s best to try and find where the biggest objections lie. “Is it general nervousness, is it a skills gap? Do we need to demystify something or break it down and run training sessions and have an open-door policy? You’d be amazed at the people who come along to learn. Nobody will stick their hand in the air and admit that they actually don’t know what an algorithm is. Most people think they know what it is but they’re also not 100% sure.”
According to a May study from Narrative Science and the National Business Research Institute, 32% of financial services executives are already using some form of AI, whether that is predictive analytics, recommendation engines or voice recognition. The research reports that AI adoption grew by 60% in the past year, and 87% of financial services respondents said they would be interested in using AI to provide insights and data analysis.
The best way to do things is to create a little project, a little experiment; something that is just enough to prove you can take things to the next level. The ideal is the Spotify model, where you have a team that is fully funded and looking for problems to fix. But at this point in the hype curve it’s difficult to go to a bank and say, ‘our team is going to cost 2-3 million dollars and we’ll deliver “stuff”. We don’t know what you’ll get but it will be really cool.’ That is a very difficult concept to get over, and maturity-wise the technology isn’t quite there yet either.
There’s a reason why the financial services industry is lagging behind other sectors, and that is the lack of proven use cases and large training datasets. You have a problem where you can invest a lot of money but still can’t create a scenario where you objectively know where things are going.
When it came to whether a build or buy model was best for moving the sector forward, the panel noted that buying can give crucial control over projects and move things faster. You may have a problem buying somebody else’s secret sauce. But when you see a competitor who is maybe six months ahead of you and you can just buy the same product, plug it in and go, you may have no issue with it after all. Then there is the concern about building something with a partner and having them reuse that IP.
A swing usually occurs between the two options. You’ll go with build and encounter all the struggles that come with it, then go to buy and realise that it isn’t a silver bullet either. We lean more towards buy these days, and I think that’s because we are more mature in our approach: we want to think about which problems a product is actually solving.
Take natural language processing (NLP) as an example. It’s far from solved, meaning that if you really want to do something with it, it’s a very intense process to build the knowledge and expertise. Data scientists can’t just jump from one topic to another. At the same time, there are concerns about buying a product from a third party, helping to build it up, and then watching as they flip it and sell it to your competitors. That can pull us back towards the build strategy.