AI, Risk and Banking – Peter Moenickheim, Prior Chief Risk Officer at Santander Consumer
Consumers and companies are all too familiar with the consequences of financial fraud, having been victims of massive data breaches and orchestrated campaigns of payment fraud across all channels.
What’s less clear is how financial institutions and merchants can stay one step ahead of risk and fraud while reducing their regulatory burden. In the most recent episode of the Real Machine Podcast, Feedzai’s Ajit Ghuman hosts a fascinating exploration of the potential of AI and machine learning with one of the risk management profession’s foremost practitioners, Peter Moenickheim, former Chief Operational Risk Officer for Santander Consumer.
Peter has had a storied career leading the risk and compliance departments of a variety of financial institutions. He was a founding member of Nationwide Bank, and went on to become Chief Control Officer for the operations of Chase’s Consumer and Community Bank, where he achieved the highest possible internal audit rating, firm-wide.
You can listen to the episode here:
Growing Regulatory Expectations
Ajit and Peter discuss how systemic fraud, massive data breaches, and the financial crisis have led regulators to expect more from financial institutions. Most notably, Peter points to the Target credit card hack, which cost the company not only lost revenue, but also its CEO.
Regulators now expect financial institutions to do a much better job of knowing their customers, reducing risk, and improving customer service. Yet the conventional answer – throwing more human effort at the problem – is costly and dubiously effective.
Peter points out that machine learning is now “light years ahead” of human checking in efficiency, and that we’re “at the cusp of putting it in place”. He identifies three key reasons for its limited deployment so far:
- Regulatory issues: regulators do not want to see ‘untested’ technology deployed to consumers. This is especially the case in institutions regulated by the OCC or the US Federal Reserve.
- Organizational inertia: change in large organizations sometimes happens slowly.
- Internal bureaucracy: some parts of a financial institution may want to try new approaches, but struggle to win approval and resources. This is especially the case when the solution requires support from the institution’s information technology team.
Expanding Your Machine Learning Use Cases
Peter suggests some ways that organizations can overcome these obstacles. By starting with fraud detection as an initial use case, organizations can gain confidence in the technology and then adopt it to solve other related problems.
Ajit describes how the Feedzai platform fits nicely with this approach. It empowers users to discover these use cases for themselves, developing models and testing them iteratively against real data.
Since all stakeholders have an interest in reducing fraud, machine learning can be deployed in this area with little internal conflict. Peter also suggests that AI vendors build partnerships with forward-looking mobile development divisions within financial institutions, which can offer them a foothold in the wider organization.
Do you want to be featured in our next Real Machine Podcast? Contact us to learn more.