My Advice to Banks on AI: Nick Perrett of Prosper

Practical advice on AI implementation, data governance, and regulatory compliance for banks from the CEO of an FCA-authorised AI-powered wealth management platform.

I spoke with Nick Perrett, CEO and Founder of Prosper, an FCA-authorised AI-powered wealth management platform managing £197M in assets. Nick has deep experience deploying AI in regulated environments, having previously co-founded Tandem Bank and now chairing the Wealthtech Alliance. His perspective is uniquely valuable: he's living the exact challenges banking executives face around AI implementation, data governance, and regulatory compliance.

Over to you, Nick - my questions are in bold:


What practical AI use cases are actually working inside financial institutions right now - and which ones are failing?

What's working: Decision intelligence around specific, high-value customer actions. At Prosper, we've had success with AI-enabled nudges that help customers optimise cash-to-investment allocation and tax wrapper decisions. These work because they're narrow, measurable, and have clear guardrails.
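To make "narrow, measurable, with clear guardrails" concrete, here is a minimal, hypothetical sketch of a cash-to-investment nudge of the kind Nick describes. The six-month emergency buffer, function names, and figures are illustrative assumptions, not Prosper's actual logic:

```python
# Hypothetical cash-to-investment nudge: narrow, measurable, guarded.
# EMERGENCY_BUFFER_MONTHS is an assumed guardrail parameter, not a
# real Prosper setting.
EMERGENCY_BUFFER_MONTHS = 6

def cash_nudge(cash_balance: float, monthly_outgoings: float):
    """Suggest moving surplus cash above the emergency buffer into investments."""
    buffer = EMERGENCY_BUFFER_MONTHS * monthly_outgoings
    surplus = cash_balance - buffer
    if surplus <= 0:
        return None  # guardrail: never nudge when the buffer isn't covered
    return round(surplus, 2)

# e.g. £30,000 cash with £2,000/month outgoings leaves a £18,000 surplus
suggestion = cash_nudge(30_000, 2_000)
```

The point of the guardrail is that the nudge can only ever fire on money the customer demonstrably doesn't need for near-term expenses, which keeps the decision measurable and defensible.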

What's failing: Moonshot AI offerings that try to replace human judgement on complex decisions. Most banks' AI programmes stall because they start with the wrong problem statement - "How can we use AI?" instead of "What specific decision do our customers struggle with that AI could improve?"

The biggest mistake I see is banks treating AI as a technology project rather than a decision-improvement project. You don't need AI for everything. You need it for the 3-4 decisions where data-driven optimisation genuinely improves outcomes and where you can build robust guardrails.

What's your practical advice for bank executives on sequencing AI implementation?

Start with workflow automation around existing products, not new AI-powered offerings. Banks have decades of customer interaction data - use AI to optimise how you serve existing products before you try to launch new ones.

At Prosper, we learned this the hard way. We built an AI guidance feature for investment decisions, then paused it because the guardrails weren't strong enough to operate within FCA Consumer Duty requirements. What we should have done first: use AI to improve how we allocate customers to the right savings accounts or ISA products - lower-risk decisions with clearer parameters.

Practical sequencing: (1) Automate low-risk, high-frequency decisions first (cash allocation, fee transparency, suitability checks). (2) Build your governance and human-in-the-loop review processes on those simpler use cases. (3) Only then move to higher-stakes decisions like investment guidance or credit decisioning.

How should banks think about data and governance for AI within regulatory expectations?

The FCA's Consumer Duty framework is actually helpful here - it forces you to ask "Can we explain why the AI made this recommendation?" If you can't explain it to a regulator, you can't deploy it to customers.

Practical governance at Prosper: Every AI recommendation goes through a rules-based filter first (checking regulatory constraints, tax rules, eligibility). Then the AI optimises within those boundaries. Then we have human review on edge cases. This isn't sexy, but it's what allows AI to operate in a regulated environment.
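The three-stage pipeline described above - rules first, AI within those boundaries, humans on edge cases - can be sketched in code. This is a simplified illustration under assumed names and thresholds (the ISA allowance check, `Recommendation`, and the 0.8 review threshold are examples, not Prosper's implementation):

```python
# Sketch of the governance pipeline described above:
# 1) rules-based filter (regulatory/tax constraints) runs first,
# 2) the AI ranks only the options that survive the filter,
# 3) low-confidence edge cases are routed to human review.
# All names and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Recommendation:
    product: str
    amount: float
    confidence: float  # model confidence score in [0, 1]

ISA_ANNUAL_LIMIT = 20_000  # UK ISA allowance (2024/25 tax year)

def rules_filter(candidates, already_subscribed):
    """Step 1: hard constraints, applied before any AI is consulted."""
    allowed = []
    for rec in candidates:
        if rec.product == "ISA" and already_subscribed + rec.amount > ISA_ANNUAL_LIMIT:
            continue  # would breach the ISA allowance: never shown to the model
        allowed.append(rec)
    return allowed

def ai_optimise(candidates):
    """Step 2: the model picks the best option inside the boundaries."""
    return max(candidates, key=lambda r: r.confidence, default=None)

def route(rec, review_threshold=0.8):
    """Step 3: edge cases (low confidence) go to a human reviewer."""
    if rec is None:
        return ("no_action", None)
    if rec.confidence < review_threshold:
        return ("human_review", rec)
    return ("auto", rec)

candidates = [
    Recommendation("ISA", 15_000, 0.90),  # filtered: breaches allowance below
    Recommendation("ISA", 5_000, 0.85),
    Recommendation("GIA", 5_000, 0.60),
]
decision, rec = route(ai_optimise(rules_filter(candidates, already_subscribed=10_000)))
```

The design point is ordering: because the rules run before the model, the AI physically cannot recommend something non-compliant, which is what makes the recommendation explainable to a regulator.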

The mistake banks make is thinking they need perfect AI before they deploy. You don't. You need AI that's good enough, combined with guardrails that catch the edge cases. Build the guardrails first, then improve the AI within them.

How should bank boards think about the commercial case for AI - beyond "everyone else is doing it"?

Frame it as margin expansion and fee transparency, not technology spend. AI allows you to deliver better outcomes at lower cost-to-serve - which is exactly what Gen X and millennial customers demand.

At Prosper, we can offer zero platform fees and refunded fund fees because AI handles portfolio optimisation and guidance that would traditionally require expensive human advisors. That's not about replacing humans - it's about letting humans focus on the high-value, high-empathy conversations while AI handles the data-heavy optimisation.

For banks: AI is your answer to "How do we compete with neobanks on cost while maintaining trust?" You can't out-cost a neobank by hiring more people. You can by using AI to reduce cost-to-serve while improving outcomes. That's a board-level strategic opportunity, not just a technology experiment.

What can banks learn from fintech about designing AI-enabled experiences that customers actually trust?

The biggest lesson from challenger banks: don't over-gamify, and don't optimise for short-term engagement at the expense of long-term outcomes.

Robinhood is the cautionary tale - they used AI and behavioural design to drive trading activity, which drove their revenue but destroyed customer wealth. The FCA is now clamping down on gamification in fintech for exactly this reason.

What works: AI that helps customers achieve their stated goals (retirement income, tax efficiency, wealth preservation) rather than AI that drives activity. At Prosper, we celebrate customers who do nothing for a year - because that's often the right investment behaviour. AI should reinforce good long-term decisions, not exploit behavioural biases.

Banks have a trust advantage over fintechs if they use it correctly. Customers trust you with their money already - use AI to reinforce that trust by optimising for their outcomes, not your revenue.


Thank you, Nick! You can connect with Nick on his LinkedIn profile and find out more about the company at prosper.co.uk.