My Advice to Banks on AI: Rachel Curtis of Inicio AI
Rachel Curtis, CEO of Inicio AI, shares practical advice for bank executives on avoiding vanity AI projects, improving data quality, and finding the courage to take meaningful risks.
I spoke with Rachel Curtis, CEO of Inicio AI, a company building conversational AI that helps financial services firms carry out compliant, auditable affordability and financial assessments at scale. Rachel shares practical advice on where banks are getting AI wrong, what they should prioritise, and why the hardest decisions are the ones being avoided.
Over to you, Rachel – my questions are in bold:
Can you give us an introduction to you and an overview of your organisation?
I'm Rachel Curtis, CEO of Inicio AI. We started the company after spending years watching well-intentioned people in financial services struggle through conversations that were emotionally difficult, operationally expensive, and often not very human – particularly around affordability and financial vulnerability.
At Inicio, we build conversational AI that helps banks and lenders have better financial conversations at scale. Not just faster ones. We focus on regulated journeys like income and expenditure assessments, where accuracy, auditability and empathy all matter. We're FCA-authorised, which has shaped our DNA from day one – this isn't AI for show, it's AI that has to stand up in the real world.
We're backed by long-term, impact-led investors including Future Planet Capital Regional, which gives us the space to build technology that's responsible, durable and genuinely improves outcomes, rather than chasing short-term hype.
If you were advising a bank CEO today, what would you say is the single biggest mistake they're making with data and AI?
They're trying to do too many things at once and mistaking activity for progress.
I see a lot of "busy fools" behaviour – dozens of pilots, proofs of concept and shiny demos that exist largely to show the organisation is "doing AI". Many of these turn into vanity projects: technically impressive, politically safe, but operationally irrelevant.
The irony is that AI works best when it's focused on a small number of high-friction problems and done properly. Depth beats breadth every time. Otherwise, banks end up with lots of movement, very little impact, and a growing sense of frustration that AI somehow hasn't delivered.
What's one AI or data capability banks should prioritise in the next 12–18 months, and why?
Improving the quality of customer-provided data at the point it's captured.
So many critical decisions in banking – lending, collections, forbearance, advice – are built on data customers provide under stress, confusion or time pressure. Historically this has been handled through long forms or rushed phone calls, which is a perfect recipe for error.
Conversational AI that can guide, prompt and gently challenge customers in real time can transform this. It improves accuracy, reduces downstream remediation, and often delivers a calmer, clearer experience for customers at the same time.
Where do you see banks overestimating AI, and where are they underestimating it?
Banks tend to overestimate AI's ability to fix foundational issues like poor data hygiene, fragmented systems or unclear ownership. AI can't compensate for a lack of organisational discipline – it just exposes it faster.
Where they underestimate AI is in how honest consumers are with it. In many cases, people are more open with a well-designed AI than they are with a human, particularly around sensitive financial topics. That creates a powerful double effect: not only more complete data, but truer data as well. When combined with good prompting and real-time sense-checking, the quality uplift can be significant.
What does "good" actually look like when AI and data are working well inside a bank?
Good looks easy. Almost boring. And that is exactly the point at which strong organisations make good decisions about where to build and where to partner.
When AI and data are working well, banks are clear-eyed about what is truly core and where specialist capability adds more value than trying to do everything themselves. The best teams don't assume they need to build every component in-house, particularly in complex, regulated areas where getting it wrong is expensive. Ironically, when something is working well it can look deceptively simple – which is often why people think it must be easy to recreate.
Operationally, AI does the legwork at scale, handling volume, consistency and repeatable tasks, while intelligently routing cases that need judgement, discretion or additional support to humans who are best placed to help. When that balance is right, the whole system feels calmer, fairer and more effective.
What's the hardest AI or data decision bank executives are avoiding right now, and why?
Finding safe ways to take meaningful risks.
There's a real fear of being first – of regulatory scrutiny, reputational damage, or simply getting it wrong. As a result, many organisations retreat into very safe, very anodyne AI trials that don't touch anything truly important. They're low-risk, but also low-impact.
The challenge is that innovation without risk isn't really innovation. The banks that will pull ahead are the ones that can create environments where experimentation is controlled, well-governed and deliberate – but still bold enough to matter.
Thank you, Rachel! You can connect with Rachel on her LinkedIn profile and find out more about the company at inicio.ai.