2026 FinTech Predictions: Insights from Hugh Scantlebury of Aqilla

Hugh Scantlebury explores AI's shift from automation-first to judgement-led adoption, the rise of deepfake fraud risks, and why ethical standards must anchor financial services transformation.

We spoke with Hugh Scantlebury, Founder and CEO at Aqilla, about the forces reshaping financial services in 2026. From AI's maturation beyond hype to the underestimated threat of sophisticated deepfakes, Hugh shares his view on where the industry needs to focus as technology transforms decision-making.

Over to you, Hugh...


What's the biggest shift you expect across financial services in 2026?

In 2026, we'll see the conversation around AI begin to shift. It will become far more pragmatic and far less driven by job-loss anxiety or fear of the unknown. In practical terms, this means the financial services sector will move away from a "computer says no" mentality and towards systems that can be challenged, interrogated, and understood. Financial services leaders will be less concerned with how much of the organisation is automated, and more focused on whether they can challenge, justify, and explain those automated decisions. That shift in mindset – from speed-first automation to judgement-led adoption – will define 2026.

Which emerging technology will have the most practical impact on banks and the FinTechs that support them?

Many people will instinctively answer this question by pointing to AI itself. But with tools like ChatGPT now approaching three years in mainstream use, it's fair to say that AI has already emerged. So, for a sector that acknowledges its own cautious adoption of new technology, it's less about innovation and more about interaction. That's where we'll see AI have a practical impact on banks and FinTechs.

With that in mind, over the past 18 months, many software developers, including ourselves, have been reassessing how users engage with native AI in financial software and what needs to change to make that interaction productive and trustworthy.

For us, it's about giving users a more direct connection to AI within their tools. Ideally, that interaction should happen at the prompt level. This will allow users to question how outputs are generated rather than simply accept them. AI responses also need clearer confidence signals. This will enable users to judge when to trust the output and when to interrogate further.

When executed well, this approach helps users understand why a system made a particular recommendation, which data sources were used, and where human intervention remains essential. Confidence levels also enable teams to set clear thresholds—for example, allowing AI to proceed automatically at higher confidence levels while routing lower-confidence outcomes for human review. This combination of thoughtful interaction design and human judgement will deliver the most practical impact in 2026.
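The confidence-threshold routing described above can be sketched in a few lines. This is a minimal, hypothetical illustration of the idea, not Aqilla's implementation; the field names and threshold values are assumptions chosen for the example.

```python
# Hypothetical sketch of confidence-based routing for AI outputs.
# Thresholds and field names are illustrative assumptions, not a real product API.
from dataclasses import dataclass

AUTO_APPROVE_THRESHOLD = 0.95   # proceed automatically above this confidence
REVIEW_THRESHOLD = 0.70         # below this, escalate rather than merely review

@dataclass
class AiSuggestion:
    description: str
    confidence: float   # model-reported confidence in [0, 1]
    sources: list[str]  # data sources used, surfaced for auditability

def route(suggestion: AiSuggestion) -> str:
    """Decide how an AI suggestion is handled based on its confidence."""
    if suggestion.confidence >= AUTO_APPROVE_THRESHOLD:
        return "auto-approve"
    if suggestion.confidence >= REVIEW_THRESHOLD:
        return "human-review"
    return "escalate"

# Example: a mid-confidence match is routed to a person, not straight through.
match = AiSuggestion("Match payment #881 to invoice INV-2204", 0.82,
                     ["bank feed", "ledger"])
print(route(match))  # human-review
```

The point of surfacing `sources` alongside the score is the one made above: a reviewer can see not just that the system was uncertain, but which data that uncertainty rests on.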

What customer behaviours or expectations will most challenge banks and financial service providers?

Customers are already accustomed to fast digital experiences, but speed alone won't be enough in 2026. The challenge is for customer-facing staff to fully understand and be able to explain AI-generated information to customers. In practical terms, this means being able to articulate the logic, trust the underlying data sources, and communicate outcomes in a professional, human, and empathetic way. This is one area where machines cannot compete with people.

What risks or blind spots do you think the industry is underestimating as we move into 2026?

As we move into 2026, we risk underestimating the growing sophistication and accessibility of AI-enabled social engineering – particularly deepfake audio, video, and text scams. While the industry often discusses deepfakes in the context of viral videos or political misinformation, their most immediate and damaging impact is likely to be felt in financial and accounting settings.

The barrier to entry is already far lower than many organisations assume. Creating convincing fake audio or video no longer requires specialist skills or significant investment, which means fraud attempts are becoming cheaper, faster, and more targeted. Requests that appear to come from trusted colleagues, suppliers, or executives – especially when delivered through familiar channels like email, WhatsApp, or Teams – can be extraordinarily difficult to distinguish from genuine communications.

The real blind spot is the assumption that the preventative controls of a year or two ago are still sufficient. The technology deployed by threat actors doesn't stand still, and countermeasures need to keep pace. Deepfakes can now convincingly replicate an individual's voice, writing style, or even moving image. So, without stronger verification steps in place or support for clearer escalation routes, banks and FinTechs risk leaving the door open to threat actors. Culture plays an important role, too, and employees must be able to challenge and question suspicious activity, even from the board or senior management. Technology may power deepfakes, but prevention remains a people, process, and culture challenge – and one the industry needs to take far more seriously as it heads into 2026.
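One concrete shape the "stronger verification steps" above can take is a rule that forces an out-of-band check (for example, a callback on a known number) before acting on a payment request. The sketch below is purely illustrative: the channel names, threshold, and record structure are assumptions, not a description of any real control framework.

```python
# Illustrative sketch of an out-of-band verification rule for payment
# requests. Channel names and the amount threshold are hypothetical.
from dataclasses import dataclass

# Channels with strong identity checks; email/WhatsApp/Teams are spoofable.
TRUSTED_CHANNELS = {"approved-banking-portal"}
CALLBACK_THRESHOLD = 1_000.0  # above this amount, always verify out of band

@dataclass
class PaymentRequest:
    requester: str
    channel: str   # e.g. "email", "whatsapp", "approved-banking-portal"
    amount: float

def needs_out_of_band_check(req: PaymentRequest) -> bool:
    """Require a verified callback when the request arrives over a channel
    a deepfake could plausibly spoof, or when the amount is material."""
    return req.channel not in TRUSTED_CHANNELS or req.amount >= CALLBACK_THRESHOLD
```

The rule deliberately ignores who the request claims to come from: a convincing deepfake of a CEO over WhatsApp still triggers the callback, which is exactly the cultural point made above about challenging requests even from senior management.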

If you were advising a bank's leadership team today, what strategic priority should they focus on to stay competitive in 2026 and beyond?

Ensure your organisation's ethical standards and professional responsibilities remain intact as AI adoption grows. The introduction of AI shouldn't require a new ethical framework; in many cases, the correct principles are already in place. The challenge for leadership is ensuring those principles are upheld as technology reshapes decision-making.

AI is transformational, but it doesn't remove the obligation for professional judgement. Banks have always been responsible for the accuracy, integrity, and consequences of their financial decisions, and they cannot delegate that responsibility to systems or models. Leadership teams need to be explicit that AI is a support tool and a predictive aid – rather than a definitive authority.

Ultimately, staying competitive in 2026 will depend on leadership's ability to balance innovation with continuity. That means adopting AI in a way that strengthens, rather than erodes, professional standards. Banks that treat ethical AI use as a core leadership responsibility will be far better placed to navigate the next phase of transformation.


Thank you to Hugh Scantlebury for sharing these insights. Learn more about Aqilla via their website.