My Advice to Banks on AI: Philipp Buschmann of AAZZUR

AAZZUR's CEO shares candid advice on where banks are getting AI wrong, what they should prioritise, and why the shift has already happened.


I spoke with Philipp Buschmann, CEO of AAZZUR, who leads an orchestration platform for embedded finance that helps enterprises and banks integrate regulated financial services. Philipp shares practical, candid advice for bank executives navigating AI and data strategy in an environment where customer expectations have already shifted.

Over to you, Philipp - my questions are in bold:


Can you give us an introduction to you and an overview of your organisation?

I'm Philipp Buschmann, CEO of AAZZUR. We build an orchestration layer for embedded finance, which is a neat way of saying we help enterprises and banks connect regulated financial services without ending up with a brittle stack of one-off integrations.

Payments, issuing, onboarding, compliance checks, fraud tooling, data flows: the reality is that most organisations have accumulated these capabilities over time from different providers, built for different moments, and stitched them together under pressure. It works until it doesn't, and when it breaks, it breaks at the worst possible time: during growth, expansion, a product launch, or a regulatory shift. AAZZUR is designed to reduce that complexity by creating a more structured way to connect, manage, and swap parts of the stack without rebuilding everything from scratch.

If you were advising a bank CEO today, what would you say is the single biggest mistake they're making with data and AI?

The biggest mistake is still treating AI as a future programme rather than the current operating environment. The language in boardrooms often sounds like "we're preparing for AI", "we're exploring AI", "we're waiting to see where it lands". That framing is already too late. The shift has happened, even if it is not yet evenly distributed across every team and every product. Customers are already being trained by other experiences, and once that expectation exists, it becomes the new baseline.

Data and AI should be used to improve the customer journey and reduce friction, not to generate internal reporting theatre. A chatbot is not the point; it is just another channel. The point is whether the bank can use data to make decisions faster, support customers better, prevent issues earlier, and remove steps that exist purely because the organisation is used to them. If your digital experience still revolves around a static website journey, you're not thinking in the right direction, because the way people will interact with financial services is shifting towards conversational, contextual, and increasingly automated experiences.

What's one AI or data capability banks should prioritise in the next 12 to 18 months, and why?

Banks should prioritise getting intelligence into operational decision-making, rather than running more pilots that never leave the sandbox. Most banks already have the data they need to improve outcomes; the bottleneck is turning it into action inside core workflows. That means applying AI where it can automate decisions, reduce manual review, and improve real-time responses, particularly across payments, fraud, onboarding, and service.

A practical way to think about this is agentic capability, not in the buzzword sense, but in the sense that the system can take more responsibility for routine actions within defined boundaries. Fraud is a good example because the tools and models exist, yet many institutions still bolt solutions on top rather than designing a joined-up flow where intelligence sits inside the infrastructure. What holds banks back is rarely that it cannot be done; it is a lack of clarity on what is possible, plus a tendency to treat AI as a separate initiative instead of a capability that should plug into the stack and improve everyday decisions.
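To make the idea of "responsibility for routine actions within defined boundaries" concrete, here is a minimal sketch of how a fraud decision flow might route the clear-cut cases automatically while keeping the grey zone with human reviewers. The function name, thresholds, and score scale are illustrative assumptions, not a description of AAZZUR's product or any specific bank's system.

```python
from dataclasses import dataclass


@dataclass
class Decision:
    action: str  # "approve", "block", or "review"
    reason: str


def decide(fraud_score: float,
           auto_approve_below: float = 0.2,
           auto_block_above: float = 0.9) -> Decision:
    """Route a transaction by fraud score (assumed 0.0-1.0).

    Only unambiguous cases are automated; anything between the
    two thresholds is escalated to manual review rather than
    being decided by the system.
    """
    if fraud_score >= auto_block_above:
        return Decision("block", f"score {fraud_score:.2f} above auto-block threshold")
    if fraud_score <= auto_approve_below:
        return Decision("approve", f"score {fraud_score:.2f} below auto-approve threshold")
    return Decision("review", f"score {fraud_score:.2f} in manual-review band")
```

The design point is the one Philipp makes: the boundaries (thresholds, escalation rules) are explicit and owned by the institution, so automation expands decision capacity without removing control.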

Where do you see banks overestimating AI, and where are they underestimating it?

Well, I'm not usually in many boardrooms with bankers, but I do think they tend to overestimate AI when they expect it to fix problems that are really organisational. AI will not solve unclear accountability, weak process design, or a culture that avoids decisions. If the foundations are fragmented, adding AI often just accelerates the mess, because it increases speed without increasing alignment.

They underestimate it when they assume customers will tolerate slow, generic experiences for much longer. The bigger shift is not that AI exists, but that it changes what customers expect as normal. People will increasingly assume the bank can recognise context, respond quickly, personalise support, reduce unnecessary steps, and prevent issues before they become painful. When that expectation is set elsewhere, customers bring it with them, and the gap becomes visible fast.

What does good actually look like when AI and data are working well inside a bank?

Good looks like less friction and fewer handoffs. It looks like fewer false positives in fraud, faster resolutions when something goes wrong, fewer manual interventions across onboarding and servicing, and a more consistent experience for the customer. The best outcomes are usually not dramatic; they are quietly efficient. Customers feel that things are smoother and more responsive, and internal teams feel that less time is spent on repetitive tasks, escalations, and workarounds.

The simplest test is whether it makes life easier for the customer and cheaper for the organisation at the same time. If AI adds steps, increases uncertainty, or creates a layer of complexity that nobody can explain, it is not working properly.

What's the hardest AI or data decision bank executives are avoiding right now, and why?

The hardest decision is usually speed versus comfort, because banks are built to manage risk and preserve trust; caution is deep-rooted in their culture. That caution is an asset, but it becomes a barrier when the organisation uses it as a reason to delay changes that are already inevitable. Executives often avoid the structural decisions that come with operational AI: redefining workflows, changing who owns outcomes, and accepting that some decisions will be automated within clear controls rather than manually checked by default.

Security, safety, and compliance will always matter, but the question is whether they are being used to shape progress or to justify hesitation. The uncomfortable truth is that the world has already moved, and the risk is less about adopting AI too quickly and more about assuming there is still plenty of time to catch up.


Thank you, Philipp!

You can connect with Philipp on his LinkedIn Profile and find out more about the company at aazzur.com.