My Advice to Banks on AI: Sanjeet Singh of SUSE

Sanjeet Singh, Senior Director of AI at SUSE, shares why banks must prioritise agentic sovereignty and warns against the hidden risks of pattern leakage in AI adoption.

I spoke with Sanjeet Singh, Senior Director of AI at SUSE, who has built SUSE AI from the ground up with a focus on security and digital sovereignty for regulated industries. With over 30 years of enterprise technology heritage and the trust of 60% of the Fortune 500, SUSE brings a unique perspective on how banks should navigate AI adoption without compromising control.

Over to you Sanjeet - my questions are in bold:


Can you give us an introduction to you and an overview of your organisation?

My name is Sanjeet Singh, Senior Director and Product Lead for SUSE AI. I've built our AI product, SUSE AI, from the ground up — and that journey has shaped everything about how I think about the role AI should play in financial services.

SUSE AI is built for organisations where security and privacy aren't optional extras - they're non-negotiable. That puts us squarely in the world of banking and government, where the question isn't simply "how do we adopt AI?" but "how do we do it without compromising the trust our customers place in us?"

SUSE has been a cornerstone of enterprise technology for over 30 years, underpinning the majority of Fortune 500 companies. In financial services, that heritage translates into something banks increasingly need - the ability to harness AI on their own terms, keeping sensitive data and core business logic firmly within their control. Digital sovereignty is no longer a regulatory checkbox; it's becoming a basic requirement. For institutions navigating one of the most significant technological shifts in a generation, that kind of confidence is everything.

If you were advising a bank CEO today, what would you say is the single biggest mistake they're making with data and AI?

The mistake is treating AI as a peripheral IT project rather than a fundamental rewrite of the bank's operating system. Many are trying to power a legacy institution with cutting-edge intelligence without updating the underlying architecture.

More critically, by relying on external public interfaces, banks risk 'Pattern Leakage.' Even if your raw data isn't explicitly stored by a provider, these models learn from the proprietary logic, risk correlations, and customer behaviours you feed them. You are essentially subsidising the intelligence of a third-party platform with your own intellectual property. If I sum that up: 'In the AI era, if you don't own the infrastructure of your intelligence, you don't own your future.'

What's one AI or data capability banks should prioritise in the next 12–18 months, and why?

The priority must be Agentic Sovereignty. We are moving from 'Chatbots' to 'AI Agents' - autonomous systems that actually execute transactions, assess credit, and manage compliance.

Because these actions carry significant regulatory weight, you cannot run them on shared public infrastructure. Banks must prioritise a private AI foundation that keeps the 'brain' of the operation inside the bank's own vault. This ensures that as AI moves from talking to acting, it does so within a perimeter you fully control.

Where do you see banks overestimating AI, and where are they underestimating it?

Banks overestimate the 'Magic' - the idea that AI can bypass the need for rigorous data governance and structural change.

Conversely, they underestimate the 'Plumbing' - specifically the value of Open Standards. Many are walking into a vendor lock-in trap by building their strategy on proprietary cloud ecosystems. The model is a commodity; the sovereign platform that runs it, and how it's integrated into the bank's processes and internal data, is the true competitive moat.

What does "good" actually look like when AI and data are working well inside a bank?

'Good' is when AI becomes the invisible operating system. It isn't a separate app; it is the fabric of the institution.

It looks like a world where a commercial loan is approved in minutes because private AI agents have securely accessed internal data to verify risk - without that data ever touching an external server. It is an environment where every AI-driven decision is fully auditable and transparent to regulators, providing the speed of a fintech with the security of a global bank.

What's the hardest AI or data decision bank executives are avoiding right now, and why?

The hardest decision bank executives are navigating is how to balance fiduciary caution with operational urgency. Currently, many institutions are in a state of 'high-stakes deliberation.' While this caution is a necessary part of risk governance, it often creates a vacuum. In that vacuum, employees and Lines of Business - eager to stay competitive - begin adopting unmanaged public AI tools.

The decision being avoided isn't just about 'which AI to buy'; it's the decision to build a sovereign middle ground. Executives need to move past the binary choice of 'wait and see' or 'risk it all on the public cloud.' The solution is providing a private, enterprise-controlled environment that gives employees the tools they want today, while maintaining the rigorous security the bank requires for tomorrow.

In a regulated industry, caution is a virtue - but in the AI race, silence at the top is interpreted as a licence to experiment in the shadows.


Thank you Sanjeet! You can connect with Sanjeet on his LinkedIn Profile and find out more about the company at www.suse.com.