My Advice to Banks on AI: Roman Stanek, Founder & CEO, GoodData
GoodData's founder shares why banks have an intelligence problem, not a data problem, and what CEOs should prioritise in the age of agentic AI.
This series brings together senior leaders from across the technology ecosystem to share candid perspectives on AI with the FinTech Profile readership. Senior banking executives frequently tell me that while there is no shortage of AI commentary, there is a lack of direct, practitioner-level insight from senior vendor experts who understand the realities of operating at scale. This series is designed to address that gap.
I’d like to thank Roman for sharing his perspectives and for being the first of several experts contributing to the discussion.
- Ewan
Roman Stanek is the founder and CEO of GoodData, a platform built for organisations navigating the challenge of delivering intelligence at scale in an AI-driven world. With a career spent building platforms at architectural transition points, Stanek works with large enterprises including banks, payment providers, and fintechs to embed AI-native analytics infrastructure directly into products, workflows, and decisions.
In this interview, he explains why banks have an intelligence problem rather than a data problem, and why shared, governed meaning matters more than dashboards.
As always, my questions are in bold - over to you, Roman:
Can you give us an introduction to you and an overview of your organisation?
I'm Roman Stanek, the founder and CEO of GoodData. I've built platforms for most of my career, usually at transition points when existing architectures stop scaling and new ones have to emerge.
GoodData is built for organisations navigating that exact challenge - delivering intelligence at scale in an AI-driven world where traditional analytics are reaching their limits.
We work with large enterprises, including banks, payment providers, and fintechs. These organisations need to operate at scale across multiple teams, regions, and products, while ensuring that no control or trust is lost. Our focus is not dashboards for analysts, but AI-native analytics infrastructure that gives both humans and machines governed access to business meaning, so intelligence can be embedded directly into products, workflows, and decisions.
This issue is especially evident in banking: banks do not have a data problem, they have an intelligence problem. Too many dashboards, too many tools, and still too little clarity when real decisions need to be made - particularly in an industry where every decision must be auditable and defensible under strict regulatory scrutiny.
We're helping institutions move from reporting on the past to reasoning about the present and acting with confidence, while staying compliant by design.
If you were advising a bank CEO today, what would you say is the single biggest mistake they're making with data and AI?
I'd say the biggest mistake is believing AI will make decisions safer by default. It won't.
What most bank CEOs actually worry about isn't technology; it's making the wrong call because different leaders bring different answers into the room, or because the numbers arrive too late to matter.
AI doesn't resolve that uncertainty; it just exposes it faster and more visibly. If your executive team can't give you one clear, trusted answer today, AI will simply produce ten versions of it tomorrow. That's why so many AI initiatives stall.
The hard truth is that if your organisation can't agree on what its core business metrics actually mean today, AI will automate that disagreement tomorrow. That is a leadership issue, not a technology failure.
What's one AI or data capability banks should prioritise in the next 12–18 months, and why?
One thing banks should prioritise in the next year is shared, governed meaning. Not another dashboard. Not another model. Everything else is noise.
One global payments and financial services organisation we work with recently launched an intelligence platform that brings real-time portfolio insights, benchmarks, and cardholder behaviour into a single place for issuers and internal teams.
The real value it produced wasn't flashy visualisations; it was consistency. Everyone was looking at the same performance and risk indicators, updated in near real-time, without rebuilding reports or reconciling numbers across teams. That consistency is what allows insight to scale across markets, products, and decisions without adding complexity every time.
Where do you see banks overestimating AI, and where are they underestimating it?
Banks overestimate AI's ability to "figure things out" on its own. LLMs don't understand your business by default. They don't know your regulatory constraints, internal definitions, or risk appetite unless you explicitly give them that context.
At the same time, banks massively underestimate what happens once that context exists. When AI can reason over governed, real-time data, it stops being a chatbot or analyst assistant and becomes a decision layer. That's a very different level of impact, and a very different level of responsibility.
What does "good" actually look like when AI and data are working well inside a bank?
"Good" is boring, in the best possible way.
It's when people stop questioning numbers in meetings. When regulators get clear explanations instead of hand-waving. When AI shows up inside credit decisions, fraud reviews, and pricing workflows, not just in slide decks.
Architecturally, it also means AI systems aren't scraping dashboards or hardcoding logic. They're accessing a governed source of truth through clean interfaces. When that's in place, speed and trust stop being a trade-off - you get both.
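To make that architectural point a little more concrete, here is a minimal, hypothetical Python sketch. The names (SemanticLayer, Metric, approval_rate) are illustrative assumptions, not GoodData's actual API: the idea is simply that a metric is defined once in a governed layer, and both a human-facing report and an AI agent resolve it through the same interface instead of each re-implementing or scraping the logic.

```python
# Hypothetical sketch of a governed semantic layer: metric logic is
# defined once and queried by every consumer. Names are illustrative only.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass(frozen=True)
class Metric:
    name: str
    description: str
    compute: Callable[[List[dict]], float]  # the governed definition lives here


class SemanticLayer:
    """Single source of governed business meaning."""

    def __init__(self) -> None:
        self._metrics: Dict[str, Metric] = {}

    def register(self, metric: Metric) -> None:
        self._metrics[metric.name] = metric

    def evaluate(self, name: str, rows: List[dict]) -> float:
        return self._metrics[name].compute(rows)


# One governed definition of "approval_rate", shared by every consumer.
layer = SemanticLayer()
layer.register(Metric(
    name="approval_rate",
    description="Approved applications divided by total applications",
    compute=lambda rows: sum(r["approved"] for r in rows) / len(rows),
))

applications = [{"approved": 1}, {"approved": 0}, {"approved": 1}]

# A human report and an AI agent both call the same interface,
# so they cannot disagree on what the metric means.
report_value = layer.evaluate("approval_rate", applications)
agent_value = layer.evaluate("approval_rate", applications)
assert report_value == agent_value
print(f"approval_rate = {report_value:.2%}")
```

The specific implementation is beside the point; what matters is that the definition sits behind one governed interface rather than being copied into each dashboard, notebook, or agent prompt.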
What's the hardest AI or data decision bank executives are avoiding right now, and why?
They're avoiding the question of who owns meaning.
Everyone wants AI, but no one wants to own definitions. IT owns infrastructure, risk owns controls, finance owns numbers (sometimes), and product teams own customer views (until they don't). AI forces this ambiguity into the open, and that makes people uncomfortable.
But there's no way around it: intelligence doesn't scale without shared meaning.
Banks that confront this head-on by establishing clear, governed ownership of business semantics will move faster and safer. Banks that don't will keep running pilots and calling it progress.
Many thanks to Roman Stanek for sharing his insights. You can learn more about GoodData on their website.