My Advice to Banks on AI: Robert Cooke of 3forge

Robert Cooke, CEO of 3forge, shares practical advice for bank executives on AI strategy, warning against architectural inconsistency and highlighting the need for governed runtimes.


I spoke with Robert Cooke, Chairman and Chief Executive Officer of 3forge, an enterprise software company delivering high-performance tools for real-time data integration, visualisation, and application development. With over two decades modernising mission-critical systems at JPMorgan, Bear Stearns, and Liquidnet, Robert shares practical advice for bank executives navigating AI and data strategy.

Over to you, Robert - my questions are in bold:


Can you give us an introduction to you and an overview of your organisation?

I'm Robert Cooke, Founder and CEO of 3forge. I've spent more than two decades designing and modernising mission-critical systems inside financial institutions, including middle-office workflow optimisation at JPMorgan, regulatory and high-frequency trading infrastructure at Bear Stearns, and post-trade processing and transaction cost analysis at Liquidnet.

Those environments teach you something fundamental: financial markets do not tolerate instability. Systems must remain live, controls must remain intact, and architecture matters.

Over the past fifty years, capital markets technology has layered forward. Mainframes were wrapped instead of replaced, client-server extended interfaces, Java abstracted deployment, FIX standardised trading communication, and web GUIs pushed controlled functionality closer to users. Each advancement built on what already worked, but finance never standardised the application layer itself into a coherent environment where data access, business logic, user interface, and governance controls live together.

Instead, financial institutions accumulated stacks of disconnected databases, streaming engines, middleware, UI frameworks, and entitlement systems. But this wasn't an irrational decision. It's the natural outcome of regulation, M&As, asset-class specialisation, and years of urgent delivery. That said, without a unified foundation, innovation inevitably becomes more difficult and costly.

3forge was built to address that very structural friction. We provide an application engine purpose-built for finance – a unified runtime where live data access, a finance‑native development language, a workbench for building and operating applications, and built-in governance and observability operate together. This system design allows banks and other financial institutions to modernise by layering forward, shipping change continuously while mission-critical systems remain online.

If you were advising a bank CEO today, what would you say is the single biggest mistake they're making with data and AI?

Right now, enormous enthusiasm and budget are flowing into all things AI. But the biggest mistake is treating AI as if it can compensate for architectural inconsistency and expecting ROI before the foundation exists to support and deliver it. After all, the market will always reward outcomes over effort. Banking outcomes mean measurable cycle-time reduction, fewer breaks, faster delivery, safer automation, and better controls. If AI cannot survive compliance review, be audited end‑to‑end, and be repeated deterministically, it won't scale, and it certainly won't deliver returns.

What I often see is AI being layered on as a set of experiments: data copied into side environments, models wired into brittle interfaces, and governance reinvented each time. This process increases operational complexity faster than capability, leaving banks with a growing portfolio of demos and a dwindling path to production.

The more durable approach is architectural discipline. Virtualise access to legacy systems to stop rewiring core infrastructure for every new idea. Put a governed control plane in front of AI so entitlements, audit trails, and kill‑switch controls are consistent. Lastly, enable AI to operate inside a trusted runtime rather than outside it.

AI doesn't remove software gravity, but it does operate within it. ROI depends on whether you reduce that gravity or add to it.

What's one AI or data capability banks should prioritise in the next 12–18 months, and why?

Banks should prioritise building a unified application runtime that turns AI spend into production outcomes, rather than merely creating a new data repository or model initiative. Most financial institutions already possess data platforms with streaming capabilities and ample storage, but these platforms lack cohesion.

If AI is introduced into an environment without architectural discipline, it will not simplify the stack; it will amplify fragmentation. Given free rein to assemble applications, AI will stitch together dependencies across systems, generate new integration paths, and create artefacts that still require compliance review. The result is not acceleration, but accelerated technical debt.

Finance operates under four simultaneous requirements:

  • Unified access to streaming, historical, and legacy data under existing controls
  • Interfaces that evolve at business cadence
  • Finance‑native development language without fragile bridge code
  • The ability to build and deploy whilst systems remain active

From an AI perspective, that unified runtime becomes the vessel that lets banks deploy AI responsibly and repeatedly. In practice, it looks like a three‑part progression.

First, virtualised legacy access: a real‑time abstraction layer that makes legacy "invisible but reliable," so new AI and analytics workflows don't keep punching holes into fragile systems.

Second, a governed AI gateway: a single control plane where agents inherit the same permissions as users, activity is logged and reproducible, and access is controllable and killable.

Third, AI‑native development inside trusted guardrails: using coding agents to generate layouts, workflows, and business logic inside the platform's native grammar, so what AI produces is auditable, permission‑aware, and operationally safe.
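A minimal sketch of the first step, virtualised legacy access (all names hypothetical, assuming a simple adapter pattern): a facade exposes one query interface, and each streaming, historical, or legacy source sits behind a registered adapter, so swapping a backend changes one adapter rather than every downstream workflow:

```python
class VirtualDataLayer:
    """Hypothetical facade: one query interface over streaming, historical,
    and legacy sources, so callers never wire into fragile systems directly."""

    def __init__(self):
        self._sources = {}  # logical dataset name -> fetch function (adapter)

    def register(self, name, fetch_fn):
        """Attach an adapter for one backend under a logical name."""
        self._sources[name] = fetch_fn

    def query(self, name, **filters):
        """Route a query to the right adapter; callers see only logical names."""
        if name not in self._sources:
            raise KeyError(f"unknown dataset: {name}")
        return self._sources[name](**filters)


# A toy adapter standing in for a legacy trade store; a real one would
# translate filters into that system's native query protocol.
vdl = VirtualDataLayer()
vdl.register(
    "trades",
    lambda **f: [
        t
        for t in [{"id": 1, "desk": "fx"}, {"id": 2, "desk": "rates"}]
        if all(t.get(k) == v for k, v in f.items())
    ],
)
```

With this shape, an AI workflow asks for `vdl.query("trades", desk="fx")` and never learns, or depends on, which mainframe or streaming engine actually answered.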

The result? "We tried AI" becomes "AI is now part of how we run the bank." New initiatives require returns, which require compounding deployments. Start with bounded domains (reconciliation, exception management, and client analytics), and then scale outward as each release strengthens the architecture.

Where do you see banks overestimating AI, and where are they underestimating it?

Banks overestimate AI's ability to reconcile fragmented systems. Intelligence does not eliminate structural inconsistency. If data is disjointed, entitlements are uneven, and workflows are brittle, AI will naturally reflect those conditions.

At the same time, banks underestimate AI's operational implications. We are moving from passive analytical models toward agentic systems that invoke tools, execute logic, and persist across workflows. As new interface standards connect and embed agents in enterprise systems, the temptation will be to treat those connections as privileged exceptions. In regulated markets like financial services, that creates backdoor access routes that bypass durable governance and outlive the pilot that created them. The correct approach is the opposite: subordinate every new access path to the same entitlement and audit framework that already governs dashboards, APIs, reports, and human users.

Further, financial institutions underestimate just how much AI will change software creation itself. The latest code‑generation models are not just helping developers write snippets; they're shifting expectations around how quickly internal applications can be produced. These advances can be daunting, and AI will disrupt first the traditional software companies whose primary value lies in writing code.

For an application engine, AI is not a threat, but a multiplier. With a governed runtime and clear grammar for data, logic, and UI, AI accelerates delivery from inside an engine without dissolving controls.

What does "good" actually look like when AI and data are working well inside a bank?

I've mentioned all of these phrases already, but "good" is coherent, continuous, and production-ready.

Front office, risk, and compliance should be sharing a governed, real-time view of the same underlying data (streaming and historical), without copying sensitive datasets into uncontrolled silos. Interfaces must remain live and responsive under high volume, business rules must be defined once and reused across workflows, and observability must include lineage, latency, entitlements, and workflow integrity end-to-end. Lastly, controls are embedded in the delivery process rather than attached afterwards.

In that environment, AI operates inside workflows rather than beside them. It assists with reconciliation, surveillance triage, transaction cost analysis, exception management, and decision support without requiring separate data silos or bespoke control overlays.

As AI-native development matures, "good" will start to mean AI generates parts of the application surface, such as layouts, workflows, and business logic, inside a trusted runtime. In finance, speed only counts if it's safe.

What's the hardest AI or data decision bank executives are avoiding right now, and why?

Bank executives are avoiding the decision to standardise the foundational layer of internal software delivery. Every strategic initiative in banking requires new applications: new risk views, reporting workflows, and operational tooling. If each initiative assembles its own integration stack, complexity accumulates, delivery slows, and governance diffuses. We like to use the phrase "pilot purgatory" to describe this situation: plenty of promising experiments, but no compounding capability.

Standardising on an application engine is a structural decision. It requires cross-silo alignment and commitment to embedding governance at the platform level. Whilst it likely won't produce immediate headlines, it will transform the economics of delivery.

The ROI point is simple, yet uncomfortable. It's impossible to achieve durable returns from AI if it cannot move into production safely. Production means entitlement‑aware access, auditability, determinism, and operational control. AI-native development makes this even more important: when machines begin producing more of the code and workflows, the runtime they operate in becomes the control plane. Without a governed vessel for that capability, banks won't just fail to capture ROI, they'll expand risk. In financial markets, sound architecture is what turns innovation into operations, and operations are what produce returns.


Thank you Robert! You can connect with Robert on his LinkedIn Profile and find out more about the company at 3forge.com.