My Advice to Banks on AI: Dan Kemp of Portfolio Thinking

Dan Kemp, founder of Portfolio Thinking and former CIO at Morningstar, shares why banks should treat AI as a mirror, not an oracle, and how to use it to protect client wealth.

I spoke with Dan Kemp, founder of Portfolio Thinking, an AI-enabled independent investment consultancy and research platform. Before launching Portfolio Thinking, Dan served as Chief Research & Investment Officer at Morningstar, overseeing circa $350 billion and leading over 300 analysts and portfolio managers globally. He shares his practical advice on how banks should - and shouldn't - be deploying AI and data in wealth management.

Over to you, Dan - my questions are in bold:


Can you give us an introduction to you and an overview of your organisation?

I am the founder of Portfolio Thinking, an AI-enabled independent investment consultancy and research platform. Before launching this business, I served as the Chief Research & Investment Officer at Morningstar, overseeing circa $350 billion and a team of over 300 analysts and portfolio managers around the world. One of my key areas of focus during my time at Morningstar was the application of behavioural science to the research and investment process. This focus on behaviour was born out of my time as a financial adviser, helping families navigate the chaos of markets as they planned for their future.

Portfolio Thinking is the crystallisation of that experience. My work is now dedicated to equipping advisers and institutional investors with the mental models and behavioural insights needed to bridge the gap between investment returns (what the market does) and investor results (what people actually keep).

If you were advising a bank CEO today, what would you say is the single biggest mistake they're making with data and AI?

There are clearly many aspects of banking with varying use cases for AI, but if we focus on the wealth management area, then the biggest mistake is treating AI as an oracle when it is, in fact, a mirror.

Executives appear to be dazzled by the "intelligence" of these systems, attempting to apply them to every stage of the investment process, from view formation to portfolio construction. However, because Large Language Models are trained on the vast corpus of human text, they inevitably reflect and amplify the cognitive biases that plague human decision-making.

AI has a tendency to provide the obvious consensus answer with supreme confidence. In financial markets, where the consensus view is often already priced in, this is a bug rather than a benefit. The mistake lies in using a probability engine for words to try to solve for investment outcomes. CEOs are ignoring the fact that we already have robust tools for valuation and asset allocation, and by forcing AI into this role, they miss its true value: the creation of time.

What's one AI or data capability banks should prioritise in the next 12–18 months, and why?

I would recommend prioritising the use of AI as "Behavioural Circuit Breakers."

We know that wealth is rarely destroyed by markets themselves; it is destroyed by the decisions investors make during market stress. Banks possess a treasure trove of data regarding how their clients react to volatility; this can be used to predict what the client is about to do next.

Imagine a system that detects when a client is obsessively checking their portfolio during a downturn (a typical precursor to panic selling) and triggers a stewardship intervention, perhaps prompting a call from their adviser or serving content that reinforces the long-term plan. The priority must be using technology to insert a pause between the emotional impulse and the financial action.

Where do you see banks overestimating AI, and where are they underestimating it?

Overestimating: They are vastly overestimating AI as a Crystal Ball. There is a seductive narrative that if we feed enough data into a large language model, it will act like Laplace's Demon and predict future stock prices or economic turns. This ignores the reality that markets are complex adaptive systems driven by human psychology, not just physics. AI cannot predict the "unknown unknowns."

Underestimating: They are underestimating AI as a Process Architect. We often view investment discipline as a human trait, yet humans are famously ill-disciplined. We get tired, hungry, and biased. Banks are underestimating the power of AI to act as an "investment exoskeleton" for advisers, automating the drudgery of rebalancing, tax-loss harvesting, and compliance. If AI handles the science of implementation, it frees the adviser to focus entirely on the art of empathy and coaching.

What does "good" actually look like when AI and data are working well inside a bank?

"Good" looks like noise-cancelling headphones. In a poor system, AI acts as a megaphone, amplifying every market tick. In a "good" system, the technology filters out the frequencies that are irrelevant to the client's long-term goals.

When data is working well, an adviser's dashboard shouldn't be flashing red or green based on yesterday's market moves. It should be quiet. It should only alert them when a portfolio has drifted beyond its parameters or when a client's behaviour indicates they are at risk of abandoning their plan. Success isn't measured by the volume of insights generated, but by the size of the "Behavioural Gap": the difference between the model portfolio's return and the client's actual return. If technology narrows that gap by keeping the client in their seat, that is what "good" looks like.

What's the hardest AI or data decision bank executives are avoiding right now, and why?

The hardest decision is the choice to introduce friction.

We live in an era of "frictionless finance", one-click trades and gamified interfaces. The entire tech industry is built on removing friction. But as stewards of capital, we know that some friction is essential for prudent decision-making.

Executives are avoiding the decision to intentionally slow down the user experience. They are afraid to design an interface that says, "Wait. Before you execute this trade, let's review how this aligns with your 10-year objective." They avoid this because it hurts short-term engagement metrics and feels counter-intuitive to the "digital transformation" mandate. However, the bravest leaders will realise that to protect their clients' wealth, they need to use AI to build speed bumps, not just superhighways.


Thank you, Dan! You can connect with Dan on his LinkedIn profile and find out more about the company at www.portfolio-thinking.com.