My Advice to Banks on AI: Dmitry Borodin of Creditinfo

Dmitry Borodin, Head of Decision Analytics at Creditinfo, shares practical advice for bank executives on avoiding common AI pitfalls and prioritising high-impact use cases.


I spoke with Dmitry Borodin, Head of Decision Analytics at Creditinfo, who leads a team of credit risk professionals across eight countries. Dmitry shares his practical advice for bank executives navigating AI and data strategy, from avoiding costly mistakes to identifying the capabilities that matter most.

Over to you, Dmitry. My questions are in bold:


Can you give us an introduction to you and an overview of your organisation?

I am Head of Decision Analytics at Creditinfo where I lead a team of credit risk professionals across eight countries, overseeing the development and maintenance of risk products and credit bureau scorecards in 16 markets.

Creditinfo was founded in 1997 and provides credit information and risk management services internationally. We work with lenders and public-sector bodies to support responsible access to finance through data, software, and analytics. We operate a global network of more than 30 credit bureaus. Our work focuses on improving credit decision-making, expanding access to credit, and boosting financial inclusion for SMEs and individuals.

If you were advising a bank CEO today, what would you say is the single biggest mistake they're making with data and AI?

The biggest mistake is a lack of focus on high-impact use cases where banks already have plenty of clean, interconnected data. Instead of prioritising quick wins, some organisations get stuck in endless projects that drain time and resources without delivering P&L impact.

For example, a bank might spend millions trying to build a generative AI "financial concierge". This is a complex, high-risk initiative that may quickly become outdated. Banks shouldn't overlook the less 'shiny' options, such as a simpler machine learning model for transaction categorisation. This kind of model can deliver immediate uplift, improving mobile app engagement and credit risk scoring at a fraction of the cost.
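To illustrate how lightweight such a transaction-categorisation model can be, here is a minimal naive Bayes text classifier in pure Python. The training data and categories are invented for the example; a production system would use a proper ML library and far more data:

```python
import math
from collections import Counter, defaultdict

# Toy labelled transaction descriptions; in practice these come from
# bank statement feeds with human- or rule-assigned categories.
TRAIN = [
    ("tesco supermarket groceries", "groceries"),
    ("sainsburys local store", "groceries"),
    ("uber trip city centre", "transport"),
    ("trainline ticket london", "transport"),
    ("netflix monthly subscription", "entertainment"),
    ("spotify premium plan", "entertainment"),
]

def train_nb(data):
    """Fit a multinomial naive Bayes model: word counts per category."""
    word_counts = defaultdict(Counter)
    cat_counts = Counter()
    vocab = set()
    for text, cat in data:
        cat_counts[cat] += 1
        for word in text.split():
            word_counts[cat][word] += 1
            vocab.add(word)
    return word_counts, cat_counts, vocab

def categorise(text, word_counts, cat_counts, vocab):
    """Return the most probable category, with add-one smoothing."""
    total = sum(cat_counts.values())
    best_cat, best_score = None, float("-inf")
    for cat in cat_counts:
        score = math.log(cat_counts[cat] / total)
        denom = sum(word_counts[cat].values()) + len(vocab)
        for word in text.split():
            score += math.log((word_counts[cat][word] + 1) / denom)
        if score > best_score:
            best_cat, best_score = cat, score
    return best_cat

model = train_nb(TRAIN)
print(categorise("uber ride to airport", *model))  # transport
```

Even this toy version makes the point: the technique is decades old, cheap to prototype, and its output plugs directly into engagement features and risk models.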

How can this be avoided? And where should you start with your first AI/ML model?

As with anything, don't bite off more than you can chew. I would always recommend starting with small, focused prototypes. Assess uplift and implement through a series of targeted research projects designed to quickly evaluate the value of an AI model and deliver early results. If the value cannot be demonstrated within weeks, understand why and move on.
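One way to make the "assess uplift" step concrete is to score a challenger model against the incumbent on the same holdout sample. A minimal sketch, with invented scores and labels, using AUC as the comparison metric:

```python
def auc(scores, labels):
    """Area under the ROC curve via pairwise comparison.

    O(pos * neg), which is fine for small prototype samples.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Invented holdout data: 1 = default, 0 = good.
labels     = [1, 0, 1, 0, 1, 0, 0, 1]
champion   = [0.9, 0.4, 0.6, 0.5, 0.7, 0.3, 0.6, 0.5]
challenger = [0.95, 0.2, 0.8, 0.3, 0.9, 0.1, 0.4, 0.7]

uplift = auc(challenger, labels) - auc(champion, labels)
print(f"AUC uplift: {uplift:.3f}")  # AUC uplift: 0.125
```

If a few weeks of work cannot produce a measurable uplift on a chart like this, that is the signal to understand why and move on.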

What's one AI or data capability banks should prioritise in the next 12–18 months, and why?

Customers will increasingly challenge banks on speed and affordability; if they do not receive instant decisions and relevant offers, they go elsewhere. After years of tight monetary policy, the pause and gradual reduction in interest rates have already boosted loan demand and will continue to do so.

As a result, consumers will aggressively seek cheaper credit. Banks need more efficient decision-making and accurate risk-based pricing to remain competitive, attract new customers and retain existing ones.

Where do you see banks overestimating AI, and where are they underestimating it?

Human strengths such as empathy, judgement, and contextual understanding remain critical for high-stakes interactions. Fully autonomous systems still face severe limitations in complex scenarios. Therefore, financial institutions should focus on systems where AI can amplify human output rather than replace it. For example:

  • Analysts can use LLMs to draft credit assessments, which experts then finalise.
  • Compliance officers can use AI to flag suspicious transactions, which are then reviewed by humans applying their expertise and contextual knowledge before escalation.
  • Advisors can use AI to generate five potential portfolio strategies based on market data, but the advisor selects the one that best fits the client's risk tolerance and long-term goals.
Banks should also define clear processes for when human validation is required and maintain strong model management practices that continuously monitor and challenge models in use.
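The review workflows above can be sketched as a simple triage rule: the model scores every transaction, but only extreme scores are handled automatically, and everything in between is routed to a human reviewer. All thresholds and names here are illustrative:

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    tx_id: str
    amount: float
    risk_score: float  # output of an AML/fraud model, in [0, 1]

def triage(txs, auto_clear=0.2, auto_escalate=0.95):
    """Split transactions into cleared / human-review / escalated queues.

    Thresholds are illustrative: only clearly benign or clearly risky
    cases bypass the human; the ambiguous middle always gets reviewed.
    """
    cleared, review, escalated = [], [], []
    for tx in txs:
        if tx.risk_score < auto_clear:
            cleared.append(tx)
        elif tx.risk_score >= auto_escalate:
            escalated.append(tx)
        else:
            review.append(tx)
    return cleared, review, escalated

txs = [Transaction("t1", 50.0, 0.05),
       Transaction("t2", 9000.0, 0.60),
       Transaction("t3", 25000.0, 0.98)]
cleared, review, escalated = triage(txs)
print(len(cleared), len(review), len(escalated))  # 1 1 1
```

The design choice is the point: the thresholds themselves are a governed policy decision, reviewed and challenged as part of model management, not something the model sets for itself.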

What does "good" actually look like when AI and data are working well inside a bank?

"Good" looks like explainable efficiency. It is not only about automation; it is about having a working interaction between data, model, and human oversight.

AI is a powerful asset, but its real impact depends on how it is used. AI-driven credit scoring, for example, can introduce bias if trained on historical data without any supervision and negatively affect financial inclusion. With proper human oversight, this risk can be managed.
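One simple form the human oversight described here can take is monitoring approval rates across groups, for example against the four-fifths rule commonly used in disparate-impact testing. A toy sketch with invented decision data (group labels and counts are illustrative):

```python
def approval_rates(decisions):
    """Approval rate per group from (group, approved) pairs."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def four_fifths_check(rates):
    """Flag any group whose approval rate is below 80% of the best group's."""
    best = max(rates.values())
    return {g: r / best >= 0.8 for g, r in rates.items()}

# Invented outcomes: group A approved 8/10, group B approved 5/10.
decisions = [("A", True)] * 8 + [("A", False)] * 2 \
          + [("B", True)] * 5 + [("B", False)] * 5

rates = approval_rates(decisions)
print(four_fifths_check(rates))  # {'A': True, 'B': False}
```

A failing check like this does not prove the model is biased, but it is exactly the kind of signal that should trigger the human review described above.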

The most effective approach is a balanced one. Combining AI efficiency and pattern recognition with human judgement leads to more reliable outcomes. AI should strengthen financial decision-making, but it shouldn't replace the human judgement needed to interpret results, challenge conclusions, and ensure fairness.

Data quality is equally critical. Poor-quality data leads to bad predictions and can amplify bias, while strong data foundations enable stable, high-performing models.

What's the hardest AI or data decision bank executives are avoiding right now, and why?

The biggest issue is the "last mile" problem: reengineering business processes to efficiently use the model's output.

Many organisations adopt new techniques but fail to build operational frameworks around them. As a result, even strong models remain underused. For example, a powerful AI model is useless if front-line staff are not empowered or equipped to act on its recommendations.

Smart model-driven strategies and well-defined decision processes are essential to fully realise the value of AI.
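A minimal illustration of such a model-driven decision process is mapping a risk score into operational bands that front-line staff and systems can act on. The cutoffs below are purely illustrative; in practice they come from strategy analysis of expected loss, approval-rate targets, and regulatory constraints:

```python
def decide(score, cutoffs=(0.3, 0.7)):
    """Map a model's default-risk score into an operational decision.

    Illustrative bands only: real cutoffs are set and governed by the
    business, not hard-coded.
    """
    low, high = cutoffs
    if score < low:
        return "approve"   # low risk: straight-through processing
    if score < high:
        return "refer"     # medium risk: route to an underwriter
    return "decline"       # high risk: automatic decline

print([decide(s) for s in (0.1, 0.5, 0.9)])
# ['approve', 'refer', 'decline']
```

This is the "last mile" in miniature: the model produces a score, but the value is only captured once a defined process turns that score into an action someone is empowered to take.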


Thank you Dmitry! You can connect with Dmitry on his LinkedIn Profile and find out more about the company at creditinfo.com.