My Advice to Banks on AI: Emmanouil Gavriil of Hack The Box
VP of Labs at Hack The Box shares practical advice for bank executives on AI security, penetration testing, and why speed matters more than intelligence for attackers.
I spoke with Emmanouil (Manos) Gavriil, VP of Labs at Hack The Box, the leading AI-powered cybersecurity readiness and upskilling platform with over 4 million members globally. With more than 20 years in cybersecurity spanning offensive security, incident response, and leading both offensive and defensive teams, Manos brings a unique perspective on what banks need to get right with AI. He shares practical guidance on treating AI as a risk addition rather than a technology upgrade, and why continuous penetration testing must become the norm.
Over to you Manos - my questions are in bold:
Can you give us an introduction to you and an overview of your organisation?
I'm Emmanouil (Manos) Gavriil, and I am the VP of Labs at Hack The Box, where I've been for almost five years. My team and I design and operate large-scale, realistic environments where security teams can practice against real-world threats. These environments replicate everything from phishing and ransomware to APT attacks, across domains like AI, OT, SCADA and cloud.
I've been working in cybersecurity for over 20 years, starting with a background in offensive security. Over time I expanded into security operations, incident response, threat detection, digital forensics and leading both offensive and defensive teams as a Director in an MSSP. That combination of perspectives shapes how I look at modern cyber readiness today.
Hack The Box itself is now a global cybersecurity training and talent platform. We are trusted by enterprises, governments, financial institutions and universities to help build and continuously develop skilled cybersecurity teams. We do this through hands-on labs, unique gamification models, and structured learning through our Academy. For banks specifically, the focus is on operational readiness—helping teams be genuinely prepared to defend against real attacks, not just on paper.
If you were advising a bank CEO today, what would you say is the single biggest mistake they're making with data and AI?
The biggest mistake I see is that bank CEOs tend to deploy AI on top of existing data, thinking of it as a technology upgrade. In reality, it changes the entire threat model and should be viewed as a risk addition. When AI is introduced without re-evaluating how sensitive information is processed, accessed and exposed, banks create new attack paths they haven't fully accounted for.
The real question is how much you can trust an AI model and whether your teams are properly equipped to protect it. AI must be treated as a product, not a project. It requires an operating model where product, engineering and domain experts work together continuously, embedded into business units rather than working in isolation. Without that mindset, AI becomes something you deploy but cannot control.
What's one AI or data capability banks should prioritise in the next 12–18 months, and why?
The very first thing banks should prioritise, immediately and continuously, is penetration testing of their AI systems. Not as a one-off audit but as ongoing security testing that evolves as the systems evolve. You can outsource some of it, of course, but banks must also train internal people to properly test and challenge their own AI.
Without that internal capability, you only ever see part of the picture. Continuous AI-focused penetration testing is the only way to ensure these systems don't quietly accumulate risk.
Where do you see banks overestimating AI, and where are they underestimating it?
Banks tend to overestimate AI in two main ways. First, they believe it can replace humans. AI is powerful, but it lacks context, judgement, and strategic understanding. Roles as complex as those inside a bank, especially leadership roles, can't simply be automated away. Second, there is still a belief that AI is a "set it and forget it" solution. Install it, turn it on and let it run. In reality, humans are firmly in the loop and AI introduces new configuration, oversight and security tasks that require more human involvement, not less.
Where banks underestimate AI is in the skills required to secure it. Many assume that a security administrator or penetration tester automatically knows how to handle AI systems. But unless someone has had proper training for AI-oriented roles, they will miss critical issues.
The other underestimated aspect is the offensive side - what attackers can do with AI. The speed at which AI can operate fundamentally changes the defender's challenge. Attackers don't necessarily need more intelligence, just more speed, and AI gives them that.
What does "good" actually look like when AI and data are working well inside a bank?
Good looks like treating AI like a real production system. That means testing it properly before it goes live, involving security teams at the design phase rather than at the end and resisting the urge to deploy too quickly just because of hype. When security, engineering, risk and data teams collaborate early and continuously, AI becomes a system that enhances the bank rather than exposing it.
In practice, "good" means having AI systems that support staff rather than replace them. For example, by summarising complex, unstructured customer information so credit risk teams can make decisions faster and more accurately. When AI is embedded thoughtfully into existing workflows, the improvement is clear without compromising safety.
What's the hardest AI or data decision bank executives are avoiding right now, and why?
The hardest decision is accepting that moving fast with AI can introduce real failures. There are tremendous pressure and hype to adopt AI quickly, but banks operate in an environment where the impact of mistakes is enormous. Executives must decide whether to take on that risk, to accept it knowingly, or to slow down and be more cautious.
There isn't a right or wrong choice here, but it is a genuinely difficult one. Every bank has a different tolerance for risk, and the decision must be based on a calculated assessment rather than fear of missing out. Sometimes the correct approach for a bank is to deploy AI more slowly because the security consequences of getting it wrong are much larger than in other sectors.
Anything else?
Education is the core theme underpinning all of this. If stakeholders aren't trained and informed, they can't anticipate the risks or understand how AI will affect their environment. With AI evolving so quickly, a one- or two-year plan becomes irrelevant almost immediately. Banks must think in cycles of months, not years.
Continuous training is essential for both offensive and defensive security teams. New threats, new techniques and new attack paths appear constantly, so readiness is never a static concept. And as organisations integrate AI, the complexity of processes increases - not because the technology is too advanced but because we don't yet fully understand it. That's why it's vital to understand where AI fits inside the bank and how it should be integrated safely.
Thank you Manos! You can connect with Manos on his LinkedIn Profile and find out more about the company at www.hackthebox.com.