
How AI can boost defenders, from defense in depth to the cyber kill chain (Q&A)

February 12, 2026
Seth Rosenblatt

Security Editor, Google Cloud


It's no secret that AI is driving radical shifts across industries and civil society — including how we think about cybersecurity. Threat actors are already abusing the technology, including experimenting with automating attacks, but do they have the upper hand, and if so, for how long? We know that defenders are also developing powerful AI tools, but what's still unknown is what it could mean for enterprise software ownership if companies have to constantly mount AI-directed defenses against AI-powered attacks.

These are just some of the questions Google Cloud's Anton Chuvakin and Timothy Peacock raised with public-interest technologist and author Bruce Schneier during a recent episode of the Google Cloud Security podcast.

Coming off the publication of his new book, Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship, co-authored with Nathan E. Sanders, Schneier also shared thoughts on AI and trust, on whether AI concentrates or distributes political power, and even whether the technology should be used to help cities zone themselves.

What follows is an edited transcript of their conversation.

Anton Chuvakin: At Google, we have long talked about how we believe that AI should be developed boldly and responsibly. Last year, we introduced Big Sleep, an AI agent developed by Google DeepMind and Google Project Zero that actively searches for and finds previously unknown software vulnerabilities. Do you think AI will end up being a net advantage for defenders or attackers?


Bruce Schneier: I think it's going to be an arms race for several years. For example, there's been a lot of research about using AI to find vulnerabilities in code. And now we're seeing attackers doing just that, as well as using AI to build exploits. But defenders have access to powerful automation, too, and will be able to use it to fend off the attacks.

The imbalance is that when the defender fixes a vulnerability, it's gone forever. But I imagine a future where we're embedding AI vulnerability finders in compilers and into the development process. To a great extent, we'll be producing bug-free code.
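
Schneier doesn't spell out an implementation, but one way to picture embedding an AI vulnerability finder into the development process is as a pre-merge gate. The sketch below is illustrative only: review_diff_for_vulnerabilities() is a hypothetical stand-in for a call to an AI code-scanning service, and the rest simply gathers the pending change and blocks the merge if findings come back.

```python
"""A minimal sketch of an AI vulnerability check as a pre-merge gate.

Assumptions: the repository is a git checkout, and review_diff_for_vulnerabilities()
is a placeholder for an AI code-scanning service (not a real API).
"""
import subprocess
import sys


def get_pending_diff() -> str:
    # Collect the staged changes that are about to be committed or merged.
    result = subprocess.run(
        ["git", "diff", "--cached"], capture_output=True, text=True, check=True
    )
    return result.stdout


def review_diff_for_vulnerabilities(diff: str) -> list[str]:
    # Placeholder: a real system would send the diff to an AI vulnerability
    # finder and parse its findings into human-readable descriptions.
    return []


def main() -> int:
    diff = get_pending_diff()
    if not diff:
        return 0
    findings = review_diff_for_vulnerabilities(diff)
    for finding in findings:
        print(f"potential vulnerability: {finding}")
    # A nonzero exit code blocks the merge until the findings are addressed.
    return 1 if findings else 0


if __name__ == "__main__":
    sys.exit(main())
```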

In the short term, though, attackers benefit more because existing legacy code is vulnerable. Attackers are also more agile than defenders because they have no bureaucracy or procurement, so we're seeing the rise of automated attacks and ultra-powerful script kiddies through AI tools.

While we haven't seen good defensive AI tools or tactics yet, that's going to change. My long-term bet is that AI will offer more net benefit to defenders because attackers are already attacking at computer speeds and defenders will eventually be able to match them — but getting there is going to be a rocky road.

Anton Chuvakin: Are people more willing to be compromised than to adopt expensive processes to defend at the attackers' speed?

Bruce Schneier: That's an economic problem, not a security problem. That's the problem of it being cheaper to be vulnerable than to be secure. You fix that through the market.

Can defenders deploy automated tools to patch vulnerabilities and remove exploits? It's going to take a lot of work. For example, if a company buys a big piece of software and has to modify it every other day based on what its AIs say, the vendor won't like that. We'll have to rethink how software is sold, and what owning and modifying it looks like.

I suspect we'll move to a world with equally automated defense and patching and with systems that are constantly monitoring, hacking, updating, and patching themselves.

Anton Chuvakin: People think getting hacked is cheaper than paying for defenses that are strong enough.

Bruce Schneier: That's right, because it's someone else's problem, it's long term, and because companies care about their next-quarter revenue numbers. If we internalize the true cost of security breaches, it's no longer cheaper. It's cheaper because the costs are broadly spread across society.

Late-stage capitalism is full of these externalized costs and risks. Any system where the upside is centralized and the downside is decentralized has too much risk, because the smart actor can take chances knowing they're not going to have to pay the cost.

Anton Chuvakin: Is defense in depth a good approach for defenders? The more layers they have, the longer it might delay an attacker armed with AI?

Bruce Schneier: Yes, defense in depth, the cyber kill chain, all those ways of thinking about the attack processes tell us where we need interventions. There are seven steps to the cyber kill chain, and I think AI provides a way to improve each one and make defense better. Does that mean attackers never win? Of course not. But that's how we need to start thinking.
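
Schneier doesn't enumerate the stages here, but the seven steps he's referring to are the classic Lockheed Martin cyber kill chain. The sketch below pairs each stage with one plausible AI-assisted defensive intervention; the stage names are standard, while the interventions are illustrative assumptions, not a prescribed toolset.

```python
# Illustrative mapping of the seven cyber kill chain stages to AI-assisted
# defensive interventions. The interventions are assumptions for illustration.
KILL_CHAIN_AI_DEFENSES = {
    "Reconnaissance": "AI-driven detection of scanning and probing in network telemetry",
    "Weaponization": "model-assisted analysis of newly observed malware samples",
    "Delivery": "AI filtering of phishing email and malicious attachments",
    "Exploitation": "automated vulnerability discovery and patching before attackers strike",
    "Installation": "behavioral models that flag anomalous persistence mechanisms",
    "Command and Control": "AI-based detection of beaconing and unusual outbound traffic",
    "Actions on Objectives": "anomaly detection on data access and exfiltration patterns",
}

if __name__ == "__main__":
    for stage, intervention in KILL_CHAIN_AI_DEFENSES.items():
        print(f"{stage}: {intervention}")
```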

Tim Peacock: Do you have any advice to help people get ready for how AI is going to impact society?

Bruce Schneier: Engage. This technology is coming and it's going to change a lot of things. You can't understand how it's going to affect you if you don't regularly engage with it because it changes all the time. Things that weren't possible three months ago are possible today, and things that aren't possible today will be possible in three months.

So keep paying attention. You can say something is bad, but you can't say you won't even study it. You have to have an informed opinion.

Tim Peacock: What do you think is the impact of AI on society's ability to trust things, like the images and videos we see?

Bruce Schneier: I think about that a lot. It goes a bunch of different ways. I worry less about deepfakes and more about the fact that AI can be used to deliberately manipulate people.

The conversational nature of AI makes it harder, right? If I ask an AI to book a vacation for me, is it doing what's best for me? Or is it getting a kickback from some company? It'll be more effectively manipulative, and it'll be harder to disclose that manipulation. That impacts trust a lot.

Also, manipulating imagery and photos is nothing new. What's different now is how easy it's getting, and I don't know what effects that will have. To me, it's not whether we trust an image, it's whether we trust the source. Do I trust the national media? Do I trust a friend on social media? My guess is we'll get there, and I'm already seeing that my students have a very healthy skepticism of anything they see on the internet. I worry about older people who believe something just because it was in the newspaper.

What I don't know is how much AI makes this problem worse, or whether it gets so bad that we question everything.
