In this interview from Google Cloud Next 2026, Morgan Adamski, U.S. cyber, data and technology risk platform leader at PwC, joins Charles Carmakal, chief technology officer of Mandiant Consulting, part of Google Cloud, to talk with theCUBE's John Furrier and co-host Alison Kosik about how AI is simultaneously expanding the enterprise attack surface and equipping defenders with unprecedented capabilities to outpace adversaries. Adamski underscores the core tension enterprise leaders face: adopting AI fast enough to capture efficiency gains without exposing new vectors to threats. She highlights how AI is enabling organizations to finally map the full scope of their networks — visualizing crown jewels, segmenting risk and identifying gaps that were previously invisible. Carmakal argues that defenders currently hold the advantage, noting that AI models — including Gemini — are now uncovering vulnerabilities that have sat undetected in shipped software for years.
The conversation also explores the evolving relationship between public and private sectors in cyber defense. Adamski notes that with 90% of critical infrastructure owned by private industry, intelligence sharing between government and enterprise must shift from transactional exchanges to continuous, real-time dialogue. The discussion then turns to the risks of deploying AI agents at scale, where Carmakal highlights that agent identity management — a hybrid of human and machine behavior — presents a fundamentally new governance challenge. Both guests warn that the rush to build AI applications is outpacing security rigor, with red team exercises routinely exposing basic vulnerabilities in custom enterprise AI tools. From rethinking 30-day patch cycles to preparing workforces for the realities of an agent-augmented environment, Adamski and Carmakal provide a clear-eyed roadmap for CSOs navigating the next 12 months.
Morgan Adamski, PwC & Charles Carmakal, Mandiant Consulting
Morgan Adamski, US Cyber, Data and Technology Risk Platform Leader at PwC, and Charles Carmakal, Chief Technology Officer of Mandiant Consulting, participate in a conversation at Google Cloud Next 2026 that examines artificial intelligence (AI) adoption and evolving cybersecurity risk. The discussion addresses agentic defense, vulnerability discovery, modernization of security operations and governance for enterprise AI deployments.
John Furrier and Alison Kosik of theCUBE Research host the session and guide analysis of how AI and agent fleets change enterprise attack surfaces, the role of threat intelligence and detection, and the implications for security operations. Topics include agent identity, segmentation, vulnerability discovery and public-private collaboration.
Adamski recommends that organizations integrate AI into security from the outset and reimagine security operations rather than adding tools as an afterthought. She emphasizes governance, agent identity management and segmentation as critical factors for enterprise AI deployments. Carmakal stresses that defenders currently hold advantages with AI but must govern agent identities and behaviors to maintain those advantages, underscoring the need for robust governance frameworks and continuous monitoring. Analysts at theCUBE Research highlight accelerated vulnerability discovery and recommend faster patch timelines and enhanced cross-sector information sharing.
Morgan Adamski
US Cyber, Data, and Tech Risk Leader, PwC
Charles Carmakal
Chief Technology Officer, Mandiant Consulting
Keep Exploring
How should organizational leaders balance adopting AI (to improve efficiencies, workflows, and workforce upskilling) with the security risks and cyber-resilience challenges posed by geopolitical dynamics, data sovereignty concerns, and the emergence of AI agents at scale?
How will AI affect organizations' ability to find and remediate security vulnerabilities in their existing and future software?
What are C-suite executives focusing on with AI adoption, and what benefits, risks, and workforce challenges are they encountering?
Alison Kosik
>> Welcome back to Google Cloud Next '26. I'm Alison Kosik alongside John Furrier. Good morning. We're going to get right into it as AI is adopted more widely, more globally, cybersecurity risks I imagine would be expanding as well. I want to bring in our guests, Morgan Adamski. She's the US cyber data and technology risk platform leader at PwC. Welcome to The Cube. And Charles Carmakal, he's the chief technology officer of Mandiant Consulting, part of Google Cloud. Welcome to The Cube as well.
Charles Carmakal
>> Thanks for having us.
Alison Kosik
>> Morgan, I'm going to start with you about this cybersecurity risk. What are you seeing as AI adoption really just takes off?
Morgan Adamski
>> We have a lot of... Thank you so much. I'm so excited to be here. We have a lot of leaders right now trying to think about, okay, how do I adopt AI, use it to improve everything from efficiencies, workflows, help my workforce grow, upskill them, but also what are the security risks associated with integrating that into our enterprise? And so that balanced conversation is really difficult for them to think through because they want to adopt the technology, but they also want to make sure it's secure because they don't want to expose themselves to additional threats and additional attack vectors from adversaries that could take advantage of the fact that they're integrating that technology so quickly.
John Furrier
>> And on the cyber resilience piece of it, you're seeing the geopolitical theater have an impact today. And agents are coming, you thought cloud was at scale, now you're going to see agents at scale and you got the sovereignty global piece unfolding on top of the geopolitical dynamics. How is that impacting cyber?
Charles Carmakal
>> Yeah. So right now the US is really leading the charge in terms of the hardware, the software, the frontier models and from a cybersecurity perspective, we're doing a lot of really good things coming out of the United States. But there's this race between the good folks and the adversaries that are really trying to build out capabilities to help attack organizations. And really right now we're in this phase where with some of the latest models that are incredibly capable of identifying security vulnerabilities and products, we're trying to help organizations find as many vulnerabilities as they possibly can so that they can fix them and then ship them out to the rest of the world so that we can try to beat the adversaries in terms of fixing software before the adversaries have the ability to find those same vulnerabilities and exploit them. So it's going to be a continuous race.
John Furrier
>> Morgan, as you talk to your customers, we see the full stack here at Google, Gemini is the center of orchestrating everything. But you got the data cloud, but they've got the agentic defense, which brings the threat intelligence with Wiz together at scale. How are the customers thinking through this? Because with agents, there's a whole new surface area of attacks. It's at scale. There's tons of risk, but there's tons of upside and it's changing so fast. When you lock in, 30 days later, it's like, wait a minute, we locked this in, now we got something else to deal with. How are you guys framing that and how are you talking to customers about it?
Morgan Adamski
>> Yeah. We're really trying to help customers think through how do you integrate all of those capabilities, the things that you talked about in terms of threat intelligence, detection, response, remediation. When you think about the phenomenal intelligence from Mandiant and Google Threat Intel, you bring that into the picture, you talk about Google's security operations and how they're modernizing that. The conversation with clients is, how do you reimagine what you're doing from a security operations perspective? When you think about old school, it was very manual, a lot of human capacity. You were looking through massive data files and it took hours and days and weeks to find a single intrusion or anomaly. Now we have the technology, which is why I'm team defense when it comes to AI. The adversaries may have the advantage, but I'm team defense, is how do we integrate AI from the beginning to improve everything you're doing from a security perspective so that you can benefit from the efficiencies and find the adversary faster?
Alison Kosik
>> How do you get these enterprises, these companies into the mindset that this is something that should be at the forefront of their budgets?
Charles Carmakal
>> Yeah. And look, a lot of organizations are trying to figure out how do they think about AI? First of all, how do they adopt it in a secure way? To Morgan's point, many years ago, people were a little bit afraid to share data with third party organizations from an AI perspective. I think people are much more comfortable today and they're trying to figure out how do you gain more efficiencies, more economies of scales? How do you better leverage AI from both a creativity perspective, from a defensive perspective? I'm also in team defense, but I actually believe that the defenders have the advantage today because we have incredible cyber capabilities. We have incredible AI technology that's available to all of us today. And look, it's going to continuously be a race against the adversaries, but we have so many good capabilities to help defenders right now that I'm going to continue to be on the team defense side for some time.
John Furrier
>> I like offense. I want to take the offense position. I like offense. I like to run the score up a little bit on the bad guys.
Charles Carmakal
>> Fair.
John Furrier
>> How does AI change the game? Because we're seeing the use cases on the commercial side where new things that they could never do before with AI are emerging or unlocking. What's unlocking in the cyber realm relative to the new things? Because you got the scale, you got the integration. What do you see, because you see more things? What has changed? What are the new unlocks that were hard to get before that you get with the AI models?
Morgan Adamski
>> Yeah. So what I would offer is one of the hardest things sometimes for clients to be able to do is to just understand the enormity of their network. We'd always have discussions of like, "Do you really know where your perimeter defense is? How many devices are part of your network?" They struggle with that and it's hard to... You can't protect what you can't see and you don't know about, right? And so AI has really helped them to build out that comprehensive picture of what their entire network is. And then to better understand what are the vulnerabilities I have? What risk am I carrying? And then how do I do segmentation in my network to say, okay, if this gets breached and the adversaries get in, okay, that's a lower risk. My crown jewels aren't there, but these crown jewels, I really need to protect and I need to make sure I have the right boundaries in place to say, "Adversaries, you're never going to get in here." Let's hope not.
John Furrier
>> Yeah.
Alison Kosik
>> Yeah.
Charles Carmakal
>> Yeah. With AI, you could scale tremendously, well beyond what humans are able to do. And something that we're seeing right now is a lot of organizations are trying to figure out how do they leverage AI to find security vulnerabilities in the products that they've already shipped out for the last 10, 20 some odd years. And we're finding incredible advances with organizations that are leveraging Gemini as well as many other models in terms of finding vulnerabilities that have existed for quite some time, but it's also helping organizations better protect the code that they ship out moving forward. So I think we're going to find... The paradigm is shifting right now, but we're going to find a lot of security gains by leveraging AI but with that said, there's always going to be risks associated with it too.
Morgan Adamski
>> Yeah.
John Furrier
>> I have to ask, because this has been a trend for a while, public-private partnerships. CYBERCOM was doing all the heavy lifting, as you know, but it's a lot more private. Google's not affiliated with the US government, but they have a lot of full stack capabilities. What is the role of private public in fighting cyber? I know Mandiant has been really proud of over the years of taking down these big groups that were organized. What's the role? How is the government and private, public working together these days?
Morgan Adamski
>> Yeah, I'll jump in here. And look, so 90% of critical infrastructure is owned by the private sector. We all know that, right? So when I see the relationship between public and private and what the government is able to share, the government may have insights on the adversarial intent. What do they hope to accomplish? What are they targeting? What are they trying to get after? They need to be able to share that at the right classification and the right mechanism possible with the private sector and say, "Hey, here's what we should focus on. Here's what we should prioritize on. Here's what our knowledge and insights are. What are you seeing?"
Because the private sector is seeing it every single day against their networks. And so we have to bring those two pieces together to see the comprehensive picture. It's like you have to see all the dots to connect the dots, right? And we all have different parts of the picture, and if we don't bring it together, we can't share. And it can't be transactional, that is part of the problem that's existed in the past. It has to be a continuous conversation with each other and we have to do real-time sharing and it can't just be sharing technical details. It has to be a conversation.
John Furrier
>> Is the progress good? I mean, is it working?
Morgan Adamski
>> I think there's been significant progress over the last couple years in this space. I just think we have to keep that momentum and continue to have the conversation because the threats are getting faster. The adversaries have AI and so any challenges that we currently have, we've got to fix even more here.
Charles Carmakal
>> Morgan did a really good job in our prior life bringing together organizations from the private sector, from the government to share information about emerging threats, very significant things that the world needed to know about. And so kudos to you, thanks for your team's work in bringing everybody together. I think you helped disrupt a lot of bad things from happening that never had the chance to actually play out.
John Furrier
>> I wanted to bring that up because the private sector, they're fighting the battles, they get their own militia.
Morgan Adamski
>> Yeah.
John Furrier
>> I mean, there's a war out there.
Morgan Adamski
>> I think the other thing I'll just point out is I know Google and Sandra Joyce have talked about their active disruption work that they're doing. I think the private sector, like Mandiant, like Google that's really leaning forward in terms of, hey, what can we do on our own behalves, either with legal support or through the fact that we have adversaries using our infrastructure and we're not going to let them do that anymore and we're going to take action to make sure it doesn't impact our customers. I think that's really commendable because industry really can do a lot and I see a lot of people stepping up to make that difference.
Alison Kosik
>> What about the governance of it all? How should organizations be approaching that?
Morgan Adamski
>> In terms of how they can contribute to the conversation, right?
Alison Kosik
>> Yeah.
Morgan Adamski
>> I think that every company can play a role. I think even small to medium-sized businesses, their infrastructure is potentially being used in some of these adversarial nation state malicious activity, being knowledgeable about the threat, knowing who to talk to, starting to build those trust pipelines and relationships and pathways is really critical. I think that the medium size to large-sized companies that have phenomenal capability and insight, they're playing a big part in trying to contribute to the conversation. Even if it's not directly impacting them, they're sharing information with each other. That's the best part.
Charles Carmakal
>> Absolutely. Yeah.
John Furrier
>> Alison and I were talking yesterday a lot about how the enterprise adoption with agents is going to be strong. We think it's going to be a big year for agents, mainly because coding broke through. I mean, you had search, RAG, you had marketing copy, but now you saw coding come into the enterprise. We think agents are going to come in. How does that affect the surface area? Because agents are coming in in fleets. There's good and bad agents, espionage going on within agents, bad agents flipping good agents. I mean, this is a war gaming out now for enterprises. Your thoughts on how people should think about this, the C-suite specifically, it's not your yesterday cyber conversation.
Charles Carmakal
>> Yeah.
Morgan Adamski
>> Yeah.
Charles Carmakal
>> With the launch of any new capabilities, you have to think about how do we get the best benefit out of the new capability, but also how do we manage the risk associated with it? So now a lot of folks are thinking about how do we manage agent identities because that's different than a human identity. It's different than a machine identity. It's a little bit of a hybrid of both, but how do you think about that differently? How do you think about the behaviors of an agent? Because it acts a little bit like a human, but also acts like a machine, but it's continuously running. So how do you manage the behaviors? How do you profile the agent identity so that they have the access to the right information, but not too much access? Because you still have an issue with some models hallucinating and doing things that it may not be intended to do. So again, a lot of risk, but a lot of opportunities for all of us.
John Furrier
>> Yeah. Intent and context are huge conversations. What's the intent?
Charles Carmakal
>> Yeah.
John Furrier
>> What's the context?
Morgan Adamski
>> Yeah. And I think you have a lot of people in organizations, especially ones that tend to be of the newer generations who want to build their own agents, who want to leverage them. This is the whole conversation about shadow AI that companies are struggling with because they want to encourage their workforce to use AI to become familiar with it, understand its capabilities and possibilities, but then companies are struggling to figure out, okay, how do I have a governance and policies in place of what we can use? And everyone's like, "No, I don't want a policy. Don't stop innovation." Policies are supposed to be in place to help you move faster, not slower. You just have to write them the right way. And I think that's what we have a lot of conversations with our clients about is write it to encourage innovation, but also to maintain that security barrier that you need.
John Furrier
>> And a lot of those directives come from the C-suite.
Morgan Adamski
>> Yeah.
John Furrier
>> Learn AI, infuse AI. How's that conversation at the C-suite? Certainly CISOs have been on this from day one, but you start to see the CFOs become more operational. Obviously investments are involved. Chief people officers are involved because agents are doing work. How has that C-suite conversation changed or has it changed and how?
Charles Carmakal
>> Yeah, look, it's going to continue to evolve. Executives want to figure out how do they get more productivity gains by leveraging AI because they know everybody's trying to use the AI and they're trying to find efficient ways to do it. CFOs are thinking about how do they control the spend associated with AI because it gets very expensive very quickly and they want to make sure that they're getting good productivity out of it. The CIOs are trying to figure out how do they create new solutions, new applications by leveraging AI. And by the way, for most of the custom AI applications that my team does red team exercises against, we're finding pretty basic vulnerabilities within the applications that they're building themselves. We're able to get the AI solutions to give us information that they shouldn't give to us. We're able to compromise the backend infrastructure. So there's a little bit of a quick rush to build as many AI applications across enterprises as possible, but there's still a lot of security that still needs to be put into place to better manage the risks associated with that.
Morgan Adamski
>> Yeah. And I think the two topics that are coming up in a lot of C-suite conversations around this is to all the things Charles said, a lot of companies though are still trying to see that quick turn margin improvement or investment on leveraging AI. Well, hey, if I've used AI, now I don't need as many people or I should be seeing a huge return on my investment. It's not that quick. It's got to take a little bit of time to dwell. People have to get upskilled. There's also the conversation of, I need to upskill the workforce. If I just tell them to use AI, they'll start to use it and they'll be fine. There's a human aspect of this where you need to help the workforce understand they still have value if they're not doing what they did every single day the last 10 years, that they still have that expertise and insight to contribute to the conversation. So there's a bunch of different things you have to do to actually be successful in this space.
John Furrier
>> Morgan, that's a great point. I mean, there's a conversation we've been having on The Cube where if you don't wire up AI, it will wire you up.
Morgan Adamski
>> Yeah.
John Furrier
>> And that's really driving the AI. Has there been advances in red teaming and blue teaming with the agents? Can you share story or examples?
Charles Carmakal
>> I mean, incredible capabilities. I mean, right now, organizations are leveraging AI models to find security vulnerabilities in products that have been shipped out for years. Vulnerabilities that existed for 5 years, 10 years, 20 years or so. And so we will continue to see that and right now, again, the good folks have access to a lot of this capability that the bad folks don't, but it's again, it's a race. The adversaries over time will be able to create their own models. They'll steal models and capabilities through distillation attacks. They'll leverage other capabilities that exist. They'll steal it, they'll get access to it. So it's a continuous race, but we're seeing incredible capabilities from both an offensive and defensive perspective.
Alison Kosik
>> Okay. You've got a minute each to say this. Your best advice to CSOs in the next 12 months. Go, Morgan.
Morgan Adamski
>> Oh, okay. Ask really hard questions of your security team of how they're integrating AI. How are they reimagining what they're doing every single day. You can't just bolt AI on, and this is more than one thing, I recognize that. You can't just bolt AI onto everything that you're doing. You should really be asking yourself, should I be starting with an AI centric first type approach? A lot of what Google's doing in their agentic defense.
Charles Carmakal
>> Yeah. Right now, we're going to deal with a surge of new vulnerabilities that are discovered because people are using AI to find those vulnerabilities. So we need to evolve how we think about our vulnerability management programs. Do we still patch after 30 days? Do we need to accelerate that timeframe? What other risk parameters do we need to think about when we consider patching vulnerabilities? And by the way, organizations have tens of thousands of vulnerabilities that they know that exist in their environment. How do they manage the risk when the agentic problem starts getting more and more introduced?
Alison Kosik
>> Morgan, Charles, great conversation. Thanks so much.
John Furrier
>> Thanks.
Alison Kosik
>> All right. And you've been watching The Cube, the leader in live technology coverage. We'll be right back.