In this interview from RSAC 2026, Kapish Vanvaria, global Americas risk consulting leader at EY, joins Dan Mellen, global chief technology officer for cyber at EY, to talk with theCUBE's Dave Vellante and Christophe Bertrand about how the rapid deployment of agentic AI is outpacing governance and widening the trust gap that security leaders must close. Mellen anchors the challenge in EY survey data: 96% of organizations have AI in their cyber defense strategy, while 95% are deploying AI broadly across the business — yet controls, training, and governance are consistently lagging behind. Vanvaria introduces "trust by design," a model that embeds cyber, legal, and compliance stakeholders into product development from the start rather than applying governance after the fact. He also flips the conventional framing, arguing that the most capable organizations will position humans at the center and let AI operate in the loop around them.
The conversation also explores how agentic AI is reshaping security operations in practice — from AI-augmented SOCs and automated third-party risk management to the traceability challenges that emerge when autonomous agents inherit and act on human entitlements. Mellen references EY's joint announcement with CrowdStrike and NVIDIA to deploy an AI agentic SOC as a concrete model for embedding responsible AI principles natively into security workflows. Vanvaria argues that the profession must make a fundamental shift from a defensive posture to an offensive one, using AI to hunt threats at machine speed. From a projected 5X increase in cybersecurity investment to the reimagining of roles like pen tester and threat hunter, the guests outline a roadmap for how organizations can move at the speed of trust without repeating the governance failures of the cloud era.
Kapish Vanvaria & Dan Mellen, EY
Dan Mellen, EY Global Cyber Chief Technology Officer; EY US Cyber Chief Technology Officer, EY
What is happening at the intersection of cybersecurity, cyber resiliency, and AI?
What did your surveys reveal about organizations' AI adoption (including use of agents) and the resulting cybersecurity readiness, gaps, and risks?
How should organizations design and deploy agentic AI in cybersecurity to “move at the speed of trust,” balancing trust-by-design governance (traceability, explainability), appropriate human involvement, technology choices, industry maturity differences, and workforce upskilling?
How should CISOs and organizations prioritize their spending on AI-related security investments?
>> Hi everybody. Welcome back to San Francisco. You're watching theCUBE's continuous live coverage of RSAC 2026. My name is Dave Vellante. I'm here with Christophe Bertrand. We're here at Moscone West on the ground floor. Stop by and see us. We have a great week. This is day two of our four-day coverage. Kapish Vanvaria is here. He is the global Americas risk consulting leader for EY, and he's joined by Dan Mellen, who is the global chief technology officer for cyber for EY. Gentlemen, welcome to theCUBE. Good to see you.
Kapish Vanvaria
>> Thanks for having us.
Dan Mellen
>> Great to be here.
Dave Vellante
>> So this is the Super Bowl of security. Everybody's here. Let's start with your roles. Kapish and then Dan, tell us sort of your focus. EY, Ernst & Young, everybody kind of knows you guys, but bring us up to speed on what you all do for them.
Kapish Vanvaria
>> Yeah, absolutely. By trade, I specialize in technology, telco and media. And then in my role for the firm, I oversee a lot of our risk, regulatory and security work, not only in the U.S., but around the world as well.
Dave Vellante
>> Great. And Dan?
Dan Mellen
>> Sure. So I sit in our cyber practice and I look after our ecosystems, alliances, and a lot of our asset development. So it gets into the cyber innovation space quite a bit.
Christophe Bertrand
>> Well, let's talk about this. I mean, this is RSAC. This is supposed to be about cyber, yet I have the feeling as I walk around that it's all about AI. So maybe starting with you, Kapish, what's your take on this? What's really happening between cybersecurity, cyber resiliency and AI?
Kapish Vanvaria
>> Yeah. I don't think you can actually skip a booth without seeing the word AI. And honestly, I think it's a good thing. I think it's an embracing of sort of where the future's headed from a security perspective. And so to me, I spent a lot of time with our clients yesterday, this morning, right before this. And a lot of the questions they're asking are, one, how do they scale deployment, and two, how do they answer the mail on trust? And when you think of all the different services from threat hunting to pen testing to vulnerability management, all the way to SIEM and SOC, they're asking about how do we take these pieces of technology, now fueled by agents and AI, and do scale deployments to be one step ahead.
Christophe Bertrand
>> And Dan, how does that influence the ecosystem you're responsible for? Because it's got to be changing the game significantly.
Dan Mellen
>> Yeah, absolutely. I think the change cycle that used to be six months long is now six weeks long. And so I joke with my European colleagues when they go out for the month of August, they've missed an entire cycle. And so you've got to play catch up with all of that. But Kapish is right. Everyone is talking about AI. We're talking about how to govern this effectively at speed. And the pace at which this is happening, it's sort of like a race condition between cyber and the business.
Dave Vellante
>> Oh, please.
Dan Mellen
>> Yeah, just cyber trying to keep up and trying to not repeat some of the mistakes that we made in the cloud era and the app era.
Dave Vellante
>> And your background in telco, I was at MWC as were you. And I hosted a round table for about 23 business and technology executives. And where they are on the AI maturity curve is they're deploying agents, but they're like single agents. They're being really careful about scaling them. There's a gap in sort of their trust of these agents. I know you guys have some data on this. Last week we were at GTC and the buzz was all about OpenClaw. Everybody wants to OpenClaw their business. I'm like, "Whoa. Okay. That's going to bring some real threats." I was talking to Google earlier today. They said that more than 800 of the downloadable skills for OpenClaw are malware. Yeah, download this great skill. Oh, that looks good. It's like the Wild West now. So you guys have some data on this. I know you guys have launched some surveys. Maybe you can share some of the insights that you found.
Dan Mellen
>> Yeah, absolutely. I'll start. So if we take some of the data from the survey, there are three interesting stats that really stood out to me. The first one was 96% of folks had AI as a part of their cyber defense strategy. Now, I really think strategy is the operative word in that because it gives a pretty wide variation in terms of what the implementation state of that strategy is. So the curious thing of that will come in when we start to look at the next data point, which is 95% are deploying AI. So not necessarily just in security, but in the business more broadly. So you've got this situation where AI is being deployed, cyber's trying to play catch up. We're seeing a little bit of the repeated bad behavior from cloud where security's just bolting capabilities on to try to secure these. But the business is outpacing governance, it's outpacing controls, it's outpacing cyber's ability to train folks. And so you do, you've got this delta and essentially that is the trust gap that we're seeing.
Christophe Bertrand
>> So let's talk about this trust gap because I think there's definitely an issue here where we're looking at these agents essentially appearing out of nowhere, managing data, making decisions semi-autonomously. Are we really treating them as human beings? Should we? And how do we reconcile the trust that we need to have? What other guard rails that we need to have in place to be successful here?
Kapish Vanvaria
>> Yeah. So one of the taglines we've been using to simplify the thoughts on this is move at the speed of trust. And helping people think through, like, what does that actually mean? I think Dan summed it up very well. The survey shows such a large volume of people are working it into their strategy, but where is that falling short in operational execution? A lot of organizations, as part of that strategy, will have things like responsible AI as concepts. We'll have things like traceability, explainability, but really, that's a bolt-on method because you're taking compliance and governance and applying it to something that already exists. Where we're trying to push the market, and push people to change their frame of reference, is to think of trust by design. When a product is about to be launched and go through those cycles of design and development, are the folks from cyber, regulatory, legal, compliance in the room helping build those agents, those products that help drive customer experience and employee experience? And I think there's a statement many organizations use about human in the loop. I think the most advantageous organizations out there will be the ones who take humans and empower them with the power of technology to run faster.
Dave Vellante
>> AI in the loop.
Kapish Vanvaria
>> That's right. AI in the loop.
Dave Vellante
>> Right.
Kapish Vanvaria
>> We should coin that, AI in the loop.
Dave Vellante
>> Seriously, it's a flip of that model. It's very challenging right now because the CISOs, they've got a portfolio that they have to take care of and now there's all this AI stuff coming in. And even though it's encouraging that they're increasing their budgets, they've got to balance them across, as you say, identity and AI security and cloud security. I mean, all that stuff is still there. So how can companies actually move at the speed of trust? Is it a cultural thing? Is it not bolting it on? Is it a technology aspect? It's all of the above. People, process, technology, we know, but give us the insights as to how you advise clients to do that.
Dan Mellen
>> Yeah. I think it's a blend of all three. And I really do like Kapish's concept of trust by design. If you can take the concepts and back into where you want to be, so let's take the end in mind. And we talk about these agents. We're deploying 5,000 agents as a part of a security operations center, and you've got folks that are going to orchestrate those agents. That's where the value is. I need to be able to say that agent A delegated something to agent B that derived from human H's entitlements. So then what happens when human H is exited from the company, right? You've sort of got this orphan structure. You need traceability between what happened, who did it, and where it originated from. And so those are the kind of constructs that a lot of organizations and consortiums are working on, how do we define those protocols and those structures to allow us to have that traceability going forward? It's super important.
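[Editor's note: the delegation-traceability idea Mellen describes (agent A delegates to agent B, deriving from human H's entitlements, and delegations are orphaned when H exits) can be sketched in code. The Python below is a hypothetical illustration, not EY's or any vendor's implementation; all names are invented.]

```python
from dataclasses import dataclass

@dataclass
class Delegation:
    """One link in a delegation chain: which agent granted authority
    to which agent, and which human's entitlements it derives from."""
    from_agent: str
    to_agent: str
    origin_human: str

class DelegationLedger:
    """Minimal traceability ledger for agent-to-agent delegations."""

    def __init__(self):
        self.records = []          # list[Delegation]
        self.active_humans = set() # humans still employed

    def register_human(self, human):
        self.active_humans.add(human)

    def delegate(self, from_agent, to_agent, origin_human):
        # Record what happened, who did it, and where it originated from.
        self.records.append(Delegation(from_agent, to_agent, origin_human))

    def offboard(self, human):
        """When a human exits, return every delegation that now lacks
        a valid origin -- the 'orphan structure' Mellen describes."""
        self.active_humans.discard(human)
        return [d for d in self.records if d.origin_human == human]

# Agent A delegates something to agent B that derives from human H's entitlements.
ledger = DelegationLedger()
ledger.register_human("human_h")
ledger.delegate("agent_a", "agent_b", origin_human="human_h")

# Human H is exited from the company: the ledger surfaces the orphaned delegation.
orphans = ledger.offboard("human_h")
```

In practice this bookkeeping would live in an identity or entitlement system rather than an in-memory list, but the point stands: every agent action needs a recorded chain back to a human origin so orphaned authority can be found and revoked.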
Christophe Bertrand
>> So let's double click on agents because you've hinted here that agents are being used for cybersecurity. Obviously we know they are, so hard to miss.
Dan Mellen
>> Hard to miss.
Christophe Bertrand
>> RSAC 2026. But obviously you see a lot of companies, large enterprises, all sorts of verticals. Where do you think we are with the reasonable use, I will say reasonable use of agentic AI in the context of cybersecurity across the clients that you've seen? Are you satisfied with what you're seeing? Do you think it's early stages? Are some verticals more mature than others? And I bet that some are. Again, without divulging too much, that might be confidential, I'm very curious about that because I do believe there are two or three speeds in agentic AI and cybersecurity based on the vertical.
Kapish Vanvaria
>> Yeah. So I'll take that first. I think we're like in the first inning, right? And I think it's very early days. And I mean that, and I know it feels like a lifetime walking the halls today and seeing how much AI there is infused in everything, but I still think it's very early days. To me, when I look at where organizations are deploying it, very much like the SOC, the SIEM, also thinking about business-type risk issues like third party risk management. A service or an offering in an organization that was always limited by human capital, by how many people could evaluate how many vendors in your ecosystem. Using agentic software and agents in that process, you can take the governors off of how many vendors you evaluate, the types of tests you put them through, the evaluation cycles. But that doesn't mean it adds burden to them. It streamlines the process for them. It also allows you to quantify risk in a much different way and explain to your board, your audit committee, your risk committees throughout the organization that we're willing to actually accept a different profile of risk in the organization because we have a better feel for the attack surface.
Christophe Bertrand
>> Interesting.
Dave Vellante
>> You know what's interesting, Dan, about what Kapish just said, so I feel like the technology industry is in the third inning and customers are in the first inning and there's a gap there. And I'm listening to Jensen last week talk about the new revenue model and every company's going to have to figure out where they are in this Pareto. And I'm like, "Whoa, most companies aren't really even thinking about that yet." And I know you have some data. I saw this data point says 98% say ROI depends on effective human insight. So it says to me, you've got to empower the humans and they have to be the one orchestrating the agent. So I wonder if you could pick it up from there and elaborate.
Dan Mellen
>> Yeah, absolutely. That was actually my favorite statistic out of the survey was that, because it dispels some of this just fear and uncertainty around job loss and replacement. If you let that number 98% sink in, it's pretty remarkable. So one of the challenges that we do see is finding cyber pros with AI backgrounds. And so that's definitely a gap in talent development that most organizations are looking to fill. I think the good thing is, in the same survey, about 89% said that they see training and upskilling as a path to success. So there is light at the end of the tunnel. If you specialize that training into the places where there's data density, like we talked about with identity, with third party risk, there's a lot of work that can be done immediately in those spaces.
Christophe Bertrand
>> So I'd like to follow up on what you said and this idea of responsibility, right? Because we've been going back and forth here, well, should it be humans controlling AI or AI semi-controlling humans? I think that may not be the right conversation. At the end of the day, as an organization, your clients, they are responsible for their data. It's a compliance thing, it's a governance thing. They're responsible for the outcomes of whatever process they put out there with their end users, whether agents have delivered them or whether it's a human who did. So just like you wouldn't want to have drunk teenagers flying a plane, how do we put in place this level of responsibility from the ground up in the development of the AI infrastructure, from the management of the data? Are there specific programs or training approaches that you use with your clients? And again, are there some vertical differences that we should be aware of?
Kapish Vanvaria
>> Yeah, for sure. I'd break it down into two categories. The first is very transparently in an organization, in plain English, explain your stance of where you are, right? And that's not a cyber issue, it's not a business issue, it's an organizational issue, right? And do it in a way where you can very easily explain what you must do, right? Those are things that are regulatory in nature if you're in financial services or telco or oil and gas or life sciences, because there's a lot of regs in those spaces. Then the second piece of it is what you should do, things that are good for your employee experience, things that are good for your customer experience, right? That's the transparency in certain things. Maybe you don't have to, but you should, right? And then the top of that pyramid is really what you could do, right? Things that are good for society, right? Open sourcing, different things, sharing ideas, partnering with universities. And I'd say that's really a very transparent, simple framework as a C-suite executive to push a message out. But then put that in the lens of your employees. Your employees are your first and your last mile of defense, and of experience for the people that experience your organization, your customers, right? And so to me, it's really thinking about how do you empower your employees to truly feel and experience what you want them to, right? How you make your EX better and allowing them, to Dan's point, AI will do so much. Human judgment will keep an ... I love the AI in the loop because the human is the center of it. The AI is in the loop around them. So empowering next best action and allowing them to use judgment to then do that.
Dave Vellante
>> And Dan, I always say bad human behavior can beat good cybersecurity architecture every time. And it's a new era we're entering. I see another stat: 25% of cyber incidents were AI-enabled; half of your respondents said that. And it could be even much more. I mean, the phishing emails are so much better now. So, okay, so you've got layered defense. All right, but then there's the cultural aspect that Kapish is talking about. So help us understand how organizations should respond to that. What does this mean for their supply chains, the training inside of organizations? How are you helping organizations understand that?
Dan Mellen
>> Yeah, there are a couple of things that I would say relate very squarely to that. One of the things is leveraging AI to govern AI. And so we've seen lots of really good traction in that space. So pitting AI against AI to make sure you're implementing the responsible principles that Kapish just talked about. There's a way to codify those things. We just made an announcement with CrowdStrike and Nvidia for an AI agentic SOC, and that's one of the things that we've done as a part of that. How do you embed natively into the processes that go into your SOC? If you're going to govern it with AI, how do you ensure that those processes, those outcomes, sort of that reinforced learning, how do you make sure that those things are following those principles and adhering to those principles? And when it does act anomalously, how do you go in and correct that and help it learn? The other thing is, yeah, training. Again, I can't underscore enough the talent development piece associated with getting folks the right specified training for the tasks that they're going to be doing.
Dave Vellante
>> So the threat landscape, it's never been ... These adversaries have never been more capable. They've always been capable, but now they're achieving a new level. Is there still an imbalance of the ... I mean, I suppose broadly at the macro, the attackers have the advantage because they only have to be successful once is the line, but tell us your thoughts on the landscape and how that's evolving.
Kapish Vanvaria
>> I think there's like a fundamental shift in strategy that we have to make overall in the profession of being very comfortable just being on the offense. And I think for many years, this business and the profession has been one of defensive posture, preventing things from occurring. I think now it's just time to sort of refocus that alignment because those threat vectors, they're supercharged and they're coming faster and faster than ever. And so to Dan's point, how can you train AI to run AI on the offense for you, to manage those end boundaries of your organizations, to do things that human capacity could never do. And so if I were to just summarize that, it'd be change the posture from defense to offense and empower your organization to sort of go into that.
Dave Vellante
>> So there's mindset there, as you guys were talking about. I suppose there's technology too, things like continuous pen testing and automated, which is very manual today. So AI applied to best defense is a good offense is essentially what you're saying, Kapish, right?
Kapish Vanvaria
>> That's right.
Dave Vellante
>> Okay. We know AI spending's on the rise. It's starting to be self-funding. We saw in the early days of gen AI, it was stealing from other areas and now, despite the MIT study, there is ROI. I'm sure you're seeing it with your clients. Customers need to, especially CISOs, need to balance their AI portfolio and the rest of the portfolio. Maybe they shouldn't be thinking about it as a separate item. Maybe they should bring those together. So how should CISOs and organizations prioritize their spending?
Dan Mellen
>> Yeah, we see those as integrated, definitely. In the survey, I think the result was 5X of current level. So we're looking at a 5X increase in investment. So it's material, it's meaningful. And we're seeing a lot of investment in the identity access management space and the security operation space, third party risk we talked about, and fraud. Those seem to be some of the top areas where they're seeing investment and return on that investment.
Dave Vellante
>> Let's close on I want to come back to that sort of vision that Jensen put forth last week, that Pareto, he said every CEO needs to figure out where they are in this Pareto. And to use the example, if I hire a software developer for half a million dollars a year and that individual only uses 5,000 tokens, I'm going to be really upset. I'm going to fire that individual. That's like a new mindset. "Hey, welcome to your new job. Here's your laptop and here's your token budget."
Kapish Vanvaria
>> Go for it.
Dave Vellante
>> And that is a completely different world of business. What does that mean for the future of cybersecurity? What does that look like in your mind?
Kapish Vanvaria
>> Yeah. I think it's going to force us to change sort of everything, people, process, tech. From how we hire and train people, to Dan's point, just given we're a professional services firm, we hire thousands of individuals across cyber, risk, regulatory, that entire ecosystem of trust. And it's going to require us to change the type of profile we're hiring to be much more fungible and feel comfortable in this world. Two, I think from a process perspective, truly redesigning a lot of these businesses to think like the future of pen testing may not be what we grew up doing. The future of threat hunting may be very different from that in the future. And I think we're seeing it from all the vendors today. The future of SOC and SIEM is very different already. So to me, it'll be being very comfortable with the re-imagining of the profession and being accepting of it. And I think we'll go back to the best defense is a good offense and change the mindset there.
Dave Vellante
>> Well, EY is one of the few firms on the planet that has the depth of industry expertise, the global footprint, and the deep technology expertise in fields like cyber. Guys, thanks so much for a great conversation. I really appreciate you coming on theCUBE. Appreciate the time.
Kapish Vanvaria
>> Thanks for your time.
Dan Mellen
>> Thank you.
Dave Vellante
>> You bet. And thank you for watching. This is Dave Vellante for Christophe Bertrand and John Oltsik. We're here at RSAC 2026 live from Moscone West. Keep it right there. Be right back with more great content on theCUBE.