At RSA Conference 2026 Jon Oltsik of theCUBE Research joins Christophe Bertrand of SiliconANGLE and theCUBE to review four days of conference coverage. Oltsik and Bertrand draw on analyst experience and sector expertise to assess advances in cybersecurity and enterprise adoption.
They examine pervasive artificial intelligence AI themes, the emergence of agentic AI and semi-autonomous agents, governance challenges and data management issues, vendor innovation and platform dynamics, and implications for security operations.
Key takeaways include actionable guidance and practical recommendations. Oltsik recommends that organizations establish robust AI governance, align stakeholders and prioritize developer and user training to mitigate prompt injection risks and data exposure. Bertrand emphasizes that data integrity is foundational, vendors accelerate AI-enabled features and end users should evaluate a broad set of emerging solutions while preparing for an expanding attack surface. Viewers gain practical insights on AI governance, data governance, prompt injection mitigation, security operations and vendor evaluation.
Forgot Password
Almost there!
We just sent you a verification email. Please verify your account to gain access to
RSAC 2026 Conference. If you don’t think you received an email check your
spam folder.
In order to sign in, enter the email address you used to registered for the event. Once completed, you will receive an email with a verification link. Open the link to automatically sign into the site.
Register for RSAC 2026 Conference
Please fill out the information below. You will receive an email with a verification link confirming your registration. Click the link to automatically sign into the site.
You’re almost there!
We just sent you a verification email. Please click the verification button in the email. Once your email address is verified, you will have full access to all event content for RSAC 2026 Conference.
I want my badge and interests to be visible to all attendees.
Checking this box will display your presense on the attendees list, view your profile and allow other attendees to contact you via 1-1 chat. Read the Privacy Policy. At any time, you can choose to disable this preference.
Select your Interests!
add
Upload your photo
Uploading..
OR
Connect via Twitter
Connect via Linkedin
EDIT PASSWORD
Share
Forgot Password
Almost there!
We just sent you a verification email. Please verify your account to gain access to
RSAC 2026 Conference. If you don’t think you received an email check your
spam folder.
In order to sign in, enter the email address you used to registered for the event. Once completed, you will receive an email with a verification link. Open the link to automatically sign into the site.
Sign in to gain access to RSAC 2026 Conference
Please sign in with LinkedIn to continue to RSAC 2026 Conference. Signing in with LinkedIn ensures a professional environment.
Are you sure you want to remove access rights for this user?
Details
Manage Access
email address
Community Invitation
RSAC Show Wrap
At RSA Conference 2026 Jon Oltsik of theCUBE Research joins Christophe Bertrand of SiliconANGLE and theCUBE to review four days of conference coverage. Oltsik and Bertrand draw on analyst experience and sector expertise to assess advances in cybersecurity and enterprise adoption.
They examine pervasive artificial intelligence AI themes, the emergence of agentic AI and semi-autonomous agents, governance challenges and data management issues, vendor innovation and platform dynamics, and implications for security operations.
Key takeaways include actionable guidance and practical recommendations. Oltsik recommends that organizations establish robust AI governance, align stakeholders and prioritize developer and user training to mitigate prompt injection risks and data exposure. Bertrand emphasizes that data integrity is foundational, vendors accelerate AI-enabled features and end users should evaluate a broad set of emerging solutions while preparing for an expanding attack surface. Viewers gain practical insights on AI governance, data governance, prompt injection mitigation, security operations and vendor evaluation.
In this interview from RSAC 2026, Jon Oltsik, analyst in residence, joins theCUBE's Christophe Bertrand to discuss how the rapid rise of agentic AI is expanding the cybersecurity attack surface and why governance must become the cornerstone of enterprise AI strategy. Oltsik frames the conference as defined by two parallel challenges: governing AI development within the enterprise and deploying AI to strengthen defenses. He warns that the attack surface will grow unpredictably as autonomous agents proliferate, and recommends that security teams build strong go...Read more
exploreKeep Exploring
Where are we in the adoption and maturity of AI and autonomous agents in cybersecurity solutions — are organizations still in early stages, and how developed is AI governance?add
Has AI created a "monster" from a security standpoint?add
How should an organization (or CISO) approach governance, security, developer/consumer training, and vendor selection when adopting AI technologies?add
What are the key considerations for adopting AI in cybersecurity (including the role of data, bias, architecture, and governance)?add
>> And we're back at RSAC 2026 for our last segment. We're wrapping it up after four days of all tool coverage. I'm very pleased to be joined by Jon Oltsik, our analyst in residence. Everybody knows Jon. Jon, okay. I thought this was RSAC, a cybersecurity conference. It's RSAAI-C, lots of AI. Lots of AI and cyber in combination. What's your take?
Jon Oltsik
>> Yeah, you couldn't walk across the street without seeing AI hyphen something. And I get it because there's AI development. And that's really the two sides of this conference is what's going on with AI development in your organization and how do you get your arms around that from a governance perspective, from a policy perspective, from a policy enforcement perspective. And alternatively, what does AI do for cybersecurity for your defenses and what are the offensive players doing as well? So yeah, it's wall to wall AI and... I could use a break from AI. I'm sure you can too.
Christophe Bertrand
>> You need your own agent or something. Yes. So let's talk about agents, agenetic AI. I mean, there have been many conversations around how... Two things. So two sides of the same coin. How do you manage secure control police agents? So that makes sense. But the other that was maybe even more interesting from a technology standpoint, which I've seen play out also in data protection, recovery, et cetera, compliance governance is leveraging agents as part of the solution itself. So using AI to manage AI in a sense. Where do you think we are with the actual use of AI and AI agents in solutions? It feels like early stages was six months ago and we're already in phase two or three, but that's just a general take from my standpoint. What about your take?
Jon Oltsik
>> Well, I think judging by the sentiment at this show, you would think that, but when you talk to users, they're more on the on ramp and progressing quickly, but there. But I do think that there's an accelerated pace of things. Generally, I'm reminded of the beginning of cloud where we went from we don't trust the cloud to reusing the cloud to we're cloud native. But I think the cycle's accelerating. So I think we'll see both. We see rudimentary AI governance now. We see governance. So if you have strong governance, you're in a really good spot and then you have to figure out what you need to layer on top of that specific to AI. And then on the other side, we're going to see that quick rapid development and hang on because every show we go to this year and certainly next year, that's going to be the theme.
Christophe Bertrand
>> Right. I feel that AI has introduced a number of new security exposures. I mean, literally blown up the parameter like nobody's business. And now we have agents that are semi-autonomous doing stuff, being hard to control, maybe making the wrong decisions, being the source and the target of attacks. So have we created a monster from a security standpoint with AI?
Jon Oltsik
>> Absolutely, we have. It's very scary. The attack surface is going to grow like a weed. And as a security professional, you have to anticipate that and get your arms around that. Now, the difficulty is we don't know how fast that weed's going to grow. We don't know what species to carry this metaphor out that that weed is, but we do have to anticipate that. But yeah, from a security perspective, you should be concerned but proactive.
Christophe Bertrand
>> Okay. So now is the time if you're an end user, you're thinking about this, maybe you're at the show, you're not at the show, you're watching us. Panic, but don't panic, it sounds like. So what proactively would you say should end users do right now as they think about maybe some early projects they're working on for AI? They're being told by management we need to do this AI thing. I have to start leveraging agents. What is the best recommendation right now? Should they be training themselves? Should they be working with vendors in a certain way? Which resources should they leverage? Because it feels like it's a wild, wild west right now.
Jon Oltsik
>> It is the wild, wild west, but there are things you could do. So the first thing that I would do is I'd get all of the stakeholders in place and if you don't have strong governance, build strong governance. Now a lot of that starts with what are we trying to do here? So no one's going to play... Well, developers will play with AI, but if I'm a CEO, I'm going to say, well, what's the business value here? Am I driving new revenue sources? Am I automating processes, cutting costs, what have you. What are we doing broadly across the enterprise there? When we have those two things, then we start to form what our governance should look like, what our policies should look like. Now you touched upon another important thing, training. And we have to start with training our developers, training consumers, because this is a new world. And so from a security point, yeah, training about things like prompt injection and sharing confidential data, certainly. But the developers, this is a playground for them. So we have to put some, maybe not guardrails, but some guidance for them. And then in terms of your vendors, talk to your vendors. What are they doing? But what I'm saying and what I'm seeing proactive CISOs do, cast a wide net because the history, historically strong vendors may not have the best solution for you. And there's a lot of innovation. A lot of it's going to fall on its face. You and I have seen this a million times in the industry, but someone's going to come through and have a revolutionary or at least a very good evolutionary platform for AI and security. So be open-minded. It may not be the vendor that you have. It may be the vendor that best serves your needs in the future.
Christophe Bertrand
>> Right. Yeah. We had a great innovation sandbox winner, 11 month old startup won the award.
Jon Oltsik
>> Who was that? I missed that.
Christophe Bertrand
>> Jordy. Jordy. Yeah. Check out the segment. Wonderful CEO, young guy, very talented clearly. And 11 months in the business and they have this great solution, they win the award here. Okay. What can I say? You just make that point and it's proven to be true. And we've had also just today, a couple of conversations, one around data with the real issue being that data is not really being managed and it's hard for anybody to actually do AI in a way that's going to be trusted if you don't trust the data. If you let agents access data, they should not access, et cetera. So to me, the whole data layer is a mess as is, and it's not even being managed correctly. And now we want to, on top of that through agents that are going to be using the data. Well, that's a problem, but there's hope. It feels like if we had a great conversation also with ServiceNow where clearly some of the customers in the ServiceNow environment are able to get ROI out of agenetic AI because they have controlled processes that they can deploy AI on. They understand the data, they understand they've provided platform has provided a number of guardrails, et cetera. So I think in many ways success for end users will come from being able, and it's going to sound very weird, but if you don't know how to build the plane as you fly it, you may not be very successful in AI, in my opinion, as a business, because I think that's what we're doing now. So we're asking customers to do, but it could be the new way of doing things. Now for security, that's not necessarily the best advice.
Jon Oltsik
>> That is not the best advice, but we're going to have to do some of that. But at the same time, I think you're right. AI only is successful if it has the right data and that also, just the data has to be relevant, but the data has to be ethical and fair as well. So you can't build models where there's a bias involved, for example. Now in security, that's not as important, but you do want security centric data. You want those databases, you want the threat intelligence. So that's all important too. I think you do have to build the plane while you're flying it, but you should also thinking about what's the foundation for the future, because that doesn't scale. And if you're doing that in security, are you going to do that for your SIM and for your EDR and for your asset management, that doesn't make sense. So you have to build the overlying foundation, the platform, and then develop on top of that. Clearly that's what the vendors are doing.
Christophe Bertrand
>> Right. Yeah. And I think that one of the foundations is data, and I think what security means and what the components of security really are is changing as we go. I talked to Convault, I talked to Veeam, I talked to multiple companies during this conference. And clearly everybody's coming at it from slightly different perspectives, but I think we're seeing really a convergence or a confluence, I should even say, of multiple streams building this sort of bigger river and that's really that big AI stream, which of course has a number of consequences from a market perspective and even from a pricing perspective, how are you going to price this stuff if people are coming from different places? What is the role of storage vendors in this whole thing, which I think will be pretty critical. What is the role of hyperscalers? I mean, talk about multiple dimensions here, but commonalities are these. Governance still applies.
Jon Oltsik
>> Yes.
Christophe Bertrand
>> Data critical. Security processes still apply.
Jon Oltsik
>> Yeah.
Christophe Bertrand
>> And in the end, even if there will be some replacements potentially of some functions and some people and some jobs, it's still going to be the human in the heart of it, we hope, right?
Jon Oltsik
>> Partly. Yeah. Yes.
Christophe Bertrand
>> So you've seen a lot of vendors at the expo here, you've talked to a number of vendors. What's your take on the vendor ecosystem after four days here?
Jon Oltsik
>> Well, I would say that there's tremendous optimism, but we're at a carnival for vendors, so they're going to pitch that. But there's some interesting things going on that are logical, but I hadn't thought of it. For instance, the speed of development that the vendors are doing, because they're in the software development business, so their developers are using AI tools. And I heard so many times about the acceleration of projects. So a lot of the functionality that they've promised in the past may be closer than we think. Here's a stupid example, but I worked in the software industry, you did. I worked for a company and way back when we would support Unix systems. And so Sun would upgrade from Solaris 4.2 to Solaris 4.3, and that meant that our developers had to go and see, "Well, what's the changes? What can we support?" That was half of our business was just maintaining all those connections or all that support. Those things start to get automated and now all of a sudden the thought of integration, API integration, connectors, common data models, that starts to go away. So the industry can scale and these tools can now do a lot more integration and cross product functionality. So that's an opportunity. It certainly will accelerate things and it means a different mindset for the buyer who've been stovepiped on the way they think. They shouldn't be stovepiped anymore.
Christophe Bertrand
>> Yeah. I'm wondering if we're not setting up essentially a little bit of a war between platforms. There will be a few big platforms or ecosystems that emerge, maybe they already have, and you have three or four sort of options or fades that you've got to go believe in and that may be where things become interesting. We'll see. And also coding becoming a thing of the past. I mean, who needs to code anymore? Now you can just ask AI to do it for you, but it makes the job of creating a solution or software even more interesting because now you don't have to worry about coding.
Jon Oltsik
>> Yes.
Christophe Bertrand
>> You have to worry about really the outcomes and what it does and what you want it to do. And so I think this is, to your point, changing and accelerating the answer also to the questions. What are your thoughts about what this is going to be next year? Is this show going to be about agenetic AI or is it going to be about something else? Is it still going to be about just security?
Jon Oltsik
>> Well, looking into my crystal ball, my pretend crystal ball, I think there's no doubt it will be about AI, but it will start to be about more about the business functions of AI, some of the developments with AI that are particular to industries and the vulnerabilities that they create. So for instance, in healthcare, and I'm doing a little work in healthcare, this is a game changer. It's a game changer for any kind of clinical work. It's a game changer for patient monitoring. It could be a game changer for managing chronic diseases. So all of that is going to happen, and then we have to understand the vulnerabilities and that creates all the way out to the human element.
Christophe Bertrand
>> And the digital imaging. I mean, like radiology, too. I mean, what the eye can pick and AI can, right?
Jon Oltsik
>> Absolutely. And then on the security side, we're going to have to deal with that, but how far do these agents go? I think in the next year they'll go really far and wide. And then it comes into, well, what are they doing in terms of aggregating data? And you're right. I mean, do we need multiple platforms? Do we need... Is there a platform war? I would say yes. I would say Microsoft's in a wonderful position, at least in its space, small enterprise SMB. Everyone else is going to fight, but there'll always, always, always be innovation at the fringes. And some of that innovation using agenetic AI may be startlingly good compared to the status quo. And that's why I say keep an open mind.
Christophe Bertrand
>> Excellent. And remember what I always say, which is we've seen a lot of companies here, lots of products. Products become features in time.
Jon Oltsik
>> Products become features and execution, execution, execution, right? You can have a great idea. You can have some really good engineering developers or engineering founders. Can you scale? Can you deliver what the enterprise needs? You know, and I know that's a challenge. They're very demanding. And who can do that? I don't think you can tell by this year's show. Maybe next year.
Christophe Bertrand
>> Maybe next year. Well, John, thank you so much for this wrap up. And to our viewers, this concludes our coverage of RSAC 2026. I'm Christophe Bertrand, principal analyst at Acute Research. See you next year.