In this segment from theCUBE + NYSE Wired: Cyber Security Leaders series, Rehan Jalil, president of products and technology at Veeam, joins John Furrier to discuss the launch of AI Commander. Following Veeam’s acquisition of Securiti.AI, Jalil explains how the convergence of data risk and AI risk necessitates a new "trust layer" for the enterprise. He details how AI Commander provides a necessary harness for autonomous agents, allowing organizations to embrace AI adoption with the required visibility, protection and resilience to manage autonomous behavior in real time.
The conversation explores the three core pillars of the new solution: detect, protect and undo. Jalil breaks down the concept of "precision undo," a specialized rollback capability that remediates specific errors or malicious actions made by AI agents without disrupting overall data systems. By leveraging a "Data Command Graph" to identify toxic data combinations and unifying live and backup data visibility, Veeam is enabling businesses to monitor the machine-speed activities of agents while protecting their most valuable proprietary assets.
>> Welcome back. I'm John Furrier, host of theCUBE, here at the NYSE CUBE Studios. Of course, we also have our studio in Palo Alto, California, connecting Silicon Valley and Wall Street, bringing tech and capital markets together as part of our new NYSE Wired program, a CUBE original. It's an open network of leaders sharing insights, and our cybersecurity leaders come on in. We have a three-time CUBE alumni, Rehan Jalil, here from Securiti.AI, now president of products and technology at Veeam, founder and CEO of Securiti. A recent acquisition, and big news hitting today on AI Commander. First of all, congratulations on one, the acquisition, and two, the launch of AI Commander. Thanks for coming on.
Rehan Jalil
>> Thank you so very much. It's the fourth time here with you. It's always a great experience, great discussions. Super excited to be talking about Agent Commander and sharing it with your audience.
John Furrier
>> Let's do a deep dive on the product. We just had Anand, the CEO of Veeam, lay out the high level, and he nailed all the key points. Love the transformation story of Veeam. I call it the iPod to the iPhone: you had DR, now you have a much bigger product, unified, hyper-converged, whatever you want to call it. But it really has a lot to it. So let's get into AI Commander. As president running the technology teams over there, what is this? What is AI Commander, and what does it mean to the 500,000 customers that Veeam has?
Rehan Jalil
>> Yeah, in fact, Veeam has about half a million customers, and a lot more beyond that, and they have a common aspiration. What is that aspiration, John? As you know, everyone wants to get AI agents and AI adopted, because nobody wants to be left behind. At the same time, they know that unless they feel comfortable, unless they have created a harness in which they feel comfortable letting these AI agents loose, they just don't want to turn them on, right? Or if they do, they're going to pay for it. So what we've been envisioning is the overall solution that is actually needed, not just from a visibility and observability lens, not just from a protection lens, and not just from a resilience lens like recovery, but all of them. Why? Because if you talk to many of our customers, they want all of it. They want to know exactly what's going on in their environments: which AI agents are active, which are good agents, which are bad agents, which could be malicious agents or rogue agents. They just want to know. Why? Because they want to make sure they only allow the ones they want to allow, and the rest they can shut down. So, protections against it. Some they can simply shut down, but for some they want more granular controls on the things the agent can and cannot do. It's legit, it's allowed into the enterprise, but they still want to put guardrails around it. So that's the protection side. But they also know that agents will make mistakes. Why? Because they're all probabilistic models, not deterministic. They will make mistakes even if they're intended to do all the right things. Or frankly, some agents could go rogue or are designed to be rogue. So when they do make a mistake, you can't afford not to have a way to recover from it. That's what we're calling undo: how do you undo the mistakes or mishaps that an AI agent makes, and can you enable that?
And that's the beautiful story that comes together from Securiti.AI's AI visibility, data visibility, data security and controls, and the strong technology that Veeam has on the recovery side of things. But you do all of that together in a precise manner. You cannot use a sledgehammer, because if an agent touches one file or one data object and messes up that data object, you have to undo only that part. So that's really the overall philosophy and the overall technology that we have built here, very rapidly.
John Furrier
>> I love the sledgehammer example, because it implies crushing with a blunt tool when we're in a systems game. Anand was chatting with me before this interview and he highlighted the core issue in the industry, and I'll read this because I think it's important: as enterprises deploy agents operating at machine speed, data risk and AI risk converge. And the key point is that traditional backup, security and governance tools were not built to monitor and remediate autonomous AI behavior in real time. Okay, autonomous AI behavior in real time. The world has moved to real time. Security professionals have been dealing with inbound threats, data, the control plane. This is generational, it's well established, but now it moves to a whole other level, where everything converges and unifies. This is the AI Commander solution, right?
Rehan Jalil
>> That is correct. So think of it through the lens of an evolution of the data security and cybersecurity journey, because now you have to think about how to secure the identity of an AI agent: what data it touches, what permissions it gets, what activities it's allowed to do, basically having full visibility and control. Cybersecurity has to evolve because these agents, unlike humans, don't have to read one file at a time. They can read a hundred files at a time, and they can take actions at blazing speed. So cybersecurity has to evolve, identity and data security have to evolve, which we are evolving as part of this Agent Commander. Then resiliency has to significantly evolve, data protection has to significantly evolve, because it is no longer about bulk backups and recovery, but about precision. The way I would put it is, there were drivers on the resilience side, for instance, and there still are. Ransomware can happen, and if it does, all your data can be gone or locked up. Of course, people put security around it, because nobody wants malware coming in to do the ransomware. Now flip the coin: CIOs, and frankly all the business owners, are inviting these agents in. They want the agents to come in and do the job. So there will be tens of thousands of them. How do you know they have the right rulebook implemented, the guardrails implemented, and that they're playing within the framework in which the company really wants them to play?
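One way to picture the kind of granular guardrails described here, rules about what a legitimate agent can and cannot do and what data sensitivity it may touch, is a policy check evaluated before every agent action. This is a minimal illustrative sketch, not Veeam's or Securiti.AI's actual API; all names, rules and sensitivity levels are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentAction:
    agent_id: str       # identity of the AI agent
    operation: str      # e.g. "read", "write", "delete"
    resource: str       # data object the agent wants to touch
    sensitivity: str    # classification of that object

# Illustrative rulebook: per-agent allowed operations and a ceiling
# on the data sensitivity the agent may touch.
RULEBOOK = {
    "support-bot":   {"operations": {"read"},          "max_sensitivity": "internal"},
    "finance-agent": {"operations": {"read", "write"}, "max_sensitivity": "confidential"},
}

SENSITIVITY_ORDER = ["public", "internal", "confidential", "restricted"]

def is_allowed(action: AgentAction) -> bool:
    """Permit the action only if the agent is known, the operation is in its
    rulebook, and the resource's sensitivity is within the agent's ceiling."""
    rules = RULEBOOK.get(action.agent_id)
    if rules is None:
        return False  # unknown (possibly rogue) agent: deny by default
    if action.operation not in rules["operations"]:
        return False
    return (SENSITIVITY_ORDER.index(action.sensitivity)
            <= SENSITIVITY_ORDER.index(rules["max_sensitivity"]))

print(is_allowed(AgentAction("support-bot", "read", "faq.md", "internal")))    # True
print(is_allowed(AgentAction("support-bot", "delete", "faq.md", "internal")))  # False
print(is_allowed(AgentAction("mystery-agent", "read", "hr.db", "public")))     # False
```

The deny-by-default branch for unknown agents mirrors the distinction drawn in the conversation between agents an enterprise has deliberately admitted and everything else.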
John Furrier
>> It's interesting-
Rehan Jalil
>> And Agent Commander is designed to do that holistically.
John Furrier
>> Rehan, it's interesting. If people think ransomware is bad, which it is, it's really bad, you ain't seen nothing yet, as they say, when it comes to AI, with all the shadow AI going on. Remember shadow IT? A lot of shadow AI is implementing AI out in the wild, and you need look no further than Clawd shutting down OpenClaw recently because they were exfiltrating data and distilling their IP. This means an enterprise could be at risk at any given time, within seconds. That's really the threat. It's not ransomware in a similar construct; it's more impactful from a collateral damage standpoint with these kinds of attacks.
Rehan Jalil
>> The reason is very simple. We always used to worry about insider threat, and an agent is an insider. Now, there are some agents you would never want as insiders, so you have to block them completely. You have to make a distinction between what's a legit agent and what you would never want in your enterprise. Of course you have to do that. But for the ones that came into your enterprise and are doing useful work for you, or that you think are going to do useful work, let's say every employee has a few of them: you'll have tens of thousands of these instances of automated things working in your environment. That's where completely different requirements have popped up, and they're not disconnected. If somebody wants a trust layer before they can say, "Let me go ahead and turn on all these agents," that trust layer has observability, visibility, security controls, protection against AI risks, and then of course recovery. That's when-
John Furrier
>> I like how you brought data security and cybersecurity together, because I think the industry really is converging there. One of the things AI Commander talks about in the release, and what you guys are sharing, is that there are three primary pillars: detect AI, protect AI and undo AI. Break that down for us, because we see the threats. It's very clear to me that agents can go rogue, doing these things in seconds, modifying behavior, and you don't know it. An agent basically becomes an insider threat overnight when you let it in on its own, right? It's like letting an insider in without knowing they're an insider. So take me through those three pillars: detect AI, protect and undo.
Rehan Jalil
>> That's great. So let's talk about detect AI. Detect AI is not just about detecting which AI agents are getting activated, used or connected into your environment. It's not just that. It is actually understanding the entire context around every AI agent that is getting activated in your environment. What is meant by context? First, you need to understand, of course, what AI is there, so you can understand the risk. Second, you need to understand what data it can touch and what permissions it has, right? So, understanding permissions. Third, you need to understand who is turning it on. If a human is turning it on, is it tied to that human, meaning it will inherit the permissions of that user? Fourth, you need to understand whether it's going to touch anything sensitive, and whether that's within the guardrails and within the rulebook. Then you have to understand what activity it is doing. And above that, you have to understand what risk it's creating. Is it a compliance risk? Is it an access risk? Is it a leakage risk? Based on the activity, you may have to infer, "Oh, this is malicious activity," or that some mistake is going on. So when we say detect AI, it is not just detecting a model or an agent; it's understanding everything around that AI agent that you need to know to decide if it's a risk. And because you mentioned going one layer down on the tech side: we have something called the Data Command Graph. It's like a knowledge graph, so it has an inherent ability to find what we call toxic combinations. There are certain things an agent could do that are just toxic. So detection is not only about finding all the context, but finding what is bad inside it. That is the toxic combination part.
So when we say detect AI and AI agents, it's all of it, including the relationships the agent has with everything else, finding what is toxic and what risk this AI agent is actually creating. That's just the one pillar, the detect AI piece that we talk about.
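The "toxic combination" idea can be sketched with a small relationship graph: facts about an agent that are individually harmless become risky in combination, such as the same agent being able to both read sensitive data and reach an external destination. This is a hypothetical illustration of the concept, not the actual Data Command Graph; the facts, relations and rules are invented.

```python
from collections import defaultdict

# Edges: (subject, relation, object) facts about agents in the environment.
FACTS = [
    ("sales-agent",  "can_read",  "customer-pii"),
    ("sales-agent",  "can_call",  "external-api"),
    ("report-agent", "can_read",  "customer-pii"),
    ("report-agent", "can_write", "internal-wiki"),
]

# A toxic combination: read access to sensitive data plus an external
# call path on the same agent forms a potential exfiltration route.
TOXIC_PAIRS = [
    (("can_read", "customer-pii"), ("can_call", "external-api")),
]

def find_toxic_agents(facts, toxic_pairs):
    """Group capabilities per agent, then flag agents holding both halves
    of any toxic pair."""
    by_agent = defaultdict(set)
    for subject, relation, obj in facts:
        by_agent[subject].add((relation, obj))
    flagged = []
    for agent, capabilities in by_agent.items():
        for a, b in toxic_pairs:
            if a in capabilities and b in capabilities:
                flagged.append(agent)
    return flagged

print(find_toxic_agents(FACTS, TOXIC_PAIRS))  # ['sales-agent']
```

Note that report-agent holds one half of the pair and is not flagged; the risk signal comes from the combination, which is what a plain per-permission scan would miss.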
John Furrier
>> On this undo AI, which I love, Anand corrected me because he said, no, it's precision undo. Define what that means. What is precision undo AI? What's a precision rollback?
Rehan Jalil
>> Maybe I'll give you an example. Let's say you're writing a really long document and you made a one-sentence mistake somewhere. You don't want to roll back two versions and redo the entire Word document; you just want to fix the specific issue you made. That's an analogy, of course. Now think about your data systems: you have millions of files, and thousands of users are touching those files on a daily basis. When you want to recover, you cannot afford to recover all the files or all the data carte blanche. If you think a mistake has been made by an agent, you first need to know which files that agent touched and where you think it made a mistake, whether by deleting, editing or some other action. Based on that, you want to precisely recover only the set of files or data objects you think were mistakenly edited, deleted or corrupted. Only those should be recovered, because if you bring all the data back, you're going to override the work of other people who have been doing the right things all along. That precision requires tying recovery to visibility into the activity of AI agents, tracking all of it, and then having the ability to undo the mistake for only those particular data objects along the journey. That's a sharp contrast to the typical approach of recovering everything back without knowing surgically what needs to be fixed. That's why precision is important, and that's what Securiti.AI's Data Command Graph and AI visibility, combined with Veeam's strong capabilities on the resilience side, enable. Otherwise it's just not possible; the tight coupling between the two is what enables it. The other thing I'll highlight is that traditionally, backup data and live data have been thought of in separate terms.
Frankly, to solve this problem, you have to look at one pane of glass. You have to have full visibility into your live data and your backup data in one place. If you don't have that, you really cannot solve this problem, because you need live data visibility and activity along with backup visibility, and then you need to manage it in an automated fashion. That's the very unique part of this platform, which we've called the Data Command Center: it covers both live and backup data.
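The precision-undo mechanics described above, tracking which objects a specific agent touched and restoring only those, can be sketched as a write log that snapshots each object just before an agent modifies it. This is a toy in-memory model under invented names, not Veeam's actual implementation; a real system would work against versioned backup storage rather than a Python dict.

```python
import copy

class PrecisionUndoLog:
    """Snapshot each data object just before an agent modifies it, so one
    agent's changes can be rolled back without touching anyone else's work."""

    def __init__(self, store: dict):
        self.store = store   # live data: object name -> contents
        self.log = []        # entries of (agent_id, object, prior_contents)

    def agent_write(self, agent_id: str, obj: str, new_value):
        # Record the pre-write state (None if the object did not exist yet).
        self.log.append((agent_id, obj, copy.deepcopy(self.store.get(obj))))
        self.store[obj] = new_value

    def undo_agent(self, agent_id: str):
        # Walk the log backwards, restoring only this agent's writes.
        for logged_agent, obj, prior in reversed(self.log):
            if logged_agent == agent_id:
                if prior is None:
                    self.store.pop(obj, None)  # agent created it; remove it
                else:
                    self.store[obj] = prior
        self.log = [entry for entry in self.log if entry[0] != agent_id]

data = {"report.txt": "Q3 numbers", "notes.txt": "draft"}
log = PrecisionUndoLog(data)
log.agent_write("human-alice", "notes.txt", "final draft")   # legitimate edit
log.agent_write("rogue-agent", "report.txt", "corrupted!!")  # agent mishap
log.undo_agent("rogue-agent")
print(data)  # {'report.txt': 'Q3 numbers', 'notes.txt': 'final draft'}
```

The point of the example is the contrast with bulk restore: rolling the whole store back to an earlier snapshot would also have wiped out the legitimate edit to notes.txt.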
John Furrier
>> I love that, real-time data coming in, because that's also going to feed the system, and it's one of the benefits of Commander. I guess my question is that being a unified layer requires some level of independence. How independent are you guys of the model providers? Do you rely on LLMs, SaaS integrations? Take us through that unification layer. What does it mean from an integration standpoint, and how do you maintain it?
Rehan Jalil
>> Yeah, so there are two levels. One is our own AI use cases: we have a variety of models that we use. Some we built ourselves, like the SLMs for classification and many other things. And some, of course, we use from the providers in a frozen-model way, where you're not really sharing any information, or using them in a way that information is not getting out anywhere. So that's using AI, whether homegrown or provided by the model providers. Then there are the integrations we have enabled. The Securiti.AI platform already had hundreds of connectors for live data understanding, and with these integrations we're announcing with the backup data, where Veeam has very strong capabilities, we are combining it all together in one graph engine.
John Furrier
>> What about agent frameworks? Are there blind spots in these agent frameworks?
Rehan Jalil
>> So agent frameworks are designed to let you quickly build agents and do things. They're of course not designed to provide all the security, the resiliency, the recovery. They don't have an understanding of the enterprise environment: what rules the enterprise should be using, whether the permissions are right or wrong, what data is considered classified or not. That's not what an agent framework is going to give you. What an agent framework gives you is your use cases, and frankly, they're phenomenal and they should be used. But the crown jewels of an enterprise are its proprietary information and its internal rules, which are mostly implicit, not explicit. The implicit rules live in the data systems themselves, and some are explicit governance and compliance rules. How do you make sure the agent builder frameworks actually live within the bounds of those rules? That's where our partnership with those agent builder frameworks comes in: we provide the framework and the rulebook on how these agents should behave, protecting the proprietary crown jewels of the company.
John Furrier
>> We've got a lot of customers and a lot of ecosystem partners at Veeam, a company we've been following for 17 years on theCUBE. I guess my final question to wrap up: as president of products and technology, you have the keys to the kingdom at Veeam. What's on the roadmap? Share a little context on where this goes next. What's on your to-do list? What are you optimizing for? What's your big to-do?
Rehan Jalil
>> Yeah, I think we're super jazzed about innovating for where the world is going. Where the world is really going, as we all can see, is a rapid evolution of new ways of utilizing AI. And frankly, in the enterprise, AI without the information of the enterprise is not really AI. A model is meaningless if it doesn't act upon the enterprise's proprietary data, which sits across hundreds of different types of applications. So all our focus and innovation is going to be on how we can provide the trust layer, which means security, privacy, resilience and governance: being a company that can provide that layer of trust, with all these important elements, and be the partner for the enterprise. In fact, there's a lot to do. This AI thing, as we all can see, is just getting started and is in fast flux. So we're making sure we keep up with the different models and the different deployment models coming through, so that we continue to be that trusted layer. And I can tell you that some of the largest enterprises on the planet are partnering with us in this construct. You can pick any vertical, and you will find some of the largest organizations who want this trust layer to be there. They want partners like us to go along on the journey and enable and accelerate their adoption of AI in a safe manner.
John Furrier
>> Rehan, great to have you on, and congratulations as founder and CEO of Securiti.AI on the successful acquisition by Veeam. This is a great milestone. Again, it's the beginning of the journey here together, and a successful launch of AI Commander. Congratulations on the news, and thanks for coming on and spending valuable time with us.
Rehan Jalil
>> It is always a pleasure talking to you. It's always fun. And it's the fourth time and maybe the next time is coming soon.
John Furrier
>> Keep going. You guys are hot. Love the position, love what you're doing. Again, capabilities integrated and unified where it matters most. This is where you're starting to see hyperconvergence and unification come in, where everything comes together. So congratulations, and we'll keep in touch.
Rehan Jalil
>> Thank you so much. Appreciate it.
John Furrier
>> I'm John Furrier with theCUBE. We are here at our CUBE studio at the New York Stock Exchange, and of course Palo Alto and Silicon Valley, connecting into Wall Street, where capital markets and technology converge. The market is technology. AI is integrating into every vertical and every part of the stack. And of course, we're doing our part to bring you all the data here. Thanks for watching.