This discussion examines securing the artificial intelligence factory: protecting AI workloads, data pipelines, models and supply chains. Steve Kenniston, a senior cybersecurity consultant in product marketing at Dell Technologies, brings deep security and product experience to a focused conversation on AI infrastructure security.
Kenniston outlines model and data pipeline protection, identity management for agentic AI, supply chain integrity and integrated stack security. He emphasizes prioritizing zero-trust identity controls and treating AI as a workload with security built in rather than bolted on. The session explores secure supply chains, hardware roots of trust, model governance, telemetry and operational readiness for enterprise AI deployments.
Dave Vellante of theCUBE Research hosts the interview and frames the technical risks and operational challenges of enterprise AI. He highlights supply-chain complexity and the human factors in incident recovery, such as staffing fatigue and coordinated response readiness. The discussion addresses identity management, zero-trust strategies, model security, data protection, telemetry and governance to inform security leaders and architects.
Steve Kenniston, Dell | Securing the AI Factory
>> We're entering a new phase of enterprise computing, what we call the AI Factory. These are not traditional data centers; rather, they're systems that are designed to produce intelligence at scale. Energy, compute, and data go in, and intelligence comes out the other end in the form of tokens. And we believe this changes everything because now you're not just protecting infrastructure, you're protecting data pipelines, models, supply chains, and the integrity of that very intelligence itself. Not only are organizations focused on making sure the AI is secure, they're also looking to partner with the technology industry and particular vendors to architect security into these systems from day one versus, of course, bolting it on after the fact. Welcome to Securing the AI Factory, made possible by Dell and Intel. And joining us to break this down is Steve Kenniston from Dell. Steve, good to see you again.
Steve Kenniston
>> Hey, Dave. Great to be back.
Dave Vellante
>> Thanks for coming into the studio. It's always a pleasure to have you here face-to-face. Let's start with AI. I mean, how does it change the attack surface? And what's different, Steve, from just securing everyday applications?
Steve Kenniston
>> Yeah, that's a great question, Dave. I think that in prior years, when you built an application, you thought about the security of that application; maybe it had one road in, or one data lake that you were securing. AI changes the whole game. There's the model inferencing. There's the model training data. There are the systems where people can do things like prompt injection. There's identity management that needs to be thought about. And these things are changing so fast. For example, now we have agentic AI that's working its way into the model, and how do you make sure that the models that you're building and the agentic capabilities don't take over what's happening? So, there's a whole group of things that actually change from an attack surface standpoint that you want to make sure you have locked down as you're building out this brand new application. Every new application has a new attack surface.
Dave Vellante
>> You mentioned agentic, and I want to come back to that because I want to ask you about the human side and the non-human side. But before I do, explain in a little bit more detail. So, if I'm securing an app, like a CRM app or a service management app, what do you have to do and how is that different in AI?
Steve Kenniston
>> So, it's really a function of the app, right? I mean, who has access to the app and are they allowed to have access? For example, a CRM application. The sales team had access to it. Maybe marketing didn't, because they didn't want anything being done with the data, so you kept those people out. So, you had good identity management: who has access. You would make sure that the data was protected, so it's resilient and can be brought back if something happens to it. And there really wasn't much resultant information to worry about: sales reps might go in, put in some information, put in what happened at the meeting, but it's a data repository. AI is a bit different because you're asking it questions. There are going to be results from those questions. You're going to make decisions and you're going to do things based on those results. Those results come from how that model was trained. There was the training data, there's the other data that goes into it from the big data lake. There are all kinds of inputs into an AI model or an AI Factory that you might not have with your traditional applications.
Dave Vellante
>> Got it. Okay. So, let's go back to this human element and you mentioned identities and agents. One of the big themes at RSAC this year, actually the theme of the conference was the power of community. And ironically, all the discussion was around securing agents. So, what's the identity of an agent? So, what's the human side of the equation? How does that change the way we should think about it?
Steve Kenniston
>> I'm really glad you brought that up because I'm pretty passionate about the human side of security in general. I think that folks who go through the process of recovering from a cyber attack are infinitely valuable. Not only have they helped the business recover from that attack, but going through that process, seeing how the attack behaved, knowing what to do, knowing what technology would have helped, what they had and how it did help, really makes a difference. It makes you infinitely valuable, not only to the company that you're in, but maybe even to the point of being poached from that company. And I don't think that enough organizations take the time to think about things like that, the human side of what happens when these things go on. Because you think about recovering from a cyber attack, you hear about these businesses that take months to recover. Yeah, that's getting the data back. But just getting the business operations up and running, maybe it's 24, 36 hours to get those applications back online. Are security specialists focused on solving that problem for those full 24, 36 hours? Are they sleeping? Are they sleep-deprived? Are they making mistakes? You've got to make sure, as a company, you're paying attention to what's going on. Do you have good backup teams? And I don't mean backup as in backing up the data, but good teams backing these people up. Do you have enough people? Are you giving these folks the right kinds of breaks that they might need in an event? Then there's having external vendors that might help you. Dell has a great incident response and recovery program that can help you when something happens and get your information back. So, there's a whole aspect to the human side of things that makes a difference.
Dave Vellante
>> So, again, we can't say it enough, this is a significant change in the way folks need to think about security. About a little more than 10 years ago, I interviewed Robert Gates, who's the former director of the CIA and he sits on a lot of boards of directors. And at the time it was very clear that security had become a board-level topic. Does this change the way in which boards need to be thinking about security?
Steve Kenniston
>> I think it might from the standpoint of... I mean, at the end of the day, I always say from a security standpoint, AI is just a workload, right? And if you have good cyber hygiene in your environment, in your business, and you're using those good best practices to secure that, you're already far ahead of the game. Now, there's a lot of nuance to an AI environment that might change. But for example, there's no special MFA specifically for AI, right? There might be specific places where you put MFA that might not be the same as traditional applications, right? However, as you start looking at agentic and start looking at additional things that can happen, there might be some call-outs that you might need to make to regulatory boards to make sure that you're staying compliant. So, there are some things that the board needs to pay attention to when they're putting in a new application like this. What's the data exposure? Are we at risk? That sort of thing.
Dave Vellante
>> So, that's interesting. I mean, MFA with an agent, the agent has an authenticator. Remember you used to walk around with one of those, the RSA? It would-
Steve Kenniston
>> Auto-generate new passwords.
Dave Vellante
>> Right. A new code every, whatever, every 30 seconds. So, yeah, our agent's going to be doing... Every nanosecond, it gives you a new code. All right. The Dell AI Factory, it's an integrated stack. So, from a security standpoint, you've got compute, you've got networking, you've got storage, of course data, you're securing models and you're orchestrating that whole thing. So, how should we think about that? How does Dell think about that from a security standpoint?
Steve Kenniston
>> I think the right question to ask is not only how Dell thinks about it, but how customers should think about it too, right? So, I think 10 years ago, Dave, and I'd be interested in your thoughts, you didn't think about your server as a security product, but today you have to, right? If you're not, you might be missing the boat. At Dell, we integrate security into everything that we do, right from the supply chain, through the chips, right to the device that gets delivered to you. And the Dell AI Factory even more is an integrated set of solutions where we've done rigorous testing, not only on the devices themselves to make sure that they're secure, but on the integration of the whole stack: what does it look like and is it secure? I like to think that most systems from any vendor are fairly secure, right? Where security breaks down in a system like this is where security falls between the cracks, you might say. By having an integrated system that's been tested, where telemetry is consistent across all the devices and you can rely on it and know what's going on, it makes it a little bit easier. And right now, complexity is one of the hard things for customers to deal with, especially when you've got a lot of moving parts in an AI workload.
Dave Vellante
>> Well, and you asked me what I think about it. I mean, the supply chains are just exploding with complexity. You've got new fabs being built in Arizona, and the catalyst for that was definitely to have a more secure supply chain in the United States, for instance. You've got all this discussion about rare earths. You've got software. You've got these high-NA EUV machines coming out of ASML that are $380 million a piece. So, very, very complicated supply chains. AI demand has increased the supply chain risks. So, how should customers be thinking about that piece of the equation? Should they be worried? Should they be concerned? How does that affect which partner they work with? What are your thoughts on that?
Steve Kenniston
>> I do a lot of briefings with customers specifically. And I would say over the last 24 months, I probably had 300 briefings and maybe 10 or 15 of those folks would ask about a supply chain. I'm getting that question almost in every single briefing now. Customers really want to know about the supply chain. And I think that's really solid because I don't think enough folks thought about what that supply chain looked like to make sure there was security built in right from the start before they even got their device. And I think, as you said, the demand has changed so significantly that, for example, the chip shortage is causing a lot of challenges when it comes to acquiring technologies to make sure that you have a good AI environment, right? And so, in some of those cases, you might think, "Well, I'll just go out and get chips where I can." That instantly puts risk into your environment. One of the things that Dell pays attention to and has worked hard at is making sure that from end-to-end, not only do we have the inventory, but we can also make sure that that inventory is a part of that secure component that you're buying without injecting that additional risk.
Dave Vellante
>> Well, and of course, memory supply is a huge issue right now. And with Dell's breadth and depth, you would think that you're in a better position than many firms in terms of securing things like NAND supply and other memory. The narrative around security has always been that the perimeter has vaporized. There are no perimeters. Is traditional perimeter security even relevant in this AI Factory era?
Steve Kenniston
>> I would say it's not irrelevant. However, I think the perimeter has changed, much like the attack surface has grown and changed. And this is a great place to bring up that buzzword bingo favorite, zero-trust, that everybody liked to talk about in years past. Zero-trust is very relevant because it makes you think deeply about identity management, about where my risks are coming from and how to identify those risks, right? And those types of things are infinitely more important today in an AI world than they were before, such that you're keeping your environment secure.
Dave Vellante
>> Okay. So, if I infer correctly, zero-trust, you don't really look at it as a buzzword. It's actually something that's real, that while people might struggle to operationalize it, certainly NIST has frameworks, and if you apply those, I guess it's a journey and that's a bromide, but still, it's something that security practitioners really need to pay attention to. It's gone from buzzword to pretty much a fundamental component of a security strategy.
Steve Kenniston
>> Yeah. You start thinking about what zero-trust actually means. And when you boil it down to things like identity management, for example, you start to think about more than just the firewall and keeping the arrows from coming through. You're thinking about, all of a sudden, who has access? Not only who, but when does that person have access? From what locations do they have access? These tools are getting much, much smarter about keeping your organization safe, right? Whereas it used to be, "Hey, I might travel someplace and I might log in. I might have permission to do so." Well, maybe someone else does that, but they log in from another country that shouldn't have access to this information. Now, you're starting to take that depth of security to the next level.
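Kenniston's who, when, and where checks are the essence of a zero-trust policy decision: deny by default, and grant access only when every contextual signal passes. A hypothetical sketch (the roles, countries, and business hours below are illustrative values, not a real policy):

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    role: str
    country: str   # e.g. ISO code derived from the source IP
    hour: int      # 0-23, time of the request

# Hypothetical policy values, for illustration only.
ALLOWED_ROLES = {"sales"}
ALLOWED_COUNTRIES = {"US", "CA"}
BUSINESS_HOURS = range(6, 22)

def allow(req: AccessRequest) -> bool:
    """Deny by default: every request must pass the who, where, and when checks."""
    return (
        req.role in ALLOWED_ROLES             # who has access
        and req.country in ALLOWED_COUNTRIES  # from what location
        and req.hour in BUSINESS_HOURS        # at what time
    )
```

A real deployment would pull these signals from an identity provider and evaluate them on every request, not just at login, which is the continuous-verification half of zero-trust.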
Dave Vellante
>> My last question, is the industry still treating AI security as an add-on instead of a fundamental design principle?
Steve Kenniston
>> I think that I'm hearing more and more from our services organization that when they go into a customer environment and start talking about implementing AI, about 85% or 90% of those projects get stopped because the security team hadn't been involved up until that point. And what that says is that security is still being bolted on. And I think the important message to customers is to think about security as a functional part of this new system, this new workload that they're deploying, because the last thing you want to do is get to the five-yard line and have someone from security go, "Stop, stop, stop. We haven't vetted this. We haven't looked through this. We don't understand what's going on." You want to make sure that that's a part of it. The nice thing about the AI Factory is that it's built with security built in, so that's a good step. And so, if you get asked by the security teams, "Tell me a little bit about it," you can talk about the Dell supply chain, you can talk about what we do in the factory to build in all these things. You can talk about our secure BIOS, our hardware root of trust. That's integrated into all of our systems, but unless you're talking to somebody about those things, they're going to say, "Hey, wait, let's talk about this first."
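The verify-before-trust pattern Kenniston describes, a secure BIOS and hardware root of trust checking firmware before it runs, applies one layer up in software too: for example, pinning the digest of model weights and refusing to load anything that doesn't match. A hypothetical sketch (the function names and digest-pinning scheme are illustrative, not Dell's actual mechanism):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a model artifact in chunks so large weight files needn't fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def load_if_trusted(path: Path, expected_sha256: str) -> bytes:
    """Refuse to load model weights whose digest doesn't match the pinned value."""
    actual = sha256_of(path)
    if actual != expected_sha256:
        raise ValueError(f"integrity check failed for {path}: got {actual}")
    return path.read_bytes()
```

The same check can gate every hop in the pipeline, from the artifact registry to the inference server, so a tampered model fails closed instead of silently serving answers.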
Dave Vellante
>> Well, during the cloud era, the cloud was the first line of defense. Now, the AI Factory is the first line of defense. So, thank you, Steve, for sharing with us some of your thoughts and thanks for the good work that you guys are doing at Dell. Appreciate it.
Steve Kenniston
>> My pleasure. Thanks for having me, Dave.
Dave Vellante
>> You're welcome. All right, you're watching Securing the AI Factory, made possible by Dell and Intel. We're going deep into the infrastructure stack, all the way up through. And then, we've got a special conversation from RSAC. Keep it right there. I'm Dave Vellante. Thanks for watching.
>> We're entering a new phase of enterprise computing, what we call the AI Factory. These are not traditional data centers, rather they're systems that are designed to produce intelligence at scale. Energy, compute, and data goes in, and intelligence comes out the other end in the form of tokens. And we believe this changes everything because now you're not just protecting infrastructure, you're protecting data pipelines, models, supply chains, and the integrity of that very intelligence itself. Not only are organizations focused on making sure the AI is secure, they're also looking to partner with the technology industry and particular vendors to architect security into these systems from day one versus, of course, bolting it on after the fact. Welcome to Securing the AI Factory, made possible by Dell and Intel. And joining us to break this down is Steve Kenniston from Dell. Steve, good to see you again.
Steve Kenniston
>> Hey, Dave. Great to be back.
Dave Vellante
>> Thanks to coming into the studio. It's always a pleasure to have you here face-to-face. Let's start with AI. I mean, how does it change the attack surface? And what's different, Steve, from just securing everyday applications?
Steve Kenniston
>> Yeah, that's a great question, Dave. I think that in prior years, you had folks who were highly focused, when you built an application, you thought about the security of that application that maybe that application had maybe one road in or one data lake that you were securing to make sure it was secure. AI changes the whole game. There's the model inferencing. There's the model training data. There are the systems where people can do things like prompt injection. There's identity management that needs to be thought about. And these things are changing so fast. For example, now we have agentic AI that's working its way into the model and how do you make sure that the models that you're building and the agentic capabilities don't all take over what's happening? So, there's a whole group of things that actually change from an attack surface standpoint that you want to make sure you have locked down as you're building out this brand new application. Every new application has a new attack surface.
Dave Vellante
>> You mentioned agentic, and I want to come back to that because I want to ask you about the human side and the non-human side. But before I do, explain in a little bit more detail. So, if I'm securing an app, like a CRM app or a service management app, what do you have to do and how is that different in AI?
Steve Kenniston
>> So, it's really a function of the app, right? But I mean, who has access to the app and are they allowed to have access? And a lot of times it was... For example, a CRM application. The sales team had access to it. Maybe marketing didn't because they didn't want anything being done with the data, so you kept those people out. So, you had a great identity management, like who has access. You would make sure that the data was protected, so it's resilient and can be brought back if something happens to it. And you would just make sure that there really wasn't any resultant information. Sales reps might go in, they might put in some information, they might put in what happened at the meeting, but it's a data repository. AI is a bit different because you're asking it questions. There's going to be results from those questions. You're going to make decisions and you're going to do things based on those results. Those results come from how that model was trained. There was the training data, there's the other data that goes into that from the big data lake. There's all kinds of inputs into an AI model or an AI Factory that you might not have with your traditional applications.
Dave Vellante
>> Got it. Okay. So, let's go back to this human element and you mentioned identities and agents. One of the big themes at RSAC this year, actually the theme of the conference was the power of community. And ironically, all the discussion was around securing agents. So, what's the identity of an agent? So, what's the human side of the equation? How does that change the way we should think about it?
Steve Kenniston
>> I'm really glad you brought that up because I'm pretty passionate about the human side of security in general. I think that folks that go through the process of recovering from a cyber attack are infinitely valuable. Not only have they helped the business recover from that attack, but that going through that process, seeing how the attack behaved, knowing what to do, knowing what technology would have helped, what they had, how it did help and going through that whole process really makes a difference. It makes you infinitely valuable, not only to the company that you're in, but also maybe as far as being poached from that company. And I don't think that enough organizations take enough time to think about things like that, the human side of what happens when these things go on. Because you think about recovering from a cyber attack, you hear about these businesses that it takes months to recover. Yeah, that's getting the data back. But just getting the business operations up and running, maybe it's 24, 36 hours to just get those applications back online. Are security specialists focused on solving that problem for those full 24, 36 hours? Are they sleeping? Are they sleep-deprived? Are they making mistakes? You got to make sure, as a company, you're paying attention to what's going on. Do you have good backup teams? And I don't mean backup as in backing up the data, but good teams backing these people up. Do you have enough people? Are you giving these guys the right kinds of breaks that they might need in an event? There's having external vendors that might help you. Dell has a great incident response and recovery program that can help you when something happens and get your information back. So, there's a whole aspect to the human side of things that makes a difference.
Dave Vellante
>> So, again, we can't say it enough, this is a significant change in the way folks need to think about security. About a little more than 10 years ago, I interviewed Robert Gates, who's the former director of the CIA and he sits on a lot of boards of directors. And at the time it was very clear that security had become a board-level topic. Does this change the way in which boards need to be thinking about security?
Steve Kenniston
>> I think it might from the standpoint of... I mean, at the end of the day, I always say from a security standpoint, AI is just a workload, right? And if you have good cyber hygiene in your environment, in your business, and you're using those good best practices to secure that, you're already far ahead of the game. Now, there's a lot of nuance to an AI environment that might change. But for example, there's no special MFA specifically for AI, right? There might be specific places where you put MFA that might not be the same as traditional applications, right? However, as you start looking at agentic and start looking at additional things that can happen, there might be some call-outs that you might need to make to regulatory boards to make sure that you're staying compliant. So, there are some things that the board needs to pay attention to when they're putting in a new application like this. What's the data exposure? Are we at risk? That sort of thing.
Dave Vellante
>> So, that's interesting. I mean, MFA with an agent, the agent has an authenticator. Remember you used to walk around with one of those, the RSA? It would-
Steve Kenniston
>> Auto-generate new passwords.
Dave Vellante
>> Right. A new code every, whatever, every 30 seconds. So, yeah, our agent's going to be doing... Every nanosecond, it gives you a new code. All right. The Dell AI Factory, it's an integrated stack. So, from a security standpoint, you've got compute, you've got networking, you've got storage, of course data, you're securing models and you're orchestrating that whole thing. So, how should we think about that? How does Dell think about that from a security standpoint?
Steve Kenniston
>> I think the right question to ask is not only does how Dell think about it, but how should customers think about it also, right? So, I think 10 years ago, Dave, and I'd be interested in your thoughts, you didn't think about your server as a security product, but today you have to, right? If you're not, you might be missing the boat. At Dell, we integrate security into everything that we do, right from the supply chain, through the chips, right to the device that gets delivered to you. And the Dell AI Factory even more is an integrated set of solutions where we've done some rigorous testing, not only on the devices themselves to make sure that they're secure, but as far as the integration of the whole stack, what does it look like and is it secure? I like to think about most systems from any vendor are fairly secure, right? Where security breaks down in a system like this is where security falls between the cracks, you might think about it. By having an integrated system that's been tested where telemetry is consistently the same through all the devices and you can rely on that and you know about what's going on, it makes it a little bit easier. And right now, complexity is one of the hard things for customers to deal with, especially when you've got a lot of moving parts in an AI workload.
Dave Vellante
>> Well, and you asked me what I think about it. I mean, the supply chains are just exploding with complexity. You've got new fabs that are being built in Arizona and that's definitely the catalyst of that was to have a more secure supply chain, for instance, in the United States. You've got all this discussion about rare earths. You've got software. You've got these high-NA EUV machines coming out of ASML that are $380 million a piece. So, very, very complicated supply chains. AI demand has increased the supply chain risks. So, how should customers be thinking about that piece of the equation? Should they be worried? Should they be concerned? How does that affect which partner they work with? What are your thoughts on that?
Steve Kenniston
>> I do a lot of briefings with customers specifically. And I would say over the last 24 months, I probably had 300 briefings and maybe 10 or 15 of those folks would ask about a supply chain. I'm getting that question almost in every single briefing now. Customers really want to know about the supply chain. And I think that's really solid because I don't think enough folks thought about what that supply chain looked like to make sure there was security built in right from the start before they even got their device. And I think, as you said, the demand has changed so significantly that, for example, the chip shortage is causing a lot of challenges when it comes to acquiring technologies to make sure that you have a good AI environment, right? And so, in some of those cases, you might think, "Well, I'll just go out and get chips where I can." That instantly puts risk into your environment. One of the things that Dell pays attention to and has worked hard at is making sure that from end-to-end, not only do we have the inventory, but we can also make sure that that inventory is a part of that secure component that you're buying without injecting that additional risk.
Dave Vellante
>> Well, and of course, memory supply is a huge issue right now. And with Dell's breadth and depth, you would think you're in a better position than many firms to secure things like NAND and other memory supply. The narrative around security has always been that the perimeter has vaporized; there is no perimeter anymore. Is traditional perimeter security even relevant in this AI Factory era?
Steve Kenniston
>> I would say it's not irrelevant. However, I think the perimeter has changed, much like the attack surface has grown and changed. And this is a great place to bring up that buzzword-bingo term, zero-trust, that everybody liked to talk about in years past. Zero-trust is very relevant in that it makes you think deeply about identity management, about where your risks are coming from and how to identify those risks, right? And those things are infinitely more important today in an AI world than they were before for keeping your environment secure.
Dave Vellante
>> Okay. So, if I infer correctly, you don't really look at zero-trust as a buzzword. It's actually something real. While people might struggle to operationalize it, NIST certainly has frameworks, and if you apply those, I guess it's a journey, and that's a bromide, but still, it's something security practitioners really need to pay attention to. It's gone from buzzword to pretty much a fundamental component of a security strategy.
Steve Kenniston
>> Yeah. You start thinking about what zero-trust actually means, and when you boil it down to things like identity management, for example, you start to think about more than just the firewall and keeping the arrows from coming in. All of a sudden, you're thinking about who has access. Not only who, but when does that person have access? From what locations do they have access? These tools are getting much, much smarter about keeping your organization safe, right? It used to be, "Hey, I might travel someplace and log in, and I might have permission to do so." But maybe someone logs in from another country where they shouldn't have access to this information. Now you're taking that depth of security to the next level.
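The context-aware checks Kenniston describes (not just who has access, but when and from where) can be sketched as a minimal deny-by-default policy check. This is illustrative only, not Dell's implementation; the resource name, roles, countries, and hours below are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    role: str
    country: str   # ISO country code of the login location
    hour: int      # 0-23, local server time

# Hypothetical policy: which roles may touch the model registry,
# from which countries, and during which hours.
POLICY = {
    "resource": "model-registry",
    "allowed_roles": {"ml-engineer", "sec-admin"},
    "allowed_countries": {"US", "CA"},
    "allowed_hours": range(6, 22),  # 06:00-21:59
}

def evaluate(req: AccessRequest) -> bool:
    """Deny by default; grant only when every contextual check passes."""
    return (
        req.role in POLICY["allowed_roles"]
        and req.country in POLICY["allowed_countries"]
        and req.hour in POLICY["allowed_hours"]
    )

print(evaluate(AccessRequest("alice", "ml-engineer", "US", 10)))  # → True
print(evaluate(AccessRequest("alice", "ml-engineer", "FR", 10)))  # → False (location)
print(evaluate(AccessRequest("bob", "contractor", "US", 10)))     # → False (role)
```

The deny-by-default shape is the point: a valid credential alone is not enough, which is the shift away from pure perimeter thinking that the conversation describes.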
Dave Vellante
>> My last question: is the industry still treating AI security as an add-on instead of a fundamental design principle?
Steve Kenniston
>> I'm hearing more and more from our services organization that when they go into a customer environment and start talking about implementing AI, about 85% or 90% of those projects get stopped because the security team hadn't been involved up until that point. What that says is that security is still being bolted on. And I think the important message to customers is to think about security as a functional part of this new system, this new workload they're deploying, because the last thing you want is to get to the five-yard line and have someone from security go, "Stop, stop, stop. We haven't vetted this. We haven't looked through this. We don't understand what's going on." You want to make sure security is part of it from the start. The nice thing about the AI Factory is that it's built with security in, so that's a good step. So if the security team asks, "Tell me a little bit about it," you can talk about Dell's supply chain, about what we do in the factory to build all these things in, and about our secure BIOS and hardware root of trust. That's integrated into all of our systems, but unless you're talking to somebody about those things, they're going to say, "Hey, wait, let's talk about this first."
Dave Vellante
>> Well, during the cloud era, the cloud was the first line of defense. Now, the AI Factory is the first line of defense. So, thank you, Steve, for sharing with us some of your thoughts and thanks for the good work that you guys are doing at Dell. Appreciate it.
Steve Kenniston
>> My pleasure. Thanks for having me, Dave.
Dave Vellante
>> You're welcome. All right, you're watching Securing the AI Factory, made possible by Dell and Intel. We're going deep into the infrastructure stack, all the way up. And then, we've got a special conversation from RSAC. Keep it right there. I'm Dave Vellante. Thanks for watching.