This theCUBE Research interview examines artificial intelligence (AI) infrastructure and model integrity during the Securing Your AI Factory segment at RSA Conference 2026. Mukund Khatri of Dell Technologies participates in the segment. Khatri outlines Dell's vision for the Dell AI Factory, covering rack-level architecture, the AI data platform and integrated engineering for power, cooling and supply chain security. He explains how Dell integrates compute, storage and networking with automation to deliver end-to-end AI infrastructure and services for enterprise deployments. Dave Vellante of SiliconANGLE hosts the conversation.
Key takeaways include designing security and recovery into the stack rather than bolting them on; adopting cyber-resilient racks and model-level protections; and enforcing observability, lineage and least privilege for agents. Khatri highlights Dell Enterprise Hub's containerized, signed models for supply chain assurance, warns about shadow AI and agent risks, and urges organizations to begin post-quantum cryptography transitions within the next two to three years.
Mukund Khatri, Dell | Securing the AI Factory
>> Hi, everybody. Welcome to our special segment here at RSAC 2026 at Moscone West. And this is Securing Your AI Factory, brought to you by Dell Technologies and Intel. And we had an opportunity at this event to sit down with Mukund Khatri, who's the fellow and vice president at Dell Technologies. Mukund, great to see you. Thanks so much for coming on.
Mukund Khatri
>> Thank you.
Dave Vellante
>> We've been trying to connect now for a couple of weeks. We've both been ships passing in the night, but such an exciting time. Of course, security is the theme here at RSAC. Let's start with the big infrastructure, which is the Dell AI Factory. I mean, you guys, with your partners, have completely changed the focus from all this bespoke infrastructure to this notion of an AI factory. What's going on with the Dell AI Factory these days?
Mukund Khatri
>> Well, I think that is the new buzz. So, as you know, infrastructure is where we have come from. The Dell AI Factory takes us to the prime place for infrastructure for AI workloads. So, we've been on the journey with our partners to create Dell AI Factory, which brings in rack-level infrastructure that brings our proven compute network and storage infrastructure to the customers in a rack fashion and be able to run AI workloads. So, AI Factory is what we have been building. I think just recently at GTC, you saw us add AI data platform, too. So, I think that's the journey we are on.
Dave Vellante
>> So, I like to think of it as you've got power, you've got compute and you've got data goes in to the AI Factory and intelligence comes out in the form of tokens. And that's a whole new way of thinking about computing. And to your point about data, Dell has always been in the data storage business.
Mukund Khatri
>> Correct.
Dave Vellante
>> The AI platform brings you up even further dealing with unstructured data, applying AI. That's a critical ingredient to manufacturing intelligence. Isn't that what the AI Factory builds essentially?
Mukund Khatri
>> So, I'll put it this way. So, it's an end-to-end solution, but factories are being rebuilt for the AI. That requires redesigning power, redesigning thermals, redesigning racks, optimizing racks. And then, the data storage part of it also has to come in a new form now, which is where AI data platform is being designed, which it enables the RAGs and all of the new AI-related data elements. It can add resiliency, which is what historically Dell has brought for data resiliency. So, end-to-end stack is what we bring to the end customer for the AI data. It's a whole lot more complex than we used to bring servers or storage independently, and that is what the offer brings. And I think it continues to scale as we see these racks.
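The retrieval layer Khatri describes, surfacing the right unstructured data so RAG can ground a model's answers, can be illustrated with a toy example. The bag-of-words similarity below is a deliberately simple stand-in for the learned embeddings and vector indexes a real AI data platform would use; the documents and query are invented:

```python
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=1):
    """Return the k corpus documents most similar to the query."""
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "rack level power and cooling design",
    "object storage for unstructured data",
    "retrieval augmented generation pipeline",
]
print(retrieve("unstructured data storage", docs))
# -> ['object storage for unstructured data']
```

The retrieved passage would then be injected into the model's prompt; the hard engineering problems (chunking, index scale, freshness, access control) sit behind this one function call.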
Dave Vellante
>> I go back to the days of VBlocks where we bolted on compute storage and networking and tied them together. And maybe there was a software layer on top that helped us manage it. We're talking about something completely different here. Explain to the audience the engineering that actually has to go in to build an AI factory. As you pointed out, it's end-to-end. It's not just a bunch of bespoke parts that are bolted on together with the veneer. It's deep engineering, is it not?
Mukund Khatri
>> It is. It is. And as you know, my slant goes to security most of the time, so I think that's a sphere that I lead for Dell as a fellow. But as I look at the overall engineering, I think it is grounds up. It's a new rack design that Dell released, and we drove it through OCP with a bunch of collaborations. The power that these racks need is ever-increasing, right? So, I think we heard at GTC what the challenges are. Same thing with cooling technologies. So, I think Dell's super focused on bringing those technologies to the forefront as needed. Tons of innovation in the racks. As we look at each of the compute nodes, we've had cyber-resilient PowerEdge. Now, it's towards cyber-resilient racks. And then, supply chain on top of it is something that we've historically had supply chain resilience. Dell's known for the supply chain. As we look at supply chain security, that is another area where the entire rack-level security is something that is of specific interest to end customers. These things are very complex. When they get delivered, this is no longer something that a customer themselves can figure out. So, there's a lot of automation needed in here in our factories to build, deliver, install, and all of this involves services. So, there's an end-to-end need for a company like Dell to make it easy for the end customer.
Dave Vellante
>> So, let's unpack some of the themes that you just brought there. The narrative around security is security can't be a bolt on. It can't be an afterthought. It needs to be designed in, same thing with recovery. So, what specifically does that mean? Of course, Intel has things like confidential computing, goes down to the silicon, and then you guys are designing in resilience throughout the stack. What specifically does that mean?
Mukund Khatri
>> Sure. So, let me unpack that a little bit. So, what do we do for security as part of these AI racks? So, Dell AI Factory. And if I focus on the infrastructure part, there is the cyber-resiliency that we design in, into the product. So, everybody uses the same components. But when you put it together and create a system, it is the glue that makes it secure. So, certain design parameters, how did we design it? Was our design methodology secure? Is the product components supply chain secure? And then, how are we designing it such that it is APT resilient, advanced persistent threat, nation state actors? So, there's a lot of learnings from... This is an iterative process over the years where customers have taught us what the key risks are and we've designed our products to mitigate those. Our servers and our compute gets used in all sorts of environments, as you know. So, cyber resiliency is a key part, which... Usually, I like to say it's advanced persistent threat resilience, so it's physical or cyber-related threats tied into firmware into hardware. And we design it grounds up to roots of trust in each of the components. And then, when we are thinking of ransomware, which is another class of threats, so are we designing our products for ransomware protection prevention? And obviously, our data protection and our storage portfolio has tons of security capabilities built in to protect, detect, and then also recover the data, as you're aware, cyber-resiliency portfolio. So, that's when we think of cyber resiliency in our portfolio, we're thinking that. And used to be at an individual node level, now it's coming at a rack level. So, it's a solution for the end customer.
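The "roots of trust in each of the components" idea Khatri describes can be sketched as a measured boot chain: every stage is hashed and checked against golden measurements (which a signed manifest would supply) before control passes to it. The stage names and byte strings below are illustrative placeholders, not real firmware:

```python
import hashlib

# Hypothetical firmware images for each boot stage (stand-ins for real binaries).
STAGES = {
    "root_of_trust": b"immutable boot ROM",
    "firmware":      b"platform firmware image",
    "os_loader":     b"operating system loader",
}

# Golden measurements, as a signed manifest would provide them.
MANIFEST = {name: hashlib.sha256(blob).hexdigest() for name, blob in STAGES.items()}

def verify_boot_chain(stages, manifest):
    """Measure each stage before it runs; any mismatch halts the chain."""
    for name, blob in stages.items():
        if hashlib.sha256(blob).hexdigest() != manifest[name]:
            print(f"measurement mismatch at {name}: halting boot")
            return False
    return True

print(verify_boot_chain(STAGES, MANIFEST))                    # clean chain passes
tampered = dict(STAGES, firmware=b"implanted firmware image")
print(verify_boot_chain(tampered, MANIFEST))                  # implant is caught
```

Real platforms anchor the first measurement in immutable silicon and sign the manifest, which is what makes the chain resistant to the firmware-level APT implants Khatri mentions.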
Dave Vellante
>> So, it's just a wholly different scale. So, there's this idea of confidential computing and how does that extend to AI? Is there such a thing as confidential AI and how do you get there?
Mukund Khatri
>> That's a great point. I should have connected that in the previous response as well. So, as you look at the AI, AI brings additional threats, newer threats. Interestingly, the LLMs are, they look like code, they need to be protected like code, but they are essentially data, data that has secrets and weights are secrets. And if those are stolen, those are pretty precious. So, when we think of securing the AI layer, we are dealing with securing the model, having a trusted model to come in. And during deployment, the model has to be secure and the data that the model is using has to be secure, and that entire pipeline is a new addition to the traditional stack. What we had is infrastructure, you have storage and you got applications. Now, you have a whole new data layer, model in the data layer that needs protection. And you're seeing the partnerships we are coming up with, some of the design... You look at Dell Enterprise Hub, Dell Enterprise Hub is where we are creating containers of the models and we're making them available and those have trusted supply chain built in. You can download a model from there, which is now in a container form, not in a model form. And then, you can download it and it has the integrity verifications based on hash and also signing. So, these are additional things that we're continuing to build to provide assurance on supply chain on models as an example.
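The hash-based integrity verification Khatri mentions for containerized models can be sketched in a few lines. This covers only the digest comparison; a production pipeline like the one he describes would also verify a publisher signature over that digest. The names here are illustrative, not Dell Enterprise Hub's actual tooling:

```python
import hashlib
import hmac

def model_digest(model_bytes):
    """SHA-256 digest of the packaged model artifact."""
    return hashlib.sha256(model_bytes).hexdigest()

def verify_model(model_bytes, published_digest):
    """Constant-time comparison of the downloaded artifact against the published digest."""
    return hmac.compare_digest(model_digest(model_bytes), published_digest)

weights = b"pretend these bytes are packaged model weights"
published = model_digest(weights)        # digest shipped alongside the container

print(verify_model(weights, published))          # True: artifact matches
print(verify_model(weights + b"x", published))   # False: tampering detected
```

Because the weights are "data that has secrets," as Khatri puts it, the same check that protects code supply chains applies directly to model supply chains.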
Dave Vellante
>> And I just want to give the audience a sense of the exposure that we all now face. I was just in a Google meeting, in a presentation. And of course, we're all talking about OpenClaw. Everybody's excited about OpenClaw and doing claws and having agents running around on our behalf. They said that more than 800 of the OpenClaw skills that are available for downloading are malware, but people are diving in. There's shadow AI just like there was shadow IT. So, across the supply chain, we need much greater vigilance than we've ever had before. Your thoughts?
Mukund Khatri
>> Absolutely. So, I think the supply chain has gotten so complex, one. Models can hallucinate, so the integrity of the models, where they came from, what data they were built on is very, very critical. And so, it's really improving all of the model vendors are moving very fast in this direction, but we still don't have what I would call as observability or lineage of where the model, what data it was used and stuff. I think we'll get there, but the model integrity is of paramount importance. And then, as the model goes through the pipeline as deployment... Models will still have its imperfectness. So, as we talk about security of the models, we talk about safety and security, typically. And they address a lot of the safety measures, PII, data filtering. "I won't give a wrong response if this happens." But when they're deployed in financial or health sectors, they are going to need additional things that are guardrails, right? So, as we look forward, guardrails, I think of them as keeping the car on track. At the same time, it is also going to be needed for compliance to HIPAA regulations and stuff. So, each of these things are ones that are going to be needed for AI to not hallucinate, to have the right outcomes. And as you pointed out, as we're looking at moving from LLMs to agents, as we heard yesterday, I think it is less about the answers and now actions. So, LLMs could give you a wrong answer, agents will give you a wrong action, much more detrimental.
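An output-side guardrail of the kind Khatri describes (PII and data filtering before a response leaves the model) can be sketched as a simple redaction pass. Production guardrails use far richer detectors and policy engines than these two illustrative regexes:

```python
import re

# Hypothetical guardrail patterns; real deployments cover many more PII classes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def apply_guardrail(response):
    """Redact obvious PII from a model response before returning it."""
    for label, pattern in PII_PATTERNS.items():
        response = pattern.sub(f"[REDACTED {label}]", response)
    return response

print(apply_guardrail("Contact jane.doe@example.com, SSN 123-45-6789."))
# -> Contact [REDACTED EMAIL], SSN [REDACTED SSN].
```

In the financial and health sectors Khatri cites, this layer is also where sector rules such as HIPAA get enforced, independent of whatever the model itself was trained to refuse.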
Dave Vellante
>> And I don't want to be a fear monger, but I do want people to understand the new threats. You know the term living off the land? It's like using your own tools against you. Attackers will use your own tools and you trust those tools and then they'll flip them on you and use them against you. I'm very fearful that attackers are going to exploit agents in that way and live off the agent land. So, your point about lineage and provenance and observability are things that we have to think about differently. Are they not?
Mukund Khatri
>> Yes, absolutely. I would say these were somewhat optional before. These are going to be required. And I think the integrity of the deployment scenarios where the monitoring, the observability is going to be tremendously important as we move with agents. Tons of things. Identity has to be monitored for these things and they have to operate in least-privileged mode, which is going to be... We've talked about it. I think the deployments of least privilege for agents is going to be very, very critical.
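The least-privilege operation Khatri calls critical can be sketched as a deny-by-default tool dispatcher: an agent may invoke only the tools its role explicitly grants, and everything else is refused. The roles and tools below are hypothetical:

```python
# Hypothetical role grants; unknown agents get the empty set (deny by default).
ROLE_GRANTS = {
    "report-agent": {"read_ticket", "summarize"},
    "ops-agent":    {"read_ticket", "restart_service"},
}

# Stub tool implementations standing in for real actions.
TOOLS = {
    "read_ticket":     lambda: "ticket #42: disk alert",
    "summarize":       lambda: "summary ready",
    "restart_service": lambda: "service restarted",
}

def call_tool(agent, tool):
    """Dispatch a tool call only if the agent's role grants that tool."""
    granted = ROLE_GRANTS.get(agent, set())
    if tool not in granted:
        raise PermissionError(f"{agent} is not granted {tool}")
    return TOOLS[tool]()

print(call_tool("report-agent", "read_ticket"))    # within the grant: allowed
try:
    call_tool("report-agent", "restart_service")   # outside the grant: blocked
except PermissionError as err:
    print("blocked:", err)
```

Pairing a dispatcher like this with per-agent identity and logging gives the monitored, least-privileged operation the conversation calls for: a compromised agent can take only the actions its role was ever granted.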
Dave Vellante
>> I want to ask you quickly about Dell on Dell, because I was struck several years ago talking to Michael Dell, and he said that, "We gathered the whole company together," and he said, "In the future, there's going to be a company that does exactly what we do and they do it far better, far faster, for much cheaper, and they will put us out of business. So, we're not going to let that happen. We're going to become that company." And so, there was a top-down mandate to basically embrace AI. And then, of course there's a lot of bottoms-up activity. I've talked to Doug Schmitt about this, who is the CIO and runs the consulting professional services organization. So, my point is, this is $100-plus billion company that is taking the lead on applying AI. What does that mean in terms of your ability to gain credibility with customers and actually point that knowledge to the external world?
Mukund Khatri
>> Super important, right? So, you heard from Michael, you heard from Doug. I think this is something that is at top of mind and action for all the leaders, right? So, we are accelerating AI understanding, AI adoption, AI projects within the company. And as we do that, if I turn to security, I think that is a key parameter that we are assessing, experimenting, learning, and ensuring what safe and secure AI deployments for us would mean. So, as we look at multiple AI projects, what is the boundary conditions? What is the risk tolerance? Elements of that are evaluated. We evaluate a number of our partners and what they bring and the efficacy of that, for example, is what we're looking at. Like many other companies, we are also using AI for developing our code. So, as we develop code, what does our coding practices look like and what enhancements need to be done there to ensure that we're not reducing the time to code from two weeks to two days, but we're spending two weeks in validating that code. So, how does AI help with ensuring that the code written is also of better quality and free of bugs, right? So, I think there's a whole journey there that we are investing in and we are all learning together. So, I think as we are implementing that, Dell's internal results is how we use to now guide and discuss with our customers.
Dave Vellante
>> Well, thank you for this conversation, Mukund. I want to close on a topic that is not mainstream yet, but it's starting to percolate. We saw some discussion at Mobile World Congress, MWC, saw a little bit at GTC. We're seeing a little bit here, and that's the conversation around post-quantum cryptography and the ability of quantum computers, running Shor's algorithm, to break today's encryption. My question to you is, what's that all about? When should customers start thinking about it? Some customers should start today, others maybe can wait a year, I don't know, maybe two, but how should we think about that risk?
Mukund Khatri
>> That's a huge tsunami coming at us, right? When quantum computers are strong enough, Shor's algorithm, they'll be able to break today's crypto and crypto is everywhere. Everything we do has crypto. And all of that crypto transitioning in a reasonable timeframe, next two, three, four years to prevent us from being a sitting duck, right? I think it's paramount. So, everybody, all companies are getting aware. They need to be looking into their transition plans. Timeframe-wise, so Dell's been on this journey for a while, I think both as technology creators and a trusted advisor to our customers. So, we've been on this journey. Our products are moving down the path of post-quantum crypto transition, so our products will have the capabilities and they'll come out soon across the breadth of our portfolio. And then I think we can see that customers will start to transition. US government already has mandates, other governments have different mandates. So, compliance is one, but the industry hygiene is another. So, we can see that over the next two to three years, there's a lot of transitions, new buys that customers do, and then entire software ecosystem has to transition. So, a very multi-year, complex, mandatory, redefining governance kind of event it is, right? So, we are there and we're going to be there for the customers to transition.
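Why Shor's algorithm makes today's crypto "a sitting duck" can be shown with a textbook-sized RSA key: anyone who can factor the public modulus recovers the private key outright. Trial division does the factoring in this toy; Shor's algorithm is what would do the equivalent for real 2048-bit keys on a sufficiently large quantum computer:

```python
import math

# Toy RSA keypair (textbook sizes; real keys use 2048+ bit primes).
p, q = 61, 53
n, e = p * q, 17
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)                      # private exponent (Python 3.8+ modular inverse)
ciphertext = pow(42, e, n)               # encrypt the message 42 with the public key

def factor(n):
    """Trial division stands in for Shor's algorithm, which does this
    in polynomial time on a large-enough quantum computer."""
    for f in range(2, math.isqrt(n) + 1):
        if n % f == 0:
            return f, n // f
    raise ValueError("no nontrivial factor")

# An attacker who can factor n rebuilds the private key from public values alone.
fp, fq = factor(n)
d_recovered = pow(e, -1, (fp - 1) * (fq - 1))
print(pow(ciphertext, d_recovered, n))   # -> 42: plaintext recovered
```

The transition Khatri describes replaces factoring-based schemes like this with post-quantum algorithms whose hardness assumptions are not broken by Shor's algorithm, which is why it touches every product and the entire software ecosystem at once.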
Dave Vellante
>> Well, that's important because it's yet another thing that practitioners have to worry about. And of course, they're focused right now on AI. They're trying to get their agentic strategies working. They're trying to secure those agents from the bottom of the stack and the silicon, all the way up to those applications. Thank you, Mukund. I really appreciate your time and appreciate you participating in securing the AI Factory with Dell Technologies and Intel. Keep it right there. We got more deep dives from theCUBE. Be right back.
>> Hi, everybody. Welcome to our special segment here at RSAC 2026 at Moscone West. And this is Securing Your AI Factory, brought to you by Dell Technologies and Intel. And we had an opportunity at this event to sit down with Mukund Khatri, who's the fellow and vice president at Dell Technologies. Mukund, great to see you. Thanks so much for coming on.
Mukund Khatri
>> Thank you.
Dave Vellante
>> We've been trying to connect now for a couple of weeks. We've both been ships passing in the night, but such an exciting time. Of course, security is the theme here at RSAC. Let's start with the big infrastructure, which is the Dell AI Factory. I mean, you guys, with your partners, have completely changed the focus from all this bespoke infrastructure to this notion of an AI factory. What's going on with the Dell AI Factory these days?
Mukund Khatri
>> Well, I think that is the new buzz. So, as you know, infrastructure is where we have come from. The Dell AI Factory takes us to the prime place for infrastructure for AI workloads. So, we've been on the journey with our partners to create Dell AI Factory, which brings in rack-level infrastructure that brings our proven compute network and storage infrastructure to the customers in a rack fashion and be able to run AI workloads. So, AI Factory is what we have been building. I think just recently at GTC, you saw us add AI data platform, too. So, I think that's the journey we are on.
Dave Vellante
>> So, I like to think of it as you've got power, you've got compute and you've got data goes in to the AI Factory and intelligence comes out in the form of tokens. And that's a whole new way of thinking about computing. And to your point about data, Dell has always been in the data storage business.
Mukund Khatri
>> Correct.
Dave Vellante
>> The AI platform brings you up even further dealing with unstructured data, applying AI. That's a critical ingredient to manufacturing intelligence. Isn't that what the AI Factory builds essentially?
Mukund Khatri
>> So, I'll put it this way. So, it's an end-to-end solution, but factories are being rebuilt for the AI. That requires redesigning power, redesigning thermals, redesigning racks, optimizing racks. And then, the data storage part of it also has to come in a new form now, which is where AI data platform is being designed, which it enables the RAGs and all of the new AI-related data elements. It can add resiliency, which is what historically Dell has brought for data resiliency. So, end-to-end stack is what we bring to the end customer for the AI data. It's a whole lot more complex than we used to bring servers or storage independently, and that is what the offer brings. And I think it continues to scale as we see these racks.
Dave Vellante
>> I go back to the days of VBlocks where we bolted on compute storage and networking and tied them together. And maybe there was a software layer on top that helped us manage it. We're talking about something completely different here. Explain to the audience the engineering that actually has to go in to build an AI factory. As you pointed out, it's end-to-end. It's not just a bunch of bespoke parts that are bolted on together with the veneer. It's deep engineering, is it not?
Mukund Khatri
>> It is. It is. And as you know, my slant goes to security most of the times, so I think that's a sphere that I lead for Dell as a fellow. But as I look at the overall engineering, I think it is grounds up. It's a new rack design that Dell released, and we drove it through OCP with a bunch of collaborations. The power that these racks need is ever-increasing, right? So, I think we heard at GTC what the challenges are. Same thing with cooling technologies. So, I think Dell's super focused on bringing those technologies to the forefront as needed. Tons of innovation in the racks. As we look at each of the compute nodes, we've had cyber-resilient power edge. Now, it's towards cyber-resilient racks. And then, supply chain on top of it is something that we've historically had supply chain resilience. Dell's known for the supply chain. As we look at supply chain security, that is another area where the entire rack-level security is something that is of specific interest to end customers. These things are very complex. When they get delivered, this is no longer something that a customer themselves can figure out. So, there's a lot of automation needed in here in our factories to build, deliver, install, and all of this involved services. So, there's a end-to-end need for a company like Dell to make it easy for the end customer.
Dave Vellante
>> So, let's unpack some of the themes that you just brought there. The narrative around security is security can't be a bolt on. It can't be an afterthought. It needs to be designed in, same thing with recovery. So, what specifically does that mean? Of course, Intel has things like confidential computing, goes down to the silicon, and then you guys are designing in resilience throughout the stack. What specifically does that mean?
Mukund Khatri
>> Sure. So, let me unpack that a little bit. So, what do we do for security as part of these AI racks? So, Dell AI Factory. And if I focus on the infrastructure part, there is the cyber-resiliency that we design in, into the product. So, everybody uses the same components. But when you put it together and create a system, it is the glue that makes it secure. So, certain design parameters, how did we design it? Was our design methodology secure? Is the product components supply chain secure? And then, how are we designing it such that it is APT resilient, advanced persistent threat, nation state actors? So, there's a lot of learnings from... This is an iterative process over the years where customers have taught us what the key risks are and we've designed our products to mitigate those. Our servers and our compute gets used in all sorts of environments, as you know. So, cyber resiliency is a key part, which... Usually, I like to say it's advanced persistent threat resilience, so it's physical or cyber-related threats tied into firmware into hardware. And we design it grounds up to roots of trust in each of the components. And then, when we are thinking of ransomware, which is another class of threats, so are we designing our products for ransomware protection prevention? And obviously, our data protection and our storage portfolio has tons of security capabilities built in to protect, detect, and then also recover the data, as you're aware, cyber-resiliency portfolio. So, that's when we think of cyber resiliency in our portfolio, we're thinking that. And used to be at an individual node level, now it's coming at a rack level. So, it's a solution for the end customer.
Dave Vellante
>> So, it's just a wholly different scale. So, there's this idea of confidential computing and how does that extend to AI? Is there such a thing as confidential AI and how do you get there?
Mukund Khatri
>> That's a great point. I should have connected that in the previous response as well. So, as you look at the AI, AI brings additional threats, newer threats. Interestingly, the LLMs are, they look like code, they need to be protected like code, but they are essentially data, data that has secrets and weights are secrets. And if those are stolen, those are pretty precious. So, when we think of securing the AI layer, we are dealing with securing the model, having a trusted model to come in. And during deployment, the model has to be secure and the data that the model is using has to be secure, and that entire pipeline is a new addition to the traditional stack. What we had is infrastructure, you have storage and you got applications. Now, you have a whole new data layer, model in the data layer that needs protection. And you're seeing the partnerships we are coming up with, some of the design... You look at Dell Enterprise Hub, Dell Enterprise Hub is where we are creating containers of the models and we're making them available and those have trusted supply chain built in. You can download a model from there, which is now in a container form, not in a model form. And then, you can download it and it has the integrity verifications based on hash and also signing. So, these are additional things that we're continuing to build to provide assurance on supply chain on models as an example.
Dave Vellante
>> And I just want to give the audience a sense of the exposure that we all now face. I was just in a Google meeting, in a presentation. And of course, we're all talking about OpenClaw. Everybody's excited about OpenClaw and doing claws and having agents running around on our behalf. They said that more than 800 of the OpenClaw skills that are available for downloading are malware, but people are diving in. There's shadow AI just like there was shadow IT. So, across the supply chain, we need much greater vigilance than we've ever had before. Your thoughts?
Mukund Khatri
>> Absolutely. So, I think the supply chain has gotten so complex, one. Models can hallucinate, so the integrity of the models, where they came from, what data they were built on is very, very critical. And so, it's really improving all of the model vendors are moving very fast in this direction, but we still don't have what I would call as observability or lineage of where the model, what data it was used and stuff. I think we'll get there, but the model integrity is of paramount importance. And then, as the model goes through the pipeline as deployment... Models will still have its imperfectness. So, as we talk about security of the models, we talk about safety and security, typically. And they address a lot of the safety measures, PII, data filtering. "I won't give a wrong response if this happens." But when they're deployed in financial or health sectors, they are going to need additional things that are guardrails, right? So, as we look forward, guardrails, I think of them as keeping the car on track. At the same time, it is also going to be needed for compliance to HIPAA regulations and stuff. So, each of these things are ones that are going to be needed for AI to not hallucinate, to have the right outcomes. And as you pointed out, as we're looking at moving from LLMs to agents, as we heard yesterday, I think it is less about the answers and now actions. So, LLMs could give you a wrong answer, agents will give you a wrong action, much more detrimental.
Dave Vellante
>> And I don't want to be a fear monger, but I do want people to understand the new threats. You know the term living off the land? It's like using your own tools against you. Attackers will use your own tools and you trust those tools and then they'll flip them on you and use them against you. I'm very fearful that attackers are going to exploit agents in that way and live off the agent land. So, your point about lineage and providence and observability are things that we have to think about differently. Are they not?
Mukund Khatri
>> Yes, absolutely. I would say these were somewhat optional before. These are going to be required. And I think the integrity of the deployment scenarios where the monitoring, the observability is going to be tremendously important as we move with agents. Tons of things. Identity has to be monitored for these things and they have to operate in least-privileged mode, which is going to be... We've talked about it. I think the deployments of least privilege for agents is going to be very, very critical.
Dave Vellante
>> I want to ask you quickly about Dell on Dell, because I was struck several years ago talking to Michael Dell. He said, "We gathered the whole company together," and he told them, "In the future, there's going to be a company that does exactly what we do, and they'll do it far faster and for much cheaper, and they will put us out of business. So, we're not going to let that happen. We're going to become that company." And so, there was a top-down mandate to embrace AI, and of course there's a lot of bottom-up activity too. I've talked to Doug Schmitt about this, who is the CIO and runs the consulting and professional services organization. My point is, this is a $100-plus billion company that is taking the lead on applying AI. What does that mean in terms of your ability to gain credibility with customers and point that knowledge to the external world?
Mukund Khatri
>> It's super important, right? You heard from Michael, you heard from Doug. This is top of mind, and top of action, for all the leaders. So, we are accelerating AI understanding, AI adoption and AI projects within the company. And as we do that, if I turn to security, that is a key parameter we are assessing, experimenting with, learning from, and using to define what safe and secure AI deployments mean for us. As we look at multiple AI projects: what are the boundary conditions? What is the risk tolerance? Those elements get evaluated. We evaluate a number of our partners, what they bring and its efficacy. Like many other companies, we are also using AI to develop our code. So, what do our coding practices look like, and what enhancements are needed to ensure we're not reducing the time to write code from two weeks to two days only to spend two weeks validating that code? How does AI help ensure the code written is of better quality and free of bugs? There's a whole journey there that we are investing in, and we are all learning together. And as we implement it, Dell's internal results are what we now use to guide our discussions with customers.
Dave Vellante
>> Well, thank you for this conversation, Mukund. I want to close on a topic that is not mainstream yet, but it's starting to percolate. We saw some discussion at Mobile World Congress, MWC, saw a little bit at GTC, and we're seeing a little bit here: the conversation around post-quantum cryptography and quantum computers' ability to break today's encryption using Shor's algorithm. My question to you is, what's that all about? When should customers start thinking about it? Some customers should start today; others maybe can wait a year, I don't know, maybe two. How should we think about that risk?
Mukund Khatri
>> That's a huge tsunami coming at us, right? When quantum computers are strong enough, with Shor's algorithm they'll be able to break today's crypto, and crypto is everywhere. Everything we do has crypto. And all of that crypto has to transition in a reasonable timeframe, the next two, three, four years, to prevent us from being sitting ducks. I think it's paramount. So, all companies are becoming aware; they need to be looking into their transition plans. Timeframe-wise, Dell's been on this journey for a while, both as a technology creator and as a trusted advisor to our customers. Our products are moving down the path of the post-quantum crypto transition, so they'll have the capabilities, and they'll come out soon across the breadth of our portfolio. And then customers will start to transition. The US government already has mandates; other governments have different mandates. So, compliance is one driver, but industry hygiene is another. Over the next two to three years, we'll see a lot of transitions, new buys that customers make, and then the entire software ecosystem has to transition. It's a very multi-year, complex, mandatory, governance-redefining kind of event. So, we're there, and we're going to be there for customers to make the transition.
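The transition planning Khatri describes usually begins with a cryptographic inventory: finding every place a quantum-vulnerable algorithm is configured. The config format and algorithm list below are illustrative assumptions for a sketch, not a real tool or Dell's process.

```python
# Hypothetical first step of a post-quantum transition plan: inventory which
# systems rely on algorithms Shor's algorithm would break on a sufficiently
# large quantum computer (RSA, elliptic-curve and finite-field DH/DSA).

QUANTUM_VULNERABLE = {"rsa", "ecdsa", "ecdh", "dh", "dsa"}

def audit_crypto_inventory(configs: dict) -> list:
    """Flag systems whose configured public-key algorithm is quantum-vulnerable."""
    findings = []
    for system, algorithm in configs.items():
        if algorithm.lower() in QUANTUM_VULNERABLE:
            findings.append(f"{system}: {algorithm} needs a PQC migration plan")
    return findings

# Example inventory; symmetric AES-256 is not broken by Shor's algorithm.
inventory = {"vpn-gateway": "RSA", "code-signing": "ECDSA", "backups": "AES-256"}
for finding in audit_crypto_inventory(inventory):
    print(finding)
```

From such an inventory, organizations can prioritize systems for migration to NIST's standardized post-quantum algorithms over the multi-year window Khatri outlines.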
Dave Vellante
>> Well, that's important, because it's yet another thing practitioners have to worry about. And of course, right now they're focused on AI: trying to get their agentic strategies working and trying to secure those agents from the bottom of the stack, at the silicon, all the way up to the applications. Thank you, Mukund. I really appreciate your time and appreciate you participating in Securing the AI Factory With Dell Technologies and Intel. Keep it right there. We've got more deep dives from theCUBE. Be right back.