This interview explores platform-level security for artificial intelligence deployments. In this theCUBE Research interview, host Dave Vellante speaks with Mike Ferron-Jones of Intel, go-to-market lead for data center security. Ferron-Jones explains Intel's hardware root-of-trust model and confidential computing primitives such as Software Guard Extensions (SGX) and Trust Domain Extensions (TDX). They cover control-flow enforcement and reference architectures spanning the central processing unit (CPU) and graphics processing unit (GPU). They describe secure on-prem and cloud AI pipelines, hardware acceleration for encryption, and practical deployment considerations.
Ferron-Jones states the CPU is the foundational hardware root of trust and that confidential computing delivers isolation, cryptographic attestation and customer-controlled keys for sensitive AI workloads. He highlights that on-prem confidential AI is deployable today as VMware vSphere, OpenShift, Red Hat Enterprise Linux (RHEL) and Ubuntu adopt TDX support. He urges planning for post-quantum migration to mitigate "harvest now, decrypt later" risks and adopting post-quantum cryptography (PQC) to protect data against future cryptographic threats. Analysts note Intel's security-by-design practices, proactive vulnerability research, bug bounty program and seven-year security support window.
Mike Ferron-Jones, Intel | Securing the AI Factory
Dave Vellante
>> We're back at Securing the AI Factory, made possible by Dell and Intel, and with me is Mike Ferron-Jones. He's the go-to-market lead for data center security at Intel. Mike, good to see you.
Mike Ferron-Jones
>> Hey, thanks, Dave. Great to be here.
Dave Vellante
>> Well, thanks for spending some time with us. It's great to have you. And I want to start off, everybody knows Intel, you defined the CPU and you are the CPU company, but the question is, what does that have to do with platform security? Take us through your thinking on that.
Mike Ferron-Jones
>> Yeah, and it's kind of funny. Everybody knows the CPU, of course, is one of the most important compute engines inside a server or a PC, but what does it have to do with security? Well, the Intel CPU is really at the heart of the platform. I mean, this is the device that is connected to everything. It's running all of your software processes, it's managing all your memory accesses, it's managing the devices, and it is involved in all the flows and all the software stacks running above it. Any of the security software and security measures that you have running up the stack can't be trusted unless the hardware underneath them is trustworthy. And that's really where we play a role. The CPU is the fundamental hardware root of trust in the entire security stack, so we often tell people that your choice of a CPU is the very first security decision that you are making.
Dave Vellante
>> Well, and of course everybody's familiar with Intel Inside. It became famous. We see the Xeon logos everywhere, but they don't necessarily... I don't think people appreciate how Xeon CPUs and Intel specifically can protect users' systems and data. It's the features that you have in there, the promises that you make to customers. "I'm keeping my PCs longer." How are you helping protect user systems and data?
Mike Ferron-Jones
>> Well, I mean, it's a good question. There are so many security features inside an Intel platform, way more than I can possibly list, but they tend to fall into one of four major buckets.

The first one is features that protect the platform, that keep the platform itself secure from attack or corruption. These are the things that basically make sure that boot integrity is proper and that no malicious firmware is getting into the system below the OS, or things that protect memory access. So you think of our virtualization technologies or things like our Execute Disable Bit technology. These enforce memory management so that malicious software can't reach out and access the contents of another process's memory.

The second area is protecting the data, and this is really where our confidential computing technologies come in, technologies like Intel Software Guard Extensions, or SGX, and Trust Domain Extensions, TDX. These create trusted execution environments where your software can operate inside an isolated, secure enclave, and you can bring your most confidential data in there knowing that it is isolated and protected from any outside software or admins.

The third area is enforcing safe software behavior. One of the great things about an x86 CPU is that it's highly flexible and can do all kinds of amazing things, and you can write all kinds of great programs to do neat workloads, but that flexibility also creates the opportunity for misuse if you have malicious intent. We put in hardware features, a good one would be our control-flow enforcement technology, that keep individual software processes in their lanes so that they cannot misbehave by, say, accessing another process's memory or hijacking the control flow.

And then the fourth bucket is accelerating the performance of security. Everybody loves strong security, nobody likes performance degradation.
We've put technologies into the processor specifically designed to accelerate things like data encryption. And so when you go to a strong encryption algorithm, instead of feeling like the performance is severely degraded, by using these special instructions or these hardware accelerators, you can enjoy that strong security and not feel that big performance bite. You can be back to parity with unencrypted software.
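The acceleration point is concrete in practice: mainstream crypto libraries detect and use the CPU's AES instructions automatically, so application code gets hardware-speed encryption for free. A small sketch using Go's standard library, whose `crypto/aes` implementation transparently uses AES-NI when the CPU provides it (the function names here are our own; only the standard-library calls are real):

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// sealAES256GCM encrypts plaintext with AES-256-GCM. A 32-byte key
// selects AES-256; Go's crypto/aes uses the CPU's AES instructions
// (e.g. AES-NI) when present, so the strong cipher runs near the
// speed of unencrypted I/O.
func sealAES256GCM(key, nonce, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	return gcm.Seal(nil, nonce, plaintext, nil), nil
}

// openAES256GCM decrypts and authenticates a sealed ciphertext.
func openAES256GCM(key, nonce, ciphertext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	return gcm.Open(nil, nonce, ciphertext, nil)
}

func main() {
	key := make([]byte, 32)
	nonce := make([]byte, 12) // standard GCM nonce size
	rand.Read(key)
	rand.Read(nonce)
	ct, _ := sealAES256GCM(key, nonce, []byte("sensitive record"))
	pt, _ := openAES256GCM(key, nonce, ct)
	fmt.Println(string(pt))
}
```

The same source code runs everywhere; the hardware acceleration is selected at runtime, which is what closes the gap with unencrypted performance.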
Dave Vellante
>> Yeah. Those are huge, especially the last point you made. I'd love to encrypt everything, but historically we've had to pay a penalty to do so, so that's a major breakthrough. I want to switch to AI. Of course, it's the hot topic right now in technology. The irony is CPUs are even hotter. We've gone from a GPU-to-CPU ratio of eight to one, and now we've cut that in half, because you've got to do all kinds of management when these agents are running around and taking action on behalf of humans. So explain Intel's role here. How does Intel help create more secure AI systems?
Mike Ferron-Jones
>> Well, an AI system is more than just, say, training or inference on the GPU. There are all kinds of processes in the pipeline: data ingest, staging, then the training, then the inference, then the output and the processing of the result. And a lot of that happens on CPUs and in partnership with GPUs. Now, AI systems are amazing and everyone is excited about their potential value, but there's a prerequisite in order to get to value, which is trust. If somebody doesn't trust it, they're not going to use it. And if they're not going to use it, you're not going to get any value from it. The technology area that we are focusing on right now is bringing confidential computing into an AI context to create confidential AI environments. With confidential AI environments, the processes that you're running in your AI systems are put inside a trusted execution environment that is cryptographically attested for integrity and is protected from exfiltration or interference by outside software or outside actors, so it goes a long way toward helping to build trust in the system. It also helps companies who are, say, maybe nervous about bringing sensitive data into their AI analysis. It's like, "Hey, this is highly regulated data," say personal healthcare information or company trade secrets. Running inside a confidential AI environment allows you to process that sensitive data with higher confidence that you're not going to lose control over it. The big characteristics that confidential AI gives you are isolation, hardware-based, hardware-enforced isolation of the AI process inside a trusted execution environment; verification, you get a cryptographic receipt that the environment where that application is running has been tested for integrity; and third, control. Data is only released into the confidential trusted execution environment using encryption keys that you control.
So whether you're concerned about regulatory compliance or data sovereignty or just classic cybersecurity, you're holding the keys to protect your data. We have confidential AI solutions that are CPU-based, so you can run AI inference on the CPU; a lot of agentic AI runs great on CPUs. Or if you're doing more heavy-duty LLMs, we've worked in partnership with NVIDIA to create reference architectures for confidential AI that partner trusted execution environments on the GPU with ones on the CPU. It's a great way to increase the trust in the system so you can go toward value.
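The control property described above is typically enforced by an attestation-gated key broker: the customer's data key is released only after the workload's attested measurement matches a value the customer approved. A toy model of that gate (the `Quote` struct and function names are our own simplification; real TDX quotes are signed structures verified against Intel's attestation service, which is elided here):

```go
package main

import (
	"bytes"
	"errors"
	"fmt"
)

// Quote is a stand-in for a hardware attestation report. In a real
// deployment its signature would be verified against the vendor's
// attestation service before the measurement is trusted.
type Quote struct {
	Measurement []byte // hash of the enclave/trust-domain launch state
}

// releaseKey models customer-controlled key release: the data key
// leaves the key broker only if the workload's measurement matches
// the value the customer approved.
func releaseKey(q Quote, approved, dataKey []byte) ([]byte, error) {
	if !bytes.Equal(q.Measurement, approved) {
		return nil, errors.New("attestation failed: measurement mismatch")
	}
	return dataKey, nil
}

func main() {
	approved := []byte("expected-launch-hash")
	key := []byte("data-encryption-key")

	if _, err := releaseKey(Quote{Measurement: []byte("tampered")}, approved, key); err != nil {
		fmt.Println("untrusted workload: key withheld")
	}
	if k, err := releaseKey(Quote{Measurement: approved}, approved, key); err == nil {
		fmt.Println("attested workload: key released,", len(k), "bytes")
	}
}
```

Because the decision hinges on a key the customer holds, even the infrastructure operator cannot feed sensitive data into an unverified environment.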
Dave Vellante
>> Makes sense if it's all about trust. You mentioned confidential computing several times today. I mean, we've seen confidential computing in the cloud for a number of years. And with AI, we've actually noticed a lot more interest in organizations building on-prem AI stacks. They don't necessarily want to move the data into the cloud; rather, they want to bring the intelligence to the data. The OpenClaw moment has certainly taken the world by storm and has fueled interest, among other things. My question is, can a customer, a Dell customer, for example, deploy it today on-prem?
Mike Ferron-Jones
>> Absolutely, and we're what, 15, 20 years into the public cloud revolution, and still there is data that people say, "Look, that's not leaving the house. That is not going out to the cloud." And so to be able to execute confidential AI locally, the biggest impediment to date had been the availability of enabled software stacks. Particularly on-prem, you usually rely on one of the major vendor software stacks like VMware or OpenShift or RHEL. And it wasn't until recently that those software packages were enabled for technologies like Intel TDX, which is the backbone technology for a lot of confidential AI. The good news is that those companies are now starting to roll out updates that include support for Intel Confidential Computing and TDX. So VMware vSphere 9; OpenShift and RHEL are coming soon; SUSE is not long after that. And our friends at Ubuntu have got a great solution available today. All of those are available to deploy on-prem. And if you wanted to do GPU-centric confidential AI, we have a recipe available, a reference architecture we've developed in partnership with NVIDIA, that allows you to, again, partner confidential environments on the CPU with the GPU for a complete confidential AI solution with GPU acceleration.
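For teams checking whether their stack is TDX-enabled, one quick signal inside a guest is the CPU flags the kernel exposes: recent Linux kernels advertise a "tdx_guest" flag in /proc/cpuinfo inside a trust domain (the exact flag naming can vary by kernel version, so treat this as a quick check, not an authoritative probe; the function below is our own helper, written against a supplied string so it is easy to test):

```go
package main

import (
	"fmt"
	"strings"
)

// hasCPUFlag reports whether /proc/cpuinfo-style text advertises a
// given CPU flag. Pass it the contents of /proc/cpuinfo on a live
// system; here we parse a supplied string.
func hasCPUFlag(cpuinfo, flag string) bool {
	for _, line := range strings.Split(cpuinfo, "\n") {
		if !strings.HasPrefix(line, "flags") {
			continue
		}
		parts := strings.SplitN(line, ":", 2)
		if len(parts) != 2 {
			continue
		}
		for _, f := range strings.Fields(parts[1]) {
			if f == flag {
				return true
			}
		}
	}
	return false
}

func main() {
	sample := "processor\t: 0\nflags\t\t: fpu vme sse2 aes tdx_guest\n"
	fmt.Println(hasCPUFlag(sample, "tdx_guest")) // true for this sample
}
```

Attestation, not a flag check, is what actually proves you are inside a genuine trust domain; this is only a first sanity check during deployment.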
Dave Vellante
>> Yeah, that's helpful. There is a major modernization effort going on. Obviously, the cloud still has tremendous momentum, but we've seen a real resurgence of interest in on-prem: essentially a substantially similar experience to what you're getting in the cloud, the difference being it's under your control. I want to shift topics, Mike. Recently, the world celebrated Quantum Day, and rising up the security agenda is the conversation, and conversion, around post-quantum cryptography or PQC. We've heard some people talk about that as the Y2K of our time. Of course, I remember Y2K well; the whole world had to really respond to that. Is it a similar dynamic here? What are your thoughts?
Mike Ferron-Jones
>> Well, post-quantum, it's a big deal. And of course, CISOs and CIOs are paying attention to it now. So just for anybody who is curious, the concern with quantum computers is that they are a new type of computer, different from the classical computers that we know today. The characteristic that they have is extreme parallelism: where a classical computer represents data in binary, a one or a zero, a quantum computer can theoretically represent all possible states and all possible combinations of data. That extreme parallelism makes them good at doing things like, say, climate modeling or physics simulations, but it also makes them good at factoring large numbers, which, if you apply it in the right way, can be used to break encryption keys. A sufficiently powerful quantum computer could be used to break today's encryption, and so that's the concern that everybody's feeling right now: how do I move to encryption algorithms and encryption technology that is safe from these future quantum computers? The comparison to Y2K is a little bit imperfect because we knew exactly when Y2K was. It was January 1st, 2000, when the clocks ticked over from 99 to 2000. With quantum computers, nobody's really sure exactly when this cryptographically relevant quantum computer is going to emerge and in whose control it will be. Some experts say, "Hey, it's five to 10 years away." Some experts say it's 15 years or more. We don't know exactly when it's going to happen, but if you look at the growth rate and the advancement of quantum technology, it's really more a question of when, not if. So we are on a quest to help people make this transition from the classical, conventional algorithms that we have today over to post-quantum, quantum-safe encryption algorithms.
Dave Vellante
>> Okay, let's say it's 10 years away, Mike. Explain why organizations need to think about this today. Why do they need to start moving on PQC now? What's the issue?
Mike Ferron-Jones
>> Yeah. Some of the quantum threats, like quantum computers intercepting messaging between two classified entities in a government or something like that, we're probably a ways away from that real-time quantum-based attack. The quantum-based attack that people are worried about today is the harvesting of classically encrypted data now, then just sitting on it and waiting for a sufficiently powerful quantum computer to emerge that'll allow you to crack that data open. This is called a harvest now, decrypt later scenario. And that's the kind of thing people are worried about now, because if they're going to be encrypting their data today, they want to be doing it with quantum-safe algorithms, so that even if it's exfiltrated today, it can't be cracked open in 10 or 15 years with a quantum computer. We need to convert the encryption technologies that we're using today over to quantum-safe algorithms. Intel's doing our part by converting the cryptographic operations inside our platforms over to new quantum-safe algorithms. That conversion started on our Xeon 6 processors; for those of you that track Intel code names, that was the Granite Rapids generation. The walk to complete quantum conversion is going to span about three generations, but by 2029, we expect that all cryptographic operations inside Intel platforms will be using quantum-safe technology. But you don't need to wait. You can start protecting yourself today, particularly against those harvest now, decrypt later scenarios, by encrypting stored data with the quantum-safe AES-256 algorithm. One great thing is there are instructions inside today's Xeon CPUs that accelerate that. You can flip over to the more sophisticated quantum-safe algorithm and not feel the big bite of going to that larger key size.
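The reason AES-256 is singled out as quantum-safe is a standard back-of-the-envelope rule: Grover's algorithm gives at best a quadratic speedup on brute-force key search, which roughly halves a symmetric key's effective strength. AES-128 drops to about 64 effective bits, which is uncomfortable; AES-256 keeps about 128, which is still considered strong. The tiny sketch below just encodes that rule of thumb (it is an approximation, not a precise security model):

```go
package main

import "fmt"

// effectiveBitsUnderGrover applies the rule of thumb that Grover's
// quadratic speedup halves a symmetric key's effective strength.
// Asymmetric algorithms like RSA fare far worse under Shor's
// algorithm, which is why they need full PQC replacements rather
// than just bigger keys.
func effectiveBitsUnderGrover(keyBits int) int {
	return keyBits / 2
}

func main() {
	for _, bits := range []int{128, 256} {
		fmt.Printf("AES-%d -> ~%d effective bits under Grover\n",
			bits, effectiveBitsUnderGrover(bits))
	}
}
```

This is why the interim advice for data at rest is simply "use AES-256 today", while key exchange and signatures wait on the new post-quantum algorithm families.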
Dave Vellante
>> Okay. Thanks for that. That makes sense. Despite the horizon for quantum, there are things you can do today to protect yourself before it becomes a reality. Staying on the security vulnerabilities topic, a few times a year you see news about new security vulnerabilities. They oftentimes hit popular CPUs, not just Intel: AMD, Arm. You see new models coming out from the frontier model vendors that can uncover previously unknown threats. It's a very, very fast-moving target. What is Intel doing to help reduce and mitigate those types of threats?
Mike Ferron-Jones
>> Yeah, and this really comes down to your CPU vendor's philosophy and investment in security assurance. Intel takes this seriously, and we invest a lot of people, money, and effort into helping keep our customers safe. That falls into three main areas. One is the investments and the processes we use as we build our products and our platforms. The second bucket is, when you're shipping that product, how are you doing research on it and how are you handling vulnerabilities that are discovered in that processor? And three is, once it's running, how long is your tail of support to help keep your customers safe during the lifetime of that product? We use a very intense security-by-design philosophy. When we are building a product, it goes through extensive internal security reviews. We also partner with external researchers during the design process to get outside eyes on what security issues we should be on the lookout for. We apply a lot of AI modeling to various attack scenarios during the design process to make sure that when we come out with a product, it is as free of vulnerabilities as we can make it. But a modern Xeon processor is tens of billions of transistors. In any device that sophisticated, in the face of highly creative and highly motivated adversaries, there's probably going to be a vulnerability discovered during its lifetime. The philosophy that we take is to be proactive. We like to hunt and kill our own bugs. We make a big investment in proactive security research, not only before we ship a product but after we ship it. These red teams are trying to find any way they can make the system misbehave and create a security problem, and they do it with the express desire to find it so we can mitigate it before it's discovered in the wild by a potential attacker. We do that with our own internal red teams.
We partner with outside firms a lot. We have one of the industry's leading bug bounty programs to work with outside researchers to discover anything in the processor that could be a problem, and then do a coordinated disclosure and a coordinated mitigation that keeps our customers safe. And then finally, the last thing I want to say on this is, once our processors launch, we have a standard seven-year support window, and during those seven years from launch until end of life, we are providing security updates for all of those processors in their active life cycle. We're not leaving anybody behind. Say you bought your Xeon processor and it's been installed for four or five years; we're still on the lookout for vulnerabilities and providing mitigations and patches to keep customers safe.
Dave Vellante
>> Well, I have three Dell laptops. I think my oldest one is six years old that my daughter now has, so I got two others that are younger than that so I appreciate you guys providing that support.
Mike Ferron-Jones
>> We are on the lookout.
Dave Vellante
>> Thank you. Mike, thanks for coming on. It's been great having you, and thanks for supporting the program.
Mike Ferron-Jones
>> It has been a pleasure. Thank you very much, Dave.
Dave Vellante
>> Yeah, you bet. Okay, you're watching Securing the AI Factory brought to you by Dell and Intel. This is Dave Vellante, keep it right there for more content on this program.
Mike Ferron-Jones, Intel | Securing the AI Factory
search
Dave Vellante
>> We're back at Securing the AI Factory, made possible by Dell and Intel, and with me is Mike Ferron-Jones. He's the go to-market lead for data center security at Intel. Mike, good to see you.
Mike Ferron-Jones
>> Hey, thanks, Dave. Great to be here.
Dave Vellante
>> Well, thanks for spending some time with us. It's great to have you. And I want to start off, everybody knows Intel, you defined the CPU and you are the CPU company, but the question is, what does that have to do with platform security? Take us through your thinking on that.
Mike Ferron-Jones
>> Yeah, and it's kind of funny. Everybody knows the CPU, of course, is one of the most important compute engines inside a server or a PC in there, but what does it have to do with security? Well, Intel and an Intel CPU is really at the heart of the platform. I mean, this is the device that is connected. It's running all of your software processes, it's managing all your memory accesses, it's managing the devices, and it is involved in all the flows and all the software stacks that are running up above it. So if any of the security software and security measures that you have running up the stack, they can't be trusted unless the hardware underneath them is trustworthy. And so that's really where we play the role. I mean, the CPU is the fundamental hardware root of trust in the entire security stack, so we often tell people it's like that your choice of a CPU is your very first security decision that you are making.
Dave Vellante
>> Well, and of course everybody's familiar with Intel Inside. It became famous. We see the Xeon logos everywhere, but they don't necessarily... I don't think people appreciate how Xeon CPUs and Intel specifically can protect users' systems and data. It's the features that you have in there, the promises that you make to customers. "I'm keeping my PCs longer." How are you helping protect user systems and data?
Mike Ferron-Jones
>> Well, I mean, it's a good question. I mean, there are so many security features inside an Intel platform. There's way more than I can possibly list, but they tend to fall into one of four major buckets. The first one is features that protect the platform. Keep the platform itself secure from attack or corruption. These are the things that basically make sure that the boot integrity is proper, and that no, below the OS, malicious firmware is getting into the system, or things that protect the memory access. So you think of our virtualization technologies or things like our Execute Disable Bit technologies. These keep memory management so that malicious software can't reach out and access contents of another process's memory. The second area is protecting the data, and these are really where our confidential computing technologies come in so technologies like Intel Software Guard extensions or SGX, or Trust Domain Extensions, TDX, come in, these create trusted execution environments where your software can operate inside an isolated, secure enclave, and you can bring your most confidential data in there knowing that it is isolated and protected from any outside software or admins. The third area is enforcing safe software behavior. One of the great things about an x86 CPU is that it's highly flexible and can do all kinds of amazing things and you can write all kinds of great programs to do neat workloads, but that flexibility also creates the opportunity for misuse if you have malicious intent. We put in hardware features, like a good one would be our control flow enforcement technology that keeps individual software processes in their lanes so that they cannot misbehave by say accessing another system, another process is memory or hijacking the control flow. And then the fourth bucket is accelerating the performance of security. So everybody loves strong security, nobody likes performance degradation. 
We've put technologies into the processor specifically designed to accelerate things like data encryption. And so when you go to a strong encryption algorithm, instead of feeling like the performance is severely degraded, by using these special instructions or these hardware accelerators, you can enjoy that strong security and not feel that big performance bite. You can be back to parity with unencrypted software.
Dave Vellante
>> Yeah. Those are huge, especially the last point you made. I'd love to encrypt everything, but historically we've had to pay a penalty to do so, so that's a major breakthrough. I want to switch to AI. Of course, it's the hot topic right now in technology. The irony is CPUs are even hotter. We've gone from a GPU to CPU ratio eight to one now. We cut that in half because you've got to do all kinds of management when these agents are running around and taking action on behalf of humans. So explain Intel's role here. How does Intel help create more secure AI systems?
Mike Ferron-Jones
>> Well, and the AI system is more than just say training or inference on the GPU. There's all kinds of processes in the pipeline from data ingest, staging, then there's the training, then there's the inference, then there's the output and the processing of the result. And a lot of that happens on CPUs and in partnership with GPUs. Now, the thing about AI systems are amazing and everyone is excited about the value that they have as of their potential, but there's a prerequisite in order to get to value, which is trust. If somebody doesn't trust it, they're not going to use it. And if they're not going to use it, you're not going to get any value from it. The technology areas that we are focusing right now is the combination of confidential computing into an AI context to create confidential AI environments. And with confidential AI environments, the processes that you're running in your AI systems are put inside a trusted execution environment that is cryptographically attested for integrity and is protected from exfiltration or interference by outside software or outside actors, so it goes a long way to helping to build trust in the system. It also allows companies who are say, maybe nervous about bringing in sensitive data into their AI analysis. It's like, "Hey, this is highly regulated data." It's say personal healthcare information or company trade secrets. Running inside a confidential AI environment allows you to process that sensitive data with higher confidence that you're not going to lose control over it. The big characteristics that confidential AI gives you is isolation, hardware based, hardware enforced isolation of the AI process inside a trusted execution environment. Verification, you get a cryptographic receipt that where that application is running has been tested for integrity, and third is control. Data is only released into the confidential trusted execution environment using encryption keys that you control. 
So whether you're concerned about regulatory compliance or data sovereignty or just classic cybersecurity, you're holding the keys to protect your data. We have confidential AI solutions that are both CPU based so you can run AI inference on the CPU. So like agentic, a lot of AI agents, agentic is great on CPUs, or if you're doing more heavy duty LLMs, we've worked in partnerships with NVIDIA to create reference architectures to be able to do confidential AI that partner trusted execution environments on the GPU with ones on the CPU. It's a great way to increase the trust in the system so you can go toward value.
Dave Vellante
>> Makes sense if it's all about trust. You mentioned confidential computing and confidential several times today. I mean, we've seen confidential computing in the cloud for a number of years. And with AI, we've actually noticed we're seeing a lot more interest in organizations building on-prem AI stacks. They don't necessarily want to move the data into the cloud, rather they want to bring the intelligence to the data. The OpenClaw moment has certainly taken the world by storm and has facilitated interest among other things. My question is, can a customer, can a Dell customer, for example, deploy it today on-prem?
Mike Ferron-Jones
>> Absolutely, and we're what, 15, 20 years into the public cloud revolution and still, there is data that people say, "Look, that's not leaving the house. That is not going outside of the cloud." And so to be able to execute confidential AI locally, the biggest impediment to date had been the availability of enabled software stacks. Particularly on-prem, you rely on usually one of the major vendor software stacks like a VMware or an OpenShift or a RHEL. And it wasn't until recently that those software packages were enabled for technologies like Intel TDX, which is the backbone technology for a lot of the confidential AI. The good news is that those companies are now starting to roll out their updates that include support for Intel Confidential Computing and TDX. So VMware VSphere9, OpenShift RHEL is coming soon, SUSE is not long after that. So there's Ubuntu, our friends at Ubuntu have got a great solution available today. All of those are available to deploy on-prem. And if you wanted to do GPU-centric confidential AI, we have a recipe available, a reference architecture that we partner with NVIDIA that we've developed that allows you to, again, partner confidential environments on the CPU with the GPU for a complete confidential AI solution with GPU acceleration.
Dave Vellante
>> Yeah, that's helpful. I mean, there is a major modernization effort going on. Obviously, the cloud still has tremendous momentum, but we've seen a real resurgence of interest in on-prem: essentially a substantially similar experience to what you get in the cloud, the difference being it's under your control. I want to shift topics, Mike. Recently, the world celebrated Quantum Day, and we're seeing the conversation, and the conversion, around post-quantum cryptography, or PQC, really rising up the security agenda. We've heard some people describe it as the Y2K of our time. Of course, I remember Y2K well; the whole world had to respond to that. Is it a similar dynamic here? What are your thoughts?
Mike Ferron-Jones
>> Well, post quantum, it's a big deal. And of course, CISOs and CIOs are paying attention to it now. So just for anybody who is curious, the concern with quantum computers is that they are a new type of computer, different from the classical computers that we know today. Their defining characteristic is extreme parallelism. Where a classical computer represents data in binary, a one or a zero, a quantum computer can theoretically represent all possible states, all possible combinations of data. That extreme parallelism makes them good at things like climate modeling or physics simulations. It also makes them good at factoring large numbers, which, applied the right way, can be used to break encryption keys. A sufficiently powerful quantum computer could be used to break today's encryption, and so that's the concern everybody's feeling right now: how do I move to encryption algorithms and encryption technology that is safe from these future quantum computers? The comparison to Y2K is a little imperfect because we knew exactly when Y2K was. It was January 1st, 2000, when the clocks ticked over from 99 to 2000. With quantum computers, nobody's really sure exactly when a cryptographically relevant quantum computer is going to emerge, or in whose control it will be. Some experts say, "Hey, it's five to 10 years away." Some experts say it's 15 years or more. We don't know exactly when it's going to happen, but if you look at the growth rate and the advancement of quantum technology, it's really a question of when, not if. So we are on a quest to help people make this transition from the classical, conventional algorithms that we have today over to post-quantum, quantum-safe encryption algorithms.
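[Editor's note: to make the factoring point concrete, here's a toy sketch with a textbook-sized modulus. Illustrative only: real RSA moduli are 2048+ bits and far beyond trial division, which is exactly what classical RSA security rests on. Recovering the two prime factors is what lets an attacker reconstruct the private key, and Shor's algorithm on a large enough quantum computer would make that factoring step feasible:]

```python
def factor(n: int) -> tuple[int, int]:
    # Naive trial division: hopeless for real RSA key sizes on classical
    # hardware, but Shor's algorithm would remove that barrier on a
    # sufficiently powerful quantum computer.
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    return n, 1

# Textbook RSA: public key is (n, e) with n = p * q.
n, e = 3233, 17
p, q = factor(n)                   # recovers 53 and 61
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent derived from the factors
ciphertext = pow(65, e, n)         # message 65 encrypted with the public key
print(pow(ciphertext, d, n))       # attacker decrypts: prints 65
```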
Dave Vellante
>> Okay, let's say it's 10 years away, Mike. Explain why organizations need to think about this today. Why do they need to start moving on PQC now? What's the issue?
Mike Ferron-Jones
>> Yeah. And some of the quantum threats, like quantum computers intercepting messaging between two classified entities in a government or something like that, that real-time quantum-based attack, we're probably a ways away from. The quantum-based attack that people are worried about today is the harvesting of classically encrypted data now, then sitting on it and waiting for a sufficiently powerful quantum computer to emerge that will allow you to crack that data. This is called a harvest now, decrypt later scenario. That's what people are worried about now: if they're going to be encrypting their data today, they want to be doing it with quantum-safe algorithms, so that even if it's exfiltrated today, it can't be cracked open in 10 or 15 years with a quantum computer. We need to convert the encryption technologies we're using today over to quantum-safe algorithms. Intel's doing our part by converting the cryptographic operations inside our platforms over to new quantum-safe algorithms. That conversion started with our Xeon 6 processors; for those of you who track Intel code names, that was the Granite Rapids generation. The walk to complete quantum conversion is going to span about three generations, but by 2029, we expect that all cryptographic operations inside Intel platforms will be using quantum-safe technology. You don't need to wait, though. You can start protecting yourself today, particularly against those harvest now, decrypt later scenarios, by encrypting stored data with the quantum-safe AES-256 algorithm. One great thing is there are instructions inside today's Xeon CPUs that accelerate exactly that, so you can flip over to that more sophisticated, quantum-safe algorithm and not feel a big performance bite from going to the larger key size.
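[Editor's note: one way to see why AES-256 is treated as quantum-safe while smaller keys are not. The best-known quantum attack on symmetric ciphers is Grover's search, which gives roughly a quadratic speedup on brute-force key search, effectively halving a key's security in bits. A minimal sketch of that arithmetic:]

```python
def effective_security_bits(key_bits: int) -> int:
    # Grover's algorithm searches an N-item keyspace in roughly sqrt(N) steps,
    # so a k-bit symmetric key offers about k/2 bits of security against it.
    return key_bits // 2

for k in (128, 192, 256):
    print(f"AES-{k}: ~{effective_security_bits(k)}-bit quantum security")
# AES-128 drops to ~64-bit strength, which is uncomfortable long term;
# AES-256 retains ~128-bit strength, still considered computationally infeasible.
```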
Dave Vellante
>> Okay. Thanks for that. I mean, that makes sense. Despite the horizon for quantum, there are things you can do today to protect yourself for when it becomes a reality. Staying on the security vulnerabilities topic, a few times a year you see news about new security vulnerabilities. They oftentimes hit popular CPUs, and not just Intel's: AMD, Arm as well. You see new models coming out from the frontier model vendors that can uncover previously unknown threats. It's a very, very fast moving target. What is Intel doing to help reduce and mitigate those types of threats?
Mike Ferron-Jones
>> Yeah, and this really comes down to your CPU vendor's philosophy and investment in security assurance. Intel takes this really seriously, and we invest a lot of people, a lot of money, and a lot of effort in helping to keep our customers safe. That falls into three main areas. One is the investments and the processes we apply as we build our products and our platforms. The second bucket is, once you're shipping that product, how are you doing research on it and how are you handling vulnerabilities that are discovered in that processor? And three is, once it's running, how long is your tail of support to keep your customers safe during the lifetime of that product? We use a very intense security-by-design philosophy. When we are building a product, it goes through extensive internal security reviews. We also partner with external researchers during the design process to get outside eyes on what security issues we should be on the lookout for. We apply a lot of AI modeling to various attack scenarios during the design process to make sure that when we come out with a product, it is as free of vulnerabilities as we can make it. But a modern Xeon processor is tens of billions of transistors. In any device that sophisticated, facing highly creative and highly motivated adversaries, there's probably going to be a vulnerability discovered during its lifetime. The philosophy we take is to be proactive. We like to hunt and kill our own bugs. We make a big investment in proactive security research, not only before we ship a product but after we ship it. These red teams try to find any way they can make the system misbehave and create a security problem, and they do it with the express desire to find it so we can mitigate it before it's discovered in the wild by a potential attacker. We do that with our own internal red teams.
We partner with outside firms a lot. We have one of the industry's leading bug bounty programs, working with outside researchers to discover anything in a processor that could be a problem, and then do a coordinated disclosure and a coordinated mitigation that keeps our customers safe. And then finally, the last thing I want to say on this is, once our processors launch, we have a standard seven-year support window. During those seven years from launch until end of life, we provide security updates for all of those processors in their active life cycle. We're not leaving anybody behind. Even if, say, you bought your Xeon processor and it's been installed for four or five years, we're still on the lookout for vulnerabilities and providing mitigations and patches to keep customers safe.
Dave Vellante
>> Well, I have three Dell laptops. I think my oldest one is six years old, and my daughter has it now. I've got two others that are younger than that, so I appreciate you guys providing that support.
Mike Ferron-Jones
>> We are on the lookout.
Dave Vellante
>> Thank you. Mike, thanks for coming on. It's been great having you, and thanks for supporting the program.
Mike Ferron-Jones
>> It has been a pleasure. Thank you very much, Dave.
Dave Vellante
>> Yeah, you bet. Okay, you're watching Securing the AI Factory brought to you by Dell and Intel. This is Dave Vellante, keep it right there for more content on this program.