Simplify Your AI Journey with Intel and HPE
AI is the inflection point driving change in the industry at a record pace. Hear from Intel on how you can begin your AI journey faster and more easily than you think. We will discuss how Intel has addressed AI in unique ways – possibly through technology already in your data center.
>> Hello, everyone, and welcome back to theCUBE's live coverage of HPE Discover. I'm your host, Rebecca Knight, sitting alongside my co-host and analyst, Dave Vellante. John Furrier is also here with us this week. I'd like to introduce our first guest of the day: Greg Ernst, Corporate Vice President, Intel Sales and Marketing, and GM, Americas Sales at Intel. Thank you so much for coming on theCUBE.
Greg Ernst
>> Thank you. Thank you, Rebecca. It's exciting to be here, and congrats on all your success. This is always the highlight of the big shows around the world.

Dave Vellante

>> Thank you. Thanks for your support. We appreciate Intel coming on over the years. We've done a lot with you guys.
Greg Ernst
>> Thank you, Dave.
Rebecca Knight
>> So all three of us are fresh from the keynote in the Sphere, the first ever keynote to take place in the Sphere. We just heard from Antonio Neri and Jensen Huang, both of whom were talking about how we are really on the cusp of a major societal and technological transformation with AI. I'm curious, Greg, from Intel's perspective, where are we in terms of this inflection point for businesses that are looking to get going with AI?
Greg Ernst
>> Yeah, no, thank you, Rebecca. And how cool was it?
Rebecca Knight
>> It was.
Greg Ernst
>> The visual optics were impressive. It was my first time in the Sphere, so I know we were all buzzing to get in there and check it out. But Rebecca, from Intel's point of view, our mission as a company, building on decades of what we've done, is to make implementing AI into enterprise workflows easier. That's what we wake up every day doing: how do we work with standards bodies? How do we work with the ISV community that we've always been known for? As you know, Pat used to work at VMware, so that's close to our heart. How do we work with the OEMs, and what ecosystems do we create? And I'm excited to unpack some of those things, because at Intel ourselves, we've had really huge announcements the last six, seven weeks documenting the progress we're making together with the industry to make it easier. But the potential of AI is incredible, and every company is at a different stage of the journey. Some are really out in front, and others want to just learn and be measured in their progress. I'd say most of the clients I've talked to have implemented a few workflows consistently on the enterprise side. Number one is advanced search, whether external or internal, really making that content much more searchable. Help desk would be number two, and then software code generation, number three. But eventually, at Intel, we believe in this AI continuum where for every persona type and every vertical there will be dedicated AI workflows, and frankly, companies will be required to do it in order to stay competitive in their industry.

Dave Vellante

>> And the market, it's almost like you can't even size it, Greg. It's just enormous. And so to your point, Intel is building upon its decades of ecosystem enablement; that's one of the hallmarks. Then this AI wave comes along. It's not like you didn't know machine learning and AI, but the vast majority of customers weren't focused on it. Now a hundred percent are. And what Intel's trying to do is unprecedented: the shift toward AI, the Foundry initiatives, the global build-out, and Pat's on a mission to educate people on the importance of really hardening the supply chain. So, you've got a lot going on. What's it like inside of Intel with the transformation that you're going through now?
Greg Ernst
>> It's torrid. That's a word we've been using the last three years: torrid. The pace is incredible, and at Intel we recognize the huge responsibility we have, because there are two counterforces at work. One is incredible applications for compute, which require an incredible power and compute build-out. But there's also our responsibility around sustainability for the globe. And at Intel, we participate in both. We're actually probably the only company that, at scale, produces customized compute for AI as well as the Foundry element. And so for us, every company is a customer, or a target customer at least. Even some of the companies that you would traditionally think of as competitors to Intel, we're actually becoming a partner, working with them and making our fabs available to them for the build-out. And I'm super proud of everyone at Intel. We are proud of what we have done: we've built a supply chain that's not just globally diverse and resilient, but very sustainable. Almost all of our factories return more water to the Earth than we take out. Almost all of our factories are powered with renewable energy. So, for every company that's really focused on scope one, two, and three, Intel's a key partner, because we are the only foundry where that has become a foundation: renewable energy, and returning more water to the Earth than we take out. As the world demands this compute, we're proud to be able to supply it. And then of course we have this big $60 billion business where we produce our own semiconductors under the Intel brand and sell into enterprises all over the world.

Dave Vellante

>> So on scope, I'm not really deep into the scope three piece, but I'm inferring from what you said that part of it is your supply chain has to be scope three. Nobody is really there yet, but we're all working toward that. So, that's a key part of it. It's not just what you do; it's what you do for your customer, all the way back through the value chain.
Greg Ernst
>> Exactly. Exactly. The two of them combined. And we did recognize, as you said, Dave, AI has been on a rapid ascent the last several years. Edge inference has been huge. A lot of the cloud service providers were using it for better search and recommendation engines. And then clearly, in November when the gen AI explosion happened, it took everything to a whole different level.

Dave Vellante

>> As you're saying, you've got your fingers in a lot of pies. The core: let's come back to Xeon a little bit. Give us the update on Xeon 5, Xeon 6. Do you have Xeon 6 news? I'd love to get into the futures there.
Greg Ernst
>> Thank you. As I said earlier, torrid has been the word at Intel. We've been busy, and fundamentally, what drives our company is Moore's Law. I know there are all sorts of people speculating about whether Moore's Law is dead or not. But one of the things we're really proud of as a company is that in the last four years we've done five manufacturing transitions. With each one of those transitions we increase the transistor density, which again means more compute, less power. Typically, under Moore's Law, you would do one every 18 months to two years. We've done five in four. But we don't just do it for the physics; we actually build products at scale on each one. And as you said, we announced Xeon 5 just six months ago. We're in high production; we're shipping millions of units. And then at Computex two weeks ago, we announced Xeon 6, which is that next generation. Both families integrate advanced matrix extensions, so there's an AI accelerator built into Xeon. One family is focused on high performance cores. The other is energy efficient cores that are wonderful for any application that was written cloud first. So, we've been busy, but great partners like HPE are keeping pace with us and rolling out; in the case of HPE, their ProLiant DL380 will support both families.

Dave Vellante

>> If I may.
Rebecca Knight
>> Yeah.
Greg Ernst
>> Yeah.

Dave Vellante

>> I just want to make sure I'm following this correctly. So, five nodes in four years. Pat has, I think, said that thousands of times. And when I first heard it, I was like, "No way." And they're actually doing it, because the time to tape-out has historically been a couple of years. So, five nodes in four years, again, unprecedented. What's interesting to me, you mentioned torrid a couple of times: Intel was always the conservative one, and now you're doing things that, if you pull this off, we get goosebumps thinking about it. It'd be like a near miracle, because you've got backside power, you've got gate-all-around, you're doing High NA EUV, all at once. It's never even been contemplated before. And so everybody's, of course, rooting for Intel, but it's going to be mind-blowing to see how this all plays out. It doesn't happen overnight, obviously, but there's a renewed focus at the company. It's very clear.
Greg Ernst
>> It's a renewed focus. It's the hard work of 100,000 people. What we wake up every day asking is really just: how do we stretch the laws of physics? Especially now, in a world where compute demand is insatiable, that translates into power and performance, which is critical in these big data centers where power is actually the limiting factor. So, driving Moore's Law is a company-wide effort, and I'm just honored to play a small role, which is bringing it to market.

Dave Vellante

>> And you're doing some wild things with packaging. As I mentioned, I'm no expert on this, but backside power means you can put more transistors on the front side and it's more power efficient, and the numbers are just mind-blowing, again, what you're trying to do.
Greg Ernst
>> I'm impressed you know about backside power.

Dave Vellante

>> I mean, it is the future. It's the future of packaging.
Greg Ernst
>> Well, yeah, maybe I'll explain it for the audience.

Dave Vellante

>> Yeah.
Greg Ernst
>> So, for decades, the wafer had the power delivery, all the routing, all the wiring, and the transistors on the same side. The problem with that is the current's got to travel a longer distance. What does that do? It burns power and creates heat. What our engineers at Intel came up with is a novel approach: transistors and signal routing on one side, power delivery on the other. Now, the challenge is that you can only print on one side at a time. So, we print on one side of the wafer, then you have to flip it, get it perfectly aligned with billions of transistors, and do it a second time. That's why no one else has ever done it before: the complexity of it. And the great news is, it's done, it's proven, and we're going to be shipping products with it.

Dave Vellante

>> And what this allows, if I understand it, is that the EDA vendors, who ship software to help you actually lay down transistors, now have greater flexibility and can do some amazing things. But it's not just... What do you guys call it? RibbonFET, I think, is your-
Greg Ernst
>> RibbonFET for the transistor.

Dave Vellante

>> So, you're not only doing backside power, you're bringing RibbonFET, call it gate-all-around, and Intel introduced FinFET years ago, and on top of that, you guys just took shipments of the ASML High NA EUV machine. I don't really understand how this is going to work, but it's like magic: somehow you lay down this polymer and then it lays down the transistors, and you can use the High NA EUV to guide the laying down of the... Again, I don't really understand it, but that's why I'm saying, if you can pull this off, this is literally a miracle that Gelsinger and Intel are trying to achieve.
Greg Ernst
>> Yeah. And again, I view it as we've pulled it off, right? Customers' orders are coming in, and our engineers have done it.

Dave Vellante

>> Well, all three have to come together with 14A. That's what I'm watching, which I think is 2027.
Greg Ernst
>> That's the sixth node. Yeah, we've done five in four, and you're already pushing me: "Greg, when's the sixth one?"

Dave Vellante

>> Only because if Intel does that, it's like game over.
Rebecca Knight
>> They wake up every morning stretching the laws of physics. We just heard this from Greg.
Greg Ernst
>> Yeah, we do.
Rebecca Knight
>> Greg, can you talk a little bit more about this AI continuum that you talked about? Because I think that it will help, as you said earlier, every customer is at a different stage in their AI journey. So, can you sort of walk us through how Intel views this continuum?
Greg Ernst
>> Yeah, no, thank you, Rebecca. We always challenge ourselves to do more, but one of the things we recognized is that one of the great things the industry has created is the x86 instruction set. It allows developers to write for PC CPUs, edge, or data center, really leveraging that x86 architecture. So, what we recognized as a company several years ago is that we're going to build on top of that with AI accelerators in each one of our product lines. That's our piece. Whether it's PC, edge, or Xeon, each product now has a built-in AI accelerator. Everything we launched in '24 has it. I think that's important for customers, because almost every enterprise company right now has a centralized team inside the company that's really focused on which AI use cases they're going to bring in to provide benefits to their employees. Now, some of those large language models need to run on a private PC; that's what our Core Ultra products, which we launched a few months ago, do. Some need to run in public cloud, and then some are private LLMs that are going to run in the data center. But the good news for us and for our clients is that there's AI acceleration at each step. So, they can choose what's best for their company, aligned to their AI policies and beliefs, and they're not limited by the technology. The technology is there for them regardless of where their data resides and how they want to orchestrate these LLMs. For us, Rebecca, when I say AI continuum, that's what I mean. So, thanks for letting me expand on it.

Dave Vellante

>> Do you think AI, Greg, will compress the life cycles... It's not a PC show here, but PCs we can all relate to, but also of servers? Every cloud vendor, you look at their 10-Ks, they've restated their depreciation schedules to, I think it's now six years. What that does is make the income statement look better, which is okay, that's fine, and that's partly why they do it, but it's also that their servers are lasting longer, right? You can squeeze more out of them. Same thing with PCs, same thing with our phones. Do you think AI is going to change that and compress those cycles?
Greg Ernst
>> Compress it back down, to less than six?

Dave Vellante

>> Yeah.
Greg Ernst
>> I think so, for sure. As long as companies like Intel and Nvidia are doing one to two launches every year, then eventually, with the power it takes to operate a system from three to four years ago, it's not worth paying that electricity bill when performance per watt is skyrocketing so fast. But then, and this is one of the beauties of our product line at Intel, we have general purpose CPUs and we have dedicated accelerators. One of the things I believe is that with proper infrastructure management, a three, four, or five-year-old server could still be useful, but probably for a different workload. You're not going to use something that's four or five years old for training and fine-tuning your latest LLM, but you may still be able to run a lot of your enterprise apps on it and continue to eke out the benefits. But I agree with you. I think the job of an infrastructure lifecycle manager is going to change a lot.

Dave Vellante

>> So, today's training server maybe becomes tomorrow's inference server, something like that.
Greg Ernst
>> Yeah, I think so. And that's been a big part of the conversation we've had with clients, just around managing the cost. A big part of what we're telling them is: don't be so enamored with just one workload that you forget the continuum of applications you're going to need to run.

Dave Vellante

>> Yeah.
Rebecca Knight
>> Exactly. Well, that's great advice. Thank you so much, Greg. A pleasure having you on theCUBE.
Greg Ernst
>> Well, thank you.
Rebecca Knight
>> Very exciting.
Greg Ernst
>> My honor to represent Intel as the first guest of the show, so thank you.

Dave Vellante

>> I know we were geeking out there a little bit, but that was fun. Thank you.
Greg Ernst
>> Dave likes backside power.
Rebecca Knight
>> That's your jam. I get it. I get it. I'm Rebecca Knight for Dave Vellante. Stay tuned for more of theCUBE's live coverage of HPE Discover. You're watching theCUBE, the leader in enterprise tech news and analysis.