David Schmidt, senior director, PowerEdge Product Management, at Dell Technologies, and Derek Dicker, corporate VP, enterprise and HPC business group, at AMD, engage in a discussion during Dell's "Is Your IT Infrastructure Ready for the Age of AI?" event with theCUBE's Dave Vellante about advancements in IT infrastructure suitable for the age of AI.
Dicker reflects on his first 65 days as a corporate VP at AMD, sharing his expertise in silicon innovation and partnership dynamics. Schmidt talks about the decade-long collaboration between AMD and Dell, resulting in five generations of central processing units. The conversation explores the nuanced technical aspects and historical context of AMD's Epyc processor lineup, from Naples to the recently launched Turin.
Schmidt and Dicker unveil key partnership elements, emphasizing the deep trust and engineering collaboration that steer product innovation. The evolution of customer requirements and workloads has been a critical driver of AMD's product development strategy, resulting in versatile, scalable and power-efficient infrastructure solutions, according to Schmidt. The dialogue also highlights the technological strides in advancing offerings such as Turin for diverse workloads.
>> Hi, everybody. We're back here in Round Rock Two. This is Dave Vellante. And David Schmidt is back, and we're joined by AMD's newly minted Corporate Vice President, Derek Dicker. Thanks for coming on theCUBE. It's good to see you.>> Thank you so much for having me.
Dave Vellante
>> So how's it feel? 65 days in? Wow, you must be a seasoned vet by now.>> It's been fantastic. One of the things I love the most is getting a chance to spend time with partners and customers. And I've had the opportunity to get to know Dave a little bit and it's just wonderful.
Dave Vellante
>> It's been amazing. The whole silicon, the AMD story is just incredible. The ascendancy, what Lisa has done is just astounding. Why don't you guys start by talking about the partnership? Where's it go back to? You may or may not have the historical context, but give us your perspective. And then, David, you can chime in.>> Maybe I can share what I've heard so far and Dave can add to it. I think the best part about it is that it's a partnership that goes back almost 10 years, and it starts all the way from the beginning of Epyc's first generation product, which is Naples. And that manifests itself in 14G, right?
David Schmidt
>> Exactly.>> And then we went 15, 16, and 17. So that covered all the way from Naples to Rome, Milan, Genoa, and then Turin, which is our fifth generation product we recently announced. And what I'd say about all of it is that, together, it's been a pretty amazing experience where we've delivered deterministically on time with the technology together as a group, both getting Epyc processors ready and validated. They were designed and architected with input from Dell. But as you stare back in time to see that we've done essentially five generations of CPUs and four generations of Dell PowerEdge, it's been phenomenal.
David Schmidt
>> If you need any more proof that Derek and I are fully on board and aligned together, he knows all about the Gs. You went all the way back to 14G. You're speaking PowerEdge language. But he's exactly right. I mean, we go all the way back to Naples. And it's like a tour of Italy. We went all the way through to where we are today with Turin, and the partnership we've had and going through those different generational launches, those different iterations, we've refined how we work together. We've challenged each other. We've really brought good things to market for our joint customers that we think have really improved the overall solution that we deliver.
Dave Vellante
>> How do you guys think about innovation not only from the chip level or the server level, but the combination? And what kind of engineering work do you guys do together?
Derek Dicker
>> Yeah, I'd be happy to take a run at the beginning. I think any partnership needs to be rooted in trust and the ability to have good robust dialogue about what are the challenges that we're trying to solve? What are the customer pain points that exist? And the thing I love about the relationship between our two companies is that there is an active dialogue around that and it results in having very specific conversations around what the architecture of not only the silicon is, but starting with the system level. What are the things that customers care about as they're buying a system? And we work that back down into the device. And if you look at what's manifest with Turin, with our fifth generation Epyc product, you can go back and you can point to specific parts of the feature set that came out of the relationship between the two of us translating into things that allow Dell to offer a full stack of products.
David Schmidt
>> Here's a great example. You look at our systems, and we know our customers obviously very, very well and the power systems that we design, and when we sat down and first started discussing Turin with AMD, we recognized that there's different sizes for different parts of customer journeys, what they need to deploy in their data center. We looked at the core counts, we looked at how AMD was laying out Turin, the Turin stack from top to bottom, and we really challenged each other on what do we need to provide? What is the range we need to provide of capabilities? And we were bringing system design, system ideas to the table of we need the ability to serve customers that need eight cores, 16 cores. It's not all about the highest core counts possible. It's about providing that right range. And by the way, those lower ranges are power balanced as well because there's some customers that need to right size their infrastructure according to what they have inside their data center from a power and cooling standpoint. We really challenge each other to come up with the systems at the lower end of the configuration spectrum all the way up to the top end of the spectrum.
Dave Vellante
>> Okay. So that flexibility for customers, how have the requirements changed from 14G to today?
David Schmidt
>> Oh, a couple of ways. Let's rewind a couple of generations. You were in a predominantly SAS/SATA world from a storage perspective.
Dave Vellante
>> Right, yep.
David Schmidt
>> And then, of course, there was spinning media and hard drives, and that necessarily wasn't something that was consuming IO. Now we're in a flash world. We're needing to balance IO between, really simply put, the front of the server and the back of the server. How much are you going to dedicate to your flash storage up front? How much are you going to reserve for IO in the back so that you can move data inside and outside the system? And so that's probably been one of the more impactful changes.>> Totally agree.
David Schmidt
>> From the silicon's perspective, there's been quite a few changes as well. I think core counts is certainly one of them, but we've really been partnering with AMD in terms of, in a two CPU design, for example, how do you balance the lanes that are communicating between the processors? How do you balance that with upfront? How do you balance that in the back?
Dave Vellante
>> It's all about balance, right?
Derek Dicker
>> It is, absolutely. And I would add to that, I think if you look at the evolution of Epyc over time, you look at what Turin represents today, one of the things that we have is quite a bit of bandwidth coming in and out of the product. It's essentially 128 lanes of PCIe with the ability to bifurcate those down. Now, mapping back to the storage comment that Dave was mentioning, as we're finding more solid state coming into systems, the throughput requirements of those drives are dramatically different than a hard disk drive. So you need to feed the cores in the CPU and you also need the ability to take advantage of a large number of these as density increases. And so what we've been able to do with input from Dell is architect a system that allows us to deliver those 128 lanes. They're bifurcatable. You can shove 64 in the front, 64 out the back, have a balanced network connection, but then also have the ability to service your storage in a system. And it's from the deep, detailed discussions that we've had as two organizations that we've come to the conclusion that that needed to be built.
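[Editor's note: the lane-budgeting trade-off described above can be sketched roughly. The per-device lane widths below (x4 per NVMe drive, x16 per NIC) are typical assumptions for illustration, not figures from the conversation.]

```python
# Rough sketch of the PCIe lane-budgeting trade-off Dicker describes:
# 128 bifurcatable lanes split between front storage and back IO.
# Per-device lane widths are assumed typical values, not interview figures.

TOTAL_LANES = 128          # lanes per Epyc socket, bifurcatable
LANES_PER_NVME = 4         # a common width for an E3.S / U.2 drive
LANES_PER_NIC = 16         # a common width for a high-speed adapter

def lane_budget(front_drives: int, back_nics: int) -> dict:
    """Return the lane split for a given front-storage / back-IO mix."""
    front = front_drives * LANES_PER_NVME
    back = back_nics * LANES_PER_NIC
    if front + back > TOTAL_LANES:
        raise ValueError("configuration exceeds the 128-lane budget")
    return {"front": front, "back": back, "spare": TOTAL_LANES - front - back}

# The balanced split mentioned above: 64 lanes forward, 64 lanes back.
print(lane_budget(front_drives=16, back_nics=4))
# -> {'front': 64, 'back': 64, 'spare': 0}
```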
Dave Vellante
>> That's interesting. When you got this slow spinning disk, you can do some other things while you're waiting to do it, right?
David Schmidt
>> Not anymore.
Dave Vellante
>> And you're pushing bottlenecks around. Now, the networking becomes a whole new ball game and you've just opened up the floodgates, really.
David Schmidt
>> That's exactly right. Yeah.
Dave Vellante
>> So would you say that's a big part of your value proposition, is the ability to both anticipate those changes and accommodate them?
Derek Dicker
>> I would. I think having such a wonderful partnership, bringing together the system-level view with the silicon technology capabilities, is a great marriage. What we've come out with as a result of this engagement is building products that, as Dave has suggested, scale from the lowest level at eight cores all the way up to 192 cores. And the beautiful part about it, we were talking about this just yesterday, is you can have customers that want to go buy a single SKU, but they want to scale that thing up and populate a whole lot into it, or drive all the way down and populate a whole lot less, but have the operational efficiency. And the beauty of the products that they've helped us define is that we can do that with a single device, and it all comes down to the architecture of the product.
Dave Vellante
>> So can we get specific on the products and double-click on them?
David Schmidt
>> Absolutely. We talked about the low end, so let me walk you through, and then I want to come back to the high end as well. So if you think about our latest generation that we've launched based on AMD Turin, we talked about it I think October timeframe out in San Francisco. We have two socket designs in 1U packages, 2U packages. We've built an accelerator product, the XE7745 as well, that allows for eight double wides, 16 single wides in the platform. And so that gives us the ability, especially when you look at the rack servers, to serve those customers that are still in a 19-inch environment, they're predominantly air-cooled, they need to have a maximum amount of storage capabilities up front. We're really excited about 40 dense small form factor drives, the E3.S drives, in the front of the server. That's going to be a huge amount of storage that we're already talking to customers about today, especially in some of the software-defined storage scenarios that we're helping build. They're looking for that maximum flash density that they can get. And that's one of the form factors that's going to help customers get there. So we have that capability. The air cooling that we provided in our latest generation architecture lets you do the 500 watt processors, the 192-core procs. You can do that in an air-cooled package. You can do two of those in a 2U system and you can do it air cooled. A lot of times you're going to drive yourself into a liquid scenario in that type of environment. Not so in our PowerEdge platform. So we're really excited about that. A great example where we challenged each other and partnered on innovation is also up at the high end. We came to AMD about a year and a half ago and we showed them our vision for what next generation data centers are going to look like from an architectural perspective.
We said, "Customers are going to optimize their power, they're going to provide more power to the rack, they're going to go liquid first and make sure that's part of their data center environment." And we presented the ORV3 standard to AMD as the design premise and how we could build next generation 21-inch platforms and really take density up another notch. And that's what we talked about at Supercompute. That was our M7725. It lets you get 27,000 cores in a single rack deployment. Huge amount of cores.
Dave Vellante
>> 27,000 cores in one rack?
David Schmidt
>> That's right.
Dave Vellante
>> Enabled by the 192-core processors stamped in.
David Schmidt
>> So you've got two sleds in a single rack unit, shadow core sleds. And so you're really getting four CPUs in a single OU. And we had this on display at Supercompute. You've got pictures out there for all to see. We're super excited about it and we're having that conversation with all types of customers, electronic design customers, high performance compute obviously. We have FinTech customers, financial trading customers, looking at that type of platform because they've recognized that, once you cross a point where you're redesigning your data center, you need to think about your rack scale differently. It's not just about getting a box one or two at a time. It's about thinking about the rack as a unit of scale.
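[Editor's note: the density figure quoted above checks out arithmetically. The 36 OU rack height below is an illustrative assumption (ORV3 rack heights vary); the four CPUs per OU and 192 cores per CPU come from the discussion.]

```python
# Back-of-envelope check of the "27,000 cores in one rack" figure.
# 4 CPUs per OU and 192 cores per CPU are from the discussion;
# the 36 OU usable rack height is an assumption for illustration.

CPUS_PER_OU = 4        # two 2-socket sleds per OU, as described above
CORES_PER_CPU = 192    # top-of-stack Turin core count
RACK_OUS = 36          # assumed usable OU in the rack

cores_per_ou = CPUS_PER_OU * CORES_PER_CPU
rack_cores = cores_per_ou * RACK_OUS
print(cores_per_ou, rack_cores)   # 768 cores per OU, 27648 per rack
```

At 768 cores per OU, a rack in the mid-30s of OU lands right around the quoted 27,000 cores.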
Dave Vellante
>> And what you just described is hybrid air and liquid cooled or is it predominantly liquid?
David Schmidt
>> It is predominantly liquid cooled. So it is direct to chip liquid cooling.
Dave Vellante
>> Yeah, DLC. But you've got a big sector of the market that doesn't want to go there yet.
David Schmidt
>> That's right.
Dave Vellante
>> So you've got an air-cooled option for them.
David Schmidt
>> That's right.
Dave Vellante
>> What about how have workloads changed and how has that informed how you're designing silicon and how you're designing products?>> Do you want to lead off on the workload side?
David Schmidt
>> Yeah, let me tee some things off. When you look at core scaling, it obviously carries with it quite a bit of memory requirements as well. And so you've got memory to core ratios that have to keep scaling. One of the things we've talked to AMD about quite a bit is how do we build the right memory architectures so that we are providing the right amount of memory for all the cores that AMD is delivering. And so the workloads are scaling I would say pretty well with the core counts, but it's only because we're able to design the systems that are providing the right amount of memory. And then, of course, we talked earlier about IO, providing the right amount of IO.>> And I would say from our perspective it's great to have a partner that can provide guidance on where those workloads are, how they manifest into a system requirement. What we have to offer and what we spend a lot of time talking about is the architecture, the actual Epyc CPUs themselves. With involvement from Dell, we essentially have created an architecture that allows us to build chiplets that custom tailor the size of the core counts in any given device. But the beautiful part about it is that the CCDs that make up those chiplets, where all the cores are located, we can scale up in number to address higher performance workloads. We can scale it down. And the IO devices that are in there that connect out and deliver all this great IO, they're the same. And so we use that to scale all the way up the stack from an eight core device to a 192 core device. The beauty is that they get all the same features inside each one of those Epyc variants as they scale up. So you don't have any worry about having differentiation in features at the top of the stack or the bottom as the workloads come in.
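[Editor's note: the memory-to-core ratio Schmidt raises can be illustrated with simple math. The channel count and DIMM sizes below are common values for this class of two-socket platform, assumed for the sketch rather than taken from the interview.]

```python
# Illustrative memory-to-core ratio for a 2-socket system.
# 12 memory channels per socket and the DIMM sizes are assumptions
# typical of this server class, not figures from the interview.

SOCKETS = 2
CHANNELS_PER_SOCKET = 12   # assumed memory channels per CPU

def gb_per_core(cores_total: int, dimm_gb: int, dimms_per_channel: int = 1) -> float:
    """Memory capacity per core for a given DIMM population."""
    dimms = SOCKETS * CHANNELS_PER_SOCKET * dimms_per_channel
    return dimms * dimm_gb / cores_total

# A 2 x 192-core box with 96 GB DIMMs, one per channel:
print(gb_per_core(cores_total=384, dimm_gb=96))   # -> 6.0 GB per core
```

The same DIMM population on a low-core-count configuration yields a much richer ratio, which is one way the eight-core end of the stack stays balanced.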
Dave Vellante
>> So the chiplet architecture gives you that flexibility to essentially mix and match different, I guess, XPUs, and then accommodate different workloads.>> That's exactly right.
Dave Vellante
>> High performance, it may be training versus inference and you can customize the package accordingly.
Derek Dicker
>> That's exactly correct. Yeah, it affords the flexibility for us to go address a wide variety of workloads at different price points also.
Dave Vellante
>> Yeah, I mean, when you think about what workloads used to be, you had transaction processing, you had analytics, maybe a little data warehousing, maybe some collaboration. Still have that. Now you're injecting intelligence into all of those. You're bringing together more data, much larger scale data, higher bandwidth, with new AI workloads.
David Schmidt
>> And a chatbot to go along with it.>> Right.
David Schmidt
>> And some of the white papers we've been publishing with AMD really demonstrate how you can run hundreds of users on a two socket, 192 core Turin-based PowerEdge, and you can serve hundreds of users with a chatbot in a small language model alongside it. So we think there's a space there, and we're starting to have more and more conversations this year of realizing that not everybody needs some large language model training deployment. It's more about the consumption of it on the inferencing side and being able to deliver the results to your users, your internal users, if you're a large IT department inside an enterprise. How do you do that in a scalable way? And then going back to what Derek said earlier, how do you do it on a platform that you can certify on a common PowerEdge platform with AMD Turin, and you can do it for all shapes and sizes within your server estate, within your workloads.
Dave Vellante
>> Yeah, I mean, the whole small language model trend, we've been talking about it for a while, but you've seen it. You see language models running on phones. You saw last summer Databricks came out with their mixture of experts. You saw DeepSeek, what they've done. And so it's just going to continue to get more efficient from a packaging standpoint, lower cost, more intelligent. And then presumably that enables people, your customers, to figure out new ways to deploy all that. So my question is, you guys have to think ahead. You can't just snap your finger and spit out a chip set overnight. So when you look ahead and you put on your binoculars or even telescope, what are the things that you look at to say, "Okay, these things aren't going to change in the next 10 years," you can pick your end, "but more than a year, more than 18 months"? What are the things that you guys think about down the horizon that you're designing for?
David Schmidt
>> Don't want to get too boastful here, but I feel like with our...
Dave Vellante
>> Bring it.
David Schmidt
>> Bring it.>> Why not?
David Schmidt
>> Why not? I talked earlier about our next gen architecture and our scalable architecture with our 21-inch server designs. We really planned that for a multi-generational life cycle because we knew that we couldn't just show up, talk to Derek and Derek's team for a while, do a major generational launch, and then go away for three more years. We knew that we would constantly be delivering the latest technology. And what I don't want to do is go to my customers and say, "Hey, bad news. You got to ship out all that old rack hardware and bring in some new stuff." And so when we think about the power bus, when we think about the liquid manifolds, when we think about just even the rack shape and size, just the physical aspect of things, we're really thinking about what happens in the next generation. So I actually can't tell you that we've figured everything out from a silicon perspective or an IO perspective. We are going to constantly innovate and we know that our customers are going to bring really new and exciting ideas that we want to go help them enable. But I do know that we're building that infrastructure around it so we can deliver that to them, not only building the infrastructure, but the services capabilities and the delivery capabilities on the Dell side. We can continue delivering that to them in a scalable way no matter what the technology is inside.
Dave Vellante
>> So you had to make some decisions going through a one-way door, but you had to say, "Hey, we got to set an envelope rack size, watts per rack."
David Schmidt
>> Power per rack, exactly.
Dave Vellante
>> Things like that. Then that informs what you guys are able to. You have to design to that.
Derek Dicker
>> Absolutely the case. I think the other piece is you provision a silicon architecture with the end state in mind, and over time you're going back and revisiting, always, how you're going to modulate those things. I think the exciting part about it is we're now playing with not just CPU technology, of course, CPU technology, but also GPU technology. And we're working together to go look out into the future. To your point, the amount of innovation, the pace of innovation is insane, but what that does is just applies more impetus on the two of us to remain as close as possible.
David Schmidt
>> Ownership.
Dave Vellante
>> It's like that scene in Apollo 13 where they spill all the stuff on the table and say, "All right. Make this work." You have to work within an envelope and say, "Okay, you've got to make trade-offs to meet the customer demands."
David Schmidt
>> With slightly less duct tape.>> That's right. That's right.
Dave Vellante
>> Yeah, less duct tape. That's good to hear.>> Not a lot of duct tape.
Dave Vellante
>> Slightly more complicated. All right, guys. Thanks so much for taking some time.
David Schmidt
>> Thank you.>> Thank you.
Dave Vellante
>> Congratulations on the partnership and best of luck.>> Yeah, appreciate it. Thanks.
Dave Vellante
>> All right, and thank you for watching. This is Dave Vellante. Keep it right there. We got more action coming from Round Rock in Texas. You're watching theCUBE.