In this KubeCon + CloudNativeCon North America segment from Atlanta, theCUBE’s Rob Strechay and Savannah Peterson speak with Kevin Cochrane from Vultr and Aleks Shargorodskiy from AMD about how their partnership is changing the economics of cloud and AI. Cochrane explains how Vultr and AMD co-engineered the new VX1 data center CPU offering, delivering 82% better performance per dollar and a 33% lower price than leading alternatives, helping customers refresh CPU infrastructure while funding new GPU investments. Shargorodskiy details how Vultr’s global footpri...
>> Good afternoon, open source fans, and welcome back to chilly Atlanta, Georgia. We're here midway through day one of our three days of coverage on theCUBE at KubeCon. My name's Savannah Peterson. Very stoked for this next panel. We've got a return celebrity on theCUBE, as well as a new friend, to talk about a very exciting partnership. And I got to bring it all to you with the fabulous Rob Strechay. Rob, I really love our commitment to purple today.>> Today, we be purpling.>> Today, we purple.>> Yes.>> And it is working for us.>> Yes.>> Speaking of matching, our other guests are also matching in their black and white. Aleks and Kevin, thank you so much for being here today.>> Thanks so much for having us.>> It is a joy. Kevin, we're making this a thing for us every two weeks now. We just did this in D.C. I love it because it means we get to dig into all the aspects of the business. You brought a new friend on the show today with us, so talk to us. Give us a little bit of the back story on the partnership with AMD and why the lovely Aleks is sitting here with us.>> Well, actually, not so much a new friend. So, we've been working very closely as partners for well over a year now. So, Vultr, as you know, has a long-standing relationship with AMD as a strategic partner on our core cloud compute side, most recently doubling down on that partnership with our latest VX1 release, which changed the game in unit economics for core cloud compute, offering 82% better performance per dollar than any leading compute plan alternative.>> 82%?>> 82%. We'll talk more about that because it's a really fascinating story. But last September, we extended the partnership with AMD around the Instinct GPU line, taking to market first the MI300X and then subsequently the MI325X and most recently the MI355X. So, that's when we started working here with Aleks, who was on the front lines leading the charge for the adoption of the MI300X-plus GPU series.
And it's been a wonderful partnership because together with Aleks, we're unlocking a treasure trove of new opportunities and really bridging the world of CPU compute and GPU compute on the AMD architecture.>> Yeah. So, help us understand, like you said->> It's so cool.... >> you're out there talking to all of these end users and bringing back this information.>> Yeah, you hear everything.>> So, help us understand, because we hear a lot, everybody's like, "Oh, AI. Only 5% is getting..." And I know they're talking mainly agentic or gen AI, but there's also traditional AI, as I call it, which is the workhorse of many different verticals and things like that as well.>> So, one thing that I've really been seeing a lot more of lately, because everybody... Don't say the word, but we know what we're talking about, the bubble, right?>> Yeah.>> Like is there? Is there not? And 12 months ago when I first started working in this GPU space, specifically on the neocloud side, mostly we were selling into neoclouds and expanding our presence, and that's how I got to working with Vultr. But now, we've really expanded the way that we work together, not just selling in, but selling through and selling out, engaging with Vultr's end customers and working really closely with our engineering teams. Because within the industry, there's the moat that we talk about that could be difficult to cross. But over the last 12 months, we continue to chip away with the releases of... Committed to open source ROCm and->> Love that you're repping right now. Just love it.>> So, here's how nerdy I am. This is off eBay. This is from Super Compute 2013 when ROCm just launched. So, this is the ROCm launch shirt from Super Compute.>> Okay, first of all, I love that because we'll be at Super Compute next week.>> So, it's vintage?>> I've got vintage swag.>> And I love that you went on eBay so you could have the OG.>> Yeah, yeah, of course.>> This community is so cool.
It really is. Okay, so that's really exciting. Let's talk a little bit about what this unlocks. What does this mean you're able to do together? Aleks, I'm going to ask you this question first. I mean I know Kevin's an amazing person, but what's the big Vultr benefit?>> So, the big benefit with Vultr is how their infrastructure is all around the world. They have this deep knowledge and experience of working with enterprise and corporate customers from the CPU side, and now they have the expertise to do the GPU and the AI piece. So, they've got all of this expertise without the overhead of potentially some of the bigger hyperscalers. And if you know what you need, they can assist you. And if you know what you need, I can also help assist to get them on AMD. And it's very, very easy. Switching over from CUDA to ROCm has never been easier. And it gets easier every two weeks when the releases come.>> Exactly. And that's why this show is so exciting. I mean, look all around you. Look at all the energy, look at all the amazing innovators here. These are the platform engineers. These are the cloud-native developers, these are the cloud architects, these are the enterprise architects. These are the people that are truly building the future. And when they're looking to AI-enable the applications they're building, they're deploying these globally. They're deploying these to support all of their employees. They're deploying these to support all of their end customers, and they're looking for an integrated architecture of CPU and GPU that can scale cost-efficiently globally. And I think that's something very unique that we're able to bring to the conversation here today.>> Yeah, I mean I look at it and, again, talking to some end users, I think one of the things that they're trying to figure out is sovereignty.
And sovereignty's become a big thing and it started with->> Super important, Rob. I'm glad you brought it up.>> Yeah, like DORA out of the EU. It started out in France and then has basically taken over the financial services industries over there. What do you see as we look into '26? Sovereignty... I think it's going to continue, and I think it's going to come home to the US even more as well.>> Right. I think there's a couple aspects of this. First of all, we need to look at this from the data perspective. Then, we need to look at it from the overall infrastructure perspective. So, from the data perspective, Vultr has always upheld data residency and data sovereignty. Your data is your data. It doesn't cross geographic boundaries unless you physically move it. We don't have services that touch your data, manipulate your data, do anything to your data. So, that's always been core DNA of Vultr from day one. But moreover, and I think we saw this most recently with some of the events in the industry, is we need to be able to support sovereign cloud efforts where you can set up dedicated control planes in region that can manage a set of cloud resources that don't have external dependencies. And again, this is something that is a unique, bespoke offering that we do here at Vultr with our own sovereign cloud offering. And I think this is very important in the context of AMD as well, because we want to set up sovereign clouds that can also help you scale your global workloads, which is the combination of CPU and GPU compute. Sovereign clouds that are only GPU-based don't enable all of the developers here to build and scale the future.>> Absolutely. Well, and you need to be able to meet them where they are, let them ramp up those workloads as necessary->> Exactly.... >> and optimize the flow of that workload or whatever that might be for cost, for a lot of different things.>> Exactly. 100% correct.
And again, when you look at this whole crowd here, this crowd is a crowd that's deeply committed to open source. This is a crowd that's deeply committed to open standards. I mean, this is the CNCF after all->> Which is the best part about it.>> Which is the Linux Foundation.>> Everyone wants to help each other. Everyone wants to be transparent. It's just a different vibe.>> It's a different vibe. And particularly in the AI world, we focus so much on the GPU, but there's also a supporting software layer that's really critical for developers to be able to learn and master. And I think that this gets to Aleks's point about ROCm. ROCm is the software infrastructure that unlocks the power of AMD GPUs, and it really follows a principle of open source and open standards. Six-week release cycles, and every release cycle, you're unlocking more power even in older generations of AMD GPUs. So, we love that commitment to open source and open standards, and we love being able to talk about that here with the CNCF and the broader Linux Foundation. Open source, open standards, that's always the key to ongoing innovation and it's only going to accelerate.>> I really feel like open source is having a little bit of a moment right now, where everyone's... We've all known this as community members for a long time, but all of a sudden everybody's like, "Wait, what they're doing over there is working and it's faster and it's more secure and they're able to-">> Correct. 100%.>> "... enable all these different partners." It's really refreshing, I think, for all of us. So, I got to go back to the 82% savings. Bring us back there.>> Okay. So, obviously we're strong partners with AMD. And what we've just spent the past nine months doing is co-engineering a new compute offering, which is what we call VX1, which we just launched a little bit over a week ago. And VX1 is our new data center CPU option.
It's designed for the most demanding workloads on the planet and it changes the unit economics in core cloud compute. So, against the most standard alternative, I won't name anything specifically, it literally is an 82% performance-per-dollar advantage on top of an absolute price advantage. It's 33% cheaper than the next alternative in the market. So, there's just pure dollar savings. But when you couple the dollar savings with the actual performance boost, you get that 82%. And this is super important because beyond just the adoption of GPUs, there is a data center refresh cycle going on right now. People need to upgrade to newer CPU infrastructure, and at the same time, they need to lower their cost of operations because they also need to invest in GPUs. So, the question I ask is, what if you had the best of both worlds, where you've got the best-performing CPU at the lowest possible cost and freed up enough resources to get all your new GPUs for essentially free?>> I mean you are in marketing. You definitely sold me on that. But it really is a nice offering to be able to team together and do that. I suspect some of your customers and the folks on the other side of that frontline really love that. Aleks, I want to ask you a question because I know it's a big passion of many of us in this industry. Life sciences and healthcare, you're seeing some very interesting things there. Can you tell us a little bit about that?>> Yeah, so AI, I believe, is going to unlock that next frontier of innovation within healthcare and life sciences, particularly life sciences. So, AI allows us to take lots of this complex data and look for patterns, and do this at a scale that we've never imagined or been able to do. So, over the last couple of years in life sciences, we've already been doing this work for a long time, but now we're starting to see how impactful the data needs to be for this kind of work.
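[For readers who want to sanity-check the two VX1 numbers quoted above, here's a back-of-envelope sketch. The 82% performance-per-dollar and 33% price figures come from the interview; the derived ~22% raw performance edge is our own arithmetic under the assumption that both figures compare against the same alternative, not a number Vultr stated.]

```python
# Back-of-envelope check of the quoted VX1 unit economics.
# Assumption: the 82% perf/$ advantage and the 33% price advantage
# are both measured against the same competing plan.
price_ratio = 1 - 0.33           # VX1 costs 0.67x the alternative's price
perf_per_dollar_ratio = 1.82     # 82% better performance per dollar

# perf-per-dollar ratio = (raw performance ratio) / (price ratio), so:
raw_perf_ratio = perf_per_dollar_ratio * price_ratio
print(f"Implied raw performance vs. the alternative: {raw_perf_ratio:.2f}x")
# -> about 1.22x, i.e. roughly 22% faster at a 33% lower price
```

In other words, most of the quoted 82% gain comes from the price cut; the remainder would be raw performance, if the two figures share a baseline.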
And the models that are needed in life sciences are really, really big. And so, one big benefit of AMD GPUs is we have a competitive advantage in our high-bandwidth memory. So, all these large models that do genomic prediction for folding proteins and drug discovery, they can all fit on a single GPU and run on a single one.>> Saves you a lot of money when you don't have that.>> I have long sleeves on today, but I just got goosebumps. That was one of those moments of->> That's a big savings.>> The time, the cost, and the added security of knowing that that's in one place versus not.>> That's exactly right.>> That's awesome.>> I mean I think we're all seeing it. I was having a discussion at lunch today with a woman, and they discovered she had breast cancer by using AI. The AI went and looked for these patterns and stuff like that.>> It's amazing. It's amazing.>> It's the best.>> It's not just in the drug discovery side, it's happening everywhere. But like you said, it's these models that are being built out and they're coming, and the inference is coming out to all different geographic locations. Are you->> Well, again, this also gets to the sovereign cloud. At the end of the day, we want to change the world, and one of the best ways to change the world is to start driving individualized health outcomes for people all around the planet. In order to do that, we need to collect a vast amount of data. We can't just run clinical trials on a particular ethnicity here in the United States. It can't be white males in the United States on whom we run all our models and all our clinical trials. We've got to collect data on all different types of people all around the planet to really uncover what are the different healthcare incidents that they have, what are the predictors of those healthcare incidents, and then what are the unique treatment plans that we could build for them? That's a vast treasure trove of data.
This is where it makes sense to have models that can run on a fewer number of GPUs, because they're going to be doing more tailored, distributed models than ever. And moreover, all that data has to be resident. You're going to collect all of this data, all this personal information from all these people, and you're going to bring it into one centralized cluster here in North America? I don't know if that really works. So, you need to be able to run these models and train these models->> Sensitive data.... >> sensitive data, in the countries where they're resident. And infer them in the countries where they're resident.>> Right. >> And I'm passionate about this, by the way. If you're going to change the world, you got to do it right.>> You tell them. You tell them.>> Because I think you have to go distributed.>> You have to.>> Because A, there's not enough power in one place for a lot of this stuff and not enough to get after what you need to do. But you were just saying... And it was funny, I was working with a company and they were looking at moving... What is it? Not mammograms, but like the CAT scans.>> Yes.>> And funny enough, with CAT scans, you have to have all this metadata that goes along with it, the angle of the machine, what type of machine it is. China makes them strip all the metadata out, not just the PII, but all the metadata that goes along with it. So, you can't even use anonymized data when you move it outside. So, to your point, when they're going through and doing all of these things, looking for how well do the AI models work on this? They have to do it in-country->> They have to do it in-country.... >> otherwise, they can't even move it out. Are you seeing a lot of this where that distributed nature of->> Oh, 100%. 100%. And again, it's not just distributed GPU clusters, it's also distributed CPU clusters, because there's a lot of CPU workloads that have to correspond with them.
And so, that's actually a big growth driver for our business as we look forward into 2026: I think the world is waking up to the fact that decentralized, what we call scale-out, architectures matter. We've been talking so much about scale-up architectures, big massive clusters in one centralized location. And the subject of what we're talking about with our customers now is, "Okay, great. We can scale up, but now how do we scale out?" Because from a compliance perspective, from a safety and security perspective, and just to deliver a better customer experience, scale-out is now the name of the game in 2026.>> I'm really glad that you brought that up because I do think there's always this emphasis on volume, scaling up, but when you're scaling out, that's actually when it gets more complex.>> It gets much more complex.>> Yeah. So, that's when you really need your buddies.>> Yeah, especially in healthcare because it's decentralized, so it has to be federated. So, I always call it the millionaires' dilemma, because you don't know how much each has or what the data is, but you don't want to tell them directly. So, how do you enable all that through hardware and through software? It's right on the cusp there. And that's going to unlock a whole new wave for rare diseases particularly, where the datasets are sparse. And we'll see it all throughout the sovereign regions.>> Yeah, wow. It just gives me good feels for the future. Speaking of the future, I would love to get your 2026 predictions. You just started to tease at it, Kevin. So, I'm going to go to Aleks first.>> 2026 prediction? I think there's going to be more and more and more inference, and that's what I'm starting to see more of. We're going to see AI in our day-to-day life.
When we come back from the doctor, and then the next day we get a phone call and you think it's a nurse and you're like, "Oh, this nurse sounds like she has a little lisp.">> That's my voice, the lisp.>> Yeah, but with a lisp. Then, next thing you know, there's a disclosure that you just talked to an AI agent, right? So, I see a lot more of direct impact from a life->> I love that.>> Yeah, definitely moving from a science project to real-world outcomes. And on our end, we have a very simple prediction. Like I said before, the Vultr you see today is not the Vultr you're going to see in the future. And there's going to be some big announcements coming down the pike, actually even next week, between my friend Aleks and me here, with our next steps with AMD. So, stay tuned next week.>> Well, we're, you know, going to be staying tuned. Are we talking about that at Super Compute next week?>> We're talking about it at Super Compute next week.>> Fabulous. What a lovely note to->> Yes, so more predictions then.>> Yes.>> More to come.>> More to come.>> Can't wait to hear all about it. Aleks and Kevin, thank you so much.>> Thank you so much.>> This has been such a delight, seriously. And Rob, thank you as always.>> Always.>> We purpling strong today.>> Purpling.>> And I hope you're rocking whatever color you're wearing, wherever you might be. We're here in Atlanta, Georgia at KubeCon. My name's Savannah Peterson. You're watching theCUBE, the leading source for enterprise tech news.