The conversation at KubeCon revolves around the impact of AI on Kubernetes and how organizations are adapting their applications to incorporate AI technology. Vultr, a key player in the cloud computing industry, provides a range of services including cloud CPU, cloud GPU, and bare metal, with a global presence in 32 data centers worldwide. The company's focus on operational efficiency and customer needs has enabled it to compete with industry giants and maintain consistent growth over the past decade. With a managed Kubernetes engine and offerings like Vultr ...
>> Good afternoon, Cloud Native community, and welcome back to beautiful and snowy Salt Lake City, Utah. My name is Savannah Peterson. Delighted to be joined by Rob Strechay for a power-packed, but super fun-packed series of days. You and I barely got to eat lunch, we have so many friends here.
>> Yeah, this has been really crazy. Again, with over 50% being new, this being their first KubeCon as they-
>> No, I love that that's a thing every year too.
>> It's such a fantastic thing. And I think, again, why not? So much AI is being built out on top of Kubernetes, and people are here thirsting to understand how to do it better, as we've been seeing all day long today.
>> They're thirsty for those-
>> Thirsty.
>> They're thirsty for those tools. Rob, I love that. Speaking of tools and cool companies and AI, Nathan from Vultr is here with us. Nathan, thanks so much for coming to hang out.
>> Thanks for having me.
>> You wouldn't know you got in at 2:00 A.M. last night. I can tell your energy is ready to rock and-
>> I appreciate that...
>> and it's exciting.
>> Well, caffeinated.
>> We've got tools. We've got tools, thankfully. That's what we're talking about in general.
>> Tools for that, too.
>> Vultr has been around for 10 years, and so has Kubernetes. Pretty interesting parallels there. In case folks aren't familiar, because you're doing a lot right now, what is Vultr up to?
>> Absolutely. So I'd love to tell you a little bit about Vultr. Vultr is the world's largest independently held cloud computing company. What that means is that we offer cloud CPU, cloud GPU, bare metal. And we do that in 32 global data centers around the world. What that means is that-
>> Wow. That's a lot.
>> It is a lot. So again, we've been in business for over a decade, and as you compare us to hyperscalers, we have comparable footprints to some of the world's largest hyperscalers. We've also been doing it, again, for 10 years, so we have a great operating history as well. What that means, practically speaking, for end users is that we're able to address about 90% of the world's population in under 40 milliseconds. I think that's really important for, obviously, your traditional web application, CDN deployments, but also, with the advent of large language models and Agentic AI, being able to deliver your AI models to your end customers in the country that they're in, I think is something that's really important.
>> Yeah, I mean, we see that as being key, to your point about Agentic, and as people look to small action models and large action models, or collaborations of agents, as it were, it seems like Kubernetes is built for that, especially when you get into things like inference.
>> Absolutely.
>> And out towards the edge, like you're saying, near the people. Is that what you're seeing, and how does Kubernetes play into that at Vultr?
>> Yeah, that's such a great question. We've seen multiple waves of the evolution of GenAI and large language models, and I think that it's really exciting to see this next wave that's just about to come upon us, which is Agentic AI. And so what does that mean? The first wave of large language models obviously came from the companies doing foundational model training, the OpenAIs, Googles, Metas. They're releasing these, in some cases, as with Meta's Llama, as open-source models. And they've been trained on a generic data set.
I think that a lot of enterprises today are trying to figure out the best way to incorporate that large language model into their business, because they've heard a lot about it. They hear all the buzz of GenAI and large language models, but what does that really mean? How do I really unleash the capability of these models for my business? And that's really where Agentic AI comes into place. I think that some companies think that if they just strap on a large language model, suddenly they'll unlock a lot. But really what it comes down to is incorporating that large language model into things like IAM for permissioning, internal service APIs for real-time customer data, as well as product documentation and pricing, so that when you're interacting with a company's large language model, it's not just a generic "hello, I'm here, how can I help you?" where you ask the first question and it doesn't know the answer. It's actually empowered to respond to those questions with actually insightful answers. But doing so in a secure way, making sure that there's a permissions model underpinning that, is super critical. And so the next evolution of this is going to be really exciting. Kubernetes being the underlying platform that everyone deploys their models and applications on top of obviously puts Kubernetes in the perfect position to do that. But I think it's really going to be exciting to see the next wave of Agentic AI and what we're going to see in the market.
>> I think you're spot on. I saw Salesforce actually this morning, Marc Benioff was saying it's a billion AI agents in the next year, which is definitely a benchmark, but also an indicator of industry drive and progress. And there's obviously a lot there. Okay, I'm still a little bit gobsmacked by the 90% of the world's population in 40 milliseconds data point that you dropped earlier. And I want to tie that together to what you were just talking about, which is security. How are you managing that at the scale that Vultr is? That's pretty impressive. You're doing things extraordinarily fast all around the world in 32 different data centers, which is no joke, quite frankly. How are you doing that?
>> Look, there's a multi-layered approach to everything. There's not one silver bullet to how you manage that at scale. It comes down to iteratively building out the platform over the course of the last decade. And so some of the key things that we look at, in terms of security, is making sure that we can actually deploy the services that we run in each of these data centers. It's extraordinarily easy for us to turn up another data center PoP. It's actually something that we can do hands-off. And so if we were to open up a new location, it's a matter of defining the resource, putting that into our core databases, and then ensuring that we have all the systems in place. At that point, we've already done all of the automation. And so it gives us the ability to turn up, in some cases, a dozen-plus new data centers in a single year. And you think about the scale and the speed and the automation and testing and validation that goes into that. It's really extraordinary. And we have an extraordinary team of engineers and systems administrators, and really the entire company, who rally around and are really energized and excited about being able to do that, because it gives us a super exciting differentiation from some of our competitors. If you look at the hyperscalers, they've been doing the same thing we have; they're in dozens of data centers around the world. Then you compare that to maybe some of these AI neo clouds who have just come up and are figuring out how to operate a cloud in today's environment. And it's difficult. It's actually really difficult to put thousands and tens of thousands of servers inside of data centers and do that globally, especially with the scarcity of power and of the supply chain. Obviously, we lean on our supply chain partners. We lean on the data center market and our operators to be able to do that. The output is that we're actually able to do that, deliver it ultimately for our end customers, and create a ton of value for them in the process.
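To make the permissioning idea Nathan raised a moment ago concrete, here is a minimal sketch in Python of an agent runtime that executes a model-proposed tool call against internal APIs only when the calling user's role allows it. Every role, tool, and helper name here is a hypothetical illustration; a real deployment would delegate the check to an actual IAM service rather than an in-memory table.

```python
# Minimal sketch: IAM-gated tool calls for an LLM agent.
# All names (roles, tools, helpers) are hypothetical illustrations.
from dataclasses import dataclass

# Which internal tools each role may invoke.
ROLE_PERMISSIONS = {
    "support_agent": {"lookup_order", "search_docs"},
    "anonymous": {"search_docs"},
}

@dataclass
class ToolCall:
    name: str        # tool the model proposes to invoke
    arguments: dict  # arguments the model proposes

def lookup_order(order_id: str) -> str:
    # Stand-in for a real internal service API returning customer data.
    return f"order {order_id}: shipped"

def search_docs(query: str) -> str:
    # Stand-in for a product-documentation search.
    return f"top doc hit for {query!r}"

TOOLS = {
    "lookup_order": lambda a: lookup_order(a["order_id"]),
    "search_docs": lambda a: search_docs(a["query"]),
}

def execute(call: ToolCall, user_role: str) -> str:
    """Run a model-proposed tool call only if the user's role permits it."""
    if call.name not in ROLE_PERMISSIONS.get(user_role, set()):
        # The model never touches data the user couldn't access directly.
        return f"permission denied: {user_role} may not call {call.name}"
    return TOOLS[call.name](call.arguments)

print(execute(ToolCall("lookup_order", {"order_id": "42"}), "anonymous"))      # denied
print(execute(ToolCall("lookup_order", {"order_id": "42"}), "support_agent"))  # allowed
```

The point of the pattern is that the permission check sits between the model and the data, so a cleverly worded prompt can never widen what the chatbot is allowed to fetch.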
>> Do you see that organizations are building out applications differently now that AI is part of it? They've had maybe multi-layer applications, but what we are seeing in the research that we do is they're not trying to put out there a chatbot, necessarily, or just a prompt. They're not trying to compete against ChatGPT or something. Unless they are.
>> Unless they are.
>> There's a few out there. But most organizations, to your point, I think it's 1% of the data in the world, corporate data, that's actually in LLMs at this point, and that might be by accident.
>> That's an interesting data point.
>> Yes. 99%. Do you see organizations coming to you and saying, "Hey, we're refactoring this application to bring AI in as a piece of it," and how they want to be able to deploy that? Is that a big push within your data centers?
>> 100%, absolutely. I think that that is the key. Within the IT community at large, within enterprises, there is a huge amount of emphasis on what is the responsible way to introduce these models to the organization. It's one thing to have a developer in a development environment play around with a large language model and say, "Yeah, sure, that's fine." But the moment that you say, "Okay, we're going to actually put this into production," a lot of questions come up. Those questions really involve data governance, privacy, security, who's going to get access to that data. Because fundamentally what you're doing is exposing a programmatic interface to potentially your backend OSS/BSS data to somebody on the other side of that chatbot. Is it being monitored? How do we deal with potential data exfiltration? Those are all really, really serious concerns. And so it does involve a re-architecture in some cases, and in some cases intentional architecture of that application when doing it for the first time, saying, "Okay, this isn't another regular web application." Because now these developers are saying, "Okay, yes, this is not just an application. It's going to need access to X, Y, Z." And it's like, "Okay, well, what interfaces are we exposing to the customer?" "Well, we don't really know. We'll have to see what questions get asked just to see what data gets exposed."
And that really triggers a lot of, I think, rightful questions around, okay, how are we actually securing our data, our trade secrets, our business data, our customer data? And so I think there's been a lot of work recently in making sure that applications are architected in a way that emphasizes privacy and security, which I think is critically important.
>> I felt it actually as you were just saying that, thinking of people being like, "Oh, right, we don't actually have any idea what might come up when this happens." Which is definitely an interesting juncture to navigate. One of the things I like about Vultr, and I've watched you guys for a while, is that you compete with some of the biggest brands on the planet.
>> We do.
>> What do you wish folks who maybe don't know as much about you knew? Because you're playing in not just the big leagues, you're playing in the all-star game and crushing it. So tell me a little more about the secret sauce there.
>> Look, I think it comes down to operational efficiency and a really rich operating history. There have been some notable examples of people who have either come into the cloud market, or people who are in the cloud market, trying to compete against some of the giants that we see in our industry. And they have failed. And then you look at a company like Vultr and you say, "Why are we succeeding? Why are we growing so quickly? Why have we been consistently growing over the course of the last decade?" And it really comes down to a relentless focus on operational efficiency, on the customer, ensuring that we're delivering and deploying services and products that our customers want and need. It's a focus on fundamental cloud infrastructure. If you survey everyone around here at KubeCon, there are a lot of platform engineering teams. And what's the prime remit of platform engineering teams? It's to consume fundamental cloud infrastructure. That could be cloud infrastructure from hyperscalers. It could be cloud infrastructure in on-prem environments. But it's consuming infrastructure and then exposing a programmatic way for their application developers to deploy their applications onto that platform. And Kubernetes obviously is the de facto standard and the gold standard on how to do that. And so it's really ensuring that we're focused on fundamental cloud services. You look at, again, some of the hyperscalers: you've got 200-plus services. That was great for the first wave, cloud 1.0, when it was like, okay, the cloud is amazing; if they've built a service, we will consume it. Now, with platform engineering teams, it's not focused on consuming every service. It's focused on consuming cloud VMs, bare metal in a lot of cases, running Kubernetes on top of that, and then offering Kubernetes to your internal application development teams. Maybe you're consuming load balancers. You're probably consuming storage. But there's half a dozen fundamental cloud services that you as a platform engineering team are consuming. And then everything else is skipped. And that creates a huge opportunity for us at Vultr to be able to focus on the fundamentals and offer really, really...
If you look, obviously we're integrated with Cluster API, and we have a Crossplane provider for deploying Kubernetes infrastructure on Vultr at scale. That's a huge focus of emphasis for us: ensuring that platform engineering teams, I like to say that we are the cloud platform for platform engineering teams, have all of the tools, all of the APIs that are focused on allowing them to consume cloud infrastructure in a really easy-to-consume way.
>> Because you have your own Kubernetes platform as well.
>> We do.
>> And you maintain that. You work through that with everybody. But to your point, you have some core services that you offer up. What are some of the most popular? Because, to your point, having worked at one of those other hyperscalers, I lost track when we crossed 300 different services. People want solutions, not services.
>> Absolutely.
>> Amen to that.
>> And I think you guys are more focused in that direction. So what are some of the key services that you're offering out?
>> Absolutely. A fun game to play is: which logo corresponds to which service? And it's impossible when there's 200-plus. But it's very simple. So at Vultr, the entry point for consuming infrastructure for platform engineering teams is the first question. Is someone consuming bare metal infrastructure? In which case, we have a really robust set of bare metal capabilities. Are you consuming virtual machines? Obviously, we have a virtual machine platform as well. Or are you actually consuming Kubernetes? We offer a managed Kubernetes engine. And so this is Kubernetes that runs on top of Vultr infrastructure and allows you to have workload portability from any other Kubernetes instance. Could be something home-grown that you don't want to manage anymore. Could be a Kubernetes engine at one of the hyperscalers that you want to migrate over to Vultr for either price or performance reasons. And we consistently get feedback that the Kubernetes engine experience on Vultr meets or exceeds expectations when compared to some of the hyperscalers, and certainly compared to somebody trying to run it themselves.
>> I love that you have your own engine. I'm curious, I've asked a couple of really smart people this question, and I'm going to put you into that category now. Do you think that AI is accelerating the adoption of Kubernetes? Are you seeing that within your platform?
>> 100%. We absolutely see that. I think that goes back to the entry point of consumption. And I think that there's an interesting intersection of Kubernetes, applications that run on top of Kubernetes, and AI, specifically AI applications. So I think it absolutely increases the adoption of Kubernetes. I also think that the way that you expose the fundamental GPU technology to your pods is a super important component of that. And obviously, I can't speak to anyone else, but certainly at Vultr it's very easy. We hook into the AMD operators, we hook into the Nvidia operators to expose the GPUs to the pods. And so accessing that technology through Kubernetes is actually quite easy. And so instead of, again, having to make the choice, and this is also one of the really competitive differentiators at Vultr: with some of these newer AI neo clouds that have a lot of GPUs and are trying to figure out how to turn that into a cloud, they're typically located in one or two locations and trying to figure out how to add storage and load balancers and other services around those GPUs. But obviously they have a lot of GPUs, so the price is quite low. Then you look at the hyperscalers, and they have the suite of cloud services in a lot of regions, but the price-performance on those GPUs is very difficult. And at Vultr, we have the best of both worlds. We have the global reach of our platform, but also access to a large amount of GPU technology in those locations. And so that creates a really interesting value proposition for customers who are excited and want to adopt this and need access to that, but are struggling to make that choice: do I do it over here, or do I do it over here? At Vultr, you can do it at Vultr and get the best of both worlds in the same place.
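To ground Nathan's point about hooking the Nvidia and AMD operators into pods, here is a minimal sketch using the official kubernetes Python client. A pod requests a GPU through the extended resource the device plugin registers ("nvidia.com/gpu" for Nvidia; AMD's is "amd.com/gpu"). The pod name and container image are placeholders, and the sketch assumes a kubeconfig pointing at a GPU-enabled cluster; it is generic Kubernetes, not a Vultr-specific API.

```python
# Minimal sketch: request one GPU for a pod via the device plugin's
# extended resource. Pod name and image are placeholders.
from kubernetes import client, config

config.load_kube_config()  # assumes a kubeconfig for a GPU-enabled cluster

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "gpu-smoke-test"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "cuda",
            "image": "nvcr.io/nvidia/cuda:12.4.1-base-ubuntu22.04",  # placeholder image
            "command": ["nvidia-smi"],  # prints the GPU the pod was granted
            # The scheduler only places this pod on a node with a free GPU;
            # the device plugin then mounts the device into the container.
            "resources": {"limits": {"nvidia.com/gpu": "1"}},
        }],
    },
}

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

If no GPU is free, the pod simply stays Pending until one becomes available; the gating is done by the scheduler, not by application code.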
>> To the people who are just hearing about you guys, even though you've been around for quite some time, what would you say to those who are hardcore Kubernetes? They're sitting here walking around, they're doing PRs, they're involved in the community. How would you say that Vultr fits within the Kubernetes ecosystem?
>> Great question. So I would say that we support the Kubernetes ecosystem in many ways. Like I mentioned, we have Cluster API support. We have a Crossplane provider. We have a managed Kubernetes engine, which isn't some homegrown thing that we cooked up in the back room; this is the mainstream Kubernetes engine that we run. Again, like I said, people who come to Vultr express their excitement about our engine and how it performs. And I would also say that having that run on top of our bare metal infrastructure creates a performance that you're not going to get in some other places. If people provision, like, a thin VM or a single-tenant hypervisor, there's a penalty to that. And being able to run Kubernetes on top of bare metal, customers are able to experience that value every day. And so we obviously are hugely supportive of the Kubernetes ecosystem. We've done a ton of investment in our Cluster API and Crossplane providers, and we're really excited to see the adoption of that, in the form of people consuming our Kubernetes engine as well.
>> What an exciting time for you all. Congratulations.
>> Thank you.
>> All right, last question for you, Nathan, because this has been a blast and I'm sure we're going to have you back. What do you hope to be able to say in London or Atlanta next year, at the next KubeCons, that you can't currently say today?
>> We have a lot of exciting developments coming at Vultr. One of the things that I'm personally excited about is that tomorrow we're going to announce the release of Vultr file systems. And what Vultr file systems is, is a read-write-many file system that gets attached to your pods wherever they are. This is something that we've been working on for a long time, and I'm really excited about it.
And especially for AI workloads where people need access to their data, being able to offer a file system approach to pods is going to allow people to put either their training data or their inference data in a single location on a single volume, mount that to all of their applications, and then serve up their model or do their model training inside of Kubernetes on top of the Vultr platform, leveraging Vultr file systems. We have a lot of other exciting things in the works as well, but that's one thing that I'm personally excited about.
>> I love that. Well, we'll have to have you back on the show to talk about all of those exciting things. Nathan, thank you so much. This has been awesome.
>> Thanks for having me.
>> Yeah, what an exciting time for everyone at Vultr. Shout out to the whole team, too. Rob, always a joy.
>> Always, always.
>> Always, always. And hopefully you're having a joy-filled day, wherever you might be. We're here in Salt Lake City, Utah, day one of KubeCon coverage. My name's Savannah Peterson. You're watching theCUBE, the leading source for enterprise tech news.
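For readers who want to see what that read-write-many pattern looks like in practice, here is a minimal sketch using the official kubernetes Python client. The claim name and storage class name are hypothetical stand-ins (substitute whatever class your provider exposes for its shared file system); this is generic Kubernetes, not a documented Vultr API.

```python
# Minimal sketch: one ReadWriteMany claim that many pods can mount at once.
# The claim name and storage class "shared-fs" are hypothetical placeholders.
from kubernetes import client, config

config.load_kube_config()

pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "shared-training-data"},
    "spec": {
        "accessModes": ["ReadWriteMany"],  # many pods, one shared volume
        "storageClassName": "shared-fs",   # hypothetical shared-file-system class
        "resources": {"requests": {"storage": "500Gi"}},
    },
}

client.CoreV1Api().create_namespaced_persistent_volume_claim("default", body=pvc)
```

Each pod then references the claim by name in its volumes and volumeMounts, so training and inference workloads read the same data from a single volume without copying it around.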