Roman Arcea, group product manager at Google, and Jeremy Olmsted-Thompson, principal engineer at Google, share insights at a recent KubeCon event with theCUBE Research’s Savannah Peterson in this discussion on the future of Kubernetes and the cloud ecosystem.
Arcea and Olmsted-Thompson explore their perspectives on Kubernetes as the unifying standard for infrastructure. The conversation navigates pivotal themes such as standardization, optional complexity and the influence of automation in shaping current and future computing paradigms.
Key takeaways from this discussion include the increasing adoption of Kubernetes as a core infrastructure API and the role of automation in simplifying enterprise processes. Arcea likens the current state of gen AI development to winemaking — an intricate craft for creators — while enterprises, like chefs pairing wine with a dish, need AI to complement their existing systems simply and effectively.
Kubernetes & AI: Crafting the Perfect Compute Pairing
Join theCUBE and Google Cloud for a special series navigating today’s AI landscape for container optimization at scale. Learn from our analyst-led deep dives exploring what it means to train a model, how to evaluate the countless models available in the market, and why AI is the sauce but isn’t necessarily the main dish.
In this video you will learn:
- How to get started in AI
- What it means to train a model
- Different model sizes
- The importance of inference
- How to evaluate the many models available
- The current state of AI
- Optimizing containers and AI at any level of scale
- "Farm to Table AI"
Savannah Peterson
>> Hello, nerd fam, and welcome back to our exclusive series with Google Passport to Containers. Today's episode is going to take us down a very interesting journey. We're going to be talking about wine, we're going to be talking about buffets, we're going to be talking about compute, the power of automation and the value of open source. It's a segment you're not going to want to miss. My name's Savannah Peterson, joined here with Roman and Jeremy. Thank you guys so much for coming to hang out.
Jeremy Olmsted-Thompson
>> Thanks for having us.
Roman Arcea
>> Thanks for having us.
Savannah Peterson
>> I promise this is going to be fun.
Jeremy Olmsted-Thompson
>> We're looking forward to it.
Savannah Peterson
>> Yeah, good, good. You've been prepped well by Bobby. Really excited that we're going to be talking about inference today. I think it's a conversation, quite frankly, we're not having enough. It's going to be what makes AI real for the rest of the world and the wonderful humans in our life who maybe aren't the Kubernetes nerds that we are. And I can't wait to get both of your hot takes on inference compute and how to navigate all this. We're at a really confusing and complex time, and folks are looking for assets like this video to rely on. It's impossible for me not to mention the 14,000 open source fans who are milling around behind us. We are here at KubeCon, taking advantage of this opportunity to get together. There's sometimes tension in conversations around open source when it comes to monetization: are people just giving things away for free, are competitors working on the same sorts of solutions? So Roman, I'm going to open it up to you. What is the open source end game?
Roman Arcea
>> Well, I think we're at the tipping point with Kubernetes here and the ecosystem as well. I think that we've been going through the last 30, 40 years of compute, where we had some luck with some of the standards that emerged in the market. We had good standardization coming from Linux that abstracted that single node compute resource. We had a good run with standardization of the internet and the IP protocol. I think it's time for us to start to converge on a standard for provisioning compute capacity in a distributed fashion, and I think Kubernetes is the API that seems to be the most promising right now in the market to give us this unified standard for infrastructure consumption. And I think between Jeremy and I, we're looking at open source, how we're driving this going forward, what we're making out of this. And it's the first time now where we see that it's both the application ecosystem, the developer ecosystem, that wants to integrate with Kubernetes from the upper layer, but it's also the vendors that have started to acknowledge the fact that, assuming their infrastructure and their offerings will be consumed through the Kubernetes API, it's a very, very reliable path forward for their business to integrate. So we see Nvidia, we see storage vendors, all coming and building their drivers and integrations and abstractions for Kubernetes. So where I stand, I would love to see the end game for Kubernetes become that IPv4, IPv6-like heart of distributed infrastructure consumption, where it's safe for large enterprises to bet their workloads and their developments for years and years to come, because these guys are thinking 10 years ahead, 20 years ahead, but also for the vendors to come together and give that standardized abstraction to their consumers to make the most out of the products that they're building for them.
Savannah Peterson
>> I'm really glad you brought up standardization, because, and this may be oversimplifying it, but I think of standardization as simplification to a degree. We're not reinventing the wheel every time we do something new or adapt and evolve software, or hardware for that matter. And I think that's so important. Decreasing complexity has always been a thing when it comes to Kubernetes, and frankly it's even more of a thing now, on steroids, when it comes to AI and building on Kubernetes. Jeremy, I love this note I have here in front of me: what is optional complexity?
Jeremy Olmsted-Thompson
>> Yeah, you said it, I'm going to start with AI. You come to run AI in Kubernetes because Kubernetes can do basically anything. But maybe you don't have that experience actually in Kubernetes and it is a very large surface. And I think simplification, everybody wants simplified interfaces, easier to use, simple use cases, but often we conflate, I think, simplicity with doing less when it's really about making you think about less, right?
Savannah Peterson
>> Yes, decreasing cognitive load.
Jeremy Olmsted-Thompson
>> Exactly, exactly.
Savannah Peterson
>> I love that you just brought that up, yeah.
Jeremy Olmsted-Thompson
>> But the thing is, this huge surface of Kubernetes evolved over time in response to real use cases. There was a good reason for basically every weird little knob here and there. You don't want to have to learn about all of those knobs, but what we've found is you can't really take them away, because whoever needed them is going to need them again. Most workloads need very few knobs, but the set of knobs that all of your workloads need can be pretty big. When you're getting started, you don't need knobs. As you start to refine and learn more and more, you figure out you need to tweak that little thing and that little thing, you customize it to your own taste. And so, the idea of optional complexity is really just removing that cognitive load; the complexity's there when you need it, but what you find is we've made good decisions for you out of the box, and we're doing this both in Kubernetes Engine at Google and in the community, to make it easier to do these things. So you start from a place where we've chosen good defaults, you're in a secure configuration, and as you grow and you need to make changes, you only need to worry about that little area that you're running up against. Or let's say you're a startup and you're moving really fast and you forgot about security, but you find out we actually did a good job for you out of the box. You read about a CVE that came out and this vulnerability is scary, and you realize that your configuration is actually protecting you against it. So that's optional complexity: starting from a simple place, but letting you evolve as you see fit, wherever your workloads take you, to really meet the needs of your customers, which is why you're here in the first place.
Savannah Peterson
>> And letting you stay in an environment you're comfortable with. You can learn about those knobs as time goes on, but it's nice when that user interface, that developer interface, is that same experience, and people forget about that. There's a lot of projects, there's a lot of tools, there's a lot of things, people don't want to have to hop between a multitude of different vendors all the time. They want to be in an environment where they feel like they can be productive. As you're talking about the knobs, I'm thinking of our production desk over here. I'm also thinking of Photoshop, the first time you open it up, you don't know what any of that stuff means, you're looking at a lasso, you're looking at a crop tool, you're like, okay, wait, how am I going to do all of this?
Jeremy Olmsted-Thompson
>> Scary wall of choices.
Savannah Peterson
>> Yeah, exactly. And then, slowly playing around, or as you need it, you can build that out. I think that's such an important thing, and I do think that's really what companies are craving. Like you said, now that we're reaching this inflection point, Kubernetes at scale, it's every industry, it's every vertical, it's not folks who might've necessarily been K8s or open source nerds to begin with, so there's a lot there. It's so fun talking to you all, because there's the Google Kubernetes Engine, of course, and then you also have open source investments that you make all the time. Roman, tell me a little bit about that. How do you differentiate that? How do you prioritize? How do you split your time?
Roman Arcea
>> That's a great question, actually. I actually feel like all of this is in great synergy right now. I just talked to you about the importance of that standardized API we're bringing with open source Kubernetes, but why do we care and what do we do about it with GKE? And I like to think about it in very simplistic terms. If I talk to our customers, it doesn't matter whether that's a financial institution or someone doing drug discovery or whatever you can imagine there. After all, when they come to a hyperscaler like ourselves, like Google Cloud, they have an application they want to run, they have some business-critical workloads that they need to deliver, they need to build on top of this. They need to get compute, and they need to get it efficiently, cost efficiently, at a great price-performance ratio. So what do we do about it? This is exactly what we do with GKE. On one hand, we're making sure that they have this standardized interface, that's portable, that all the vendors and ecosystems integrate into. But then under the hood, it's that whole GKE experience, that whole managed control plane, managed CPU and memory, abstracted GPUs, our unique value propositions with TPUs, that come together to give that capacity to our users in a standardized way. So we're very tightly integrating this together and making sure that, all the way through, we keep that promise of portability of user workloads when they come to GKE. But also, to what Jeremy said, giving them the simplicity to go and say, "Hey, I want to run my entire business with Kubernetes, I can bring it to GKE at any scale," and it will work, it will work out of the box. So you've seen our announcements on things like 65,000 nodes, right?
Savannah Peterson
>> KubeCon in Salt Lake City, if I recall.
Roman Arcea
>> Exactly.
Savannah Peterson
>> Yeah, yeah, very cool.
Roman Arcea
>> 65,000 nodes, people sometimes ask, "Why would you need that? Why would Kubernetes or GKE even scale to 65,000 nodes?" Well, first of all, there are use cases. But most importantly, when we try to stretch the system, GKE, to the levels required to achieve those results, we're doing two things. First, we are improving GKE's offering and value proposition, this almost endless, open infrastructure canvas on which you can build your business. But then we also contribute all of those investments back to open source to make the entire open source product stronger, better, more resilient. Upgrades are big on our radar, because if you think about it, 10 years ago, five years ago, if you ran a Kubernetes cluster, there would be maybe 50 nodes, 100 nodes. Upgrading 50 or 100 nodes is a completely different experience and business impact than upgrading a 30,000-node cluster. So when we look at the relationship between what happens in open source as a standard and what happens with GKE as that provider of capacity to run your business and your business-critical workloads at scale, we want to make sure this comes as an entire package that everyone understands, is easy out of the box, gives you the capacity, and removes that capacity when you don't need it so you don't pay for it. But in ways that, again, give you the level of portability that you require for your business.
Savannah Peterson
>> What I'm hearing from you is it's a holistic ecosystem.
Roman Arcea
>> That's it.
Savannah Peterson
>> It's not bifurcated in terms of efforts or investment. It's much more about raising the water level together and that collaboration, both with contributions as well as with customers and with everything else, and looking forward to the future so that everything continues to get easier.
Jeremy Olmsted-Thompson
>> Right.
Roman Arcea
>> That's the battle.
Savannah Peterson
>> I love it. Well, if anyone's going to do it, it's going to be y'all. That's my expert analysis at least. So you mentioned the word compute, there's a concept of compute classes. Can you break this down for us, Jeremy?
Jeremy Olmsted-Thompson
>> Yeah. So a few years ago, with GKE Autopilot, we came out with this concept of compute classes. It's really about creating an abstraction between the platform and the application and abstracting away the categorization of compute that you might need for a given workload. So you've got your general purpose, which can run basically anything. It's some balance of price and performance. It's probably not the most performant thing out there, but it's going to give you a reasonably good value. And again, back to that optional complexity, you don't need to really think about it until you're ready. Then we've got this idea of performance compute class that we introduced, where you can start specifying specific types, a little bit more specific machine types, say you need specific hardware characteristics, we had this accelerator compute class. But what we found is, more recently, we needed more customization. So lately, we've evolved into supporting custom compute classes, which you can actually use in any flavor of GKE, and these let you actually build your own abstraction around the infrastructure. So let's say I want to define my own high performance compute class, it's CPU-bound, let's say I want to use C4 machine types, but I might not be able to get enough of them, or maybe my quota's not high enough, so I want to add fallback to C3, or speaking of AI, we're in this world of scarce high-demand accelerators, so maybe I actually want to have flexibility between availability classes, or I have a reservation. So let's say I'm trying to optimize costs. I could say, "I prefer spot, give me all the spot you can, but if you can't get enough, okay, I'll fall back to on-demand," and we'll reconcile back for you to help you reduce those costs. 
Or, "I have a reservation and I want to use my reservation, but if I want to burst beyond, I can spill over into on-demand or even spot," and I have this customizability, and the application developers don't really need to think about this anymore, they get to target the platform admin builds and shapes with whatever controls they need, building their own abstraction. And so, the idea is that this simplifies a lot of things upfront, but also over time, as new generations of compute come out, platform admin can just add a new generation to this compute class and make changes behind the scenes. The developer's still targeting, say, a high GPU memory class or a low GPU memory class or a high performance compute class, and not having to be worried about the specifics of what goes into that, but you get access to the whole cloud ecosystem, you get drastically improved obtainability and performance, and all the knobs you need when you actually need them.
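The fallback chains Jeremy describes are declared as a priority list on the compute class itself. As a rough sketch, assuming GKE's custom ComputeClass resource (treat the exact field names and the `cost-optimized` name as illustrative, not authoritative):

```yaml
# Hypothetical custom compute class: prefer Spot C4 capacity,
# fall back to Spot C3, then to on-demand C4 as a last resort.
apiVersion: cloud.google.com/v1
kind: ComputeClass
metadata:
  name: cost-optimized
spec:
  priorities:
  - machineFamily: c4
    spot: true
  - machineFamily: c3
    spot: true
  - machineFamily: c4
  # Reconcile workloads back onto higher-priority capacity when it frees up.
  activeMigration:
    optimizeRulePriority: true
```

A workload then opts in with a single node selector (for example, `cloud.google.com/compute-class: cost-optimized`), so application developers target the abstraction while the platform admin tunes what sits behind it.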
Savannah Peterson
>> So it's a little bit like a buffet. I can have a little bit of this if I need it, a little bit of that.
Jeremy Olmsted-Thompson
>> Exactly.
Savannah Peterson
>> I can go back and revisit that if I need some more of it.
Jeremy Olmsted-Thompson
>> Exactly.
Savannah Peterson
>> But it gives you that flexibility. I can imagine that's incredibly mission-critical right now because of the evolving landscape that folks are dealing with, between the explosion of data AI happening and then really trying to evaluate MVP, doing a few ROI case studies or little projects to test out what's going to actually be the thing they make the massive investment in or need that heavy lift of the GPU for. What's one myth in that space that you wish you could dispel? I'm curious, because you probably get to talk to a lot of different customers and company sizes, what do you wish you could tell everyone by waving a little magic wand? And it could be about the knobs or it could be about the buffet or it could be about something else, but this is an interesting opportunity.
Jeremy Olmsted-Thompson
>> I think there's a few things here. Well, first of all, it doesn't have to be scary. You can start and iterate from there, you don't need to understand all the mechanics under the covers before you deploy in the first place. And I think we also see... I don't know, Roman, do you want to...
Savannah Peterson
>> I think that's actually a perfect answer, it doesn't have to be scary.
Roman Arcea
>> It doesn't, yeah, it doesn't. I want to build a bit on your buffet analogy because this one is a good one. There's a lot of talk about AI, training is complicated, inference is complicated, all of this is so hard. Yet, I think about it as more of a restaurant type of experience, and let me tell you what I mean.
Savannah Peterson
>> Please do.
Roman Arcea
>> Yes. Wine, for a winemaker, is an extremely complicated product. You need to know your fields, your crops, when to pick the grapes, how to ferment them. As a winemaker, making a perfect wine is an art. That's what you're seeing in the industry with people developing large models, driving the industry forward. For them, that's an art. They bring the best scientists to the table. They develop new model servers for inference. They're trying to accelerate it. They develop hardware that will give the best throughput. They combine various types of gen AI systems in one experience. Yet, when this bottle of wine ships to the restaurant, it becomes part of a restaurant experience, where the chef will give you a nice starter and pair it with a good glass of wine, then make a main dish and pair that with a glass of wine. It shouldn't be that complicated for the chef to give you a good glass of wine with what's cooking, and those are your typical enterprises, your typical customers and users, because in their world, to run their businesses, they need databases, they need CRM systems, web servers, everything, and then they want to bring in this AI experience to enrich their business, to make it more productive. But for them, it shouldn't be that complicated. And I think if we're going in this direction where we really separate the creators of that awesome wine bottle from the consumers that need to complement their full experience and build their businesses, that's the way I want to think about it, that's the way I would love the industry to evolve. Every one of us deserves a good glass of wine with a good dish.
Savannah Peterson
>> Preach.
Roman Arcea
>> Absolutely.
Savannah Peterson
>> You are speaking my language.
Roman Arcea
>> But we should not own a field in France and pick up the grapes ourselves, right?
Savannah Peterson
>> I can tell you, as someone who grew up on a vineyard and has harvested grapes, you are absolutely spot on with that in terms of the complexity, in terms of the nuance, in terms of the little factors that can have a very big impact all the way at the end. But then, and this is going to lead me to my next question, it feels so automatic when you're at a restaurant. Oh, here's my list, I just order, optional complexity, maybe I just have the sommelier pick out based on the cheese I'm having to start and the main I'm having later. Bobby would want it to be barbecue in this particular instance. I love this analogy, that's going to be such a great sound bite in this segment, I can tell, that's definitely going on the journey here. But speaking of that experience feeling automatic, or at least feeling easy for that restaurant diner, we've talked a lot about automation for years, but everything seems to be moving a lot faster now because of this AI catalyst behind us. Roman, are you seeing more and more people adopting automation or investigating that as a solution?
Roman Arcea
>> Absolutely. Look, it's actually shocking. We run those Cloud Days with Google Cloud and GKE before any major event like KubeCon, so very often, before the large open source conferences, I get exposed to some customer conversations. Now, it's been very interesting to see the evolution of Kubernetes users toward that magic-box automation, how it progressed. Three years ago, when I was sitting with any major customer and we were having conversations along the lines of, "Look, you could have vertical pod autoscaling change resource requests and limits in Kubernetes by itself, it's a magic wand," everyone was like, "Whoa, whoa, whoa. Wait a moment. We cannot do this. This disrupts our experiences, this disrupts our workloads." When we look three years in, and I compare the number of customers on GKE using that magic automation of resource request resizing then and today, it's a 40x increase. Every single major customer now does that. And when we come into a room-
Savannah Peterson
>> 40x increase-
Roman Arcea
>> It's a 40x increase, exactly....
Savannah Peterson
>> just to make sure everyone caught that, just casual.
Roman Arcea
>> When we come into the room today, there is no more conversation about, "Oh, I'm not comfortable with Kubernetes taking over and automating the actuation of infrastructure." That's not the conversation anymore. The conversation right now is literally, "Look, guys, 10 years ago, five years ago, I had one team in my company doing Kubernetes. Now, I have 100 teams doing Kubernetes. They're all in different sub-businesses and different product areas. I can't, and I don't want to, babysit my CPU, my memory, my pods, my everything anymore. What I want is a streamlined experience where a product owner, an application owner, says, 'Hey, look, I have to run this business. Those are my objectives.' Make sure you deliver my objectives, and I couldn't care less whether you run it on this shape or that shape or whatever, as long as the price, performance, time to market and ease of operations are there." So this is a major trend that we're definitely seeing.
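The "magic wand" Roman mentions, vertical pod autoscaling adjusting resource requests on its own, is driven by a small piece of configuration in open source Kubernetes. A minimal sketch, assuming the standard VerticalPodAutoscaler custom resource (the Deployment name and the bounds here are placeholders):

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app            # placeholder workload
  updatePolicy:
    updateMode: "Auto"      # apply new requests automatically, don't just recommend
  resourcePolicy:
    containerPolicies:
    - containerName: "*"
      minAllowed:
        cpu: 50m
      maxAllowed:
        cpu: "4"
        memory: 8Gi
```

The `minAllowed`/`maxAllowed` bounds are exactly the kind of guardrail the speakers describe: the automation does the resizing, the operator sets the boundaries.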
Jeremy Olmsted-Thompson
>> And I'd like to build on that too. I think myths we'd like to dispel, I think one big one is that automation means no control. A big part of why we're seeing this massive adoption of automation is that we're providing more and more control over how the automation actually works. So you still get control, you just don't have to do the busy work.
Roman Arcea
>> Which are, by the way, not in conflict with one another. It's very interesting that you could give lots of automation, but you could still give users... It's not even control as such; you could give them a proper policy engine, you could give them great observability, so that they feel empowered to always set boundaries. Or, like we say when we talk about compute classes: a compute class is nothing else than a policy engine that allows our users to consume infrastructure on their terms. It's fully automated, but you can still set boundaries that are right for your business while keeping that powerful automation. This is what we're going towards. More and more of the things that we're developing and conceptualizing, and where we're moving forward, go in the direction of: look, I'll give you the full automation, but then I'll give you a powerful policy engine that you can use to constrain the behaviors, and I'll give you deep levels of observability so you can really understand what's happening under the hood and make a call on whether it's doing the right thing for you.
Savannah Peterson
>> I'm so glad you brought that up and we just went there, I think that's such a good point. Just because something is working and you don't have to babysit it doesn't mean that it's off doing nefarious things, or that it's going to break in a second and you wouldn't be alerted to it. That observability is the core part of it. This conversation's getting real spicy, and I bet that babysitting analogy really resonates with the two of you, with youngsters at home.
Jeremy Olmsted-Thompson
>> Yeah.
Savannah Peterson
>> Okay. It's impossible to talk about AI like this and not talk about the cost of the infrastructure supporting it and how that's built. How are you guiding customers, Jeremy, I'm going to turn this to you, how are you guiding customers through that journey, and how long before we won't be having a conversation about the cost of infrastructure as a result of this AI adoption curve we're on?
Jeremy Olmsted-Thompson
>> So I think cost really matters. The accelerators that you need to run these workloads, whether you're using our TPUs or you're using GPUs, whatever, they're not cheap. But it's easy to focus on the direct infrastructure costs, I need this many nodes and they cost this much, and not to think about what really matters, which is the whole workload cost: the cost to meet your latency guarantees for your customers and get the throughput you actually need. And Roman mentioned a lot of work going into high performance infrastructure. A big part of this, it's not just because everybody loves fast things, of course, it's also that if you want to maintain a constant request latency while a burst of traffic comes in, because let's say some ad went out or some event's happening, and your infrastructure takes a long time to start up, to expand when you scale out, then you have to carry this big heavy buffer. So you're not just thinking, oh, this workload needs this GPU and it costs this; you also need some percentage of overhead that's just always there. And the faster we can make things start, the less buffer you need. That can have a really big impact on costs, beyond just finding a slightly cheaper dollar deal.
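Jeremy's point about startup speed and buffer size can be put into a deliberately simplified, back-of-the-envelope model (our illustration, not a GKE formula): if traffic can grow by some fraction of current load each second, the idle headroom you must carry is roughly that growth rate times how long new capacity takes to become ready.

```python
def headroom_fraction(startup_seconds: float, growth_per_second: float) -> float:
    """Idle capacity to keep provisioned, as a fraction of current load.

    Toy linear model: the buffer must absorb all the traffic growth that
    can happen while newly requested capacity is still starting up.
    """
    return startup_seconds * growth_per_second

# Nodes that take two minutes to become ready, traffic that can grow 1%/s:
slow = headroom_fraction(120, 0.01)   # 1.20 -> carry 120% extra capacity
# Cut startup time to 15 seconds and the buffer shrinks with it:
fast = headroom_fraction(15, 0.01)    # 0.15 -> carry 15% extra capacity
print(f"{slow:.2f} vs {fast:.2f}")    # prints "1.20 vs 0.15"
```

The absolute numbers are made up; the point is the proportionality, which is why faster startup translates directly into less standing over-provisioning.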
Savannah Peterson
>> No, it adds up over time.
Jeremy Olmsted-Thompson
>> Exactly, exactly. But building on that, it's not just enough to have those facilities. A big theme of everything we're talking about here is it has to be really easy to use them in an intuitive way without having to dig down and understand how the wine is made. It's just being able to very easily configure what you need and know that it's going to start fast enough so that you can bring down that safety buffer and really reduce your costs.
Savannah Peterson
>> I think that's such a good point. I think of 3D images rendering and how long that used to take versus now, we're talking days and hours versus seconds, the cost of that then drops so significantly, or like turning on a car in the middle of the winter and it takes forever versus the first time you turn on an EV and you don't even know if it's on because it's so quiet.
Jeremy Olmsted-Thompson
>> Yeah, exactly. And that startup cost, you're spending gas as the car's sitting there trying to warm up the engine, wouldn't it be nice to just not have to do that anymore?
Savannah Peterson
>> Yeah.
Roman Arcea
>> And you know what I'm seeing right now in this regard is that, again, there are lots of tipping points happening in the industry. With this one specifically, with the advancements that we're making in compute capabilities and infrastructure, I think we're getting to the point where provisioning a gen AI workload starts to have the same kind of latency as a normal Java enterprise workload. I think we're at the point where gen AI workloads are starting to get normalized, and when they get normalized, the important aspect is the cost. Of course GPUs are a bit more costly than CPUs and memory, but in the end, the way I think about it is, what is the business value per dollar that you're achieving? And I think where we want to be with our products, with our customers, with where Kubernetes evolves, is that you only pay for as long as that system delivers you that business value. And on this over-provisioning thing: it's fine to over-provision if that gives you returns, but it's not okay to run and pay for infrastructure, run and pay for overhead, if that doesn't return anything back to you. And I think if you frame the problem this way-
Savannah Peterson
>> That's a great point....
Roman Arcea
>> it's very clear to understand where we're going, or where we should be going, with those investments.
Savannah Peterson
>> Absolutely. I think that's a great point. Man, you've dropped some knowledge on this segment. Okay, I have one last question for you. We've talked about food, we've talked about wine, we've talked about babysitters and cars, so now we're really covering all the bases. Taking off your acronym and solutions hat and putting back on your human hat, not that that isn't human, that sounded awkward, what do you hope that this rapid revolution we all get to be a part of, this Kubernetes and AI journey, does for your family, for your loved ones, for the people in our lives in the future?
Jeremy Olmsted-Thompson
>> For me, I'm excited to see how it can bring more capability to people. Things that used to require a team of people can now be done by maybe a single person. Things that required a huge budget can maybe be done by a small budget. I think the opportunity is just massive, to try things out and experiment. If you can leverage AI to try an idea and see how it's going to land without having to have a massive investment, we can bring a lot out there. I think I'm very much looking forward to my kids seeing a very different exciting world with a lot more possibilities and a lot more options leveraging some of these tools that we're lucky enough to get to help facilitate.
Savannah Peterson
>> Love that. What about for you, Roman?
Roman Arcea
>> Wow. There is a lot of promise of AI in big areas like healthcare and everything, and I could go there and be clichéd, and I really hope that happens, but I want to keep it a bit more down to earth. If I look at my career, 20 years in tech, I think we still underappreciate the amount of routine, non-value tasks each of us does during the day. Basic things: scheduling calendar times, approving expenses, figuring out what's for dinner.
Jeremy Olmsted-Thompson
>> Taking notes in a meeting.
Roman Arcea
>> Taking notes in a meeting, there you go, recording good-quality videos. Where I wish to be, I hope that as we advance in the industry here, we can start to dedicate more and more time to thinking, to creative work. Some people say gen AI will replace creative jobs first, but that's not what I'm seeing. I've been using the technology for at least a year now, a year and a half, and what I'm seeing is that really deep thinking, really advanced thinking, still requires humans. And most important, even if you have the idea, making sure that everyone else comes together to make it happen still requires human influence. So can we get to those ideas faster? Can we spend more time talking to one another and building a better world? I hope that's the place where my kids will be living, doing less routine that doesn't matter, talking more to one another, making things happen.
Jeremy Olmsted-Thompson
>> Right. It doesn't replace the artist; it gives them a programmable paintbrush that can do so much more.
Roman Arcea
>> Exactly. Yeah, let's build an orchestra.
Savannah Peterson
>> Yes, here we go. And our future winery and our restaurant and everything else.
Roman Arcea
>> Sounds good.
Jeremy Olmsted-Thompson
>> Sounds great.
Savannah Peterson
>> Yeah, I'm in. Roman, Jeremy, this has been absolutely fantastic. Thank you both so much for taking the time today.
Roman Arcea
>> Thank you.
Jeremy Olmsted-Thompson
>> Thank you. Thank you so much.
Savannah Peterson
>> And thank all of you for tuning in to our exclusive series, Google Passport to Containers. My name's Savannah Peterson. You're watching theCUBE, the leading source for enterprise tech news.