Clips:
- Exploring Day Zero Events: OpenShift Commons, SecurityCon, Backstage, and Ford Motor Sports' Virtualization Modernization Journey
- Discussion of AI workload enhancements in OpenShift 4.20 and specific features offered
- Wells Fargo's use of OpenShift for AI workloads in fraud detection and chatbots
- Enhancing AI Security: Addressing Customer Data Safety with Post-Quantum Cryptography for Future Challenges
- Modernizing Virtual Machine Farms: Embracing OpenShift for Trustworthy Containerized Environments and Future-Ready IT Infrastructure
In this conversation from KubeCon + CloudNativeCon North America 2025, theCUBE's Rob Strechay sits down with Red Hat's Shane Utt (senior principal software engineer) and Jimmy Alvarez (senior principal product marketing manager) to unpack Day Zero takeaways and the latest OpenShift momentum. The duo recaps a jam-packed OpenShift Commons alongside co-located events like SecurityCon, Backstage, IstioCon and EnvoyCon, highlighting real customer stories (including Ford Motorsports) and early signals on what's next for cloud-native networking.
Keep Exploring
- What events took place on the day before the official start of the conference?
- How is AI evolving within the context of OpenShift, and what new features are being introduced to support AI workloads in the latest release?
- What was Wells Fargo's experience with OpenShift and AI in their fraud detection and internal IT support?
- What are the security concerns associated with AI workloads, and how is the company addressing them?
- What trends are currently being observed in virtualization and customer migrations to modern platforms?
Rob Strechay
>> Hello, and welcome back to Cold-lanta. We're here for KubeCon + CloudNativeCon 2025 North America. We're having a great time warming the place up, although you won't notice that yet because it is still frigid. It was below 30 here this morning when I came over, but to help us warm things up again today, I'm really excited, I got the folks from Red Hat on. Jimmy Alvarez, who's the senior principal technical marketing manager for Red Hat, and Shane Utt, who's the senior principal software engineer for Red Hat. Welcome on board, guys.
Jimmy Alvarez
>> Thank you. Great to be here.

Rob Strechay
>> So I think we'd be remiss if we didn't talk about yesterday a little bit, because I was over at OpenShift Commons yesterday. Fun, it was packed, it was a great time. There was a lot of other Day Zero stuff going on. Why don't you start off with-
Jimmy Alvarez
>> Yeah. Even though today's the official first day of the conference, obviously, yesterday we had Day Zero, which is our OpenShift Commons. Obviously, we had several events besides that: SecurityCon and Backstage, and we heard a lot of great customer stories. One of the good stories we heard was from Ford Motorsports, the auto company.

Rob Strechay
>> Yeah, they're doing a lot.
Jimmy Alvarez
>> They were talking about their journey with OpenShift. They've been a longtime customer, and they were talking about virtualization and how they're able to modernize the virtualization infrastructure. We heard from the community as well and all the different projects, so it was really exciting. It was a really good time. A little bit exhausting already. Crazy, we're day one.
Shane Utt
>> It's like a whole convention within itself.
Jimmy Alvarez
>> Yeah, exactly. It's a whole-

Rob Strechay
>> I was going to say, yeah. We-
Jimmy Alvarez
>> It's a micro convention, right? So it's like four or five things going on at one time.
Shane Utt
>> Very packed.
Jimmy Alvarez
>> Very packed.
Rob Strechay
>> Yeah. Like I was saying, I was at IAC Conf Connect last night doing a panel as well, and I think when you start to look at it, you're a maintainer, you're involved in the community. How was your day zero yesterday?
Shane Utt
>> So I went to some of the side cons like IstioCon and EnvoyCon, which are both very networking focused, and it's a packed full convention in a day. It's fast and you're getting a lot of information really quickly because everybody's packing it into the one day, but there's a lot of good insights. That's often where people, especially those doing lower-level stuff, and Envoy in particular, come out and say, "Hey, here's the new thing we're doing," at KubeCon, at the colo events. So often enough, the first sign of things to come happened at those colo events.
Rob Strechay
>> Yeah, I agree. I love Day Zero. That's why I come in early every year, because we're not here until today, day one, and it's worth it. So again, if you're out there, definitely come in early. And I know there was stuff even over the weekend with Rejekts and some other stuff going on. But let's shift gears a little bit, because I think it's been about half an hour since I said the word AI, and I don't want to get kicked out of here yet. What's going on from an AI perspective with OpenShift? You have OpenShift 4.20.
Jimmy Alvarez
>> Released today.
Rob Strechay
>> Yeah, released today, smoke them if you got them.
Rob Strechay
>> So, absolutely. And I think, again, when you look at it, how are you seeing this evolve as more and more AI runs on OpenShift?
Jimmy Alvarez
>> From what we've seen with our customers, the way we build OpenShift is as an end-to-end platform to support any sort of workload a customer can throw at it. AI for us is just that, another workload. When you get down to it, that's really what it is, so having full support of the full stack end-to-end is very, very important. There are several features we're talking about within 4.20 around data sovereignty, enhancing AI workloads, and monitoring enhancements all around that to support those workloads. Great enhancements around networking as well, right?
Shane Utt
>> Big release with OpenShift AI 3.0 on 4.20.
Jimmy Alvarez
>> And 4.20.
Shane Utt
>> So a lot of new things for AI specifically in this release.

Rob Strechay
>> And networking, you got to feed the data to the GPUs, and networking is the glue that brings that all together, so again-
Shane Utt
>> Yep, it's a big part of the training workloads, and it's also generally over a network that you're delivering all of this value.
Jimmy Alvarez
>> Yeah, interestingly enough, Wells Fargo, who are obviously a good customer of ours, were talking at KubeCon in London about their journey with OpenShift and AI, and the way they implemented it for their AI and ML workloads is for fraud detection. So you know those texts that you get saying, "Hey, did you make this purchase? Did you do this? Did you do that?" All that is OpenShift underneath the covers. So we've been able to work with Wells Fargo on improving and enhancing all of that with those workloads. Also, for all of their internal IT support, they leverage chatbots like Lightspeed to sort through the documentation and all those new tasks that really speed things up. So Wells Fargo has been one of our greatest customers leading the charge in the AI space.
Rob Strechay
>> And to stick with that for a second, because you talked about Lightspeed. Help people understand what Lightspeed is.
Jimmy Alvarez
>> Yeah. So Lightspeed is a chatbot agent that allows you to inject your own data, whatever that might be, like your documentation. For example, the way Wells Fargo leverages it is for investment forecasting, so they're able to forecast and look at different investments and their financials, figure out what the trend is, and make proper decisions on where those investments should go.
Rob Strechay
>> Yeah, and that's part of OpenShift AI and all that.
Jimmy Alvarez
>> That is part of OpenShift AI and the platform as a whole.
Rob Strechay
>> Yeah. No, I think it's great, and I think when you look at it, there's so much confusion around AI in general and the infrastructure under it, and we'll hear and see from all over how people are building on top of AI, to put it mildly. I think Kubernetes is becoming really the core infrastructure.
Jimmy Alvarez
>> The glue, yeah.
Shane Utt
>> And I think 2025 has been the year where it's starting to really get real for people, and they're starting to do real workloads.

Rob Strechay
>> Yes, definitely, and I think that we've learned so much about AI and agentic, and to your point, the Wells Fargo stuff is really what we would call traditional AI.
Jimmy Alvarez
>> Correct.

Rob Strechay
>> It's not the gen AI stuff, which is still the workhorse of what's going on anyway. Let's switch gears a little bit, because AI is AI, but Red Hat has always been really deep into the security area, and customers have tons of security concerns, especially as AI and the data is moving around more. What are you seeing from a security perspective in your customer engagements, and what's going on?
Jimmy Alvarez
>> Yeah. Obviously with these AI workloads, there's a lot of security concern about where you're feeding the data for AI to do its training, so there are concerns whether the data runs in the cloud or on-prem, right? With OpenShift, because we own the full stack end-to-end, from the platform to the containers to the networking and everything in between, like GitOps and the endpoint applications, all of that makes for a nice transition. Our customers are very concerned about security, so security has always come first for us. With 4.20, we continue to enhance those features and work on making sure that everything we provide for our customers is always secure. Especially with networking and AI, prompt injection is a huge, huge concern. That's something you've seen from a networking perspective, and we're looking at some of that.
Shane Utt
>> Yeah, so security is perennial. It's the job that's never done. There's a bunch of fronts-

Rob Strechay
>> The only time it ever shows up is when it goes wrong.
Shane Utt
>> Well, but that's-

Rob Strechay
>> You want it to be transparent, yeah.
Jimmy Alvarez
>> You don't want to be in the news. The whole thing you don't want to be in the news.
Shane Utt
>> But I think many companies are starting to get to the point where they're like, "Oh yeah, we've got to be more proactive about this." Two of the fronts we've been focusing on more lately: one is the future front of post-quantum. We're pretty convinced that quantum is going to happen in the next five-plus years, so we're doing PQC, post-quantum cryptography, across the entire stack. In networking and every other org, we're implementing quantum-resistant algorithms everywhere. And then the current-day front is AI. AI represents a number of new, interesting vectors for security, and I would say agentic maybe a little more so than AI in general. Agentic is an interesting focal point because you are now giving a machine more agency over making choices and driving its own workflows. It's not decisions in the human sense, but it's making decisions based on interpreting natural language, so that's very dangerous, as it turns out.

Rob Strechay
>> And you have more going over the network as well, like you have MCP, right?
Jimmy Alvarez
>> Yes. MCP gateways.
Rob Strechay
>> And people are definitely concerned about security when it comes to MCP and things like that.
Shane Utt
>> That's one of the big areas we're focused on right now. I think a lot of people right now are thinking about security around agents in terms of the agents themselves: agent security, the guardrails at the edge of the agent, and so on. But the tools, like MCP servers, are the first line of defense. What we've seen a lot of is people building MCP servers that just lightly wrap their APIs or something like that. It's just, "It's a light wrapper. Just let the agent go at it." People think the intelligence is all in the agent, but we're thinking about it more like, yeah, there's intelligence there, but a lot of the intelligence actually needs to be at the MCP server level. The MCP server is the first line of defense against the agent doing something terrible, something destructive, dropping your database, so the MCP server should be intelligent enough to know when a destructive action is happening. And the Model Context Protocol has elicitations as a feature, basically to send something back to the user. For instance, say somebody using an agent said, "Hey, I need you to create me an ingress gateway on Kubernetes." Great, fine, that's pretty straightforward. Now it's been running for a while: "Hey, I need to change something with this. Maybe I need to change the ports or something." The MCP server is the first line of defense to tell you, "Hey, you're trying to drop a port, you're trying to get rid of a listener on your gateway. That has all of production behind it." Send an elicitation back to the user, bypassing the agent: "Are you sure you want to put production down today?" So that's one of the areas where it's the first line of defense for that kind of thing, and we're thinking about that a lot in terms of how you develop MCP servers. And then on the other side, the MCP server responses are prompts. They're effectively the same thing as a prompt.
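The elicitation pattern described here can be sketched in a few lines. This is an illustrative sketch only: the function names and destructive-action list are hypothetical, not the real MCP SDK API.

```python
# Sketch of "MCP server as first line of defense": before executing a
# destructive tool call, the server bypasses the agent and elicits
# confirmation from the human user. All names here are hypothetical.
from dataclasses import dataclass
from typing import Callable

# Actions the server treats as destructive and gates behind elicitation.
DESTRUCTIVE_ACTIONS = {"drop_database", "delete_listener", "remove_port"}

@dataclass
class ToolCall:
    action: str
    target: str

def handle_tool_call(call: ToolCall, elicit_user: Callable[[str], bool]) -> str:
    """Run a tool call, but route destructive ones through the user first."""
    if call.action in DESTRUCTIVE_ACTIONS:
        ok = elicit_user(
            f"The agent wants to {call.action.replace('_', ' ')} on "
            f"'{call.target}'. Are you sure?"
        )
        if not ok:
            return f"refused: {call.action} on {call.target}"
    return f"executed: {call.action} on {call.target}"
```

A user callback that declines keeps production up, while non-destructive calls never trigger the prompt at all.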
So we're building with defense-in-depth strategies in mind, thinking about things like elicitation, and we're building something called the MCP gateway right now, which is a gateway like you would think of, based on Envoy, for bringing control and management, API-management kinds of things, security, auth, all of that, on top of the MCP servers. So they're not exposed, there's a control layer over them, and we can do things like MCP guardrails at that level in addition to guardrails at the normal level, which is the agent level. The stuff we're experimenting with right now is guardrails at that level to protect the prompts, things like that.
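Since MCP server responses are effectively prompts fed back to the model, a gateway-level guardrail can screen them before they reach the agent. A minimal sketch, assuming a simple deny-list approach; the patterns and function name are illustrative, not the actual MCP gateway interface.

```python
# Illustrative gateway-level guardrail: scan MCP tool responses for
# prompt-injection patterns before forwarding them to the agent.
import re

# Naive deny-list of phrases commonly seen in prompt-injection attempts.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"you are now", re.IGNORECASE),
    re.compile(r"system prompt", re.IGNORECASE),
]

def guard_response(response_text: str) -> tuple[bool, str]:
    """Return (allowed, text); blocked responses are replaced with a
    marker so the agent never sees the suspect content."""
    for pattern in INJECTION_PATTERNS:
        if pattern.search(response_text):
            return False, "[response blocked by MCP gateway guardrail]"
    return True, response_text
```

A real deployment would layer this with the agent-level guardrails mentioned above rather than rely on pattern matching alone.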
Jimmy Alvarez
>> Belt and suspenders.
Shane Utt
>> Yeah.

Rob Strechay
>> I think you have to when you talk about security. But let's shift a little bit. Obviously, we're a couple of years into Broadcom buying VMware and some changes to their licensing and things like that. OpenShift Virt has been out for a bit now. What are you seeing with customers, especially as they're modernizing their VM farms and trying to do AI and things of that nature? What are you seeing there?
Shane Utt
>> VMs are here and we're all about it.
Jimmy Alvarez
>> That's right, which is crazy to think, because I've been doing VMs since the early 2000s and I never thought we'd be talking VMs again. But here's the interesting thing that's really key for our customers: having a trustworthy platform that allows them to run not only their legacy virtualization platform, if you will, but also their modernized containerized platform in a secure manner. It's very important for our customers to have a platform where they can go end to end and build their workloads. We're seeing a lot of our customers go through that journey right now. As a matter of fact, one pretty big customer is migrating a massive number of VMs over to OpenShift, over 40,000 to 50,000, so if you think about it, it's a massive shift. It's not something that happens overnight. You can't decommission all those VMs right away; there are applications that depend on them. So the idea is to provide all the building blocks that exist within OpenShift, enhance them with OpenShift Virtualization, KubeVirt, and give our customers a trusted platform that allows them to modernize their applications as they're building out their Kubernetes environments.
Shane Utt
>> Yeah. So deploy your VMs. If you have a transition where you want to take VMs and eventually move those apps and containerize them, great. If you want to leave them on VMs, also great. And we're trying to modernize, like he said, that experience so you can do things that are already natural with pods, just normal Kubernetes workloads. Expose your VMs: if you need ingress for your VMs, expose them via Gateway API, or if you need more isolation for your VM networks, put them in our new user-defined networks, put VMs inside of UDNs, things like that. So it's as close to the pod workflows as we can make it.

Rob Strechay
>> Yeah, and we did a little preview of what was going on at KubeCon, and we actually got into some of the llm-d stuff and things of that nature, which is a big topic as well.
Jimmy Alvarez
>> Huge, yeah.

Rob Strechay
>> But as we look out into next year, what do you see on the horizon from an OpenShift platform perspective? What do you hope to be talking about here when... I don't even remember where we are next year, but-
Jimmy Alvarez
>> I think we're going back to-
Shane Utt
>> It will be Amsterdam and then Salt Lake.
Jimmy Alvarez
>> So back in Salt Lake.
Jimmy Alvarez
>> Back in Salt Lake.

Rob Strechay
>> It will probably be about the same temperature.
Jimmy Alvarez
>> I don't know if that's going to be great for me.

Rob Strechay
>> Maybe snow again. Yeah.
Jimmy Alvarez
>> As a Florida boy, you can see me in a puffer jacket right now. I don't know.

Rob Strechay
>> I know, but maybe we'll get a little more snow in the mountains and not on us again.
Shane Utt
>> I will say, I never thought I'd see icicles hanging from buildings in Atlanta.

Rob Strechay
>> Yeah, this is-
Shane Utt
>> That was a new one for me.

Rob Strechay
>> 29 degrees this morning was ridiculous.
Jimmy Alvarez
>> Insane.

Rob Strechay
>> So what do you hope we can be talking about in Salt Lake next year?
Jimmy Alvarez
>> Well, I think it's been 15 minutes. As I said, AI, right? So AI.

Rob Strechay
>> Yeah, exactly.
Shane Utt
>> It's going to be a big focus.
Jimmy Alvarez
>> Yeah, AI is obviously a big focus. I think AI security and general governance. The biggest thing with AI, obviously, is that we've seen the shift with our customers: they were working in development environments, playing with chatbots and those things, and now it's actually evolving into more agentic AI, all these MCP servers. Enhancing all of that and going down that path is where we see a lot of our customers going now. They're ready to bring those workloads from development, where they just had a couple of developers playing with a couple of LLMs, to more like llm-d, MCP servers, all those things. So you can talk a little bit about-
Shane Utt
>> Yeah. Where we were back in London, we were talking a lot about the 20-to-30% problem, and it's all about performance and cost. So we've got AI, we're starting to see real use cases, so now we're running them in production and models are getting bigger. How do we optimize for performance and cost? The 20-to-30% problem we were dealing with was the idea that a lot of people running these models in production are only utilizing 20 or 30% of their hardware, and there's a lot of waste. We've done a lot of things since then, and this is happening now and also going into the future. We're working upstream on a lot of these things, like LeaderWorkerSet in Kubernetes, which, if you're not familiar, is basically the idea that instead of having a pod, you have many pods that act as one big super pod. That better resembles most modern AI workloads, so we have that now. One of the things it allows, for instance, is that you can spread that over nodes, so all the pods within the LeaderWorkerSet can share a giant model that wouldn't normally fit. Maybe they're running more commodity-style GPUs, or just cheaper ones, across all the nodes, and they run that huge model parallelized over all the nodes. There's also JobSet, and then we have our own build of Kueue with a K.
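As a rough illustration of why a LeaderWorkerSet-style group helps, here is back-of-the-envelope arithmetic for sharding a model that doesn't fit on one node. The numbers and helper function are hypothetical, not from the interview.

```python
# Illustrative sizing math: a model too large for one node's GPU memory
# can be split across a LeaderWorkerSet-style group of workers.
import math

def workers_needed(model_gb: float, gpu_mem_gb: float, overhead: float = 0.2) -> int:
    """Minimum group size so each worker's model shard, plus headroom
    for activations and runtime overhead, fits in one node's GPU memory."""
    usable = gpu_mem_gb * (1 - overhead)  # reserve headroom per node
    return math.ceil(model_gb / usable)

# Example: a 140 GB model on 24 GB commodity GPUs needs an 8-worker group,
# since each worker then holds 140 / 8 = 17.5 GB, under the 19.2 GB usable.
```

This is the sense in which the group lets cheaper, commodity-style GPUs collectively serve a model none of them could hold alone.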
Jimmy Alvarez
>> I know, just to be-

Shane Utt
>> Kueue with a K. I'm not-

Jimmy Alvarez
>> Confusing, with a K.
Jimmy Alvarez
>> It used to be... Yeah.
Shane Utt
>> I'm just going to call it Kueue with a K because it's easier for me.
Jimmy Alvarez
>> It's great. It's easier. It's easier.
Shane Utt
>> Yeah. So we've had a lot of success with that. Actually, IBM's Vela supercomputer, which some people may have seen recent blog posts about, is running OpenShift and using Kueue with a K, plus some other techniques, to get itself up to 90% utilization now. It was really low before in terms of being able to utilize as much as it can. Basically, Kueue brings classic HPC scheduling, stuff that was already pretty well established, and drops it on top of the Kubernetes scheduler for AI workloads, saying we're going to optimize for these more specific use cases that the standard scheduler isn't as good at, and you get up to 90% plus. And then there's a bunch of other stuff going on, like DRA as the API for claiming these resources. So that's where we are and where we're starting to head. But the big thing in my mind is this: the use cases for generative AI are there, but I would say they're a little more specific. Agentic feels very broad. There are good use cases for generative: you might do knowledge bases, chatbots, search engines. All these different use cases work, but they don't cover everybody across Kubernetes. Agentic does. Everybody's going to want to automate their platform and their workloads with agents, so everybody needs agents. That's the big thing: AI is really starting to hit hard now with agentic, and that's going to be a huge focus for 2026.
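The all-or-nothing admission idea behind that kind of HPC-style scheduling can be shown in a toy sketch. This is purely illustrative and not Kueue's actual API; the point is that a job is admitted only when its whole resource request fits, so partially scheduled jobs never strand idle GPUs.

```python
# Toy sketch of gang-style admission: admit a queued job only if its
# entire GPU request fits at once. Purely illustrative, not Kueue's API.
def admit_jobs(total_gpus: int, jobs: list[tuple[str, int]]) -> list[str]:
    """Greedily admit queued (name, gpus_needed) jobs in FIFO order,
    skipping any job that does not fit whole."""
    free = total_gpus
    admitted = []
    for name, need in jobs:
        if need <= free:  # all-or-nothing: the whole gang or nothing
            free -= need
            admitted.append(name)
    return admitted

# With 8 GPUs: train-a (4) fits, train-b (6) must wait, infer-c (3) fits.
```

In contrast, scheduling pods one at a time can leave a 6-GPU job half placed, holding GPUs it cannot use, which is one source of the low utilization described above.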
Jimmy Alvarez
>> Yeah, and data sovereignty too. Data sovereignty is something we've seen especially in Europe with DORA. Obviously, because we own the whole stack end to end, it's really interesting to be able to support all of that, to have all your data within your regional space. We're starting to see that also in the US with some defense contractors and things of that nature. So I think those are really the main themes: AI, security, data sovereignty. That's what we're looking forward to in 2026, enhancing that.
Rob Strechay
>> Yeah, no, I think that sums it up. Now we can just shut the show down because we talked about everything. You guys have been great. I really appreciate you coming on board.
Shane Utt
>> Thank you.
Rob Strechay
>> Thanks for warming up the room with me here today.
Jimmy Alvarez
>> Thank you.

Rob Strechay
>> I really appreciate it, and thank you for watching this episode from KubeCon + CloudNativeCon North America 2025, live from Cold-lanta. See you soon.