Paul Nashawaty of theCUBE Research hosts a conversation with Rhys Oxenham, SUSE's vice president and general manager of artificial intelligence, at SUSECON 2026 in Prague. Oxenham explains agentic infrastructure, hybrid and sovereign deployments, and how the SUSE technology stack, including Rancher Prime and SUSE Linux Enterprise Server, supports private enterprise AI; they emphasize the need for governance, observability and guardrails when deploying agentic AI.
Nashawaty notes the industry shift from experimentation to production in 2026 and warns about shadow AI, underscoring the importance of infrastructure modernization and utilization to achieve return on investment.
This discussion provides practical guidance on designing AI infrastructure for private enterprise, preserving data sovereignty through hybrid deployments and operationalizing agentic AI with governance and observability. Topics include Rancher Prime, SLES, infrastructure modernization and edge computing.
Rhys Oxenham, SUSE
In this interview from SUSECON 2026 in Prague, Rhys Oxenham, vice president and general manager of AI at SUSE, joins theCUBE Research's Paul Nashawaty to discuss how open-source infrastructure is bridging the gap from AI experimentation to enterprise production at scale. Oxenham frames SUSE's approach through two distinct lenses: AI for infrastructure, where intelligence is embedded into how enterprises deploy, manage and operate systems, and infrastructure for AI, where open platforms provide the foundation for running AI workloads.
Keep Exploring
How does SUSE view and implement AI in enterprise infrastructure—specifically the concepts of "AI for infrastructure" and "infrastructure for AI"?
How are you applying your private AI initiative to the next generation of governance, particularly with respect to data/digital sovereignty and compliance with EU regulations?
How can organizations avoid vendor lock-in and maintain control, independence, and the ability to pivot when deploying sovereign AI workloads, and how does SUSE's AI stack enable hybrid deployment across edge, data center, and public cloud?
How can organizations ensure high utilization and best-fit placement of hardware when scaling AI infrastructure?
>> Hello, and welcome back to SUSECON 2026, coming to you live from Prague. My name is Paul Nashawaty, I'm the practice lead and principal analyst at theCUBE Research, and I'm here with Rhys Oxenham, GM of SUSE AI. Rhys, how are you doing?>> I'm doing well. Thank you so much for having me.>> Absolutely. Great to have you on. You are here to talk about AI infrastructure and infrastructure for AI.>> That's right.>> There's a lot of AI in there.>> There's definitely a lot of AI in there. And maybe I could start by breaking that down in terms of how we see the world of AI.>> Absolutely.>> So, we see it through two distinct lenses, AI for infrastructure and infrastructure for AI. Let's start off with AI for infrastructure. This is where we're starting to see the world of enterprise IT become much more intelligent. So, it's where we're seeing the integration of intelligence directly into the ways in which we deploy, manage, life cycle, interrogate, and look after systems within the enterprise. So, what we're doing within SUSE is we're effectively extending our existing portfolio to become much more intelligent. We're integrating with the likes of MCP servers to bring the world of agentic AI through to really benefit our customers. Then when we talk about infrastructure for AI, this is where we're building a solid bedrock or foundation for running AI workloads, AI capabilities directly within the enterprise.>> Okay. I mean, that makes a lot of sense. I mean, I like where you're going, the distinction between the two. We often talk about AI as a market of something new, something that's happening with MCP servers and agents and connecting these things. But what I really like is you're overcoming the production gaps and you're kind of building not just for just general environments, but you're looking at things like sovereignty and you're looking at secure and intelligent enterprise. You mentioned that. And that's something I'd like to double-click a little bit down on. 
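Oxenham mentions integrating with MCP servers to bring agentic AI into the portfolio. MCP (Model Context Protocol) messages are JSON-RPC 2.0, so a rough sketch of what an agent-to-server exchange looks like is a `tools/call` request like the one below. The tool name and arguments are hypothetical illustrations, not a real SUSE or Rancher interface.

```python
import json

def mcp_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build an MCP 'tools/call' JSON-RPC 2.0 request as a JSON string.

    MCP tool invocations carry the tool name and its arguments in the
    'params' object; the server replies with a matching 'id'.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical example: an agent asking an infrastructure MCP server
# for the status of a cluster (tool and cluster names are made up).
msg = mcp_tool_call(1, "cluster_status", {"cluster": "rancher-prod"})
```

In practice an MCP client would first call `tools/list` to discover which tools the server exposes before issuing calls like this one.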
AI for infrastructure is infusing AI into the day-to-day operations, right? Is that kind of where you're going with this?>> Yeah, exactly right. Organizations have a huge amount of infrastructure, of course, right?>> Yeah. Yeah.>> And the world of agentics or agentic infrastructure where AI is fundamentally helping organizations kind of accelerate that, solve problems autonomously, take action on behalf of the users, saving them time, leading to more time for innovation, and really helping them make a difference within the enterprise.>> Yeah, absolutely. So, I think that AI is a tool for sure. AI allows for productivity, allows for operational efficiencies, right? And it's the difference between having a manual screwdriver and a drill type of thing. And you can get things done faster, right?>> You can get things done faster, but really the drill is deciding exactly where to drill and when and for how long.>> Yeah.>> It is the one that is making that decision.>> I like that distinction too.>> And so, when you think about, "Well, how do I cross that production gap or that production chasm?" Really it's about how do I do that safely and securely? If these agents are taking responsibility or they're actually performing the action on behalf of the user, how do I make sure that that agent is doing it in a way that is aligned with corporate policy? How do I make sure that it is not going to lead to any adverse effects? And we've already seen in the industry occasionally where AI is taking actions, who is responsible when things go wrong?>> Yeah.>> So, governance, security, observability, these things start to become incredibly important.>> So, okay, let's talk about that then, because the use of AI, it helps accelerate your workloads. It helps accelerate your environment, but it also can potentially accelerate problems, security, challenges. Let's talk about guardrails. The bumpers are in place. What we see in our research is 50%... 
Actually, this is as of August 2025, 50% of production code was written with AI. We reran that in November and December and found that it jumped to between 70% and 90%.>> Sure.>> So, there's a lot of AI being used for productivity, right? And then we also see that citizen developers within the lines of business are developing applications. If we don't have those guardrails and bumpers in place, how do we make sure that people using AI are doing it most effectively?>> Yeah. I think that's a fundamental problem. What we see in the enterprise is that there's a huge amount of expectation, there's a drive, there's a lot of pressure to really innovate. And I think the ultimate kind of enterprise threat, and it's what I'm going to be talking about later in the week, is shadow AI.>> Yeah.>> I think when there is pressure, executives that I talk to, they are really wanting to see kind of tangible outcomes and return on investment for what they're doing with AI. And because there is that expectation, at least that pressure, it leads to individuals within an organization going down the route of shadow AI. And then organizations, they lose control over their data. The governance that they try to put in place is no longer... It's not actually able to take control of anything because effectively people are going out of that kind of safety net.>> Yeah, but it's a slippery slope, right? I mean, what we found in 2025 was 25% of IT budgets were allocated to AI initiatives. Not really sure what they were using these AI initiatives for, but to your point, organizations don't want to be left behind. They want to make sure that they're spending and reporting to the board going, "Hey, we are spending on AI projects." Well, I think that 2025 was a year of innovation, experimentation.>> Yeah. Right.>> 2026 is the year of implementation.>> Exactly. I think you're absolutely spot on. Over the last few years, we've seen a lot of experimentation.
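The guardrail idea the speakers describe, checking an agent's intended action against corporate policy before it executes, and logging the decision for observability, can be sketched generically. The action names and policy sets below are hypothetical, not drawn from any SUSE product.

```python
# Minimal sketch of an agentic guardrail: policy check plus audit trail.
# The specific actions and rules here are invented for illustration.
ALLOWED_ACTIONS = {"restart_service", "scale_deployment"}
REQUIRES_APPROVAL = {"delete_volume"}

def authorize(action: str, audit_log: list) -> str:
    """Decide whether an agent may perform an action, and record it."""
    if action in ALLOWED_ACTIONS:
        decision = "allow"
    elif action in REQUIRES_APPROVAL:
        decision = "escalate"   # keep a human in the loop for risky actions
    else:
        decision = "deny"       # default-deny keeps shadow actions out
    audit_log.append((action, decision))  # observability: who did what, when
    return decision
```

The key design choice is default-deny: anything not explicitly permitted is blocked and logged, which is what makes shadow-AI activity visible rather than silent.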
Pilots are relatively straightforward to demonstrate value, but then how do you take that from that initial, "Okay, we've proven it here. How do I then do it at scale in production with critical data, with the need for that safety net and that governance?" That is the production gap or the production chasm that our customers are having to cross and where we are helping them solve it.>> So, let's talk about that because that is an important distinction here. We had Peter on earlier today. We were talking about the virtualization environments, the VMs encapsulating, obviously really important, right? But a lot of organizations are encapsulating their heritage environment, and they're going to encapsulate them, and that's going to be around for a long time. So, bridging that production gap of heritage environments to new systems of engagement is an important kind of distinction. What do you advise clients to do or prospects to do on their journey?>> Yeah. So, I think what we're really seeing is kind of, and I'm sure yourself and Pete talked about this earlier, we're kind of seeing this through two fundamental directions. The one is modernization. So, we talk about modernization of infrastructure, moving some more kind of legacy infrastructure over to a more kind of maybe modern Kubernetes-oriented, single control plane, single API infrastructure, and then there's acceleration. And so acceleration through artificial intelligence, how can I leverage my existing infrastructure? Much of it may be more traditional virtualization as opposed to containers. How can I then use AI to really accelerate my business outcomes? How can I get that return on investment? So, that's what we're really trying to support our customers on. And what we're really focused on and what my team is really trying to deliver is what we call private enterprise AI. Now critical to understand, when we say private, we don't mean kind of locked into a single data center.
This is not a private cloud type term. Private being intelligence belongs to them as an organization. They have freedom to deliver it wherever it makes sense for them as a business. So, we're fundamentally talking about hybrid infrastructure when we say that.>> Well, let's talk about that because that's important to understand here. I mean, especially when we talk about sovereignty, right? Data sovereignty. We're in Europe, right? It's a big topic to consider, right? But there's also things like the EUCRA that's going into effect, right? These applications have to be in compliance, regulations and governed in an appropriate way. There's some stiff penalties if it doesn't go into effect.>> Huge. Yeah, absolutely.>> How are you taking that private AI initiative and applying it to this next generation of governance?>> Yeah, indeed. So, let me just first start by saying that digital sovereignty is not a European compliance check mark anymore. I think the kind of common perception in the market is that this is what it is, but I think the reality is that every organization in the world needs to think about their independence, their autonomy, their resilience planning for what they do with their infrastructure. And so, what we are doing at SUSE is we're really providing customers with the choice, the choice in where they deploy, how they deploy. And so, our software is just at home within a local data center as it is running in the cloud or at the far edge. And so, we want to provide that common fabric that is, again, open in terms of open source, open standards and interoperability, but fundamentally giving you that safety net, that observability, the security so that organizations can really feel safe wherever they want to deploy that infrastructure.>> Yeah, that makes sense. I want to talk a little bit about that. 
I was wondering when customer choice was going to come up, because I mean, that's obviously the big tagline, which I think is important to note.>> Yeah, absolutely.>> But talk a little bit about the migration and movement, right? We see and hear sometimes that there's a potential movement of moving from the hyperscalers to a private environment or to a sovereign environment. I know with Rancher, I know with the different SUSE products, it makes it a little bit more seamless and easier to do, but what does that mean for the AI perspective?>> Yeah. So, what I typically say is that you really, as an organization, need to be able to own your intelligence. If you are somewhat locked into proprietary stacks or you don't have the freedom to really operate that intelligence within your own realm and control your data, then you really have a serious problem with lock-in.>> Yeah.>> You don't have that independence and the autonomy that I was mentioning earlier. And even some of the data that we have done on our side, we actually launched an AI and cloud survey over the last few months, and it really showed that the majority of those that we surveyed are really starting to prioritize hybrid infrastructure for where they deploy their sovereign AI workloads. So, I think this is a really important figure because it states that again, the majority of the market is really looking for freedom from lock-in and independence. And so, from our technology perspective, to answer your question, because our SUSE AI stack is built on top of SUSE Rancher Prime and SUSE Linux Enterprise Server, we are able to deploy that infrastructure wherever the customer needs it. It's at home at the edge, just as it is in the data center or in the public cloud. So, we really give that complete freedom to deploy anywhere that suits the business. And critically, it's about being able to pivot.
If I as an organization have deployed within a particular realm, I need the ability to pivot to another infrastructure platform, provider, whatever it might be simply when business changes or there's another requirement for me to be able to do so. Without being able to pivot, you expose yourself to significant risks.>> Yeah. I think that makes a lot of sense. I mean, pivoting and again, the choice of keeping it flexible, what do you think it means from a scale perspective? Because this is also, as we talk about... I mean, obviously AI is your focus point. Scaling is a big factor here, especially when we start looking at utilization of GPUs and what that means. And not to get too technical about it, but more talking about the different data sets that have to come in, that's part of the whole solution. What does it mean for scale?>> Yeah. I completely agree. I think with AI, we are seeing a huge amount of... I guess just fundamental scale of AI is hard to comprehend. You look at the amount of data centers that are being built, the amount of infrastructure, hardware that is being procured. I think it's even difficult for many organizations to even get access to hardware. So, when we talk about scale, we also talk about ensuring utilization and best fit for workloads. And what I mean by that is, if I'm an organization, I've purchased this hardware, I want to make sure that hardware is running at 100% utilization at all times.>> Yeah.>> I want to get that return on that investment. And so, yes, scale in terms of sheer number of systems, the amount of compute I have access to, but again, I want to make sure that that is fully utilized. So, as an organization like SUSE, we want to provide the tools, the observability, the capabilities, the automation to make sure that when organizations are leveraging our software, they really can run it at 100% and that we can call out areas that are underutilized and to really make sure that we can do best fit placement.>> I like that. 
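The "best fit placement" and 100% utilization goal Oxenham describes can be illustrated with a classic best-fit bin-packing heuristic: place each workload on the accelerator with the least remaining capacity that still fits it, so large contiguous capacity stays free for large jobs. This is a generic scheduling sketch, not SUSE's actual placement logic; the memory figures are hypothetical.

```python
def best_fit_placement(workloads, gpus):
    """Assign workloads (GPU memory needed, in GiB) to GPUs best-fit.

    Returns (placement, free): placement is a list of (need, gpu_index)
    pairs, with gpu_index None when nothing fits; free is the remaining
    memory per GPU, useful for spotting underutilized hardware.
    """
    free = list(gpus)  # remaining memory per GPU
    placement = []
    for need in sorted(workloads, reverse=True):  # largest jobs first
        candidates = [i for i, f in enumerate(free) if f >= need]
        if not candidates:
            placement.append((need, None))  # flag: capacity shortfall
            continue
        best = min(candidates, key=lambda i: free[i])  # tightest fit
        free[best] -= need
        placement.append((need, best))
    return placement, free

# Hypothetical fleet: one 80 GiB and one 40 GiB GPU, three workloads.
placement, free = best_fit_placement([40, 24, 16], [80, 40])
```

Here the 40 GiB job lands on the 40 GiB GPU, filling it completely, and the remaining `free` list directly surfaces the underutilized capacity an observability layer would call out.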
I like that response. I mean, that's important to note, right? I mean, you purchase it, you want to use it.>> Absolutely.>> And having underutilized environments, that's not healthy.>> Correct.>> So, all right, let me ask you this. We're sitting here a year from now at SUSECON 2027 in Dallas, right?>> Yes.>> So, we're sitting there. AI obviously is in a fast-changing environment. What do you see happening over the next year?>> Well, let me start off by saying, I don't think anyone can really predict what is going to happen in the world of AI over the next 12 months.>> Yep.>> However, you talked about how over the last few years it was more about experimentation, pilot, this year being more about production. I think we're going to be in a much better place to understand whether that production implementation, crossing that chasm was successful or not. What were the lessons that we learned? And also which technologies in terms of ways of doing things, which were the successes, which actually succeeded. And so I think that these next 12 months are going to be really, really interesting in the world of AI.>> I agree with you. I think it's going to just continue to drive, I think ROI is important. I think expanding into new markets is definitely important.>> It is. But the one thing that I think is going to really change over the next 12 months is the adoption of agentics within the world of AI, specifically within the enterprise world. You're going to see a lot of announcements this week around what we're doing with agentics. You spoke with Peter Smails a little while ago. They're putting a lot of agentic capabilities into the Rancher portfolio.>> Yes.>> My colleagues on the Linux team are also doing the same.
We're here with a lot of partners this week talking about how we're integrating with them and really making the world of agentics feasible for the enterprise.>> Yep.>> And so, I think we're going to start seeing a lot more integration there.>> Well, Rhys, that's a great place to leave it. That definitely is a lot happening. I'm really excited. The show floor is really busy. There's a lot of activity. The partner ecosystem is popping. I'm really, really excited about what's happening for you, but thank you for being on today.>> Thank you so much.>> Thank you. And thank you for watching. My name is Paul Nashawaty coming to you live from SUSECON 2026 in Prague, coming to you from theCUBE, your leading source of tech news.