In this interview from Nutanix .NEXT 2026, Rajiv Ramaswami, chief executive officer of Nutanix, joins theCUBE's John Furrier and co-host Alison Kosik to discuss the company's evolution from a hyperconverged infrastructure leader into a full agentic AI platform. Ramaswami charts Nutanix's deliberate shift — from HCI and end-user computing to a platform capable of orchestrating containers, virtual machines and AI pipelines across hybrid and multi-cloud environments. He argues that AI has moved from experimentation into operations: enterprises must now manage not just applications and infrastructure but data pipelines, model governance and autonomous agents. A core technical focus is GPU optimization — Nutanix has extended its hypervisor to handle topology-aware workload placement and key-value cache offloading, keeping GPUs fully utilized and driving down cost per token, the unit of economics enterprises are now watching most closely.
The conversation also explores how Nutanix's expanding ecosystem — more than 100 partners at the event, spanning cloud, server, storage and chip providers — validates its platform ambitions. Ramaswami details new AI PaaS services set to launch this summer, offering developers curated open-source components alongside cost, access and governance controls for model deployment. The discussion shifts to sovereign clouds, where governments worldwide are funding infrastructure build-outs not just to protect data but to stimulate local economies and help domestic industries adopt AI at scale — an opportunity Ramaswami sees as particularly significant for Nutanix. He identifies simplicity and governance as the company's core long-term bets: the more accessible and cost-effective AI becomes to consume, the faster enterprise adoption grows, and sound governance ensures security, data privacy and access controls keep pace. From defining hyperconverged infrastructure to becoming the platform of choice for every application in an AI-driven world, Ramaswami outlines why Nutanix is positioning itself as the enabling layer for intelligence everywhere.
Rajiv Ramaswami, Nutanix
>> Welcome back to theCUBE. We are coming to you from Chicago and we are streaming live from Nutanix .NEXT. I'm Alison Kosik. I am hosting the show today alongside my fabulous co-pilot, John Furrier. He's the co-founder and co-CEO of SiliconANGLE and theCUBE. We are winding down the day. What are the themes that come to you?
John Furrier
>> Well, the theme here is that the agentic and AI factory operating model has changed. The world wants to create a layer to power the agents, yet have the stability of the cloud native wave that was built over the past 15 years. And this next segment with the CEO of Nutanix really is going to be enlightening. The keynote was very strong, not new information, but building on stuff we've heard before: Kubernetes, containers, VMs, all powering this next wave of growth that's going to hit developers, IT buyers, influencers and decision makers in the industry, and then the C-suite. In all markets, as AI becomes the thing in all places, all software all the time, powered by AI infrastructure. So it was super exciting.
Alison Kosik
>> All right. Let's bring in the CEO of Nutanix, Rajiv Ramaswami. Thanks so much for coming on theCUBE. And I think the first question that comes to mind is, how do you see the business overall changing? Is this a true re-architecture moment?
Rajiv Ramaswami
>> Yeah, first of all, thank you, Alison. Thank you, John. It's great to be here as always. So our business has shifted very decisively over the last several years, and deliberately so. So maybe I'll take a step back and say where we were, where we are now, and then we can talk about the future, which a lot of this year's .NEXT was about. I mean, I know, John, you've known us for many years, tracked us for many years. So if you look at the past, Nutanix was an HCI company. And when I got here, almost six years ago now, when I talked to customers, they'd say, "Yeah, Nutanix, great platform for running HCI and my end user computing workloads." From there, we went to building out a full platform that could run everything, and then we expanded on our multi-cloud approach to work with all the big cloud providers and take our offerings to the public cloud. And then more recently, we've expanded from being a company that's just focused on HCI and hybrid multi-cloud to becoming a platform company in multiple ways. We are not just wedded to our own storage anymore; we support a whole range of ecosystem storage partners. And you saw many of those being announced at the show, new ones being Gleno and NetApp in particular, with some of the other ones coming to market with Dell and even Pure. But then we didn't stop there, right? Now, we are paving the way for the future because our customers are going through a massive architectural shift, right? They are moving from ... Of course you've got to run today's applications. Of course, they're mission-critical. You've got to keep your businesses going, but you've got to think about tomorrow. And what is tomorrow? Tomorrow, very quickly, every new modern application being built is containerized and orchestrated with Kubernetes. But more than anything else, tomorrow is about the world of AI and how it's going to impact how businesses operate.
And if you don't make use of it, you're going to be left behind from a competitive perspective. So it's all about AI, and increasingly agentic AI, where you now move from simple inferencing to having agents delegate work to agents, enabling more autonomy in the enterprise and making things happen so much faster. So that world is our future as well, because we are now moving to this new world of providing a complete platform that enables organizations and companies to build and run these agentic AI applications in a very easy, turnkey way, and we are also focused on building an ecosystem to support all of that. So that's really what this conference was anchored around. I mean, we spent a lot of time talking about that vision and where it's going.
John Furrier
>> In your keynote, Rajiv, you talked about a couple of things where you framed things differently. I want to get your thoughts. It's not just a cloud platform operator; you mentioned AI factories. I'm going to read some of the comments I wrote on X and I want to get your reaction. You were framing the hybrid cloud as moving to an AI operating model. For the past decade, the enterprise conversation has centered around hybrid and multi-cloud workloads: where do workloads run, and how is the infrastructure distributed? This year it's changed. You've outlined the transition from a cloud operating model to an AI operating model, where the enterprise must manage not just the applications and infrastructure, but the pipelines of data, models and increasingly autonomous agents. In this world, AI is not experimental, it's operational.
Rajiv Ramaswami
>> Yes.
John Furrier
>> What's your reaction to that? Because we're hearing that AI is forcing a re-architecture of IT, not just the cloud: the cloud and AI operating models are coming together. What's your vision and what's your reaction to that?
Rajiv Ramaswami
>> Oh, 100%, right? I mean, if you think about a CIO today, that CIO has to be thinking, and they are, about how they can enable AI in the enterprise and do so with the appropriate governance in place, and provide the flexibility to all their developer teams to go transform the business. And that becomes more and more central. In fact, after my sessions, we spent a fair bit of time with our CXO customers who are here, and you know what? Of all the things they heard about, this was top of mind for them. How can you enable me to get AI deployed in my enterprise? Yes, they have to worry about their hybrid and multi-cloud and their VMware migrations. All of that stuff is going on for sure, but they're very focused on what's the future for them, and that is AI.
John Furrier
>> On the evolution, hybrid cloud is table stakes. Kubernetes is the foundation. Platform consolidation has come up. And then the expansion of VMs and containers, but VMs have a new role. Containers and AI stacks are emerging. How do you talk to customers when they say, "Hey, I want to run enterprise AI, but I'm not sure I want to spend a lot of CapEx. I might go to a neocloud, I might go to a hyperscaler, but I need to use partners like MongoDB or NetApp or anyone in the ecosystem, but they got to be designed in."
Rajiv Ramaswami
>> Yeah.
John Furrier
>> How do you talk about that? Because you're telegraphing neutrality. You did a deal with AMD just recently, and NVIDIA. So you didn't really ... It's like neutral. Pick your platform. Heterogeneous, check. What are customers saying? Because your ecosystem is forming to be really elite. The numbers are huge.
Rajiv Ramaswami
>> I mean, I tell you, I mean, one of the things that I'm most proud of as a company is how this ecosystem has grown around us over the last few years. I mean, look at it this year, over a hundred partners sponsored the event and you can look at the expo floor here behind us and it's buzzing. And we've got all the major names in the industry here, the major cloud providers, the major server providers, the major storage providers, the major chip providers. I mean, it's just everybody's here, right?
John Furrier
>> Why is that? What's the reason?
Rajiv Ramaswami
>> Let me tell you why, right? It is a sign that what we are doing with ... As you said, you used the word platform, and we take that very carefully, right? To me, the value of a platform is directly tied to the ecosystem around it. And all these partners are seeing the value of the platform that we bring to the market today and realizing the value of being integrated together with us, right? Because it benefits them. It benefits the customer. And that's really why you see this. We've been very deliberate about building that ecosystem over the last couple of years to where it is today. And by the way, it's not done by any means. Now, we've got a whole new ecosystem that we're just starting to build around AI. We announced, for example, a set of AI PaaS services today that help developers actually build their AI applications. And that's just the beginning; that catalog is available this summer, but we've got a lot more to do. But this abstraction layer that we provide continues to provide choice. You talked about AI now running everywhere, right? Inferencing and agentic AI are going to run in the public clouds. They're going to run in neoclouds. They're going to run on-premises. And we want to be the platform that can help our customers run with all of this. By the way, cost per token. The token is now the unit of economics here. And I talked to a whole bunch of customers whose main focus was, "Can you help me get my token costs under control?" And a lot of that is providing visibility into what's being consumed, providing guardrails around what's being consumed, trying to reduce the cost by optimizing what you do. All of those become table stakes, and that's what we're doing.
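The visibility-and-guardrails idea Ramaswami describes can be sketched in a few lines. This is a hypothetical illustration, not Nutanix's actual API: a tracker that records per-team token consumption against a budget and converts usage into spend at an assumed price per 1,000 tokens (all names and numbers are invented).

```python
from collections import defaultdict

# Hypothetical sketch of token-cost guardrails: record per-team token
# usage, enforce a budget, and report spend at a given price per 1K
# tokens. Team names, limits, and prices below are illustrative only.
class TokenBudget:
    def __init__(self, price_per_1k_tokens: float):
        self.price = price_per_1k_tokens
        self.used = defaultdict(int)   # tokens consumed per team
        self.limits = {}               # token budget per team

    def set_limit(self, team: str, max_tokens: int) -> None:
        self.limits[team] = max_tokens

    def record(self, team: str, tokens: int) -> bool:
        """Record usage; return False once the team's budget is exceeded."""
        self.used[team] += tokens
        return self.used[team] <= self.limits.get(team, float("inf"))

    def cost(self, team: str) -> float:
        """Dollar spend so far for one team."""
        return self.used[team] / 1000 * self.price

budget = TokenBudget(price_per_1k_tokens=0.03)
budget.set_limit("app-dev", 100_000)
within = budget.record("app-dev", 80_000)  # still under budget
```

A real platform would add per-model breakdowns and alerting, but the core mechanism (metering plus a hard limit) is the same.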
John Furrier
>> Rajiv, I was talking to Alison earlier today about Nutanix. And I said, "Well, Rajiv, he loves to come out and talk about architecture." It's like Jensen: he presents the architecture as a masterclass. You've been doing this for a while. You always lead with the architecture. Everyone knows you're super technical. You wrote the book on technology, as we've talked about in the past. But this year is different. You had the architecture; I want to get into what's changed. We'll put a pause on that. But it also transitioned to a very successful partner ecosystem portion. This is a very familiar pattern. You see it with NVIDIA, you see it with AMD: ecosystems that are actually designed in, which is a very nuanced point. So you've got all the moves down. So let's start with architecture. What's changed this year? What is going on there? There are some real technical advances. What were you presenting? What was the rationale? What were the key points?
Rajiv Ramaswami
>> Yeah. Look, I think the interesting thing is our core platform is evolving to become an agentic AI platform. And a lot of the work is to be done in the core itself. We focus, from an AI perspective, on four layers where we are innovating. So we talked at the very top about the AI services. And on the AI services, I think it's two things. It's providing this ecosystem, again, of curated open source components that people can then build applications with. And it's providing cost, governance and access control for all the models that people need to run. That's the second big piece. And then underneath that is the optimization of the infrastructure to run all of this, so that you can drive the cost per token down, optimize it, get better performance. That's actually very core to what we do as a technology company. We've been doing this for compute-centric workloads forever with our hypervisor capabilities. And what we've done now is to optimize that hypervisor for GPU-centric workloads: being aware of topology constraints and optimizing how workloads get placed onto GPUs, CPUs and memory so that we get the best performance. That's actually hard work. And we've done that work, right? As an example.
John Furrier
>> What does that mean? Because NVIDIA's big story is that, and this is Jensen's direct quote. You don't want to let those GPUs be idle. It's just money sitting there.
Rajiv Ramaswami
>> Absolutely.
John Furrier
>> Does it affect that and token price?
Rajiv Ramaswami
>> Listen, you want to make the maximum use of the GPUs that you buy, that you have, right? Yes, GPUs sitting idle are bad because, think about it, on the one hand you're consuming more and more tokens, and if you need to keep buying more GPUs to serve that, it's not efficient. Same thing as we saw in compute-centric workloads. Before virtualization came along, utilization was very low; with virtualization it became much higher. The same thing is happening now with GPUs. It's the next big resource to be optimized.
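The economics behind "idle GPUs are money sitting there" can be made concrete with a back-of-envelope calculation (all numbers below are invented for illustration, not vendor figures): cost per token is the hourly GPU cost divided by the tokens actually served, so raising utilization divides cost per token directly.

```python
# Back-of-envelope model: cost per token = hourly GPU cost / tokens
# actually served in that hour. Doubling utilization halves the cost
# per token. Hourly cost and throughput figures are illustrative.
def cost_per_token(gpu_hourly_cost: float,
                   peak_tokens_per_hour: float,
                   utilization: float) -> float:
    served = peak_tokens_per_hour * utilization  # tokens really produced
    return gpu_hourly_cost / served

low = cost_per_token(4.0, 1_000_000, utilization=0.25)   # mostly idle
high = cost_per_token(4.0, 1_000_000, utilization=0.90)  # well packed
```

Going from 25% to 90% utilization in this toy model cuts cost per token by a factor of 3.6, which is why utilization, not raw GPU count, is the lever enterprises watch.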
John Furrier
>> What's different about those two paradigms? Conceptually, I understand it. What's different with GPUs? The storage, the HBM, what's architecturally different?
Rajiv Ramaswami
>> Yeah. So there's a different set of constraints that you have to optimize for. You want to make sure, for example, that you make good use of the memory that you have on the GPUs and offload some of the stuff that gets used frequently, like the key-value cache. So that's something you've got to offload. You've got to make sure that workloads are placed so that whenever a GPU talks to memory, that access is as local as possible. You don't want to have to go across from one CPU to the other CPU socket; you want to minimize that and have locality of everything so that you get the best utilization and performance. So it's the same problem, but with a different set of things to optimize and manage.
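The locality constraint Ramaswami describes can be sketched as a toy placement rule (a deliberately simplified model, not Nutanix's scheduler): among GPUs with enough free memory, prefer one on the same NUMA node as the workload's CPU so memory traffic never crosses the inter-socket link.

```python
from dataclasses import dataclass
from typing import List, Optional

# Toy model of topology-aware placement: a GPU on the same NUMA node
# as the workload's CPU keeps memory traffic local instead of hopping
# across the inter-socket link. Purely illustrative.
@dataclass
class Gpu:
    name: str
    numa_node: int
    free_mem_gb: float

def place(workload_numa: int, mem_needed_gb: float,
          gpus: List[Gpu]) -> Optional[Gpu]:
    candidates = [g for g in gpus if g.free_mem_gb >= mem_needed_gb]
    if not candidates:
        return None  # nothing fits; caller must queue or spill
    # Sort so local GPUs come first; break ties by most free memory.
    candidates.sort(key=lambda g: (g.numa_node != workload_numa,
                                   -g.free_mem_gb))
    return candidates[0]

gpus = [Gpu("gpu0", numa_node=0, free_mem_gb=10),
        Gpu("gpu1", numa_node=1, free_mem_gb=40),
        Gpu("gpu2", numa_node=0, free_mem_gb=24)]
best = place(workload_numa=0, mem_needed_gb=16, gpus=gpus)
```

Here the scheduler skips the larger remote GPU and picks the local one that still fits; a production scheduler would also weigh NVLink topology, fragmentation and preemption.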
John Furrier
>> Okay. So the ecosystem?
Rajiv Ramaswami
>> Yeah. So that was the second piece. Just to finish up on this one: there were also a lot of innovations we talked about on the data side, right? Taking raw data that's coming in with our data hub and being able to really parse it, filter it, get rid of private information, for example, and then vectorize it and put it in a form that can be easily consumed by the AI applications. That's a very important thing. You need to be able to simplify it. AI is only as good as the data that you feed into it, so making sure that you feed the right data into it is very important. So these are the technology innovations that are the pillars of this agentic AI stack.
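The parse-filter-vectorize flow described here can be sketched as a minimal pipeline. This is illustrative only: the two regexes stand in for real PII detection, and the hash-based "embedding" stands in for a real embedding model.

```python
import hashlib
import re

# Minimal sketch of an ingest pipeline: strip likely PII from raw
# text, then turn each cleaned record into a fixed-length vector.
# The patterns and the hash "embedding" are toy placeholders.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact(text: str) -> str:
    """Replace email addresses and SSN-shaped strings with tags."""
    text = EMAIL.sub("[EMAIL]", text)
    return SSN.sub("[SSN]", text)

def embed(text: str, dims: int = 8) -> list:
    """Toy stand-in for an embedding model: hash bytes scaled to [0, 1]."""
    digest = hashlib.sha256(text.encode()).digest()
    return [b / 255 for b in digest[:dims]]

def ingest(records: list) -> list:
    """Redact each record, then pair it with its vector."""
    cleaned = [redact(r) for r in records]
    return [(c, embed(c)) for c in cleaned]

docs = ingest(["Contact jane@example.com about ticket 42",
               "SSN 123-45-6789 must never reach the model"])
```

The key property, as in the interview, is that redaction happens before vectorization, so private information never enters the embedding store.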
John Furrier
>> All right. So now, ecosystems is proof that platforms work. You're not only a platform, I'll say it, you might say it differently, but you're also a system. Operating model implies operating system.
Rajiv Ramaswami
>> Yes.
John Furrier
>> You look at all the successful AI companies. They're all software based. They're all intellectual property. They're systems.
Rajiv Ramaswami
>> Yes.
John Furrier
>> They're not just software. Software is everywhere. As customers have their software expanding, talk about that system, because your ecosystem is proof. I mean, the logo slide last year was like half, maybe less than half, the size it is now.
Rajiv Ramaswami
>> Yeah. Look, I mean, I will perhaps in this case use the words system and platform a little interchangeably, because the system effectively is everything coming together, right? And that's what the platform is doing. And the fact is, this is the thing, right? The more ecosystem that you have around this platform, the more tentacles you get. And what happens is that the customers get a much more complete solution that they can deploy, and it helps them address very specific problems. Okay? So in this world of supply chain constraints, for example, they want to sweat the hardware as much as they can. So the fact that we are broadening the ecosystem of storage vendors that we support is very critical to them, right? That allows them to modernize, get to running these new container workloads or everything-
John Furrier
>> So it takes the supply chain risk off the table?
Rajiv Ramaswami
>> Takes that off the table, right? It helps ameliorate that quite a bit, right? That's just one use case for them. Now, as they go to this cloud native world, they're trying to figure out how they can manage their cloud native environments and their current environments, rather than having silos, and bring them all together. And while doing that, you also want to be able to stand up these container environments very easily. So container environments need networking, they need storage, they need observability, they need disaster recovery, all those things you want to provide. And part of that is provided by the ecosystem that we build around us. So this is, again, where it all starts coming together to solve real customer problems.
John Furrier
>> The thing that jumps out at me, you talked about it on stage, but it's also being discussed globally. And by the way, software gives a lot of optionality; certainly the supply chain is one of many examples. But sovereignty is huge.
Rajiv Ramaswami
>> Oh, yeah, it's a new thing.
John Furrier
>> Now, you go back four or five years ago, sovereignty was cloud sovereignty: protect the data, GDPR, privacy. It's shifting now with AI, and we're seeing this at the Ray Summit coming up in July (we'll be there, in Europe specifically) that sovereignty in a country is economically driven.
Rajiv Ramaswami
>> Yes.
John Furrier
>> Services, telcos are going to adopt a service-based model soon with the edge. You start to see economics with AI applications in the countries. So sovereignty has now got two components, data, but that data now is economically valued because it's got tokens. Speak to the sovereignty upside for you.
Rajiv Ramaswami
>> By the way, I think this whole move towards sovereignty is here to stay, first of all. I don't think it's going away. It's going to become even more important. And there are a few other things that are driving sovereignty also, which is more self-reliance: not just the data piece, which is always part of it, but you want to have your own infrastructure. You want to be in control of it yourself. You want to have your own citizens run it and manage it and not be dependent on outside parties. And to the extent you can, you also want to make sure your technology supply chain is robust to support all of that. So that's happening everywhere outside the US, and even inside the US there are concerns about sovereignty. You go to Europe, you go to Asia, it's everywhere, right? And that represents a huge opportunity for Nutanix because, again, this is one of the fundamental things: we enable sovereign clouds to be built that meet these needs, provide the self-reliance capabilities, provide the ability for these folks to operate their own infrastructure in a way that matters to their countries and allows them to build. I mean, for example, with these AI sovereign clouds being built, countries are trying to stimulate their economies, right? In fact, there are many government incentives out there saying, "Let's go build this because we believe it's going to help local businesses use AI more easily."
John Furrier
>> Have you seen governments get more involved-
Rajiv Ramaswami
>> Oh, 100%....
John Furrier
>> with the sovereign conversation?
Rajiv Ramaswami
>> 100%. A lot of it is government driven outside the US.
Alison Kosik
>> And which side of the conversation do they fall?
Rajiv Ramaswami
>> Oh, the first thing is that it's a government initiative: in many cases, making sure they have multiple sovereign cloud providers in their countries. The second is they do provide incentives. They might finance some of this build-out. They might bring their own government organizations in as initial customers. So they want to use this as a stimulus for their economy, to your point: "Look, we need our own capabilities so that our companies, our industry in our country, can actually go out there, get AI-trained, AI-enabled, and hopefully create more economic value as a result."
John Furrier
>> Rajiv, you're a builder, you're a technologist, you're the CEO. I want to get your personal perspective on this. If you can inject intelligence into the world, your platform, which will hit to the edge, whether it's end user computing for VDI replacements or IoT devices or interfacing the chat, what happens next? And what happens with the Nutanix platform as intelligence comes in? Certainly the technicals are out there, that flywheel's going to kick in. If you're an AI intelligence layer, what happens?
Rajiv Ramaswami
>> We are the enabler for intelligence, is the way I would think about us, right? Because, I mean, by having this platform approach, we allow people to go build all these different AI applications, which last year was simple inferencing, today it's agentic AI, and tomorrow it's physical AI and robotics, right? I mean, sitting here, I wouldn't really know what's going to happen with that intelligence. There's all kinds of things that could happen. And I don't know, right? But the possibilities are endless. And just by enabling this, I think it's really going to help transform how every company, every business operates.
John Furrier
>> What's your big bet in the platform? Because once that scales, if you can do the work now, it's like, if you do your homework now and get it right, is it governance? What are some of the elements that you really think about with your team that's going to create a lot of value and allow that to be extracted by developers, business and verticals of all types?
Rajiv Ramaswami
>> The first thing is to make it simple for people to consume AI. I mean, that's important, right? It's not easy, right? And just like everything else, the simpler it is, the more adoption you're going to get. The more cost-effective it is, the more adoption you're going to get. And then once you've got the basic stuff, you've got to make sure that this is properly governed: make sure you have the security and privacy around the data, make sure you figure out who's accessing what, what permissions you're giving, that you're protecting your data, that you're protecting all the different tenants or users who are using it. All that becomes part and parcel of enabling customers to consume AI, and that's what this is all about.
John Furrier
>> All right.
Alison Kosik
>> Nutanix, I have to ask this. I'm just curious what you're thinking. Nutanix helped define what hyperconverged infrastructure is.
Rajiv Ramaswami
>> Yes.
Alison Kosik
>> What do you want the company to be known for in the next five years?
Rajiv Ramaswami
>> Yeah, we truly want to be the platform company where all applications run, right? Today's applications, tomorrow's applications, in this new AI world, we want to become this platform of choice for our customers all around the world.
Alison Kosik
>> Okay.
John Furrier
>> All right.
Alison Kosik
>> Rajiv Ramaswami, thank you so much for joining us on theCUBE. It was fantastic.
John Furrier
>> Congratulations.
Rajiv Ramaswami
>> Great to be here. Thank you all.
Alison Kosik
>> Thank you. Thanks for watching theCUBE. I'll see you next time.