At Nutanix .NEXT, theCUBE’s John Furrier and Bob Laliberte sit down with Tarkan Maner, CCO of Nutanix, and Kevin Deierling, SVP of networking and storage at Nvidia, for a powerful discussion on the next chapter of AI and infrastructure. Together, they unpack how today’s AI evolution demands strategic collaboration, seamless integration and speed at scale.
Tarkan Maner shares how Nutanix is helping enterprises build intelligent, agile operations with AI at the core. He outlines the company’s blueprint-driven approach to secure, scalable deployment, designed to boost productivity and transform customer experiences. Kevin Deierling adds Nvidia’s perspective, emphasizing the foundational role of high-performance networking and data acceleration in supporting AI workloads.
Both leaders agree: the future belongs to organizations that adopt AI boldly and efficiently. From customer-facing apps to infrastructure layers, Deierling and Maner explain how their companies are aligning to remove friction and deliver end-to-end enterprise AI solutions. It’s a call to action for businesses ready to lead with intelligence.
>> Welcome back everyone, to theCUBE's live coverage
here in Washington DC for Nutanix .NEXT 2025. I'm John Furrier, host of
theCUBE with Bob Laliberte, Paul Nashawaty, who's out
there getting briefed, he's getting all the data for you. We have all the great people
coming here, telling the story of the future of AI,
future infrastructure. We got two industry leaders,
entrepreneurs here on theCUBE, Tarkan Maner, Chief
Commercial Officer of Nutanix. Great to see you, Tarkan. >> Good to see you.
- Kevin Deierling, SVP of Networking >> and Storage at Nvidia,
formerly at Mellanox.
>> A key linchpin for the
success of what we now see as the super clusters and the AI factories, both CUBE alumni. Guys, thanks for coming on, both industry leaders,
thanks for coming on.
>> Thanks so much, great to be here. >> Guys, 15 years of theCUBE, >> I can't say I'm more excited right now because the energy at
the infrastructure level, the change over there, the
middle layer called cloud native, called distributed computing, and then of course the agent hype,
which some are saying is going to be like 100X SaaS. SaaS was easy, you just build
an app, you put it on your PC, you put it in the cloud, put
it in the app store, boom. Not so much, the complexity
with AI isn't that easy. So the startups are kind
of born in the enterprise, all the hot consumer, even consumers. So there's a whole nother level
of builder entrepreneurship, enterprises are transforming. This is a big part of the
Nutanix value proposition. So guys, what's your reaction to that? Obviously, GTC, I observe
that, from the founders, from the innovators, what are they looking at, what's it like? >> Yeah, I think AI is going
to transform every business, it is completely transformative. People ask me, should they be
afraid of AI? I'm like, "No. Only thing to fear is if you're
not embracing it quickly." Because it is so amazing, we see massive productivity
gains from AI across all lines of businesses, so it's really important. We have a great partner here with Nutanix. When we started working
on our AI data platform and all of our Nvidia AI Enterprise and NIMs, Jensen said, "Well, listen, Nvidia makes things fast, we have partners to make things easy." And I love that because Nutanix is a great example that makes things simple,
secure and scalable. >> 100%. Tarkan, you're an entrepreneur, you're also an investor,
you're also a leader at Nutanix. You've seen the waves
before. What is your take? Because you are doing a lot of things right now.
What's your perspective?
>> Look, first of all,
super exciting to be here. Thank you for being here
and obviously, Kevin, my brother from a different
mother is here next to me. We're going to do a
lot of things together. As you saw, some of these
announcements have been made. A big one is obviously
enterprise AI focus. You asked a question about the fear. We're in Washington DC, FDR almost 80 years ago said it, right? "The only thing to fear is fear itself." Let's not fear it, right? Having said all this, I think
the opportunity is immense. Beyond all the gobbledygook, I'm
going to make it very clear. All the enterprises we talked to, and you saw some of these
companies today, Tractor Supply, with Micron, with Moody's on
the stage, they're all going with enterprise AI, and the goal is to give
better customer service, better operations, better
security, better manageability, and give more profit to their customers. At the end of the day,
this is a huge opportunity. The entire market is changing and we're really, really excited to be in the forefront
of this with Nvidia. >> Yeah, so one of the things
I wanted to touch upon,
>> because I do a lot of research, hear what people are coming back
and saying about AI adoption, and it's not so much about
maybe there is some fear, but it's also they don't
know what first step to take. And so a lot of them are
looking for blueprints and things like that to
help them accelerate those enterprise AI adoptions. What are the two of you
doing together to ensure that when you're launching
your enterprise AI with Nvidia components, that
it's a really tight solution that organizations are
able to rapidly deploy?
>> You want to help take that first? >> Sure, I'll start. I think
you said the word blueprints.
>> Blueprints is how we build agentic flows. So you have now AI as it used to be, you would ask a question
and you'd get an answer. One-shot, foundational
models for inferencing. Today, inferencing is
much more interesting and effective, so it's agentic AI. We have AIs talking to
AIs, talking to AIs, we have reasoning models. And putting those together,
we call that a blueprint. And then packaging that and delivering it to a customer so that they can start using
it, really, the key there is to take lines of business,
whatever business you're in, take your domain expertise and layer it on top of that platform. And so we build blueprints. We've got a partner here
that can pull it all together with all of the necessary
software orchestration, bringing the data close, making it secure and scalable and simple.
>> Yeah, just tying to this, all the blueprint work has been fantastic because it makes it real for the customers to apply the right applications
to the right workloads. Give an example, we are
working with law firms. They are crazy about document management, but intelligent document management. Today, all these people
are working on this search, managing documents. Guess what? Now with specific
blueprints for law firms, with our compute, storage and data management, the Kubernetes layer will
deliver the right blueprint with Nvidia to deliver applications to support those law firms to do this. This is now in financial services. And multiple use cases, multiple workloads in multiple
verticals are becoming a customer space for this exact blueprint-centric deployment and delivery.
>> Yeah, that's so important. I remember a year ago I was at
an event and the customer was up on stage, talking about
the work that they were doing and putting their model together, and at the end of it, they
said, "We hit 3% efficacy." And everyone was kind of
looking at him like, "What?" And he's like, "But that's
okay because we got started. We're happy with that because
it was our starting point. We've learned so much
that we now know we're going to be able to ramp this up." They'll get there quickly. I think the thing that I like most about what
you're talking about is that will enable organizations
to get to a high level of efficacy very quickly without
having to hit those hurdles that those original deployers did.
>> Yeah, I think we call it recall. When you look at a vector database, you're doing a similarity search. We're running RAG, retrieval
augmented generation. You have an enormous amount of documents. I couldn't even imagine reading a 10th of the documents at Nvidia. We want to read all those documents, let an AI summarize things for us, be able to query against it and get responses. We need accuracy, we need
to get the right results. We have guardrails to do that,
that's part of our blueprint, is to make sure that
you're seeing the data that is safe and effective. And on top of that, we need security. And this is where it's great
with a partner like Nutanix. They've got their entire storage and unified storage platform. All of the securities and access controls that are built into that can flow into the AI workflows and the blueprint so that
when you look up something, you're seeing only the data
that you're allowed to see and querying against that data. So making it all simple so that you can be very, very effective. We're going to see 90% plus
recall where you're going to get accurate information
that's secure and simple.
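The retrieval loop Deierling describes, a similarity search over a vector database with recall as the accuracy measure, can be sketched in a few lines of plain Python. The helper names and the toy three-dimensional "embeddings" below are hypothetical, for illustration only:

```python
# Minimal sketch of vector retrieval with a recall metric.
# Real systems embed text with a model and search a vector database;
# the document IDs and 3-d vectors here are made-up toy data.

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

def top_k(query, docs, k=2):
    # Rank stored document vectors by similarity to the query vector.
    ranked = sorted(docs, key=lambda d: cosine(query, d["vec"]), reverse=True)
    return [d["id"] for d in ranked[:k]]

def recall_at_k(retrieved, relevant):
    # Fraction of the truly relevant documents the search returned.
    return len(set(retrieved) & set(relevant)) / len(relevant)

docs = [
    {"id": "contract_a", "vec": [0.9, 0.1, 0.0]},
    {"id": "contract_b", "vec": [0.8, 0.2, 0.1]},
    {"id": "memo_c",     "vec": [0.0, 0.1, 0.9]},
]
hits = top_k([1.0, 0.0, 0.0], docs, k=2)
print(hits, recall_at_k(hits, ["contract_a", "contract_b"]))
```

At production scale the embeddings come from a model and the search runs inside a vector database with guardrails and access controls layered on top, but the "90% plus recall" goal Deierling mentions is measured by the same idea.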
>> 100%. >> I asked Debo, your AI Chief
Scientist, about AI factories, >> and he's like, "Oh," because he grew up in a family where factories, he was a lot of exposure. He's like, "Oh, factories, you have input and output, data in, tokens out. " So he oversimplified but he's not wrong. I mean, technically there's
a lot going on in the system. So I have to ask you guys, because as the platform story continues to accelerate in the industry,
you guys have a great story here talking at Nutanix, the AI factory is becoming a
very hot topic in the largest enterprises and the hyperscalers. Every single enterprise is
going to look at this idea of enabling this next generation. So the question is how does
it actually run at scale? Blackwell, big time announcements at GTC. We had a little side
conversation around Dynamo and KV cache. There's a lot of nuance to
this, so it's platform specific, but if you're going to have
a platform like Nutanix and a partner like Nvidia,
how do you run this stuff? What are some of the core things
people should think about? Because the blueprints are
going to get up and running.
>> 100%. - Got the blueprints,
we're off to the races. >> Not reference architecture, blueprint. We'll come back to that later. What is going on, how
do you run these things? What is Dynamo? What's
the performance look like? How much is it going to cost? Are you bending the cost curve
in favor of the customer? These are the questions.
>> Let me give you a Nutanix perspective because a lot of this
stuff is running on Nvidia infrastructure, software/hardware
with our software. You saw some of the examples today. We're talking about
thousands of nodes, hundreds of thousands of cores transforming and getting ready for these
AI specific workloads. Moving from gen AI into now, agent AI. And now we totally agree with Jensen and with Kevin, this
is a 100X opportunity. This is just the first inning. You saw some of the
customer examples today. With one of our financial services customers, we're now doing a fraud detection
application, all agentic. The entire application is going
to run on thousands of cores and the entire deployment is through an agentic model
running on our security manageability automation capabilities. And the key thing, to your point, John, you are spot on, scale. Can we scale this to a new level? And that's what we're seeing right now. At the end of the day, enterprise AI, overall framework coming
from Nvidia and opens up all the
doors for us, not only with one vertical and one use case, but on multiple verticals. From fraud detection and security to intelligent document
management to customer service, the opportunities are limitless.
>> Yeah. What's great about all
of this, we're talking about that scale, people thought
that the scaling was over with the training and that's it. We're seeing test time
scaling now, inference scaling with agentic workflows. And to do that effectively, you need a really effective platform where networking is critical to that. And so I came in through an acquisition that Nvidia made five
years ago of Mellanox, which was the networking company. The great thing is we've
been working with Nutanix for over a decade and they've
actually deployed all of that high performance
networking, the RDMA technology, the connection to the storage, all of that is really the substrate, the foundation upon which
you can scale out AI. That's the reason Jensen saw before anybody else that
networking was going to be super important for AI. It's great that we have
all of that history. They're really in a
unique position in terms of providing the flexibility. AI is changing, I can't tell you how fast. I can barely keep up with my inbox and things come out like
the new DeepSeek multi-latent attention models, really
changed everything in terms of how we go about it. It's a fantastic development for us, but you need the flexibility to do that. What I love about this
platform with software-defined everything, you can
have more compute, more storage, you network everything together, it just scales out beautifully. >> Would you describe the
partnership as tightly coupled? >> Is it at an engineering level,
so there's cross-pollination? >> Absolutely.
- What are the >> areas that people should know about? Because I think this is a
differentiator for you guys.
>> Great question. Look, this is a complete tight product and go-to-market partnership. In upcoming days and weeks and months, you're
going to hear more about the things that we're working on. We did the first foray today with our new Nutanix Enterprise AI version Two. Version One was gen AI
specific; version Two, which we announced today with Nvidia,
is agentic specific. And one key thing also to this
is the complete ecosystem. We have about 86 partners here today and all our OEM partners
from Dell to Lenovo to HPE, to Supermicro to Cisco,
they're part of the ecosystem. We're doing new work right
now with Fujitsu in Japan, specifically for Japanese market. All of those things are
our go-to-market modality with the right OEM partners,
right GSI partners, SV partners, and critically, all the ISV partners providing those applications running on the system. So it's an ecosystem play end-
to-end, and this is your thing, John, you love ecosystems. >> I love ecosystems.
- And the great thing is your >> ecosystem is our ecosystem. >> Our partners are the same partners, so we're enabling this together to bring it out to the market. >> It's not siloed. It's
not a siloed relationship because you guys are tightly integrated. >> Absolutely, 100%.
- Awesome.
>> Yeah, the other thing I like >> and what I've heard as well, is >> that when we think about
AI factories, a lot of people think about just
the backend training data centers, but it's really
a lot more than that. Especially as the inferencing
plays out to the edge, there's going to be a lot of compute needs to be connected, all that stuff. That's where platforms can
play a huge role in driving operational efficiency
and scale and agility. I'm wondering if you could
touch upon that a little bit and about how you're enabling
not only the training environments, but the
inferencing and the fine-tuning and things like that, so the whole piece as a single platform?
>> Absolutely, very good point. Kevin kind of touched on this earlier, but a little bit from
a Nutanix standpoint, this is all about efficiency
at the end of the day. Look, we believe at Nutanix
with our ecosystem, passion, with passion comes practice. Because if you love what
you do, you do a lot. And as more practice,
we deliver more profit to our customers and
profit to our shareholders. We believe the new way of
doing things with Nvidia and their entire new strategy into their roadmap, into the future. And what we are doing, supporting them with the right security,
right systems underneath with storage management,
with network management, with compute management, our
goal is to make sure this is cost-efficient,
and that's a key thing. Because I don't want to
have customers say, "Oh, I spent millions of dollars on this. It's one-time usage and I
don't make money out of that, it's going to be not good." So if you're trying to reuse things, make sure it's efficient,
as Kevin talked about, from software to hardware to services for the right deployment, right outcomes. >> I would say I love the passion angle, but I would just also add technology. Dr. Rajiv, people don't
know he's a doctor.
>> Absolutely.
- He's got a PhD, he wrote a book on >> optical networking. >> Absolutely.
- They all have Stanford PhDs, >> so high technical IQ culture, Nvidia.
>> I mean, Jensen, who I've
had on theCUBE many times, he's the only CEO I've heard
on stage, on mainstream keynote to say the word computer
science six times. He's proud and loud about the fact that it's a computer science revolution. Passion, tech savvy, I mean
it's a good culture fit. >> Oh, absolutely. And the
great thing is you asked about where this is going. We're seeing a new scaling
law, which is going to kick in with physical AI. And AI is going to be
everywhere, it's going to be where the data is. And that data can be at the edge and we're going to have smaller, lower power inferencing machines there. They need to be very close if
you're talking about robotics in a factory or something. We're going to move up into enterprise, it's going to be on-prem. There's going to be co-lo and
there's going to be cloud. That entire landscape, it's
going to be across all of that. We can scale across that platform. That's the beautiful thing
about being software-defined, that we have hardware partners that really can be
anywhere on that spectrum, and it's going to be where the data is. There is data gravity. We're moving compute
to where the data is. That's part of what our AI data platform, which Nutanix is part of, is
putting the GPUs right next to the data so that we can do inferencing, we can do real-time embedding. >> I want to get your perspective. Bob and I talked about, he
heads up our research arm for the networking. Now, there's old-school
networking, there's new-school networking, kind
of a hybrid kind of class. I want to ask you, Kevin and
Tarkan, I want you to chime in. You guys both have expertise
in the tech industry. What's the big thing about networking that's going on right now? Because you look at
networking and even storage, because of the factory
architecture, the relationship of what they do changes and networking has always
been that area that we used to say, "Oh, we'll get to it sometime. " As a legend in networking and as infrastructure leaders, what is the big thing going
on with networking and storage and compute where we see that's clear? But those are the two
wild cards in the design.
>> I'll let my PhD answer
on the network side and I'll chime in. >> Dr. Kevin, go.
- With agentic AI,
>> you're actually accessing
a massive amount of data. >> And I think Jensen, five years ago or six years ago when he acquired
us, he said, "People think that a computer is this box
that you wrap sheet metal around that has a processor in it. That's not what a computer is." He said, "The data
center is the new computer. That is really what an AI computer is." And so that realization that you need data at AI scale, I mean, it's data center scale, it's massive, so you need very tight coupling. Really, when you have a computer that's at data center scale, the network is the new
backplane of that computer. And so being able to get very,
very high performance, again, this is why it's so important
that we have a 10 year history of innovation even before we were part of Nvidia to make that efficient. As you do agentic AI flows, you're going to be grabbing massive amounts of data. AI grows data and actually, binary data is much, much larger when it
becomes a vector database or a KV cache that we talked about. Doing that efficiently,
Dynamo you mentioned, is how we're going to move that data very, very efficiently between nodes. It's more important than
anyone ever realized, I think other than maybe Jensen. >> Tarkan, you've got the platform and Nutanix, you mentioned scale, check. What's going on in the platform? What does it mean for Nutanix customers and future customers of
this Nvidia relationship with all this networking,
re-architecting, get that performance, squeezing it out of it?
>> Look, tying to exactly
what you talked about, Kevin, around networking. Now think about this, we are
the granddaddies, grandmothers of networking, compute,
storage, all coming together. That's all the hyper-converged
story of the past. Now taking that to a platform level, you saw the Pure announcement today. Now getting into both block and file storage capabilities
with our storage partners, Pure is the first one we announced. Last year, we announced
obviously the Dell PowerFlex story, you talked about this. You heard about this. We extended that to our
networking partners as well. Now you have an end-to-end
story and you saw the theme. I don't want to sound like too much
marketing mumbo jumbo, but look, this event is
all about run anything anywhere, right? Throughout multiple
clouds on any platform. You mentioned Rajiv, you mentioned Jensen, EQ meets IQ, right? There are tons of stuff
on technology side, but the key thing also,
as I mentioned, apply that technology to do
right workloads for profit for our customers and partners. Having said all this, we
are tying this to Dell, we're tying this to Supermicro,
we're tying this to HPE, to Cisco, to Fujitsu, to
Lenovo, to all our OEM partners because we want to make
sure on product innovation and in go-to-market deployment, we have the entire market
working in harmony. That's the big thing from
this event as we move forward. >> Yeah, what I loved about
this is you mentioned HCI 10 minutes into our conversation
and virtualization. 10 years ago, that would've been it.
>> 100%. - But that's the
substrate that makes it all easy. >> So we're dropping AI workloads
into this beautiful substrate that you've built that's
all virtual machines. People can just plug it in,
containers now that you've got. >> It's the backplane. >> It's the backplane again. >> So we're the hardware backplane, they're the software
backplane, it all comes together.
>> I don't know, Kevin, but
my big takeaway from this conversation is networking's back. >> Yes, it is.
- It never left. >> All right, what's next
for the partnership?
>> You've got a lot of challenges,
obviously enterprise with NIMs, a lot of traction
in the ecosystem there. I love Nvidia, two sides
of the supply chain. They've got the ecosystem on one side and they've got the gear on the other, you guys got the platform. A lot of interest in NIMs
that's going to enable some of the RAG and early agentic. What's next for you guys in
the platform and enterprise?
>> Look, I'm going to
give you a tech answer and a business answer. On the tech, you saw some
of the announcements today. Big focus on multi-cloud
and the workloads supporting
that, like AI workloads, agent-centric workloads and making sure, again, run anything anywhere. Make it simple. Tying to this, look, from the partnership perspective,
I'm going to reach out to all of my customers and partners. It's all about customer-partner intimacy, making sure we work with
our partners very tightly. We have a partner exchange with 2,000 partners in the room
in about 10 minutes. We're going to share with
them how to make money, we're going to share with our customers how they can save money. At the same time,
solution differentiation. That's why all this network
compute storage work around Kubernetes, around
VMs, Kevin and Kevin's team. In the next few weeks, you're going to see more agentic story with our platform supporting
it from a security, manageability, availability,
reliability perspective at scale and making it obviously very safe because we don't want people to fear AI. We want to make sure that
folks are embracing AI. So those are some of the
things that we're going to focus on. And the thing is, none of
this is going to work unless we are specifically
providing operations with scale and excellence. That's why we're focusing
on customer support, customer service, customer delight. Keyword, outcomes. We want to make sure
people don't spend millions of dollars with no outcome. Outcomes are critically important to us. >> So you're the dashboard of the factory, >> but you also kind of got
the operational tooling. Would that be an oversimplification of it? >> Pretty much.
- The dials? What about you? >> Sovereign gen AI has come up.
>> How do you guys see
that as an opportunity? >> Yeah, so sovereign is huge. I think the key for the future for us is really all the verticals. You talked about financial services, but there's healthcare, there's manufacturing, there's engineering. We don't build solutions,
we build frameworks, and that's key, that
we offer those as NIMs. That's an Nvidia Inference Microservice. It's just a container that you drop in. It uses that whole substrate that we have. And what we want to do
is enable our partners. They don't need to become data
science experts, AI experts. They should use these tools and these frameworks to build that out. Sovereign is a great example,
so sovereign AI in countries around the world, they may
not have giant football-field-sized data centers, they have telcos. And so we're doing a lot of
federated communications. We're putting Telco in there because the first job of
a telco is 5G going to 6G. >> The data center.
- We run that on top of our GPUs >> as a software-defined service. >> And those same GPUs now and the whole AI platform
there can run all of these other different vertical
workloads for inferencing. Huge opportunity for
countries around the world. Why would you take your cultural
heritage, your language, and train that in some other location, put it into some big cloud? They're going to build
that on-prem, so fantastic- >> We got to cut, I wish we had more time, >> but the edge is a whole
other conversation. Talk about the long tail of size and scope of what a data center
looks like in terms of size, there's the monster data centers, and then hey, telephone
pole could be a data center. >> Absolutely.
- I mean, come on. Or a computer. >> Yeah, totally.
- Guys, thanks so much. >> Two industry legends, Tarkan, >> Kevin, thanks for coming on theCUBE again. >> Thanks for having us.
- Great to know you guys >> and thanks for being part
of theCUBE, appreciate it. >> Excellent.
- Thank you, two. >> All right, for Bob
Laliberte, I'm John Furrier.
>> You're watching theCUBE. Stay
with us, we'll be right back >> after this short break.