Ajay Mungara, senior director of developer products and ecosystem for data center and AI at Intel Corp., and Chris Branch, AI strategy sales manager at Intel, join theCUBE’s Savannah Peterson and Dave Vellante at Dell Technologies World 2025 to explore the evolution of enterprise AI. The discussion focuses on simplifying AI deployment at scale while preserving flexibility across on-premises environments.
Mungara highlights the challenge of moving AI from prototype to production, emphasizing Intel’s collaborative approach through the Open Platform for...
>> Good afternoon AI fans, and welcome back to Dell Tech. We're here in fabulous Las Vegas, Nevada. My name is Savannah Peterson, bringing home our three days of coverage here with Dave Vellante. Dave, what's the coolest thing you learned this week?
Dave Vellante
>> Big theme today is solutions for on-prem AI and making that real.
Savannah Peterson
>> Yeah, it is. It is. And I think we couldn't have better people to talk about that than Ajay and Chris. Thank you so much for being here, guys.
Dave Vellante
>> Hey, guys.
Ajay Mungara
>> Thank you.
Chris Branch
>> Thank you for the time.
Savannah Peterson
>> So we've got the Intel booth right behind us, and I was personally quite delighted when I saw you brought back the Intel inside branding. It's actually on our Dell AI PCs right now too. What's going on at Intel? Ajay, I'll start with you. It's an exciting time.
Ajay Mungara
>> It's definitely an exciting time for us. And what we are really focused on right now is there is a lot of hardware. There is a lot of AI talk. And everywhere you go here, everybody's an AI company. Every use case is an AI use case. Everybody's talking about digital assistants, agentic workflows, all of it. But making AI real is very simple and very complex at the same time. Very simple because to start an AI prototype, it will probably take you an hour to get going and develop something like a ChatGPT application, very quick, very easy. But when you have to deploy that at scale, whether on the cloud, in the enterprise, or hybrid, it gets really complex. You need to worry about-
Savannah Peterson
>> Yes it does.
Ajay Mungara
>> All of the performance. You need to worry about the optimizations. You've got to worry about how do I scale the compute, the Kubernetes layer that comes with it, the vLLM layer that does the inference scaling? You need to worry about how do I take this model and run it in the most optimized way? What's the TCO for it? So what we really are trying to do is take that simplicity of starting an AI prototype and bring that same type of model on-prem or on the cloud by abstracting all of that complexity and by providing these API endpoints, which are more standard API endpoints like the OpenAI API endpoints or Llama API endpoints. That way your customer can start a prototype and take that prototype and scale it on-prem at production scale. So that's what we are trying to do.
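The prototype-to-production pattern described here, writing the application once against an OpenAI-style API and only re-pointing the endpoint, can be sketched in a few lines. The URLs, server, and model name below are illustrative placeholders, not products named in the interview:

```python
# Sketch of the pattern described above: the application is written once
# against an OpenAI-compatible API, and only the base URL changes between
# a cloud prototype and on-prem production. URLs and the model name are
# illustrative placeholders.

ENDPOINTS = {
    "cloud-prototype": "https://api.openai.com/v1",
    "on-prem-production": "http://inference.internal:8000/v1",  # e.g. a vLLM server
}

def chat_request(deployment: str, prompt: str) -> dict:
    """Build the same OpenAI-style request regardless of where it will run."""
    return {
        "url": ENDPOINTS[deployment] + "/chat/completions",
        "body": {
            "model": "llama-3",  # placeholder model name
            "messages": [{"role": "user", "content": prompt}],
        },
    }

# The request body is identical; only the endpoint differs.
proto = chat_request("cloud-prototype", "Summarize this quarter's results.")
prod = chat_request("on-prem-production", "Summarize this quarter's results.")
assert proto["body"] == prod["body"]
```

Because the request shape is the standard one, the same application code can target a cloud service during the POC and an internal inference server in production.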
Savannah Peterson
>> That's not a simple task. What are some of the challenges that you are actively overcoming right now as you're going through this, Chris?
Chris Branch
>> Yeah, good question. When we talk to a lot of the customers that we're seeing here at Dell Tech World, we see kind of a bifurcation. We've seen some customers that are looking to build their own models and do major work, and then we see other customers like Lowe's during the day one keynote that simply wanted to use models and then impact their business in a very tangible way.
Savannah Peterson
>> Definitely.
Chris Branch
>> The difficulty there is that they might start on OpenAI with ChatGPT, get a POC going, but when they talk about how do I actually scale that into production, then all of the challenges Ajay had mentioned come into play. Do I really need to buy a large amount of infrastructure? What software stacks do I need to run? Am I going to have vendor lock-in? How do I do data management? What about security? All of these factors then pile together in creating an overwhelming situation, and that's what we're looking to solve and simplify.
Dave Vellante
>> So the Open Platform for Enterprise AI, the acronym is OPEA. How do you say that?
Ajay Mungara
>> OPEA.
Dave Vellante
>> OPEA?
Ajay Mungara
>> Yeah.
Dave Vellante
>> OPEA. Okay. What exactly... Well, you announced it, what, a year ago?
Ajay Mungara
>> Yeah.
Dave Vellante
>> What exactly is it? What's the tech behind it? And what does it do?
Ajay Mungara
>> So we have over 50 partners now at OPEA. AMD is a contributor and active member of the TSC there, along with Intel and many other companies. So what we were trying to do with OPEA is create those solutions, the GenAI solutions, the GenAI infrastructure stacks. And how do you evaluate the performance of your GenAI solution end-to-end? Okay, what's the infrastructure that you really need? And how do we do it in a very open way? And if you look at the Intel AI enterprise inference solution that we talked about, or the RAG solution that we are talking about, all of it is part of the OPEA open-source community. So anybody can pick up that code. They can start an example. They can take all the infrastructure components of microservices that we have, and they're able to quickly deploy enterprise solutions at scale. That was the whole idea behind it. And so it's cross-platform. It works on Intel. It works on any other platform. It works on Xeon. It works on Gaudi. It works in the cloud. It works on-prem. So the whole idea is how do we change the conversation from infrastructure elements or microservices or frameworks or models to the actual solution outcomes that the customers are looking for? How do I do an agentic workflow? How do I do a chat Q&A? How do I build content summarization or code generation type use cases? And what are the real things that you need? And compute is way below there, right? You need the compute, of course, for everything to happen, but you have all those layers of value that come on top of it, and all of it done in a very open-source way. And that's really what OPEA is for us.
Dave Vellante
>> So it's an abstraction layer. You've open-sourced it.
Ajay Mungara
>> Yes.
Dave Vellante
>> Obviously, you guys are committers. What's the uptake been? What's the contributions been? Where are people focusing?
Ajay Mungara
>> Yeah, we are getting a lot of contributions from the community today. We are getting contributions from AMD. We are getting contributions from Neo4j. We are getting contributions from Infosys. A lot of SI partners are coming in and contributing. And our recent member, NetApp, which is also contributing significantly, they launched this AI mini pod recently, which brings AI closer to the data. Because there are only two ways, right? AI needs data. You can take the data to the AI in the cloud, or you bring the AI on-prem, closer to the data that you already have as an enterprise. So all of these products are really launching at scale now, leveraging the microservices that OPEA offers. And the enterprise inference stack and the enterprise RAG stack that we have is also secure. It's validated. It's scalable. It takes into account all the privacy concerns that you may have inside an enterprise, all of that abstracted from your different hardware layers.
Dave Vellante
>> I am excited to hear Neo4j in there. So you've got a graph database capability, which can help us harmonize the data and prepare for agentic.
Chris Branch
>> Yes.
Savannah Peterson
>> Y'all have been partners with Dell forever, since the dawn of time basically. How does that partnership play a key role right now in making sure that you're able to keep up with the velocity of everything? Chris, I'll start with you.
Chris Branch
>> Yeah, absolutely. So we've been a partner with Dell for... You said it yourself... forever, right?
Savannah Peterson
>> Yeah.
Chris Branch
>> And going back, Xeon and CPUs have been a vital component of our partnership for a long, long time. And as Intel gets into AI and advances some of the AI work and accelerator work that we're doing, we're developing things like the AI factory, which is a multi-node cluster. We're putting together solutions and services to go with that. Then individual systems, exactly what Ajay had mentioned, trying to make it simple for customers to have inferencing in a box, something that they can adopt readily. And then on the AI PC side, we're going all the way to the edge, where end customers are able to adopt these systems. So you see Intel and Dell working together to impact customers' lives from the laptops that you're using all the way back into the cloud and everything in between.
Savannah Peterson
>> Yeah, yeah. It makes a lot of sense. Ajay, when you talk about the open source community, we're big open source fans over here, but why is this so important right now during this moment?
Ajay Mungara
>> See, it's very, very important because AI is evolving so fast. Every layer of the stack is evolving really fast, right? And with speed comes some risk, because if you do things in the open, then people can point out problems that you may not have discovered yourself. At every layer of the stack, the more that you go open, the more public you are about the code, the more you're out there saying, "Look, everything that I'm doing is out here," and if a customer likes the code and they find some issues or want to extend that open-source code, that's where they can add their own value on top of it. So we have to enable all of the ecosystem to add their own value at different layers of the stack, different services, different components. And if you're not open source, sooner or later it'll become very difficult for you to sustain those commercial stacks, right? And there is value, right? I mean, you take the base core open-source value, and on top of that you add something unique to it. But that uniqueness will not last for too long, because there is some other open-source community out there that's already creating it in the open, right? So you're stitching all of that together, packaging it and making it available to your customers at a very rapid pace.
Dave Vellante
>> So I want to pick up on something you said about every layer of the stack is changing. You guys know a lot about stacks, kind of built the stack of the last era. So you obviously compute IO, networking, and then software on top. How are you thinking about this new parallel era? And what does it mean for, two things, one is the internal plumbing, the architecture, but also the partnerships and the go-to-market strategy? How is that evolving to keep up with the AI pace?
Ajay Mungara
>> It's a great question, in fact, right? See, because those are the choices that you have to make. And you have to give that choice at every layer in the stack. Say, for example, the open inference stack that we have works with OpenShift AI. It works with standard open-source Kubernetes. It works with Anyscale Ray. So whatever you want, whatever choice the enterprises have already made, your stack and the layers on top, like your AI layers, your inference scaling layers, your digital assistants, all of those layers, the security layers like key management, those layers need to bolt on to your existing infrastructure. You cannot come in and say, "Hey, look, it's a vanilla system. You're going to start from scratch, and I'm going to give you everything." That's not going to work. Because every company has modified the innovations happening at every layer of the stack, and they've made it theirs, and now you're telling them, okay, now how do I bolt these AI solutions onto the existing infrastructure that you're managing? And that flexibility is what you need to bring to these enterprises. See, or you can have a completely closed-source model, and you can say, "Use everything mine or nothing." It may work for some cases, but not everywhere.
Dave Vellante
>> So, Chris, how has that evolved? You were thinking around go to market and partnerships and just the overall strategy.
Chris Branch
>> Yeah, you're exactly right. What we're talking about with customers though is that while there's a lot of companies that enjoy geeking out and talking about speeds and feeds and underlying network topologies, most really don't. They want to solve a problem.
Savannah Peterson
>> Right.
Chris Branch
>> Right?
Savannah Peterson
>> Solution-driven activity, yeah.
Chris Branch
>> Yeah, exactly. So how is this going to impact my business outcome? And then how can I get there very quickly? So while we enjoy talking about all the technology behind it, you're exactly right, some of those partnerships, ISV partnerships and OSV partnerships, are really critical for us to make it easier for our customers to do the adoption. But then simultaneously, within the hardware architecture, what we want to do is make it easier to implement, more stable, with reduced supply chain risk. So we're also doing things that are... We talked about having multiple vendors. So if we can abstract, if we can make it simple, then it gives customers choice and reduces their risk. And if we can reduce our reliance on specialized networking and use open standards for networking, for example, then we can focus on providing customers choice without specialized components that increase that risk. Again, simplicity, less risk, business outcomes.
Savannah Peterson
>> Speaking of simplicity, APIs are having a bit of a moment right now with agentic. Talk to me about what y'all are doing for API endpoint security.
Chris Branch
>> Oh, man, you talking about security?
Savannah Peterson
>> Yeah.
Chris Branch
>> Security specifically, let me tell you about what-
Savannah Peterson
>> If you're that excited.
Chris Branch
>> I'm going first, and I'm excited about it. I want to tell you briefly what we're doing just in the demo in our booth. And the reason why I'm excited about agentic and this API endpoint thing is because in the past everybody had to develop on the silicon itself, and it was complicated. It took forever. But with the agentic workflow combined with APIs, what you can do is have a dashboard that runs multiple models simultaneously. So if you're integrating with a CRM or a Jira tool or something like this, you're going back and accessing data or getting information from the web, you have a chat interface, you're analyzing images, all of those different things might take different models, but also different infrastructure to run those models. What that agentic workflow with these APIs allows is for companies to run those on different systems at different times in different locations without changing any of their code. Now, what you-
Savannah Peterson
>> That's a big deal.
Chris Branch
>> Exactly.
Ajay Mungara
>> Exactly.
Chris Branch
>> So now that with-
Savannah Peterson
>> I see why you're excited.
Chris Branch
>> Exactly. So with that implementation, what we have in the booth is an HR implementation as well as defect detection. And all of the assets and all the components with IT are actually running on different infrastructure. And we can switch the infrastructure it's running on, cloud, on-prem, NVIDIA, Intel, with a click of a button.
Ajay Mungara
>> And without changing any code.
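A minimal sketch of the routing idea described above: each task type maps to a model and backend through one dispatch table, so moving a workload is a configuration change rather than a code change. All model names and backends here are made up for illustration, not the actual demo's components:

```python
# Toy dispatch table for the agentic-workflow idea above: each task type
# routes to a model and backend through one uniform call site, so the
# application never names a backend directly. All names are illustrative.

ROUTES = {
    "chat": {"model": "llama-3-70b", "backend": "on-prem-xeon"},
    "image-analysis": {"model": "vision-model", "backend": "cloud-gpu"},
    "crm-lookup": {"model": "small-tool-model", "backend": "edge-box"},
}

def dispatch(task_type: str, payload: str) -> str:
    """Route a task; a real system would POST to the backend's
    OpenAI-compatible endpoint rather than format a string."""
    route = ROUTES[task_type]
    return f"{route['model']}@{route['backend']}: {payload}"

# Re-pointing a workload is a one-line config change, not a code change:
ROUTES["chat"]["backend"] = "cloud-gaudi"
assert dispatch("chat", "hello") == "llama-3-70b@cloud-gaudi: hello"
```

The application only ever calls `dispatch`, which is why the backend under each workload can be swapped "with a click of a button" without touching the code that uses it.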
Dave Vellante
>> Yeah, that's sweet.
Ajay Mungara
>> Because as an enterprise-
Savannah Peterson
>> That's awesome.
Ajay Mungara
>> You don't want to maintain different code bases, because whether you are running on Xeon on-prem, on Xeon in the cloud, or on Gaudi or any infrastructure, you don't want to change code. You just want to maintain your application, and you want to switch where your API is going to go talk to. And that is why standards are important. And I'm telling you, the next era is going to be... The standards that we are seeing today, the Llama API, the OpenAI API for inference, you're having MCP, you're having agent-to-agent, these standards become so important so that the innovation can blossom in so many different ways-
Savannah Peterson
>> Oh, yeah.
Ajay Mungara
>> So that the users can take advantage of it, so that the agent built by one company can talk to an agent built on another company. They can collaborate together to get you the best outcome. How cool is that? Right?
Savannah Peterson
>> Oh, it's super cool.
Ajay Mungara
>> Without that standard, this agent can't even discover the existence of another agent. So it doesn't even know what to do. So if you have those standards, then it really enables innovation to blossom. A thousand flowers can bloom everywhere, and you just have to stitch it together to get your best enterprise outcomes. And that time-to-value you will only get if you have that level of standards. And as Intel, we are embracing all of these standards. We don't know which one will really take off, which one will become the thing. See, in the 2018, 2019 timeframe, it was OpenAI, ChatGPT, the OpenAI APIs, and now it's the de facto standard for inference. Today we're talking about agent-to-agent, MCP, Llama. We don't know which one is going to finally become the de facto standard. But even if it is a shortlist of one or two or three, it's still manageable, but I think the industry will converge to one sooner or later.
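The discovery point above can be illustrated with a toy registry: agents from different vendors can find each other only because they publish the same metadata shape. The "card" format below is a stand-in sketch, not the actual MCP or agent-to-agent protocol:

```python
# Toy agent registry illustrating the discovery point above: agents from
# different vendors can find each other only because they publish the same
# "card" shape. This format is a stand-in, not a real protocol.

REGISTRY: list[dict] = []

def publish(name: str, skills: list[str]) -> None:
    """An agent announces itself with a standard-shaped card."""
    REGISTRY.append({"name": name, "skills": skills})

def discover(skill: str) -> list[str]:
    """Any agent can find peers by skill because the card shape is shared."""
    return [card["name"] for card in REGISTRY if skill in card["skills"]]

publish("vendor-a-summarizer", ["summarize"])
publish("vendor-b-scheduler", ["schedule", "summarize"])
assert discover("summarize") == ["vendor-a-summarizer", "vendor-b-scheduler"]
```

Without the shared card shape, each vendor would need bespoke integration code per peer; with it, discovery and collaboration fall out of the standard itself.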
Savannah Peterson
>> I think you're absolutely right about that. And standardization, not the sexiest term in the world right now, but it's honestly one of the more vital building blocks of what's going to happen and the agentic success that companies are going to realize.
Dave Vellante
>> And enable scale.
Savannah Peterson
>> Exactly. It won't happen without that. So it has to happen. I'm curious, one final question for you both. When we're hanging out at Dell Tech World 2026, what do you hope to be able to say then that you can't say today? Ajay, I'll start with you.
Ajay Mungara
>> What we will be able to say is how we made all of these great AI POCs, proof of concepts or prototypes, actually scale to production. How are they actually getting the benefits of AI? We can talk technology all day long, right? I can talk about, oh, the world is moving to agents, the world is doing this, but if the customers, if the AI outcomes... See, in 2007, 2008, everything was a smartphone, right? Everything was a smartphone app. In the 2000s, everything was a website. Now nobody talks about starting a business without a website, right?
Savannah Peterson
>> Right.
Ajay Mungara
>> Nobody talks about starting a business without a smartphone app. Now, next year, and pretty soon, if you don't have AI as an integral part of your company, of your product, of your service, it becomes so much so that you're not going to have a conversation. So here we are talking AI everywhere. Now, tomorrow AI will become an integral part of everything that every company does. And that's what I think, I'm hoping, that we will start having those conversations. Why are you not using AI, right?
Savannah Peterson
>> Yeah. I love it, Ajay. What about you, Chris?
Chris Branch
>> There's something I'm hoping not to hear, which is that AI is hard. We consistently hear that. Every presentation I go to literally starts with the first slide, "AI is hard. It's complicated." And I don't want to hear that anymore. And I think that's the vision and the journey that we're on. We want to solve the hard problem, so we can get to the point where we're saying we are impacting business outcomes, we're solving with agents, we have robots, we have all of the fun, interesting things that we want to go do out in the world, and the back-end infrastructure is not a limitation or a complexity that we're dealing with. We want it to be transparent, easy to use, and a lot more enjoyable for us to go solve real-world problems rather than worrying about how we're going to implement them.
Savannah Peterson
>> I love that answer.
Dave Vellante
>> Yeah, me too.
Savannah Peterson
>> Great answer.
Dave Vellante
>> AI is just here. AI is just here.
Savannah Peterson
>> Well, Intel inside, AI is here. I mean, that's definitely how it's going to be. Ajay and Chris, thank you so much for taking the time.
Chris Branch
>> Thank you both.
Ajay Mungara
>> Thank you so much.
Dave Vellante
>> Thank you guys.
Ajay Mungara
>> Thank you both.
Savannah Peterson
>> Thank you.
Ajay Mungara
>> It's a great conversation.
Savannah Peterson
>> And thank all of you for tuning in to our three days of live commentary at Dell Tech World in Las Vegas, Nevada. My name's Savannah Peterson. You're watching theCUBE, the leading source for enterprise tech news.