Jeff Aaron of HPE Networking participates in a conversation at MWC26 Barcelona on artificial intelligence (AI), edge on-ramps and hyper-converged edge architectures. Aaron brings deep networking and product marketing expertise and examines how networking, compute and storage converge at the edge to support AI workloads, the emergence of AI factories and routing use cases, and the practical implications of hyper-converged edge designs referenced in theCUBE Research coverage. He outlines HPE announcements such as the PTX12000, MX301, JCNR and ProLiant integrations and emphasizes self-driving AIOps and partnerships with NVIDIA. He argues that carriers need low-latency, lossless routing, compact edge on-ramps and unified management to scale AI to the edge. theCUBE analysts highlight the commercial opportunity for telcos and the operational value of a unified control plane.
Jeff Aaron, HPE Networking
In this interview from MWC Barcelona 2026, Jeff Aaron, vice president of product and solution marketing at HPE, joins theCUBE’s John Furrier to discuss how AI is revitalizing networking infrastructure and moving data-centric architectures to the edge. Aaron highlights how routing has evolved from simple "plumbing" into a strategic priority, enabling telecom operators to monetize new capabilities through AI factories and high-performance clustered systems. He explores the synergy between HPE and Juniper, emphasizing how the combined portfolio provides the ultr…
Keep Exploring
What is driving the trend toward hyper-converging storage, compute, networking and AI workloads at the edge, and how does this differ from previous hyper-convergence in the data center (including performance and operational implications)?
What are the main networking use cases for AI workloads, and how is the company addressing each of them (including its partnership with NVIDIA)?
What are the main focus areas and recent product announcements to support telcos moving inference to the edge, specifically around high-performance routing, autonomous AIOps, and integrations with third-party LLMs?
How has HPE embraced self-driving networking, and what effect have recent acquisitions had on its networking business?
>> Welcome back, everyone. I'm John Furrier, host of theCUBE with Dave Vellante, who's out on assignment right now, getting some action on the show floor. This is MWC live coverage, three days, and we're bringing you all the action. As AI comes into telecom operators' and carriers' infrastructure, and as enterprises want to leverage the edge, a lot of new capabilities are emerging. It's a super exciting show, as AI is infusing into data-centric architectures. Jeff Aaron's here. He's the vice president of product and solution marketing at HPE. Welcome to theCUBE. Good to see you.
Jeff Aaron
>> Yeah. Thanks, John. Yeah, it's good to see you again.
John Furrier
>> So, I love this show. I always loved MWC because I'm a networking nerd at heart. And I always said networking is always the last area to get innovated. But now, with AI factories, you're seeing the tokens and you see NVIDIA and all the success people are having with AI large language models, frontier models. It's inevitable the edge will have AI factories or high-performance, large-scale clustered systems. But networking always is the fabric that makes the connected systems work better. So, you bring in the far edge, the edge, you got wireless, wireline, you've got a lot of backhaul, but telecom has always been like sitting there waiting to take advantage. It's like hanging around the barbershop looking for a haircut. They missed the cloud. But I think with AI, they're perfectly positioned to provide the kinds of services and maintain their position. This seems to be the theme of the show.
Jeff Aaron
>> I think so.
John Furrier
>> Yeah, agents are being discussed, but the high level direction is telecom has an opportunity to leverage their position, have an architecture that's data-centric and monetize-
Jeff Aaron
>> That's correct. Yeah....
John Furrier
>> in new ways. What's your reaction to that?
Jeff Aaron
>> Yeah, I agree 1,000%. So, it's funny. We have multiple businesses, right? And one is our routing infrastructure business, and they keep joking that routing is sexy again. It used to be plumbing, but now it's very strategic for moving up the value chain, and a big reason for that is AI workloads. They're moving everywhere now. They have to move to the edge. And for them to move to the edge, you got to get them outside of the factory and to all the locations. And so, yeah, we're right in the core of that, and it's super exciting.
John Furrier
>> And the other thing too that I think positions you guys well is that the data coming from existing models to the edge is clearly going to be an interplay. And we've talked on theCUBE, certainly with you guys before, many times that data will move to the compute. If the compute and memory are at the edge, that's cool. But the edge is so big, it has to talk to other edge nodes. So again, routing concepts come in handy.
Jeff Aaron
>> Yeah, I think that's exactly right. So, if you look at the networking bit alone, there's four areas that come into play. There's scale out, scale across, scale up and edge on-ramp. So, two are within the data center, scale out and scale up, but scale across and edge on-ramp basically mean you got to figure out how to connect to those areas, and those are just networking. What you also touched on is, how do you take all those networking components, which we like to play in, and also combine them with the compute components, which HPE also likes to play in. So, this is why we're super excited about our position right now, because it's all coming together with networking and compute and all the different areas.
John Furrier
>> Yeah. I'm super excited. When you were coming up, before we came on camera, I was just publishing on LinkedIn; I published on theCUBE Research the new hyper-converged report I put out. And the thesis was the edge is going to hyper-converge. We've been doing theCUBE for 17 years; we've covered hyper-converged in the data center. Then, it became unconverged, but then you got the super factories. But talk about the importance of this hyper-converged direction, because that's going to change the game, just like it did in the data center with storage, compute and networking, but now you've got storage, compute, networking and AI workloads. So, talk about what's different now at the edge as you look at this trend toward collapsing all that capability and making it high performance.
Jeff Aaron
>> Yeah. So, I mean, a couple different things. One is we talked about performance. Now, more than ever, you got to figure out how to do ultra-low-latency lossless connectivity to the edge, right? And I mean, that was always important, but now I think especially to maximize the value of the GPU. The other thing I think that is often overlooked is just the operational aspect of that. So, if you do have your compute and your network together, how do you manage that as a system, instead of different disparate parts, in terms of provisioning and in terms of management, in terms of troubleshooting and even bringing that into a common AI engine to troubleshoot that. So, I think that those are some different areas that are evolving here.
John Furrier
>> Right. Paint the picture for the edge, because when people think about AI factories, they think about the NVIDIA systems: they're monster racks, they're dense, the memory's really close to the GPUs, you got all kinds of subsystems around interconnects. It's a very engineered, big thing. That's not always the case at the edge. Footprints are very diverse, from very low power to some power, maybe some threshold of higher power, but you're not going to have the ability to have a monster rack.
Jeff Aaron
>> No, you have footprint and power constraints, right? I mean, it's all constrained.
John Furrier
>> Compute and memory seem to be the core things, whether it's GPU or CPU or XPU. Talk about that dynamic and what that means to the distributed computing.
Jeff Aaron
>> Yeah, it's a great point. So, look at NVIDIA first, what you just mentioned. I think one of the things that's super interesting about HP networking and NVIDIA is that prior to the acquisition, Juniper didn't do much with NVIDIA, right? But now, that things are moving out to the edge and the core, we've actually established a partnership around routing, right? So, how do you do that edge on-ramp? How do you do that DCI, that factory-to-factory control, which is not an area that they primarily focus on, so they are looking for routing to do low-latency, low-loss infrastructure. So, that's an area where the partnership comes in. And so, to support that, exactly what you said, you need additional hardware and operations on our end. So, like our MX301 is a small platform, right? You got to stick that into these edge devices, fit within the footprint, be powered, or we come out with things like our JCNR, which is our containerized router, which runs on a ProLiant server. So, again, sometimes you don't even have the infrastructure or the space to even put a router in there, so it's got to be on your compute platform. So, these are some of the things we're looking at, right? How do you bring these together? How do you make it as dense as possible, as low power as possible? And then how do you bring in the ops? How do you tie that together?
John Furrier
>> I'm really glad you brought up NVIDIA. I'd love for you to clarify the relationship with NVIDIA, so you guys and networking. Because a lot of people think, "Oh, NVIDIA's got networking." And they do; within their systems, there's a lot of networking and LLM routing going on within that little system. Again, this clustered system, but that's not like classic routing. So, describe why this horizontal play, or routing specifically, and how that relates to the benefit of NVIDIA or-
Jeff Aaron
>> That's a great question. So, it comes back to what I mentioned earlier: the way we view it, there's four networking use cases for AI workloads, right? There's scale out, where switches talk to each other. There's scale up, where things talk to each other within the switch. And that's where NVIDIA Spectrum really plays, right? That's their primary market and that's where they're primarily going. But in addition to that, there's scale across, where the AI factories talk to each other, data centers talk to each other, which is traditional routing: big routers, very high buffers, very low loss, big, big iron there. And there's the edge on-ramp: how do you actually get it into the cloud? Which is more of our edge routers, inference routers. And so, that's where we started to partner with NVIDIA, going back to our HPE Discover in December, and there's more announcements that will be coming with these guys. But again, how do you take that partnership across different use cases, whether it's AI factories or AI grid, to focus more on D-RAN and those environments? It's a nice synergy there.
John Furrier
>> It's a great comment. I asked Jensen that at his GTC in DC, the recent one he had; the next one's coming up this month. I asked him, "What about AI factories at the edge and maybe metro?" He said there'll be many factories. He kind of answered it, but he allowed the dots to be connected by not answering it directly. He didn't say, "We'll have an NVIDIA factory at the edge," but what he was implying was that others will. So, you agree that you guys see AI factories at the edge, just not the big honking-
Jeff Aaron
>> We do. It'd be a different size, but inference in particular is moving to the edge. I mean, it's just a fact of life, right? And so, you need to figure out, whether it's an AI factory or some offshoot of that, how do you move it to the edge? How do you get that data there, how do you incorporate that back into the system and how do you manage it all as a single operational-
John Furrier
>> I mean, it's so obvious to me, distributed computing is a bunch of factories talking to each other, doing the kinds of policy that's not networking policy, but kind of the same. AI workloads have all kinds of different things going on. Just bringing the right model down. So, networking combined with AI-
Jeff Aaron
>> And compute, yeah....
John Furrier
>> and compute. Unpack that because I think that's what people tend to miss. It's classic networking, for sure. I'm moving packets around, but there's workloads in there too that have policy. So, policy is threading across-
Jeff Aaron
>> I think you're exactly right. I mean, from an HPE standpoint, for us, it's: where is the AI stored? Where is the AI served? And how do you move the AI? So, compute, storage and networking all contribute to how you do that. And the ultimate goal, the ultimate holy grail for a provider, is a common operational platform for that. So, whether it's GreenLake or OpsRamp, how do I pull all those things together so I have a unified view? If something goes wrong, which is it? Why did the model fail? And so, us being able to do that is key to what we deliver to the equation.
John Furrier
>> Jeff, I think you nailed the show in one sentence: bringing AI to the providers is really what's happening. Talk about some of the news you guys had, because you had the on-ramp. Explain some of the things that you're doing now here at the show and how that relates to the providers and the carriers having a model, architecturally and also as a business model.
Jeff Aaron
>> Yeah, thanks for asking that. So, there's a bunch of things that we're primarily focusing on. So, the first is, as we talked about, performance, right? Like we said, the telcos need performance as you move inference to the edge, as you scale between AI factories. And so, to that end, we announced some new routers. Our PTX12000 in particular, the 12000, is a big honking router. It's probably the highest-performing router on the market, right? 800-gig ports, 54 ports per line card, up to 32 line cards. So, we're talking big, massive type of infrastructure. Shortly before that, we also launched the MX301, so that's those edge on-ramps. So, we're providing the hardware to do that, which is super interesting. But on top of that, we're all about the self-driving network. How do you add the agentic, autonomous AIOps capabilities to that? So, to date, our routing infrastructure was already tied to our Mist AIOps, so you can do things like pulling the data and adding a conversational interface. And what we announced in addition to that is we also added an MCP server, so you can also work with a third-party LLM. So, if you're not using us, you want to use GPT to query your routing infrastructure, we announced that as well. And then, the third thing is we've added our JCNR, our containerized router, to our ProLiant platforms. So, it's just a validated solution that comes in and says, "Okay, for those smaller data centers where I can't put a router in there, let me look at HPE to bring it all together under a common platform."
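For readers unfamiliar with the MCP server Aaron mentions: the idea is that routing telemetry is exposed as callable tools that a third-party LLM can discover and invoke over a JSON-RPC-style protocol. Here is a minimal sketch of that pattern; the tool name, fields, and sample stats are hypothetical, and this is not HPE's actual API.

```python
# Minimal sketch of an MCP-style tool server exposing routing telemetry to an
# LLM client. Tool names and data are illustrative assumptions only.
import json

# Hypothetical telemetry the router management plane would supply.
FAKE_INTERFACE_STATS = {
    "et-0/0/0": {"speed": "800G", "in_errors": 0, "util_pct": 62.5},
    "et-0/0/1": {"speed": "800G", "in_errors": 14, "util_pct": 91.2},
}

TOOLS = {
    "get_interface_stats": {
        "description": "Return stats for one router interface",
        "handler": lambda args: FAKE_INTERFACE_STATS.get(
            args["interface"], {"error": "unknown interface"}
        ),
    }
}

def handle_request(request: dict) -> dict:
    """Dispatch a JSON-RPC-style request the way an MCP server would."""
    if request["method"] == "tools/list":
        # The LLM first discovers which tools exist.
        result = [{"name": n, "description": t["description"]}
                  for n, t in TOOLS.items()]
    elif request["method"] == "tools/call":
        # Then it calls a tool with structured arguments.
        tool = TOOLS[request["params"]["name"]]
        result = tool["handler"](request["params"]["arguments"])
    else:
        result = {"error": "unsupported method"}
    return {"jsonrpc": "2.0", "id": request.get("id"), "result": result}

# An LLM client asking "are there errors on et-0/0/1?" would issue:
resp = handle_request({
    "jsonrpc": "2.0", "id": 1, "method": "tools/call",
    "params": {"name": "get_interface_stats",
               "arguments": {"interface": "et-0/0/1"}},
})
print(json.dumps(resp["result"]))
```

In a real deployment the request would arrive over the MCP transport from the LLM host rather than as an in-process call; the dispatch shape is the point of the sketch.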
John Furrier
>> And that addresses the footprint, the ProLiant piece. So, I have limited form, small little cabinet. I remember the old days of the telephone closets, remember? But a lot of these footprints look like that.
Jeff Aaron
>> They do.
John Furrier
>> Small cabinets, I got some power.
Jeff Aaron
>> Absolutely right. You may not even be able to fit a 2RU PTX or 2RU MX, so you're dealing with even tighter constraints than that.
John Furrier
>> Talk about the agentic piece and how that relates to how you guys see the upgrade cycle. Because we're predicting, we haven't published the numbers yet, but looking at some of the big providers, North America alone is billions of dollars, maybe $300 billion roughly. One carrier is like close to $100 billion of refresh. Retrofitting either central offices or facilities. They have towers, they have power, they got connectivity, maybe a small cabinet or a big cabinet, but it's not a monster 20,000 gigawatt.
Jeff Aaron
>> Yeah, but even then, if you're talking about distributed cabinets, like one of the biggest examples that Juniper and Mist and HPE always did around agentic and self-driving was the elimination of truck rolls, right? So, now more than ever, if AT&T doesn't have to ship someone out to rural Iowa or downtown Menlo Park, that saves them time and money if the system can self-heal or self-configure or self-optimize, right? So, that's what the holy grail is for any operator, let alone a telco, that has a more distributed interface.
John Furrier
>> That's a great point, Jeff. Because look at the data centers that are being built, the CapEx. It's in the news all the time: billions of dollars, a state's going to build a mega data center, an AI factory. The skills shortage is huge. So, imagine the companies that have to go out and refresh this. There's grids involved, there's all kinds of multiple operating deployment factors. What's your response to that? Do you guys just plug and play, go?
Jeff Aaron
>> Yeah. I mean, that's the goal, right? I mean, certainly that's where standards, like MCP, come into play. That's where us being able to tie in with OpsRamp and GreenLake brings some of these systems together. Obviously, an all-HPE system has the ability to work a little bit better with an all-HPE system, but that's also why we establish partnerships with NVIDIA and other folks: to actually be able to come in and say, "Okay, these are the solutions you want to use. We're going to try to make this as easy as possible for you to operate it, configure it, and then optimize it going forward."
John Furrier
>> You mentioned earlier having one unified platform, the control plane at the edge, because you got the far edge, we're going to be talking about spectrum and licensed spectrum, wireless access, both on-premises and also out in the wild. It's important that those radios talk to each other and then have some backhaul, which is a factory and router and all the hardware. How does that control plane look? What's the strategy? Explain the Juniper-HPE combination now. What's the message to the providers out there that are watching and enterprises that want to get really high-end services?
Jeff Aaron
>> I mean, I think you answered it in that, look, the control plane has to be distributed. You need to be able to serve the data where it makes the most sense. And especially if the network goes down, it's got to be able to keep running and serving even if you can't access it from a management plane standpoint. But we are seeing more and more that the management plane wants to be centralized and needs to be unified, and it's not about a common dashboard anymore. We like to say AI killed the UI star. It's not about a dashboard anymore. It's about unified operations, right? Something goes wrong, how am I taking all these different control points, doing event correlation and coming back and telling you, "It's a compute issue," or, "It's a server issue," or, "It's a workload issue," or, "It's a GPU issue," right? That's how you get to the root of the problem with limited IT resources, and that's where this all comes into play.
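The event correlation Aaron describes, gathering alerts from compute, network, and GPU telemetry and attributing a likely root cause, can be sketched with a toy heuristic. The event schema and the "earliest failing domain wins" scoring below are illustrative assumptions, not how OpsRamp, GreenLake, or Mist actually work.

```python
# Toy sketch of unified-operations event correlation: group events that occur
# within a time window and attribute a likely root cause. Schema and heuristic
# are illustrative assumptions only.
from collections import Counter
from datetime import datetime, timedelta

def correlate(events, window_s=60):
    """Group events within `window_s` of the first one and return the domain
    of the earliest event as the likely root cause, plus domain counts."""
    ordered = sorted(events, key=lambda e: e["ts"])
    if not ordered:
        return None, Counter()
    start = ordered[0]["ts"]
    in_window = [e for e in ordered
                 if e["ts"] - start <= timedelta(seconds=window_s)]
    # Heuristic: the earliest domain to fail is the most likely root cause;
    # later events are treated as downstream symptoms.
    return in_window[0]["domain"], Counter(e["domain"] for e in in_window)

t0 = datetime(2026, 3, 2, 10, 0, 0)
events = [
    {"domain": "gpu", "msg": "ECC error burst", "ts": t0 + timedelta(seconds=12)},
    {"domain": "network", "msg": "link flap et-0/0/1", "ts": t0},
    {"domain": "compute", "msg": "job retry storm", "ts": t0 + timedelta(seconds=30)},
]
root, counts = correlate(events)
print(root)  # network: the link flap preceded the downstream symptoms
```

A production system would weight event types and topology rather than just ordering by time, but the shape is the same: many domain-specific signals in, one attributed cause out.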
John Furrier
>> I have to ask you, to wrap up, because I think this is super important on everyone's mind: HPE was looking to acquire Juniper by the end of 2023; I think the announcement was in January 2024. Mist was a home run at Juniper, obviously, but they're known for the big iron routers, as you pointed out, so you had that legacy. I mean, in AI years (they used to talk about dog years with the internet), I think there's a whole other classification; a lot of years have gone by. What's the new Instagram picture of HPE Networking, if you had to paint that picture?
Jeff Aaron
>> Yeah, for sure. So, the networking bit of it, I'll talk to first: it's about the self-driving network, and HPE has fully embraced that; they're all in. They're leaning into that as a lead for a lot of the rest of the story, right? Like we said, compute needs networking, which provides the back-end infrastructure, as does the whole AI infrastructure. So, we're seeing nothing but receptivity there. It's been a great acquisition. The networking BU now has substantive revenue, substantive profit and a substantive story, and it fills a really important gap. I was at Mist through the Juniper acquisition, and I loved it. I saw them embrace it. We were able to convert the company from more of a telco routing play to one that's more AI-driven. At HPE, we have even more tools in our arsenal. It's now an even bigger full stack with even more resources. So, we're super excited.
John Furrier
>> My first MWC was 2009, Juniper, HPE, I've been servicing the service providers for many, many decades. What's the biggest thing happening now if you had to put the headline out there for MWC26? What's the headline? What is the key thing?
Jeff Aaron
>> I mean, I don't think I'm saying anything new. Look, data is moving everywhere right now and the network is back. The network isn't just plumbing. The network is how you build a value-added service, using AI workloads on telco infrastructure, right? Yeah. And so, we're super excited. Like you, I've been in networking for 30-plus years, and you've seen it go from the core, to, okay, plumbing, to now being right back in the spotlight-
John Furrier
>> And we all kind of pat ourselves on the back because we're always like, "Networking is the most important part," but now it is.
Jeff Aaron
>> It really is, yeah.
John Furrier
>> Jensen Huang said on stage at GTC a couple of years ago, "Networking is the operating system for the AI factory."
Jeff Aaron
>> I love that, yeah.
John Furrier
>> I'm like, "Well, that's networking. It's not an OS." But what he meant was the coordination between the subsystems in that factory. So, at the edge, you have to have those factories networked, that's the control plane. And then, the rest is just blocking and tackling networking.
Jeff Aaron
>> Yep. Yep. It's exciting times, for sure.
John Furrier
>> All right. Jeff. Thanks for coming on.
Jeff Aaron
>> I appreciate it, John, always.
John Furrier
>> Good to see you. Appreciate you coming on. HPE, obviously, with the Juniper acquisition now three years on the books, finalized. As the world goes to the edge, networking is the fabric. Unified control planes, observability, cloud-native capabilities, distributed computing, this is the future of the telecom industry. Of course, it looks a lot like the enterprise, we love that. Thanks for watching.