At theCUBE’s exclusive Networking for AI Summit, full-stack networking meets AI precision. In this interview, Anil Varanasi, co-founder and CEO of Meter, joins host Bob Laliberte to explain why Meter built the entire stack in-house across hardware, firmware, OS, data pipelines, APIs and applications. Varanasi contrasts this approach with legacy, bolt-on stacks that create configuration and interoperability problems, hurting reliability, performance and security. He shares real deployments, from Bridgewater’s complex compliance needs to Webb School’s 100-acre campus, where Meter redesigned the network in weeks and added roughly 30–35% more APs for better coverage and peak-time availability at a fraction of the expected cost.
The discussion explores how full-stack control enables trustworthy AI for networking. Varanasi details why Meter trains purpose-built models on its own data for deterministic outcomes, not creativity, and why clean, end-to-end telemetry is essential. He highlights Microsoft Azure GPU clusters supporting rapid iteration, with models trained and released on a daily cadence. The conversation connects to the summit’s core angles: enterprise networks as the foundation for agentic AI, secure operations and simplified management across campus, branch, WAN and data center. Varanasi closes on what is hardest to replicate: ground-up hardware and a single OS and API.
What to expect at The Networking for AI Summit '25
Video Title: Anil Varanasi, Meter 2 | Networking for AI Summit
Join us at the Networking for AI Summit as Bob Laliberte of theCUBE Research hosts Anil Varanasi, the founder and Chief Executive Officer of Meter. Together, they explore the innovative capabilities of Meter Command and its significance in transforming network operations.
Anil Varanasi brings a wealth of expertise as the founder of Meter, discussing with theCUBE Research's Bob Laliberte how Meter Command revolutionizes network operations with natural language processing and dedicated artificial intelligence models. This video delves into how networking-specific models are crafted to surpass traditional large language models, aiming for more precise and actionable results for network engineers.
Key takeaways from this discussion include Varanasi's case for independent, networking-specific models and the critical role vendors such as Meter play in ensuring reliability in network operations. Varanasi emphasizes that by integrating Command, IT teams are equipped to transition from manual command-line interface processes to more seamless natural language interactions, enhancing both efficiency and confidence in managing networks.
Anil Varanasi
Founder & CEO, Meter
Bob Laliberte
>> Hello, and welcome. In this session, we're joined by Anil Varanasi, co-founder and CEO of Meter. Welcome, Anil.
Anil Varanasi
>> Hey, Bob. Thanks so much for having me. Great to be here.
Bob Laliberte
>> Yeah, absolutely. It's going to be a great session. I mean, I'm really looking forward to exploring how Meter is taking that full-stack, vertically integrated approach to networking, designing everything from the hardware and software to the operational flows. I also want to look at how this foundation positions Meter to build out those networking solutions for the AI era. So let's jump right in and get started. How does the full-stack approach differentiate Meter from the other legacy networking providers? And what advantage does that deliver to customers?
Anil Varanasi
>> Yeah, for sure. So if you look at networking in the last 40 years, obviously, many legacy companies have racks similar to ours behind me. But as you know, Bob, they bought that stack rather than building it: switching from one company, wireless from another, security from another, layer seven security from another, SD-WAN from another. And then, just the sales and marketing are packaged together. What that means for customers is promises from legacy vendors that, "Hey, we're going to integrate all these. You're going to have a simple experience." But as we all know, the majority of those products, even though the acquisitions were done 10 years ago, are still disparate products. And only non-technical people that haven't worked on software and hardware might say, "Oh, you bought the company. Why can't you just combine them?" There are different code bases, different languages, different frameworks, different architectures, different APIs. So how that impacts customers is this: the two biggest sources of issues in networking are configuration and interoperability. And when you have many disparate products that are smashed together, and only the sales and marketing are packaged together, those actually lead to bad outcomes for customers. This is where you see degradation in performance and reliability and security. But most of all, networks should be great. When you have wireless from one company that was bought, and switching from another company that was bought, and everybody does their own implementation of protocols and RFCs, you just have a situation where the end outcome for a customer isn't great.
So as ludicrous as it sounds, in these last 40 years of networking, before Meter, nobody has built the entire stack from the ground up: hardware platforms that are tied together, a single firmware image, operating systems, data pipelines, APIs, and applications for the entire stack, from ISP, routing, switching, wireless, and security (DNS security, IDS, IPS) to SD-WAN and VPN, all tied together.
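As a rough illustration of that single-API idea (a hypothetical sketch, not Meter's actual API; all class, method, and field names here are invented), a vertically integrated stack lets one client validate configuration across layers that legacy stacks split between vendors:

```python
# Hypothetical sketch of a unified network API. The point is structural:
# when switching, wireless, and security live behind one API, cross-layer
# validation becomes possible; with per-vendor clients it is not.
from dataclasses import dataclass, field

@dataclass
class NetworkConfig:
    """One config object spanning layers legacy stacks split across vendors."""
    vlans: dict = field(default_factory=dict)           # switching layer
    ssids: dict = field(default_factory=dict)           # wireless layer
    firewall_rules: list = field(default_factory=list)  # security layer

class UnifiedNetworkAPI:
    """Single entry point: every change flows through one validated pipeline."""
    def __init__(self):
        self.config = NetworkConfig()

    def add_vlan(self, vlan_id: int, name: str):
        if not 1 <= vlan_id <= 4094:
            raise ValueError("VLAN ID must be in 1-4094")
        self.config.vlans[vlan_id] = name
        return self

    def add_ssid(self, ssid: str, vlan_id: int):
        # Cross-layer check: only possible when one API owns both layers.
        if vlan_id not in self.config.vlans:
            raise ValueError(f"SSID {ssid!r} references undefined VLAN {vlan_id}")
        self.config.ssids[ssid] = vlan_id
        return self

net = UnifiedNetworkAPI()
net.add_vlan(10, "staff").add_ssid("Campus-Staff", vlan_id=10)
```

The interesting part of the sketch is the check in `add_ssid`: a misconfiguration such as an SSID mapped to an undefined VLAN can be rejected at the API layer, before it ever reaches hardware, precisely because one system owns both the switching and wireless state.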
Bob Laliberte
>> Now, that sounds great. Can you give maybe a solid example of one of your customers and how they were able to deploy this and be able to recognize the benefits of having that single-stack approach?
Anil Varanasi
>> Yeah, maybe I'll go through two customers, so you can understand customers at two different ends of the spectrum.
Bob Laliberte
>> Yep.
Anil Varanasi
>> So first one, maybe we look at somebody like Bridgewater. Bridgewater is the largest hedge fund in the world and has incredibly complex networking, security, and compliance requirements. They've deployed Meter to get those benefits on reliability and performance. And most of all, we're in an industry where we're told hardware's commoditized and all these disparate products are just put together; that's where our innovation comes in, being able to combine. So somebody like Bridgewater is able to deploy and get those benefits. On the other end of the spectrum, you might look at a school like Webb School. It's a school that serves over a thousand students across 12 buildings on a hundred-acre campus that needed a network refresh and overall overhaul. And they were quoted a quarter million to half a million dollars just to get the minimum viable APs that were needed, not even including the support costs, and licensing, and all that stuff. So not only did it turn out to be too expensive for them, those just weren't timelines and outcomes that a legacy vendor could hit. And we all know in schools, even with thousands of students, it's a very small IT and networking staff that has to service this. And in schools, networking is incredibly important because every student has some sort of device they're using to access content. So we were able to design and deploy the entire campus, that hundred-acre campus, in a matter of weeks rather than months and at a fraction of that cost. They thought we were going to come in and just swap each AP, but we completely redesigned the network with about 30 to 35% more APs than they had. And the new design eliminated coverage holes and improved bandwidth and availability during peak periods. And at the same time, even with our approach, it's not that we're taking away control. And, Bob, I think you've seen our software before.
We're able to give them a full self-service down to any small thing they're able to change. So we have two examples of literally one of the largest companies in the world with all the resources and a school that's always on tight budgets, both being able to deploy Meter and get those outcomes without having to buy six different vendors, configure all of them, maintain them through four to six different dashboards and not sure on what the outcomes are.
Bob Laliberte
>> Yeah. No, it sounds impressive, right? Being able to reduce their overall costs, but more importantly, be able to deliver more efficient and optimized management being able to accelerate the deployment, right? So being agile, being able to do that very quickly. And ultimately being able to, as you had said, by redesigning the whole environment, providing better experiences for the end users ultimately.
Anil Varanasi
>> That's right.
Bob Laliberte
>> Yeah. No, that sounds great. One of the other things you've also highlighted is that full-stack control is a foundation for building AI networking products. And I'm wondering if you could talk about why this level of integration is critical for training accurate and reliable models.
Anil Varanasi
>> For sure. So maybe the first thing to say is, I think one of the things I am concerned about in our industry is that nobody's actually training models. People are taking very sensitive data and just shipping it off to one of the model providers. This is networking data that's very critical to our customers, sensitive data about their businesses and their environments and their usage, getting shipped off to these model providers without any sort of confirmation or wherewithal on what's happening with that data. So first, we actually believe you need custom models that are built for networking. And for that, you need control over the stack, because in the vertically integrated approach, you can have a single API that goes through, like I said, from the ISP down to a single VLAN, and SD-WAN and VPN and security, all of those. This is actually akin to why somebody like Tesla or Waymo is successful in deploying fleets that can drive themselves, fleets we all can use: that full, end-to-end controlled approach rather than just taking pieces of it and disparate APIs. It's why a legacy car company is not able to do what Tesla and Waymo are able to do; it's because of that control of their entire stack. What that control also gives you is access to data across the entire stack. You cannot build great models and great products without large, diverse, clean data sets that traverse the entire stack. We have, like I said, a single pipeline from how a network is designed to how it's deployed, to how the hardware and software maintain it throughout. Having that approach, we believe, is the only way. It's why, again, Tesla and Waymo have succeeded and why others have not: you need that control. You need those data sets to actually train models yourself from the ground up rather than just opening an API.
Bob Laliberte
>> Got it. Got it. So you're actually taking all of your own data, all of the Meter data itself, bringing it in, collecting it, cleaning it, and so forth, and putting it into your own network specific model for AI?
Anil Varanasi
>> That's right. Again, we believe that's the only way to do it. Because when you look at a large model, you are building a large model because it can be creative, from writing to anything like that. Creativity is actually what we don't want in networking. In networking, we want precision and accuracy, not creativity. And the outcomes need to be deterministic. So how you train models, what you're rewarding them for, what your loss functions are, all of those things matter if you want to actually be really accurate. One of the things that could happen in our industry that could lead to bad outcomes, outcomes we all don't want, is vendors taking random models and just throwing them up and calling it AI and selling it and marketing it. Then, end customers have bad results from it, and they say, "Oh, this AI stuff doesn't work." Well, of course it doesn't work. You're just shipping it off to a language model that is built to either write code or write emails and marketing campaigns. That's not what networking needs. Networking needs models that are deterministic, accurate, and precise, models that we can rely on in production.
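As a toy illustration of that determinism point (this is not Meter's training or inference code; every name here is invented for the example), the tradeoff shows up even at decoding time: greedy argmax decoding always produces the same action for the same inputs, while the temperature sampling that general-purpose chat models rely on for "creativity" is inherently stochastic:

```python
# Toy contrast between deterministic (greedy) and stochastic (sampled)
# action selection over model output logits.
import math
import random

def softmax(logits, temperature=1.0):
    """Convert logits to a probability distribution, scaled by temperature."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def greedy(logits, actions):
    """Deterministic: identical logits always yield the identical action."""
    return actions[logits.index(max(logits))]

def sample(logits, actions, temperature=1.0, rng=random):
    """Stochastic: repeated calls can yield different actions."""
    probs = softmax(logits, temperature)
    return rng.choices(actions, weights=probs, k=1)[0]

# Hypothetical network-operations actions a model might emit.
actions = ["set_vlan 10", "set_vlan 20", "reboot_switch"]
logits = [2.0, 1.5, 0.1]

# Greedy decoding is reproducible across every call.
assert all(greedy(logits, actions) == "set_vlan 10" for _ in range(100))
```

For production network changes, the greedy path (or, more generally, constrained and verified decoding) is what gives the repeatable, auditable behavior the conversation describes; sampled decoding would let the same telemetry occasionally produce a different, unreviewed action.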
Bob Laliberte
>> Yeah. No, I think that's important. One of the main topics I've always looked at in this space is the time to comfort for the end users. So I'm wondering if you could touch upon, by building out your own model and having it be more predictive and precise and accurate, how is that impacting the amount of time it takes for the operations teams to become comfortable with the technology and be able to start embracing it instead of being somewhat skeptical and having to validate it? Right? There's going to be time no matter what. But I'm just curious is, what you're seeing from your customers?
Anil Varanasi
>> Yeah, we see very different results compared to what we've seen from legacy vendors and others in the industry. Our product, the Command product, which I think you've seen too, has rave reviews from customers on its accuracy, precision, and speed. But the other thing is, it's not demo-ware. It's actually in production, not vaporware. Whereas most of the products we all see, from people just pushing out "AI" as fast as possible, are demo-ware that will come next year or the year after. And when it does, it doesn't really work, et cetera. But by actually building production-grade, high-quality products that we can stand by and our customers can rely on, we're seeing really great responses, not just from customers, but from all of our partners as well.
Bob Laliberte
>> Got it. Yeah. No, I think that makes a lot of sense. I know we're going to be talking about Command in separate video, so I'm not going to go too deep on that now. But what I did want to touch upon was your partnership with Microsoft. I'm wondering if you could explain how that partnership and the access to those GPU clusters supports your AI networking roadmap.
Anil Varanasi
>> Yeah. To build great models, like I said, you need control, in our view, and data. But also, you just need a lot of compute. And Microsoft has an incredible platform with Azure and the resources they're able to provide to companies. Building out GPU clusters is not our core competency. We want partners that are great at doing that, across thousands and tens of thousands of GPUs, to be able to build great models and products. Microsoft has a great track record on how to do partnerships. And the leadership is really bought in as well to enabling companies building great models, whether it's Satya or Jay, who is one of the leading experts in networking, having built out on the open compute platform. They've just been a tremendous partner for building models. Because at the end of the day, without compute, you're not going to get anywhere. And maybe that's one of the ways to gauge all the networking vendors that are claiming AI: what's the compute that they actually have to produce models?
Bob Laliberte
>> Right. I mean, along that line, if you're doing that, how often is the model being updated? Is it something you do on a regular basis? I assume, as your customer base grows, you'll be able to collect more data and further refine the models.
Anil Varanasi
>> Yeah, we train and release iterations of models on a daily cadence.
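As a hypothetical sketch (not Meter's actual pipeline; the function and task names are invented), a daily train-and-release cadence like the one described usually implies an automated release gate: a freshly trained candidate ships only if it does not regress against the current production model on a fixed evaluation suite.

```python
# Hypothetical daily release gate: compare a candidate model's scores
# against the production model's scores on a fixed evaluation suite.
def should_release(candidate_scores, production_scores, min_margin=0.0):
    """Release only if the candidate is at least as good on every task.

    Both arguments map task name -> score (higher is better). min_margin
    lets you require strict improvement rather than mere parity.
    """
    return all(
        candidate_scores.get(task, 0.0) >= production_scores[task] + min_margin
        for task in production_scores
    )

# Example: the candidate improves on both (invented) eval tasks, so it ships.
production = {"config_generation": 0.92, "fault_diagnosis": 0.88}
candidate = {"config_generation": 0.94, "fault_diagnosis": 0.90}
assert should_release(candidate, production)
```

The design choice worth noting is the `all(...)` over every task: a gate that averaged scores could release a model that improved one capability while silently regressing another, which is exactly what a deterministic, production-facing system has to avoid.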
Bob Laliberte
>> Okay. Yeah, so trying to keep up with the cloud native environment and ensure that you always have the best. That makes a lot of sense. So one other question I wanted to ask you. Why should organizations consider Meter for AI networking solutions? Clearly, this whole summit that we're doing today is all about the different areas of networking, whether it be in the back end, AI data centers, whether it be in that front end environment, right? Making sure you're able to collect the data and bring it to, potentially, even the WAN. And also, obviously the operations aspect of it. So given that as a context, how would you describe your differentiation in the areas that you play in?
Anil Varanasi
>> Yeah, I think it's a little bit different depending on whether you're looking at a front-end network, a back-end network, or an operational network. I think there are kind of different answers there. But to be as laconic as possible, I think the reasons are that we built the platform from the ground up, with a single platform for the software, operating systems, data pipelines, APIs, and applications. And we build models from the ground up to help with network design, network configuration, network deployment, and network maintenance. At the end of the day, we're seeing an industry where the number of network engineers coming in is going down every year rather than up. You know, something like 30 to 40% of network engineers are retiring by the end of this decade without a replacement. So actually having that full control, data, and compute, and products that are built to do this, is the main reason across a front-end network, back-end network, or even an operational network.
Bob Laliberte
>> Got it. And so, you've obviously put a lot of time and effort and thought into this, the way you've designed it. I know with the staff that you have, the mentality is networks built by network engineers, for network engineers, and so forth, to provide a really compelling solution. Of all the things that you've put together, which of the requirements do you think would be the hardest for other companies to replicate, and why?
Anil Varanasi
>> Yeah. I think the hardest would be to build out the hardware from the ground up altogether and writing a single operating system and a single API. Again, pick your favorite legacy vendor, because it's one of the only industries that has multiple hundred billion dollar plus public companies, yet no new companies are coming in at all. Networking is probably the most scathing review against the efficient market hypothesis. Usually when you have legacy incumbents, large markets, you have new companies come in. But these legacy vendors, they've just been buying rather than building, and that would require a gargantuan effort to be able to redo their entire stack from the ground up.
Bob Laliberte
>> Got it. Got it. Well, it certainly sounds like you're on the right track at Meter in bringing together the homegrown solution. Again, with the hardware, software, and operations, right? The whole software stack, and customers are buying into it. We're hearing about more and more organizations that are adopting Meter, so congratulations on that. Before we wrap up, anything that organizations should be thinking about as they move forward into this AI era when it comes to the network?
Anil Varanasi
>> Yeah, I think it's time we return networking sort of back to the basics. When you're thinking about high throughput, low latency, high bandwidth, people obviously want to do some fancy things with fancy buzzwords and acronyms, things like that. But the majority of the gains, we believe, will actually come from us as an industry, and particularly even Meter as a vendor, taking responsibility for doing the basics in networking right. From how the protocols and RFCs are implemented to how we're building the hardware. I think it's time even our customers demand that the basics work incredibly well from vendors.
Bob Laliberte
>> Got it. That makes a lot of sense. Anil, thank you so much for joining us. That was a great overview of Meter's approach, and it really highlighted some of the key requirements for building out these effective networking solutions for the AI era: designing everything from the ground up, and focusing on full-stack control, data, and scalable compute to be able to build the models. Meter is starting to carve out a unique position in the market. Thanks so much for watching. Stay tuned for more of the Networking for AI Summit.