Anshul Sadana, founder and Chief Executive Officer of Nexthop AI, joins theCUBE's John Furrier for an insightful discussion as part of the NYSE Wired Mixture of Experts Series. This session explores the intersection of advancements in artificial intelligence (AI) and infrastructure demands, supported by notable funding milestones.
In this engaging interview, Anshul Sadana unveils Nexthop AI's mission to revolutionize AI infrastructure with custom solutions tailored for cloud hyperscalers. Drawing on his experience at Arista Networks, Sadana discusses networking innovation, the development of leaf-spine designs, and the role of AI-scale infrastructure. theCUBE's analysts guide the conversation to uncover key insights.
Key takeaways from this discussion include the significance of power efficiency and product customization within AI infrastructure, as highlighted by Sadana and analysts. According to Sadana, collaboration with hyperscalers is driven by a pressing need for resilient networking systems that align with contemporary AI demands. The interview illuminates emerging trends and solutions that cater to complex data center architectures.
Hashtags: #NexthopAI #theCUBE #NYSEWired #MOESeries #AI #CloudHyperscalers #LeafSpineNetworking #CustomAIInfrastructure
Find more SiliconANGLE news and analysis https://siliconangle.com/.
Follow theCUBE's wall-to-wall event coverage https://siliconangle.com/events/
Learn about the latest theCUBE events https://www.thecube.net/
00:00 - Unveiling the Future: TheCUBE's AI Advancements and Nexthop.ai's Strategic Growth
03:04 - Nexthop.ai: Origins, Custom Solutions, and Market Trends
06:36 - Product Development and Deployment
09:39 - Designing the Future: Networking Innovations for AI Clusters
15:50 - The Role of Integrated Systems
18:52 - Company Vision and Culture
21:32 - Discussion Wrap-up and Closing Remarks
Anshul Sadana, Nexthop AI
In this Mixture of Experts segment from theCUBE + NYSE Wired, theCUBE's John Furrier sits down with Anshul Sadana, founder and CEO of Nexthop.ai, to unpack breaking launch and funding news alongside the architectural realities of AI-scale networking. Sadana confirms $110M raised across seed and Series A, led by Lightspeed (Guru and Ravi) with participation from Kleiner Perkins, WestBridge Capital, Battery Ventures and Emergent Ventures, and outlines Nexthop.ai's model: co-developing custom hardware and software with hyperscalers to deliver turnkey, rack-integrated products.
>> Hello, welcome to theCUBE here. I'm John Furrier in the Palo Alto Studios, host of theCUBE, bringing you some breaking news from our Mixture of Experts network, theCUBE plus the NYSE Wired community and open network where people gravitate around content. We're calling it Mixture of Experts because there's a little pun intended there, MOE. We're going to have all of our chain of thought here, bringing all the AI knowledge here. We've got a great story, funding news and a launch: Nexthop.ai. Anshul Sadana, founder and CEO, former Arista, knows the business, friend of theCUBE. Welcome to our breaking news segment for our MOE, Mixture of Experts.
Anshul Sadana
>> Thank you, John. Great pun.
>> We've got a lot of experts in theCUBE alumni network. It's kind of fun to throw that around. Brian Baumann came up with that name from the NYSE Wired, but what's really cool about what you're doing here is that not only are you guys experts at what you do and what you've done, the team you've assembled at Nexthop.ai is forging new ground. You've got some huge funding news and the launch of the company. It wasn't started yesterday; it's been around. Explain the Nexthop.ai funding news. How much did you raise? Who's on the team? What's the funding, partners? Go.
Anshul Sadana
>> Absolutely, John. Nexthop.ai, we're building custom products for the cloud hyperscalers. We just raised and announced a total of $110 million. That includes our seed and our Series A. And we have great investors as partners. These are great venture capitalists who understand a long-term infrastructure play. The round is led by Guru and Ravi at Lightspeed. We also have Kleiner Perkins, WestBridge Capital, Battery Ventures, and Emergent Ventures as our other investors.
>> So you've got a lot of tier-one players in on this deal. Obviously, where you've come from, what happened at Arista: networking, storage, compute, the holy trinity of computing now in the cloud. I'm sensing it's an AI-scale infrastructure play. Can you share what you guys are doing? Tell us the thesis, the North Star, and what you guys are launching.
Anshul Sadana
>> Absolutely. It's a great time to be in AI infrastructure. When you look at what's gone on around us in the last two decades, networking has changed significantly. We started with client-server architecture, went to cloud networking. In fact, I remember in the early days at Arista, around 2008, I was asked by a key storage architect at Microsoft: can you put together a design to interconnect 10,000 storage nodes, fully non-blocking? It wasn't possible at that time. So we came up with this new architecture called leaf-spine design. That was the birth of leaf-spine in cloud networking. We scaled that up to 200,000 nodes with compute and storage. Now with AI, some of the hyperscalers are trying to interconnect a million GPUs together, each GPU sending 800 gigabits or 1.6 terabits of bandwidth per node. You build a fabric for that, you're talking about exabits of throughput. That requires a new paradigm for networking. That requires a new way of thinking about what you want to do in interconnectivity. We are building customized products for these hyperscalers to solve these problems.
>> I'm smiling because we've been saying this on theCUBE. We had a former CUBE host, he's now at Red Hat, Stuart Miniman, and he and I would always ... We're networking guys. We love networking. Networking was always kind of like ... No one talked about it. It was always compute, storage. With AI, networking is such a critical linchpin to value, and look at security. Networking is the biggest thing in both security and AI because the clustered systems, the new large-scale supercomputers, all have leaf and spine at the top. And every presentation you go to, it's not the server anymore. It's not the rack and the devices connected with a switch or a device. It's the combination of everything together. It's truly system design. That's the new server. The server is now a supercomputer, but it's using parts from all different types. The fabrics have to be put together.
Networking is a huge piece of it. The role of ethernet, photonics. I mean, this is an infrastructure kind of like NerdFest in my opinion, but this is critical. Without this infrastructure, nothing happens at scale. And you've got power envelopes, power, and cooling challenges. I mean, liquid cooling is really the only thing we see doing this. So with all of that as a backdrop, is this kind of what you're tackling? What is the core problem you guys are solving? Was it scale? Was it the design of the systems? Was it deployments? Take us through the ideation of how this came about. Obviously, you saw from Arista. Arista had a great run in the past five years as this accelerated computing wave came in. What was the focus? Take us through the problem statement and lay it all out.
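The leaf-spine scale Sadana describes can be sanity-checked with a quick sizing model. This is an illustrative sketch, not Arista's actual design math: the fixed half-down/half-up port split and the radix values are assumptions.

```python
# Back-of-envelope non-blocking leaf-spine sizing, assuming a uniform
# switch radix and a 50/50 split of leaf ports between hosts and spines.
# Illustrative only; real fabrics add oversubscription, multi-tier
# designs, and failure-domain constraints.

def leaf_spine_capacity(radix: int) -> dict:
    """Max non-blocking two-tier fabric for switches with `radix` ports.

    Each leaf splits its ports: half down to nodes, half up to spines.
    A spine connects one port to every leaf, so leaf count <= radix.
    """
    down = radix // 2              # leaf ports facing servers/storage
    up = radix - down              # leaf ports facing spines
    return {
        "spines": up,              # one spine per uplink on each leaf
        "leaves": radix,           # one spine port per leaf
        "nodes": radix * down,     # total non-blocking end hosts
    }

# 64-port switches (plausible in the 2008-era example) give
# 64 leaves x 32 hosts = 2,048 nodes; a 512-port radix reaches
# 512 x 256 = 131,072, approaching the ~200,000-node scale mentioned
# above (multi-tier designs go further).
for radix in (64, 512):
    print(radix, leaf_spine_capacity(radix))
```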
Anshul Sadana
>> John, when I was at Arista, I had an opportunity to work with about 10,000 customers, some in the enterprise space, financials, retail, service providers, and of course, the cloud. The cloud always fascinated me. They were always pushing the limits of what was possible with technology at that time. Now comes AI. With AI, all of the hyperscalers have their own custom stack, and there was a classic build-versus-buy problem going on in networking even in the past: they can design their own white boxes, or they can buy from the industry and customize and fit it into their stack. But with AI, the GPU racks are so custom that the compute architecture, the storage architecture, the connectivity, as you mentioned, electrical versus different types of optical technologies, is a must for them to evaluate. So they cannot easily buy something from the outside and fit it into their custom stack. What we decided at Nexthop is to co-develop our hardware and software along with our customers so that they can buy what they would have wanted to build.
>> So you worked backwards from the customers, to borrow an Amazon Web Services concept, because they had demand. Was that the driver? They had demand and there was no real product, or even if they got a product, they'd still have to do all that work anyway. Was it the chicken or the egg? Which one came first: the demand, or the fact that they couldn't buy it and had to build it from scratch?
Anshul Sadana
>> AI has turned the world upside down in the world of infrastructure. So for cloud companies, their first choice is not to build on their own. If they can buy it, they would prefer to buy it. They'll focus on other problems. The opportunity cost is huge. However, in this infrastructure space, they couldn't find a solution that is so custom that it fits in their rack. Some customers are trying to build switches that are 19 inches wide. Some are 21 inches wide; some are double-wide racks, 31 inches wide. Some are liquid-cooled, but the liquid cooling requirements for each of the hyperscalers are different from each other. So they would prefer to buy products that someone builds for them; however, the standard product doesn't meet the requirements. So last summer, after I left Arista, I was asking a few friends in the cloud, "Why do you build on your own?" And they said, "If someone built custom products for us, we would look at it." That is really the birth of the idea.
>> Hey, I've got some free time on my hands. So what happened next? Take me through the progression, because I think this is really a discovery. It's a breakthrough, in my opinion. So they have needs. I mean, look at the hyperscalers. They're trying to squeeze every ounce of performance out of their CapEx. And the CapEx is still in demand. They're building out. What was the key for you? What happened next? You say, okay, great, I'm available. I really want to start a business out of this. Did you know at that time Nexthop would exist, or you knew you'd do something, or was it ... When did you form that opportunity line of sight?
Anshul Sadana
>> Like any new start-up journey, you have some partners along with you. My wife knew that I was ready to start something new, so she was nudging me along. She was very supportive and she said, "It's time. Go ahead and start it." I talked to my friends in the cloud. They said, "Go ahead and do it," as well. So around August last year, we started forming the team, and today we have a great leadership team with lots of domain expertise in system design and networking, both from systems companies as well as the cloud. We have a team of 400 today, here in the Bay Area, in Vancouver and Seattle and Bangalore and a few other cities as well. So for the last eight, nine months, we've been building up the product and getting to a place where customers can test it and give us feedback. Remember, it's co-developed. So every month there's a checkpoint with the customer to see that it's going to meet their needs.
>> So co-development first and foremost. Where's the intellectual property? Are you building on their behalf, or are you building your product with their requirements? Take me through how that works.
Anshul Sadana
>> So the product is being built by us based on the requirements. However, as you get closer to deployment, you have to integrate with their parts of the software, and the software is not open source, but it's not closed source either. There are certain parts that are open source, certain parts that belong to the customer, and certain parts that belong to us; that all has to be integrated. By the time they get it, it's a turnkey product. It just works.
>> So you're basically providing almost integration services at scale for these large systems. Do I get that right? I mean, not that you're charged with integration, but you're basically doing the work to integrate.
Anshul Sadana
>> So we are designing the product, integrating it, and then giving it to them.
>> Do you resell it to other clouds or hyperscalers, or does each cloud get their own?
Anshul Sadana
>> No, our model is that each cloud gets their own product, because they're different from each other.
>> Yeah, again, it makes sense. So it brings up customization. If you look at the semi market right now, custom silicon is a huge discussion. So this makes sense. What are some of the conversations like with the hyperscalers? Is it performance? Is it energy? What are the criteria that you guys design around? What are you optimizing for?
Anshul Sadana
>> Sure. Almost in every discussion, power efficiency is number one. If you can make it more efficient, they want it. To give you some perspective, recently Meta announced a two-gigawatt data center. And each hyperscaler is adding one to two gigawatts of capacity every year. Soon each one will have 10 gigawatts of data center capacity or more. If you can provide one percent efficiency, that's 100 megawatts. That's the size of the largest data center in the world just a few years ago. So the cloud companies are heavily focused on that. But the other problem they're trying to solve for, and which we are part of, is: can you solve for time to market? Because in the past, the CPU cycle was once every three years. The GPU cycle is once a year. So can you design products so that they can deploy new products every year? Their NPI cycle used to be 18 months, but the GPUs come every year. They have to go faster. They need partners like us to help them go faster.
>> So talk about the product. Is it software? Is it hardware? You bring in a box, do they give you spots on their rack? I mean, take me through the deployment and how they're consuming your solution.
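The power arithmetic Sadana walks through is worth making explicit. A minimal sketch using only the figures quoted above (10 GW of per-hyperscaler capacity, a 1% efficiency gain):

```python
# Sadana's arithmetic: one percent efficiency on ~10 GW of data center
# capacity equals 100 MW, the size of the largest data center in the
# world just a few years ago. Figures are the ones quoted above.

MW_PER_GW = 1_000

capacity_gw = 10          # projected per-hyperscaler capacity
savings_frac = 0.01       # "one percent efficiency"

saved_mw = capacity_gw * MW_PER_GW * savings_frac
print(f"{saved_mw:.0f} MW saved")  # 100 MW
```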
Anshul Sadana
>> So we are building hardware. It runs our software as well. So we are in the rack. It's a box that they take and deploy in the GPU racks. However, unlike traditional networking products that exist out there, these are pre-integrated in the customer's GPU rack. So sometimes it could be at the top of the rack; sometimes it's at the back of the rack. Sometimes the connectivity is standard cables. Sometimes the connectivity is very proprietary connectors that only exist with that one cloud company.
>> What I love about this world, and you mentioned earlier the world being turned upside down with the whole opportunity, is that the stuff they teach you in school, or you learn in trade craft, is never do a one-off. I mean, you're basically in the one-off business, but they're such big one-offs. These systems. I mean, you look at GTC, you look at what they're presenting every year, these things are monster machines. I mean, they're systems. It's multiple GPUs, multiple fabrics, the spine-leaf architecture, the Ethernet's in there, all kinds of interconnects. I mean, it's a system.
Anshul Sadana
>> Correct.
>> I mean, it's a supercomputer. So you're an ingredient to that.
Anshul Sadana
>> Absolutely. But you're absolutely correct about the one-off as a business model; it's actually very hard, because in most cases, it's not scalable. You want to build a product that every customer buys. In the world of Ethernet networking, and AI specifically, Ethernet networking overall is a $70 to $75 billion TAM. Half of that, $35 billion, is in these hyperscalers. So while it's a one-off, it's the biggest companies in the world; each customer can be considered a market by itself.
>> Yeah, and that's changed. And I think the conventional wisdom would've told you that, but you would've missed the opportunity. Again, smart move, and they're only getting bigger too, by the way. And then distributed. Talk about the nature of where this model scales, because once you learn the architecture, there are economies of scale. Once you know the hyperscalers, you get a general feeling of what their configurations might be. Let's just take a hyperscaler. They have regions now. Okay, each region has needs. They've got edge. So you have all kinds of systems. How does that play into your vision? Do you see yourselves in the major data centers for them, or do you see yourselves also playing out in other parts of the network?
Anshul Sadana
>> It's too early for us to expand because the opportunity is so big. We want to focus on GPU connectivity, maybe sometimes cloud connectivity, but we are inside the data center. Edge is becoming a part of the AI problem because what's happening is, if you want to deploy 200,000 to 500,000 GPUs, you don't have the power in one location. So to interconnect all these GPUs together as one cluster, you now have to form a new edge. The new edge is all interconnected within itself. It's a mesh. It's not WAN, it's not MPLS, it's not backbone, it's not service provider, but it's part of the AI cluster. Some of the solutions we are being asked to work on are exactly those. They're considered part of the GPU cluster solution, not outside it.
>> Yeah. Again, networking is everywhere, pervasive. So some of the GPU cluster. Before we get into some of the company discussions around go-to-market and how you're going to use the funds and whatnot, what's the core problem that you solve? Because obviously, GPU clusters are different, because again, they are systems, so sometimes they might have different requirements. Is there a pattern to your performance? Is it scale? Is it resilience? What are the core areas you're addressing when you get into these solutions, that they need and that no one else can deliver?
Anshul Sadana
>> Absolutely. Today, when they try to build something custom, they require a lot of resources. They're unable to keep up because there are so many iterations each hyperscaler has to do. So they're asking us to build something along with them, but it has to be high quality. Do you know the number one problem for AI clusters today? It's actually something plain and stupid: link flaps. One link flaps and the entire cluster comes to a screeching halt. 32,000 GPUs are waiting for that one last GPU to send its message, and quite often, if you cannot reconcile that message, you have to roll back to the previous checkpoint. That could be up to one hour of compute time lost. That's a million dollars of expense for the cloud company. So they're asking us to make it a lot more resilient than what it is.
>> Explain link flaps. People don't know networking. This is where weird things happen. Links go down. Explain link flaps for folks, what that means, and then why it's so problematic.
Anshul Sadana
>> So what happens is you have all these GPUs that are connected with a wire, optical or electrical, connected to the other side, but you have switches. Then they connect to other GPUs. If any of these links goes down for various reasons, that GPU is off the network; it's unreachable. If you're in the middle of a transfer between different GPUs, different nodes, you're just blocked until that message can be sent. So solving that problem is extremely important. In the past, people built networks with five nines, 99.999% uptime. In the cloud, that is not good enough. That's five minutes of an outage every year. You don't see a message from the cloud companies: dear customer, the cloud will be down from 2 AM to 4 AM. They don't do that. You cannot bring down the cloud. So you have to focus on new techniques and different levels of resilience that haven't existed before. We are seeing some of that in photonics. We're seeing redundant lasers being added, as an example. We are seeing electrical cables being hardened. The product we're designing is built to the customer's spec. There's something called signal integrity of the channel, and we have to leave enough margin with that customer's connectors of choice, not an industry-standard product, but whatever that customer is going to use, and make sure it's so resilient that the links generally don't go down.
>> I mean, I think this is a great example of innovation, and you bring up the link flapping, which brings up the whole interconnect. I mentioned that earlier. Tightly coupled systems seem to be a trend. The old way was decoupled. Make them highly cohesive. Now you're kind of going in the direction where they want to have these compact systems with a lot of action going on. Those efficiencies remind me of the '90s, back in the day, where you would have to do things between these subsystems, whether it's memory and processor. How prominent is that now? Is that more of a problem? Is that kind of an area that you solve too? I can see that being a system-thinking kind of decision. What's your reaction to that?
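The two reliability figures in this exchange, five-nines downtime and the cost of one checkpoint rollback, can be checked directly. The dollar figure and rollback time are the ones Sadana cites; the rest is standard availability arithmetic.

```python
# Two numbers from the link-flap discussion, made concrete: what
# "five nines" allows per year, and the cost of one rollback to the
# previous checkpoint. The ~$1M/hour figure is the one quoted above.

MINUTES_PER_YEAR = 365 * 24 * 60

def downtime_minutes(nines: int) -> float:
    """Allowed downtime per year at N nines of availability."""
    unavailability = 10 ** (-nines)   # e.g. 5 nines -> 1e-5
    return MINUTES_PER_YEAR * unavailability

print(f"five nines -> {downtime_minutes(5):.2f} min/year")  # ~5.26 min

# One link flap forces a rollback to the last checkpoint:
rollback_hours = 1.0          # "up to one hour of compute time lost"
cluster_cost_per_hour = 1e6   # "a million dollars" for the cluster
print(f"one flap costs ~${rollback_hours * cluster_cost_per_hour:,.0f}")
```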
Anshul Sadana
>> Absolutely. About a decade ago, the cloud companies were busy disaggregating the stack. They would buy the hardware from somewhere. They'd buy the software from somewhere. They'd integrate it together. Things mostly worked, but they weren't moving fast enough, because each architecture would run for maybe three years, four years; then you do the next one. Now the system has no margin. So when you connect things together, from a cooling standpoint as an example, you don't have enough margin. The GPUs run really hot and there's not enough cooling left in the racks. So within that framework, you make everything work very tightly coupled. The links have less than half a dB of margin left. So typically there's a 42 dB channel, and you have half a dB left. If you are over that, the links will not work. The customer is saying, I can no longer disaggregate. I need to think of everything designed together as one ecosystem. It has to work out of the box. That is where I think integrated systems are back in a big way. You've seen NVIDIA do this. You've seen the cloud companies do this with their own GPU racks. They're developing custom ASICs of their own, their own NICs, their own stack, and they would like to have their own network integrated as well.
>> I love what you're doing. It's exactly what we've been saying on theCUBE and seeing. We've been saying the clustered systems era is coming. I use that word loosely, but some people call it AI factories, super ... whatever they're calling it, they're systems. In the old days it was a server. It did things. It had a design. It had a motherboard, it had components. Now these are large-scale systems tied together, and you've got to do networking.
Anshul Sadana
>> Absolutely. And you're going to see different architectures for training, which is scale-out, or inference, which is scale-up. Scale-out is generally designed today with about 32,000 to maybe 64,000 GPUs interconnected together. Scale-up, where all the GPUs are in one memory domain, requires even more bandwidth. So if you go back to cloud networking for storage, if that is 1X bandwidth, scale-out requires 10X. Scale-up requires a hundred times more bandwidth than where the world was just five years ago.
>> Yeah, we've got a great mixture of experts in our network. Got to say, you're definitely going to be on the list for what we're going to go to for questions on this, but I do want to ask you one more systems question. When you look at those kinds of performance numbers, and you're seeing the vertical or the clustered systems that are going to be bundled together, what is the biggest challenge for folks that are trying to do this design now? Just for a second, let's go to the enterprise, because we're seeing the same pattern in the enterprise, where there are custom needs there too. Now of course, hyperscale is the bigger market. I get that. Hit that first low-hanging fruit and you'll probably be busy for a long time, but it also extends to how they're thinking about systems in the enterprise, which are also running hybrid cloud. What are your thoughts there? Any perspective on how this translates to the JP Morgan Chases of the world, or folks like that who have huge needs for customization?
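The 1X/10X/100X comparison above can be sketched in per-node bandwidth terms. The specific rates below are assumptions roughly consistent with the figures quoted in the interview (100G-class cloud NICs, 800G AI ports); the ratios are the point, not the absolute numbers.

```python
# Rough per-node bandwidth behind the 1X / 10X / 100X comparison.
# Rates are illustrative assumptions, not vendor specifications.

rates_gbps = {
    "cloud storage (baseline)": 100,       # ~100G NIC era
    "AI scale-out (training)": 800,        # "800 gigabit" per GPU
    "AI scale-up (one memory domain)": 10_000,  # ~100X assumption
}

base = rates_gbps["cloud storage (baseline)"]
for name, rate in rates_gbps.items():
    print(f"{name}: {rate} Gb/s (~{rate / base:.0f}X)")
```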
Anshul Sadana
>> I think, first of all, the ecosystem needs to be more efficient, because these apps are super expensive today, because the training models are super expensive today. So over time, this has to come down. Only then do you get mass adoption within the enterprise. As the enterprise develops their stack, you'll see that for large-scale deployments, it's very hard for them to get access to power and cooling and so on at a colo. So most likely they will do their own training with cloud build-outs, but a lot of the inference they'll try to do with their own data in-house. And I think that's where the application world is going to be sort of full of new ideas. It's happening on a daily basis and it's amazing. As a startup, we see a new stack emerging every day on what is possible that required humans in the past. Now you can just automate it.
>> Absolutely. The thing is, you send them to school in the cloud and they come home to reason in the workplace. I mean, that's kind of what you're saying.
Anshul Sadana
>> Correct.
>> I mean, that's where you're going for ... Okay, cool. Let's get into the company. You got $110 million, a great round, good size, not too big, so stay nimble. I'm sure it could be bigger if you threw out the net and got more people in. You've got a good list of investors. How are you going to use the funds? First question. The second question: what's the culture like? And obviously, startups are startups. It's not for the faint of heart, but you're going to be attracting some people. You'll grow your team, I imagine. What is the DNA? Every company has their own DNA, like Moore's Law for Intel. Arista had their own DNA culture. What is the DNA of Nexthop.ai? Is it faster, solve the customer, build the best custom ... Share what the DNA is and how you're going to use the funds.
Anshul Sadana
>> Absolutely. Let me first talk about the culture, because this is extremely important. We have about 100 like-minded engineers and people inside the company. The leadership team all comes with the same philosophy, which is: we are highly collaborative with each other and highly collaborative with the customer, because otherwise you get into this classic problem of "let's build something cool so I can lock in the customer." That's the opposite of what the cloud wants. They want someone to be extremely open with them, and you have to ingrain that culturally into your entire company. The team is growing very well. The funds are needed to hire more talent and build more products, and the amount of work that we foresee needs to be done to make improvements and build great products in this space is infinite. It's coming our way into the industry, so I don't think we can keep up by ourselves, and hence the investors and the partnership we have with them as well.
>> Yeah, you've got the go-to-market, but you have a handful of customers, and the engineering is huge because you've got to continue to keep pace with all the different technologies. I see Ethernet being infused into the substrate. You've got all kinds of new clever engineering tactics and strategies for dealing with the innovation.
Anshul Sadana
>> Absolutely. These cloud customers are looking for a great partnership. And the way we are building our products and keeping pace with them, we have to innovate and give them a next-generation product every year. That is where all our focus will go, not into trying to do sales and marketing, because there are only three to five hyperscalers that we are working with initially, and we have access to all of them; we can reach out to them. We don't need to do big trade shows or other marketing campaigns just yet.
>> All right. Since there's no marketing yet, we'll do a little marketing now. For the folks watching who see the news and want to figure out what you are: are you apple or orange, dog or cat? People want to know which bucket to put you in. Not that I want to pigeonhole you guys yet, but what is Nexthop about? What does it mean? What's your mission? What's your North Star?
Anshul Sadana
>> Absolutely. John, at Nexthop.ai, we want to build great products that make AI more efficient. We talked about the efficiency of the data center and the power and so on, and if we can do that, I think the apps are going to be a lot more affordable to the enterprise and the consumer. That's our mission.
>> Awesome. We're here with the founder of Nexthop.ai, with our Mixture of Experts, as part of theCUBE News. And the NYSE Wired community is an open community revolving around content. I'm John Furrier, your host of theCUBE. Thanks for watching.