In this theCUBE + NYSE Wired segment from “AI Factories – Data Centers of the Future,” Nebius co-founder and CBO Roman Chernin sits down with theCUBE’s John Furrier at the New York Stock Exchange to unpack how AI factories are reshaping enterprise infrastructure and the future of data centers. Chernin outlines Nebius’ two-track strategy: a multi-tenant cloud built for developer experience and managed services, and large-scale, mostly bare-metal deployments for hyperscalers and AI labs. He discusses the significance of Nebius’ Microsoft deal (described as “up to $20B” and set to become one of the largest single-site GB300 deployments) as both an engineering milestone and a way to feed scale and cash flow back into the core cloud business. The conversation explores why enterprises want “the baby of supercomputer in the cloud,” marrying cloud flexibility with supercomputing efficiency to minimize time-to-value without sacrificing performance.
Chernin details Nebius’ specialization in AI-centric workloads (large distributed training and inference at scale), a platform roadmap that moves beyond infrastructure into inference, fine-tuning and reinforcement learning as services, and a commitment to helping customers build on open-source models for control, cost and data leverage. He traces customer waves from foundational model builders to vertical AI companies and tech-forward enterprises, noting early traction with firms like Shopify and momentum in regulated sectors such as healthcare following Nebius’ compliance milestones. With roots in Yandex’s large-scale engineering culture and meaningful exposure to ClickHouse, Chernin also weighs in on the economics of AI-scale infrastructure (power and capacity as gating factors), hybrid orchestration and sovereignty, and why latency priorities vary by use case – from reasoning models to voice agents – as AI factories become the new unit of value in modern enterprise compute.
Jim Fowler & Ryan Asdourian, Lumen
In this interview from theCUBE + NYSE Wired: AI Factories - Data Centers of the Future event, Lumen’s Jim Fowler, chief technology officer, and Ryan Asdourian, chief marketing and strategy officer, join theCUBE’s John Furrier to discuss how the company is building the "trusted network for AI." Fowler and Asdourian highlight Lumen’s massive infrastructure footprint, which includes plans to add 40 million miles of fiber by 2028 to support the rapid expansion of artificial intelligence. They explore the company’s transition into a growth-oriented technology entity.
John Furrier
>> Welcome back. I'm John Furrier, host of theCUBE. We're here at theCUBE's NYSE Studios. Of course, we have our Palo Alto Studio connecting Silicon Valley and Wall Street, bringing tech to where technology is the market. And here on theCUBE are Lumen executives, out here for their investor day: Jim Fowler, Chief Technology Officer, and Ryan Asdourian, Chief Marketing and Strategy Officer of Lumen. Thanks for coming on theCUBE.
Ryan Asdourian
>> Thanks for having us.
John Furrier
>> You guys had a big investor day, obviously listed on the NYSE, so thanks for coming in. Our interest on this AI factory series is we love it and it's only going to get more distributed. And networking is the key, so you guys are in the kind of pole position for this AI network convergence. How'd this play out at investor day? Take us through what happened yesterday, what were some of the results?
Ryan Asdourian
>> Yeah. Well, everything's changing for our customers, and we're going through the transformation you've seen at Lumen, the growth company we're becoming. We're building a programmable network on top of the incredible physical assets that we're building. This is really about how we, as technology infrastructure, make sure customers have what they need in this AI era, as the trusted network for AI, because we are building the pieces and the building blocks that are helping customers get what they need. They need high bandwidth, they need low latency. The needs have changed in the cloud 2.0 world, and we talked with our investors a lot about how we're serving that and how we're delivering for our customers.
John Furrier
>> You guys have the keys to the kingdom and I think on the edge. I want you guys to just share the stats just on the network footprint, the fiber, kind of the edge piece that's feeding the backhaul. Share some stats. People might not know the magnitude of Lumen's footprint. There's a point to all this.
Jim Fowler
>> John, when you look at our network kind of across the US, we've got a little under 50% of the fiber backbone of the US. We've got about 17 million miles of fiber in the ground today. Between now and the end of '28, we'll put another 40 million miles of fiber in the ground to be able to support all the growth that we see coming at us from an artificial intelligence perspective.
John Furrier
>> And connectivity has been the key to success on AI factories. So I want to talk about how you guys see the vision of that footprint because I think that translates. There's a lot of convergence going on at the edge with wireless, wireline, backhaul, but ultimately we need energy and we need bandwidth, low latency, and I love software AI in there too.
Ryan Asdourian
>> Of course. If you really think about what the world has been building and started building and this growth that we're talking about, it's the supply side for the AI economy. We're building all of that. And you think about the largest hyperscalers, the largest social networks. We just this week introduced and talked about a partnership with Anthropic as well. We're making sure that we've got the supply side built, and that's one piece of it. The other side is how enterprises are demanding all this and they're adopting it and that's the demand side. We see that growing and we also see the data. We know from a trend for years and years, data grows exponentially. And we're in the business of making sure that data gets to all the places that it needs to be quickly, securely, and effortlessly.
John Furrier
>> Jim, one of the things that we talk about in theCUBE a lot is I love the frontier models. They crawl the internet. Every word's been crawled. But when you talk about edge, new information, Ryan was just talking about all that data coming through, that's new data, has not yet even been trained. So it's like first ingestion, it's like that's a huge opportunity. This is where factories might play a role at the edge. Your thoughts on that.
Jim Fowler
>> John, you've been beating this drum for a while, so I'm not telling you anything you don't know. When you look at the amount of data that's being created, not only for training the models but, more importantly, for what we see going forward from an inference perspective, you really need a network that's scaling for that. So what we've been doing is building what we call the trusted network for AI. That starts intercity. How do you get 400 gig, 800 gig, 1.6 terabit circuits to be able to move those large chunks of data city to city and inside metro areas? In 65 metro areas we're trying to get 400 gig, 800 gig, 1.6 terabit all the way out to the hyperscalers and all the way out to the edge of the data centers. Why? Because we know, to your point, that that data's going to exist at the edge: moving data around, using it as inference into the models, driving business processes and systems. I came out of 30 years of working in enterprise, and it was for this reason. I see so clearly the problems that enterprises are going to face at the edge.
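To put those circuit sizes in perspective, here is a back-of-the-envelope sketch of how long it takes to move a large dataset at the link speeds Fowler mentions. The one-petabyte dataset size is a hypothetical chosen for illustration, not a figure from the interview, and the numbers assume an ideal, fully utilized link with no protocol overhead:

```python
def transfer_time_hours(num_bytes, link_gbps):
    """Ideal time to move num_bytes over a link of link_gbps gigabits/second.

    Assumes full link utilization and no protocol overhead, so real
    transfers will take longer.
    """
    bits = num_bytes * 8
    seconds = bits / (link_gbps * 1e9)
    return seconds / 3600

PETABYTE = 1e15  # bytes; hypothetical dataset size for illustration

for gbps in (400, 800, 1600):
    hours = transfer_time_hours(PETABYTE, gbps)
    print(f"{gbps:>4} Gb/s link: {hours:.2f} hours per petabyte")
# →  400 Gb/s link: 5.56 hours per petabyte
# →  800 Gb/s link: 2.78 hours per petabyte
# → 1600 Gb/s link: 1.39 hours per petabyte
```

Even at 1.6 Tb/s, a petabyte takes over an hour to move, which illustrates why circuit size, not just compute, gates how quickly data can feed GPUs at the edge.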
John Furrier
>> Ryan, one of the perceptions people have of networking is, hey, they just move a packet from point A to point B. When you think about the value of what AI, when you bring AI into the network or even to the edge or any place, you're going to bring intelligence. What does that unlock from an application standpoint when you bring intelligence? Let's say an Nvidia super factory or a Dell factory, plug it into a footprint size of a telephone closet back in the old days, plug in some really good power where all these access points are that you guys geared up on. What does that unlock when you bring intelligence?
Ryan Asdourian
>> Yeah. Well, the first thing is great data in makes great data out in AI. And you talked about the value. When we think about the value, the equation has changed for customers and they have to move to where the GPUs are. We have to move where the power is. We have to ensure that our customers on every side of the equation can unlock that value by having more control than they've ever had before. What we've done is we've really given this programmable network the capability of saying you now have more control than you've ever had as a customer, and so you can help make sure you're being the most cost-efficient with your data, you're having total control of that data. And scaling it up and down, that control has never existed before either. You have customers that need to turn up gigabit circuits in minutes. That used to take weeks and months, and now we're doing it in minutes and hours. And that has completely changed the way that value equation is changing for customers.
John Furrier
>> Jim, it's AI for networking and networking for AI is the slogan. We've been saying that in some of our conversations. If you have an architectural change, okay, you're talking about a device edge has intelligence to it, what's architecturally different? Because that's not your yesterday's network architecture. It's a bigger piece of a distributed computing architecture. How do you think about that?
Jim Fowler
>> Yeah. When people used to buy networking, it was kind of like plumbing. They would buy it, they would install it, and they would forget about it and never change it. That's not how that works from an AI perspective. Our customers want to be able to change the size of the pipes, move where the pipes go back and forth. So the programmable piece is the first part. And the way that we make that possible is with something called a fabric port. Our fabric port gets pushed to the edge. We're actually extending the Lumen network out to the customer's location to give them a full software capability where they can change the way the configurations work, the size of the pipes, how they route, where they route. They can change it throughout the day. They can think about loads that are happening overnight. But really giving them full control. Architecturally the way we do that is extending that fabric port out to the edge.
John Furrier
>> What's the benefits to business?
Ryan Asdourian
>> I was just going to say yesterday's networks, you got what you got. Today, what we've done is we've made today's network the customer's network. We're giving them access and control in a way that they've never had so they can do what they need with that network. That's a fundamental shift in the way networking plays a part in the AI economy.
John Furrier
>> You guys agree, AI factories are coming to the edge.
Jim Fowler
>> I'd say they're already there in many cases.
Ryan Asdourian
>> Yes, absolutely.
John Furrier
>> Okay. So what does it mean if someone says, "What does it mean, John, when you say you're bringing AI to networks?" What does that actually mean? So if you look at Nvidia, they're bringing a lot of AI in, they're very transparent about their roadmap. But once you have that AI, that changes what networks are. How would you guys describe what is AI when you bring it to networking?
Jim Fowler
>> The biggest issue that people talk about is compute, and I actually don't think that's the biggest issue. I think the biggest issue is getting these huge amounts of data from point A to point B at the edge for the processing to happen. I was talking to a peer of mine recently in a large company inside the US that's doing their own training of their models. And the problem that they're having is their GPUs are sitting idle because the circuits that exist between where the data is and where the processing happens are too small. So we talk about AI at the edge. I think the biggest problem that we have to help our customers overcome is get the pipes that are big enough to be able to get the data where it needs to be at the edge to do the processing.
John Furrier
>> So the bottleneck's the latency between the data moving into the processing. They're waiting.>> And now fast-forward to starting to use those models to run business processes. If you're thinking about things like telesurgery or you're thinking about customer call center automation, you need to be at five milliseconds of latency, no greater than 10 milliseconds, and the majority of the way networking has been implemented won't do that.
John Furrier
>> Talk about the precision. And one of the things that's coming up with these AI factories, whether it's edge or central or even cyber resilience, precision is a word that comes up a lot, precision rollbacks, precision detection. When you start getting the ability to slice and dice the salami any way you want, you get basically differentiated services. What are some of those services? Because it seems to be moving up the stack. For instance, I might not want to bring all that data in because I know my model is a private model. Maybe I bring a small amount of data. So I think agents might play a role. You're kind of smirking. What...>> There's two parts I'll talk about and I'm sure Ryan will add in. I think you're right, agents. So only bringing the data you need and making sure that you've got clarity on how that happens. Being able to segment your networks up to be able to send the data that you want through the streams you want. But here's the other thing that we see happening in this space. People are really thinking about how do they go cloud to cloud in a very different way. A lot of the way enterprise communications happens today is frankly over the internet. You go from east to west, sometimes you're doing that over the open internet. That's not deterministic enough for what we're talking about. So what we're trying to build is a multi-cloud gateway that gives you a deterministic private network that goes between the cloud providers, between your data and your enterprise, out to your edge so that it is deterministic and you actually know that you're going to get the consistency and precision that you need.
John Furrier
>> Not to chime in, but I want to just ask you to comment on the security piece because if you go through the internet, that's another security hop.>> Absolutely.
John Furrier
>> So security's baked in.>> Big part of what attracted me to Lumen is we have an entity called Black Lotus Labs, and that's all Black Lotus Labs does every day is think about how can we protect that private network connectivity that we believe enterprises deserve and need going forward. It is a huge part of our strategy going forward.
Ryan Asdourian
>> When we say the trusted network for AI, that trusted piece really plays a big part of that, because security is absolutely table stakes. I also just wanted to add, when you talk about customers moving data around different places, whether it's the edge, on-prem, in the data center, or cloud to cloud, we talk about it as universal and ubiquitous. We need to make sure that no matter what type of in-between transport you have, we support that. And we have to make sure that you can do that across anything that you're driving. That's what universal and ubiquitous really brings: anywhere to anywhere.
John Furrier
>> And I think the network changing is such a radical thing, because Nvidia's success came from great interconnect between systems for maximum supercomputing performance. Okay, that concept's kind of oversimplified computer science, but now you say, okay, that's supercomputing. What happens at the edge? Can I have a mini factory? So the thesis is, what is the networking paradigm for the edge: peer-to-peer on wireless, backhaul through policy-based routes? I mean, take me through that factory coming to the edge.
Jim Fowler
>> We believe it's a mesh. We believe that for this to work, it really needs to be a cross-carrier mesh network that's opened up to the edge to be able to do that compute, because what they need is the fastest path to get the supply chain of data moving in the way that they want. Sometimes that's going to be over our fiber network, sometimes that might be over another carrier's network, and sometimes that might be within a private network that they already own. But what has to happen is they shouldn't have to think about it. It should be programmable, it should be software-oriented, and they should be able to find the least-cost, best route for the activity that they're doing.
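The least-cost routing across a cross-carrier mesh that Fowler describes can be sketched with a standard shortest-path computation over a link graph. Everything below, including the node names, link costs, and the idea of blending price and latency into a single cost, is an illustrative assumption, not Lumen's actual implementation:

```python
import heapq

def least_cost_path(mesh, src, dst):
    """Dijkstra's shortest path over a mesh of links with arbitrary costs.

    mesh maps each node to {neighbor: cost}; cost can blend price,
    latency, or any policy metric. Returns (path, total_cost).
    """
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, cost in mesh.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(pq, (nd, nbr))
    # Walk predecessors back from the destination to rebuild the path.
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]

# Hypothetical mesh: links may belong to different carriers or to the
# customer's own private network, each with a policy-weighted cost.
mesh = {
    "factory":      {"carrierA_pop": 2.0, "private_pop": 1.0},
    "carrierA_pop": {"edge_dc": 3.0},
    "private_pop":  {"carrierB_pop": 1.5},
    "carrierB_pop": {"edge_dc": 1.0},
}
path, cost = least_cost_path(mesh, "factory", "edge_dc")
print(path, cost)
# → ['factory', 'private_pop', 'carrierB_pop', 'edge_dc'] 3.5
```

Here the cheaper route crosses the customer's private network and a second carrier rather than taking the direct single-carrier path, which is exactly the kind of decision Fowler says should be programmable rather than something the customer has to think about.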
John Furrier
>> So the fabric is the beachhead for Lumen, basically an intelligent gateway to manage all the intelligence.
Ryan Asdourian
>> You said it well. It's bringing the best of the physics of the optical world together with the best of the digital world. Those two together, that's where the really optimized cloud 2.0 network gets created.
John Furrier
>> Ryan, you guys have a wireless, wireline business because you guys are in enterprise with 5G and everything that's going on. What is the unlock for enterprise customers as they have requirements certainly for security? But now that you have AI coming into the fold, agents talk to other companies, that's a big trend on the agent world. So how do you guys see the customer base evolving? What's the unlock for them?
Ryan Asdourian
>> Yeah. One of the things we want to do is enable choice. And so as we think about the backbone that we provide, we don't want customers to have to think about all the different pieces Jim talked about, like the cross-connects. What we want to ensure they have is resiliency and redundancy; it has to work every single time. And that's what we're building. We're building that backbone that they can count on and they can trust.
John Furrier
>> I'm going to be at MWC, formerly called Mobile World Congress, next week. The theme there is going to be what does the telecom infrastructure provide and the carriers do in the age of AI. They have the power, they have footprints, they have connectivity. I mean, you don't have to be a rocket scientist to figure out that's the gold mine, value extraction. They got the data, they got the energy, they got the footprint.
Ryan Asdourian
>> Well, let's talk about technology infrastructure. Because when we build this, what we're also doing is we're partnering with a number of technology companies so that you can get all of those best in class, whether it is recovery systems, whether it is any sort of hardware and software coming together, whether you think about the hyperscaler technology or beyond, and security plays a giant piece in this. You have to be able to get the customer all of those technologies, and our infrastructure is helping bring those to the customers faster, easier. We want to reduce the friction it takes for customers to be successful.
John Furrier
>> You guys are great. I love the infrastructure angle. Final question: what is the modern era of networking? If someone asks you the question, if all this stuff plays out, which I think it will, the dots are connecting, what is the network? I mean, what is the value of the network in the modern era? What's different? What's changed?
Ryan Asdourian
>> So I'll take a first shot at it. From my perspective, the modern era of the network is about a data supply chain. It's about being able to get data where it needs to be from a compute and a processing perspective. And that data supply chain needs to be wing to wing. Too many companies are either focused on just the north-south premise to cloud or the cloud to cloud. Very few companies are looking at the entire logistical supply chain of data and I think that's what the future of telecom looks like.
John Furrier
>> Ryan, closing word.
Ryan Asdourian
>> I agree, and I think that when you think about cloud 2.0 and what has changed, we've got to make sure we're supporting our customers so they can adopt AI as fast as possible. We're putting that baseline there. Everyone talks about power and chips and compute and cooling. If you don't connect all of that with this 10x growth in data centers that we're going to see, then they're not as powerful as you want them to be. We have to make sure that we get all of that information to our customers: they get data in on the supply side, they can get it out on the demand side, and we go from there.
John Furrier
>> I mean, you guys are building a collection of small data centers. And the size of a data center can be shrunk down to essentially a DGX box with Nvidia.>> And the great part is it's actually something we've already built and we've been building off of.
Ryan Asdourian
>> Yeah.
John Furrier
>> You guys are bringing the future. Congratulations on the great investor day. I think if you squint through the transition, we already crossed the threshold.
Ryan Asdourian
>> Growth.>> Growth.
John Furrier
>> We're on the AI era. A lot of growth. Thanks for coming in and sharing. Appreciate it.
Ryan Asdourian
>> Thanks so much.
Jim Fowler
>> Thanks for having us.
John Furrier
>> All right. I'm John Furrier, host of theCUBE. AI factories are coming to the edge. It's going to unlock some unfathomable use cases. A lot of intelligence is coming to manage the hard part, the network; then the AI-native apps all kick in, and business and consumer services are going to converge. That's our vision of the future. Thanks for watching.