In this AI Factories – Data Centers of the Future segment, theCUBE’s John Furrier and Dave Vellante sit down with Arthur Lewis, president of Infrastructure Solutions at Dell Technologies, to unpack how AI factories are redefining enterprise infrastructure. Lewis explains why “it’s not cloud vs. on-prem – it’s data gravity,” noting that over 83% of the world’s data sits on-prem and ~50% of a traditional data center’s data is “dark.” He describes the shift to modern, AI-ready estates where data moves to hot/warm tiers and the data center operates “like one big computer” spanning compute, acceleration, networking and storage. With 3,000 enterprise customers and 6,700 more in the pipeline, Dell sees the Fortune 4000 leaning in (especially finance, healthcare and manufacturing) citing examples like CSX improving efficiency and reducing risk, and Hudson River Trading advancing quantitative research. The conversation frames AI factories as the new unit of value for enterprise scale, productivity and time-to-impact.
Lewis details Dell’s engineering playbook and growth trajectory, including ISG’s long-term revenue framework expanding from 3–5% to 6–8% and now 11–14% over four years. He outlines CTO “pods” (storage, server, network, thermal and data center architects) that co-design rack-as-a-computer systems – down to busbars, power/capacitor shelves, liquid-cooling manifolds, rear-door heat exchangers and quick-disconnects. Dell has deployed 100,000 GPUs in weeks (27,000 nodes, 1,600 racks, 6,000 switches) using Dell-badged teams and was first to ship 100kW and 250kW racks. On storage and software, Lewis highlights private cloud with multi-hypervisor flexibility, evolving beyond HCI via the Dell Automation Platform and the AI Data Platform stack with PowerScale/ObjectScale plus Lightning, Dynamo and a Dell data lakehouse – alongside cyber resilience. He contrasts greenfield CSP builds with brownfield enterprise constraints (air vs. liquid cooling) and shares Dell’s five-step enterprise motion: use-case/ROI, model selection (open-weight, one-shot inference, long-thinking autoregressive), data, architecture and finally infrastructure – keeping the AI Factory malleable to each customer’s needs.
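The hot/warm/dark tiering Lewis describes can be illustrated as a simple access-recency policy. This is a minimal sketch under stated assumptions: the tier thresholds and the `classify_tier` helper are invented for illustration, not anything Dell publishes.

```python
from datetime import datetime, timedelta

# Illustrative thresholds (assumptions, not an actual tiering policy):
# data touched within a day is "hot", within a month "warm", within a
# year "cold"; anything older is effectively "dark".
TIERS = [
    (timedelta(days=1), "hot"),
    (timedelta(days=30), "warm"),
    (timedelta(days=365), "cold"),
]

def classify_tier(last_access: datetime, now: datetime) -> str:
    """Map a dataset's last-access time to a storage tier."""
    age = now - last_access
    for threshold, tier in TIERS:
        if age <= threshold:
            return tier
    return "dark"

now = datetime(2025, 1, 1)
print(classify_tier(now - timedelta(hours=2), now))   # hot
print(classify_tier(now - timedelta(days=400), now))  # dark
```

A real tiering engine would also weigh access frequency, cost per terabyte and compliance requirements; recency alone is just the simplest proxy for the "constantly in circulation" behavior described above.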
Arthur Lewis, Dell Technologies
>> Hello, I'm John Furrier, your host of theCUBE, with Dave Vellante, here at our NYSE CUBE Studios on the East Coast. Of course, we've got our Palo Alto and Boston studios, our centers of excellence, as well. Of course, we cover all the events like Dell Tech World, and here we have in the studio Arthur Lewis, President of Infrastructure Solutions at Dell Technologies. Arthur, great to see you. Thanks for coming on.
Arthur Lewis
>> Thanks for having me.
Dave Vellante
>> Great to have you here face to face.
Arthur Lewis
>> Thank you, thank you.

John Furrier

>> Yeah, face to face in our new studio here. Always on.
Arthur Lewis
>> Amazing.

John Furrier

>> Yeah, thanks for coming in. Our series, AI Factories, is very popular right now, mainly because the large hyperscalers are building out at massive scale, and you guys are doing a lot of business there. But the enterprise right now sees the AI revolution coming and is acting on the infrastructure, because that's enabling the next wave, which is agents, software innovation and open source. This is driving massive growth. How are you seeing that? The on-premise piece specifically, with the data gravity, is driving it, and it's not so much about cloud versus on-prem, it's just the data gravity. What's your perspective?
Arthur Lewis
>> A lot to unpack in that question. We start with the fundamental premise that over 83% of the world's data sits on-prem, and there's very strong gravity to that data. We also start with the premise that the data centers of today are not going to be the data centers of tomorrow. What do I mean by that? There's going to be a significant modernization that organizations are going to have to go through, which requires a cultural mindset change, to operate their business in a different way and truly take advantage of AI. If you think about how traditional data centers are set up, 50% of a traditional data center's data is actually dark. Another big percentage is sitting in cold backup. You can envision a world where all of this data is observable and active, sitting in hot and warm tiers, constantly in circulation, feeding the engines and agents that will get built out over time. At the end of the day, and we were talking about this before, if you zoom out, the data center is like one big computer that has processing capabilities, acceleration capabilities, networking capabilities and storage capabilities, all in the spirit of refining your data into premium, high-grade fuel to feed the AI engines that are going to permeate the data centers of the future.

John Furrier

>> The AI Factory concept, the metaphor: my first reaction to it, first, I love the name, everyone knows I love the name AI Factories, but the first image in my mind was big factories, the big data centers. You think about the $20 billion data centers being built, the neoclouds. But actually, when you look at where the activity is coming in the market, it's the Fortune 4000. It's, I won't say the medium-sized enterprise, but your classic enterprise. It's the IT world where Dell has the position. You guys are seeing lift there, and I know Dave has some of the numbers, but that is the category. What's going on with that market?
Because that's where the innovation's going to give them the ability to punch way above their pay grade in terms of productivity, coding, and speed.
Arthur Lewis
>> Yeah. We have 3,000 enterprise customers today, and we have 6,700 enterprise customers in our opportunity pipeline, so call it about 10,000. What's really interesting here, and I think there's a form of self-selection going on: customers that have a fail-fast mentality are going to lead and have a competitive advantage. Organizations that are thinking, "Well, I'm going to let someone else figure it out," given the pace at which we're evolving the technology, are going to be left behind in perpetuity. They're going to be at a competitive disadvantage and may not even be here 10, 15, 20 years from now. So where do we see the biggest adoption? We see it in really data-intense industries: finance, healthcare and manufacturing leading the way, just by sheer numbers. But when you talk about what organizations are doing, it is incredibly inspiring. Take the example of CSX. They're trying to drive operational efficiencies and reduce risk. Typically, you do one and sacrifice the other, but now they're using AI to do both: improve efficiency and reduce risk. You look at what Hudson River Trading is doing to further AI innovation in quantitative trading; it's amazing. There are several other examples in healthcare. When you listen to the type of acceleration that's going to happen in research, in areas where we have struggled to make progress over the last several decades, all of this is going to advance at light speed over the next several years, all because of AI.
Dave Vellante
>> We're just coming off the securities analyst meeting. It was a packed house. Obviously a lot of interest in ISG, the group that you run. You guys have... You updated your long-term framework, and once again, updated revenue guidance, particularly in your group. If I understand correctly, you went from 3 to 5%, to 6 to 8%, and now you're at 11 to 14% over the last, what is it, four or five years?
Arthur Lewis
>> Four years, yes.
Dave Vellante
>> Yeah. So what's driving that growth? Obviously AI, but can you be more specific as to what you've seen?
Arthur Lewis
>> Yeah. Again, if you unpack it, artificial intelligence is more than AI servers. It is a fundamental, revolutionary technology that's going to modernize how customers take advantage of their most valuable asset, which is their data. In order to do so, you have to have a fully integrated system, which includes the compute, the network, the storage and the software: a full-stack solution. So when you think about our long-term growth framework, the way to think about it is, well, infrastructure is cool again. Servers and storage are going to grow at a premium to market, and then AI will also grow at a premium to market. We look at those three vectors and say, "Hey, four years ago we were in the 3 to 5% range," and that was before the 9680, "then we upped it to 6 to 8%, and now we've upped it to 11 to 14%."
So yeah, we feel really good all across the portfolio. Whether it's traditional servers, AI servers, the storage part of the house, we feel really good about our opportunity.
Dave Vellante
>> And you've spoken publicly about some of the big wins you've had, obviously xAI, CoreWeave and others, and the financial analysts were asking like, "Why are you winning?" And it was interesting because Jeff said, "Well, everybody shows up." It's big dollars, so it's super competitive. I wonder if you could talk about just the engineering that goes into what you guys are doing to build your AI Factories, how you're doing it really at hyper speed, and how you're able to deliver that and make sure that it actually works, I think you said 99% of the time. Explain that differentiation.
Arthur Lewis
>> Yeah. We made some decisions a couple of years ago, and you know in this business, the success of today is largely based on decisions you made years ago. In our space, it's very hard to make a decision today and have an impact a week from now. So back in the winter of '22, as we were looking at AI, we realized that this was going to be an engineer-to-engineer-led conversation, because the technology was evolving so quickly it could do things that nobody really understood. And so we created what Jeff calls pods within our CTO organization: a storage architect, a server architect, a network architect, a thermal architect and a data center architect. These skill sets combine to be able to consult with and provide customers with design optionality. Because people think, "Well, it's a reference design. How hard is it to put together?" Well, it's very hard, because as you think about the rack as a computer, you think about a scalable unit as eight, and then you think about the cluster, how you design the rack itself becomes a critical component. How do you think about the busbar? How do you think about the power shelves, the capacitor shelves, the liquid cooling design, the manifold, the rear door heat exchanger, the quick disconnects? There's a lot of value and optionality that we bring to customers. There are insights into, "How should we think about this technology versus the next technology? Should we wait?" There's a lot of effort that goes into the upfront design work with customers. Once we get that design locked, customers understand that we have very high quality expectations, and our ability to deploy and integrate is better than most. Jeff talked about the fact that we've deployed 100,000 GPUs in weeks. That's a lot. That's 27,000 nodes, 1,600 racks, 6,000 switches. That's a lot of stuff to plug in. I see your computer there.
If you have a problem and this thing blue-screens, you know where the problem is. But if you have a 100,000-GPU cluster and there's a problem somewhere in 20,000-plus miles of cabling, how do you diagnose that? How do you root-cause it? So our ability to actually deploy and integrate separates us from the competition, and then it's our ongoing support as well. These are all resources, by the way, that we deploy that are Dell-badged. A lot of others use contractors, and again, you can't train people fast enough who are not in your company to do God's work in this space. These people need to be Dell-badged and Dell-trained.

John Furrier

>> What were some of the decisions you made years ago that you look at now and say, "That was a good bet"? Obviously, one of the famous quotes we have repeated on theCUBE in the past two years is, "It feels like the '90s again." Memory is important. You see the role of memory and storage and networking, databases, data, fabrics. What are some of the key things you guys decided on that are paying off for you at the AI Factory?
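As a quick sanity check, the deployment figures Lewis quotes above (100,000 GPUs, 27,000 nodes, 1,600 racks, 6,000 switches) imply some rough averages. The arithmetic below is illustrative only; real clusters mix node and rack configurations, so these ratios are blended averages, not a bill of materials.

```python
# Back-of-envelope ratios implied by the deployment figures quoted
# in the interview. Derived averages only, not actual Dell configs.
gpus, nodes, racks, switches = 100_000, 27_000, 1_600, 6_000

gpus_per_node = gpus / nodes          # ~3.7, suggesting a mix of node types
nodes_per_rack = nodes / racks        # ~16.9 nodes in an average rack
switches_per_rack = switches / racks  # ~3.75 switches per rack

print(f"{gpus_per_node:.1f} GPUs/node, {nodes_per_rack:.1f} nodes/rack, "
      f"{switches_per_rack:.2f} switches/rack")
```

The fractional GPUs-per-node figure is a reminder that these are fleet-wide averages: a real deployment would blend, say, 4-GPU and 8-GPU nodes with CPU-only management nodes.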
Arthur Lewis
>> Man, there was a lot. Let me start with storage, because one of the things was, "Hey, as we think about what a modern data center looks like, we need to simplify the portfolio and simplify our focus," and we said, "Look, there are going to be three things that customers really care about."
There's this concept around private cloud. Traditional workloads are going to be around for a very long time, and in fact, AI generates a lot of work that needs to be processed in a traditional-workload kind of way, so private cloud becomes incredibly important. A lot of customers are evolving to multi-hypervisor environments. These workloads now span VMs, containers and bare metal. Customers want flexibility; they don't want lock-in. They liked HCI from a simplicity perspective, but now HCI doesn't work, because it doesn't scale the compute and the storage independently. We've taken that legacy of HCI and VxRail and built something really cool in the Dell Automation Platform, which really is a super snazzy way to build out private clouds quickly and effectively, with really good TCO. Then we streamlined and focused on our AI Data Platform with PowerScale and ObjectScale, building Lightning, Dynamo and the Dell Data Lakehouse on top of that, and then the cyber resilience story. So we knew we needed to clean up the storage story to fit the AI Factory that we wanted to build. We built out these pods. We also knew that this was going to move from node-level to rack-level design, because remember, we started selling 9680s, but we knew early on where this was headed. So we started investing in engineering capabilities to build out these very large clusters. Again, we were the first to ship a 100-kilowatt rack, the first to ship a 250-kilowatt rack, and hopefully we ride that wave and keep it going.
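Lewis's point that HCI "doesn't scale the compute and the storage independently" can be made concrete with a toy model. The node sizes below are hypothetical, chosen only to show how a coupled node type forces overprovisioning on storage-heavy workloads.

```python
import math

def nodes_needed_hci(cpu_req, storage_req, cpu_per_node=64, tb_per_node=50):
    # HCI-style: one node type carries both compute and storage, so
    # you must buy enough nodes to satisfy the more demanding dimension.
    return max(math.ceil(cpu_req / cpu_per_node),
               math.ceil(storage_req / tb_per_node))

def nodes_needed_disaggregated(cpu_req, storage_req,
                               cpu_per_node=64, tb_per_node=100):
    # Disaggregated: compute nodes and storage nodes scale independently.
    return (math.ceil(cpu_req / cpu_per_node),
            math.ceil(storage_req / tb_per_node))

# A storage-heavy workload: modest compute (128 cores), 2,000 TB capacity.
print(nodes_needed_hci(128, 2_000))            # 40 nodes (storage-bound)
print(nodes_needed_disaggregated(128, 2_000))  # (2, 20)
```

In the storage-heavy case, the coupled model buys 40 full nodes, and their mostly idle CPUs, just to reach capacity, while the disaggregated model buys 2 compute nodes plus 20 storage nodes. That gap is the scaling argument Lewis is making, sketched with made-up numbers.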
Dave Vellante
>> And every part of the stack is changing in the infrastructure. Everybody, of course, talks about the GPUs, but you guys are talking about storage and networking. The storage piece: you've got Project Lightning, you've got Dynamo, so you've got the disaggregated piece coming online. You've got the KV cache, John's favorite little element in the whole stack.

John Furrier

>> The operating system of the AI Factory.
Dave Vellante
>> That's coming on. So those will be increasingly optimized for AI, that's Dell IP. And then in addition, you mentioned you've got the cyber resiliency piece as well. All those coming together as a system, and the implication is that drives, first of all, networking. You've gone from third place or fourth place to wow, now it's fundamental, it's bundled into the system, it's crucial. So you're now, overnight, a key networking player. You've always been a key storage player. So all that is uplifting your division. How should we think about that going forward?
Arthur Lewis
>> Look, I mean, our story is not going to really change. I mean, we have a very strong portfolio that's focused on modernizing core business applications as well as helping customers think about emerging AI workloads. Our view is that over time, the line is going to blur in between the two, and we want the AI Factory to be malleable to support the very specific needs of the customer, but we feel like we have all of the building blocks that we need to support customers for the foreseeable future.
Dave Vellante
>> When you think about those 3,000 customers, those enterprise customers, and the 6,700 in the pipeline, I presume they're much different than xAI or CoreWeave.
Arthur Lewis
>> Very.
Dave Vellante
>> What are the salient differences? What do they need to do to prepare? I mean, obviously there's the data piece. Is liquid cooling infrastructure a key part of that, or are they leaning toward a hybrid because I know you guys provide hybrids, or an air-cooled? What's that look like?
Arthur Lewis
>> Yeah. The environment we're dealing with is very much brownfield, whereas with some of the Tier 2 CSPs it's a lot of greenfield opportunities, so it's built to suit. The enterprise has constraints, and you can almost see a dichotomy where the Tier 2 CSPs and the enterprise diverge between liquid-cooled and air-cooled solutions. The enterprise may not care about the density, because they can't put 100 kilowatts or 200 kilowatts per rack. So they're going to go down the air-cooled path, but they're looking for the more complex system that includes the network, the storage and the data ingest. The typical conversation with a customer is actually pretty basic. It's, "Hey, Dell, help me think about use case and ROI. I've heard all of these crazy numbers. In which use cases do you see ROI?" Then it's, "Hey, Dell, help me with model selection," because there are so many foundational models out there: "How do I think about that? Tell me what an open-weight model looks like. What's a one-shot inference model versus a long-thinking autoregressive model?" Only then do you get into, "Help me with my data." Then you get into an architecture conversation. Then you get into an infrastructure conversation. What's really cool about that five-part conversation is that, typically, we would get brought in on prong number five. They've made a decision on architecture, they know what they want, they put an RFP out, and we go and bid. Now we are way up the chain in terms of helping customers think about their strategy, which positions us really well in this world of AI.

John Furrier

>> Arthur, thank you so much for coming on theCUBE. Thanks for sharing and unpacking, and congratulations on the growth numbers and the revised forecasts. Looking good.
Arthur Lewis
>> Thanks, guys, and congratulations. This is a great spot.>> Thanks.
Dave Vellante
>> Thank you. Great having you in studio.>> All right.
Arthur Lewis
>> Thanks, Dave.

John Furrier

>> I'm John Furrier with Dave Vellante for our AI Factory Series, where we unpack and explore the future of computing, the future of the modern era that generative AI is enabling. Thanks for watching.