Dion Harris, director of accelerated data center go-to-market at NVIDIA, joins theCUBE's Artificial Intelligence Factory series. The discussion focuses on the transformative shift from traditional data centers to AI factories, coinciding with Climate Week in New York. Harris underscores NVIDIA's pivotal role in driving these changes and supporting various industries in addressing global challenges through their technology.
The dialogue reveals the concept of AI factories as revenue-generating entities that synthesize tokens for innovative applications. Harris provides insights into NVIDIA's evolving ecosystem, highlighting the integration of large-scale systems with enterprise needs. The conversation also considers AI's impact on mainstream verticals such as retail and life sciences, offering a perspective on the future of AI as described by analysts at theCUBE Research.
Key takeaways emphasize the importance of AI in environmental modeling and prediction, particularly through NVIDIA's Earth-2 initiative. Harris demonstrates how AI and advanced computing technologies are essential for simulating complex systems and tackling global challenges. Analysts point out the significance of NVIDIA's software and hardware innovations, such as direct liquid cooling, in enhancing performance and efficiency across the AI infrastructure landscape.
Dion Harris, NVIDIA
Sr. Director, HPC, Cloud, and AI Infrastructure GTM, NVIDIA
In this segment from theCUBE + NYSE Wired AI Factories – Data Centers of the Future series, Dion Harris, senior director of HPC & AI infrastructure solutions at NVIDIA, joins theCUBE’s John Furrier to unpack why AI factories are redefining enterprise infrastructure. Harris explains the mental shift from “data centers as cost centers” to AI factories as revenue centers, where the output is tokens powering drug discovery, materials science and domain-specific LLMs. He details how NVIDIA’s stack brings compute, networking and storage together – highlighting Dyna...
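The "output is tokens" framing lends itself to back-of-the-envelope arithmetic. A minimal sketch of AI-factory economics, where every figure (throughput, utilization, token price) is a hypothetical assumption rather than an NVIDIA number:

```python
# Back-of-the-envelope AI-factory economics: if tokens are the output,
# revenue scales with sustained token throughput.
# All numbers below are illustrative assumptions, not vendor figures.

def factory_revenue_per_day(tokens_per_sec: float,
                            utilization: float,
                            price_per_million_tokens: float) -> float:
    """Daily revenue for a cluster serving tokens at a given rate."""
    seconds_per_day = 24 * 60 * 60
    tokens_per_day = tokens_per_sec * utilization * seconds_per_day
    return tokens_per_day / 1_000_000 * price_per_million_tokens

# Hypothetical cluster: 1M tokens/s sustained, 70% utilized,
# $2 per million output tokens.
revenue = factory_revenue_per_day(1_000_000, 0.70, 2.00)
print(f"${revenue:,.0f} per day")
```

The point of the sketch is the shape of the equation, not the numbers: unlike a cost center, every unit of throughput maps directly to top-line output.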
Keep Exploring
- What is the significance of the shift from traditional data centers to AI factories in terms of their role and output?
- What is NVIDIA's approach to integrating solutions into the enterprise performance computing market?
- What are the current trends and developments in AI across various sectors, and how are they impacting sustainability and entrepreneurship?
- What challenges does NVIDIA aim to address with its technologies in understanding and modeling the environment?
- What are the challenges and opportunities associated with the growth of AI and data centers, particularly in relation to energy consumption and grid management?
- What is the significance of the AI factory and its alignment with data processing capabilities?
>> Welcome back, everyone. I'm John Furrier, host of theCUBE here in our New York Stock Exchange Studios with Dave Vellante. We're kicking off our AI factory series where we talk to the leaders who are making the game-changing decisions and technology that's enabling this next wave of generational change that's going to create new software layers, new ways to engage, of course, changing society, and with Climate Week going on here in New York, a lot of big data technology innovators are driving discussions to help solve a lot of the problems. Dion Harris, Senior Director of HPC and AI Infrastructure Solutions with NVIDIA, is here, the company that is leading the change. Actually love the AI factory. I think Jensen Huang said that two GTCs ago, maybe earlier. Dion, thanks for coming on theCUBE. Really appreciate it.
Dion Harris
>> Thanks for having me. Pleasure to be here.>> So I just got to say, when Jensen said AI factory, "I love that term," and then Dell picked it up and they're using it, so he's very much sharing the name. It's kind of a very good concept. But it really kind of speaks to what's in it, data, all kinds of elements. We're seeing the data center be a computer. That's one trend. Data centers together being a bigger computer, scale across that you guys talk about. So you start to see the system game change in the old school data center and even in the hyperscalers, even the recent news of NVIDIA putting a hundred billion into OpenAI, which has a huge application, running a lot of NVIDIA because it goes fast. You start to see that AI native software paradigm come in. And that's not just GPUs. It's software. I mean, I go back 15 years, Jensen, "We're a software company." Of course you are. There's a lot more going on. Liquid cooling is one. We're going to talk a little bit about that. But this change, high performance computing from the old workstation days, where's the compute, go faster inch by inch, and then a few years ago, it just really started to accelerate.
Dion Harris
>> Yeah, so just to talk a little bit about, like you said, that shift that's happening from classic data centers to AI factories, and more than a physical shift, it's actually a mental shift. And what I mean by that is really understanding that these AI factories are revenue centers now. They're not just cost centers that are just driving efficiency gains and productivity gains. They're actually driving revenue. And so when you think about a factory, it produces output. In this case, an AI factory, the output is tokens. And these tokens are used to synthesize new drugs, they're used to discover new materials, they're used to create all sorts of innovative large language models that have different applications, and so understanding that sort of philosophical shift of the underlying technology is really a factory for intelligence, which is expressed through tokens.>> And the other thing that's going on that I want to get your thoughts on is that NVIDIA is very clear, "Here's our roadmap on the supply side, supply chain side." The processor is very transparent. But now you have an ecosystem developing on the other side, I call it a two-sided market for NVIDIA, which is the enterprise. If you go back 10 years ago, there wasn't a lot of ecosystem action. You had gaming. GeForce is well-known. Bitcoin has picked it up. But now the integration of these large-scale systems is really changing the entire landscape, specifically the enterprise, not just high-performance computing market, but the enterprise.
Dion Harris
>> Yeah. Just speaking broadly about ecosystem, I'll just touch on that for a second. We recognize that in order to really build these solutions, deliver them to the entire market, NVIDIA couldn't do it alone. And so we really need to make sure that we invest heavily in making sure that the entire ecosystem is ready and prepared for the solutions that we're building so that they can build the infrastructure that it takes to run those applications. In terms of how it's making its way into the enterprise, you recognize that again, a lot of enterprises, like Procter and Gamble, a lot of enterprises like Nike, their core competency is understanding their customers and their markets. So we want to make sure that as we build solutions that take AI to the market, we want to make it as easy and seamless as possible for them to adopt AI. So that's why we've even been investing in things like NIMs, which are about taking containerized models and making them simple and easy to deploy on the infrastructure, so trying to stratify and simplify as much as possible so that as you roll these into enterprise applications, they're seamless. But then you also have to make sure that the security is there and fits all the data profiles for the enterprise, and so we've invested a lot in safeguards and guardrails within the AI model building and development process that helps ensure that when these models are deployed, they meet the overall governance and security requirements of a lot of the enterprises as well.>> One of the things that I noticed is that people who love AI, one, they love NVIDIA stock, so congratulations. Mainstream retail people are buying the stock. But as the Gen Zs come in too, actually, no one really understands what a data center is. Now, old school folks do. But you look at the transformation of the data center, you say, "Okay, I can see how AI is close to the data. Go on premises. That's clear."
But then you hear Jensen at GTC say, "KV cache is the operating system for the AI factories."
>> Okay, let's put a pause there in a second. That's networking, right? Okay. So what that means is there's a lot going on under the covers. So let's unpack what's in that system, KV cache, liquid cooling. There's a lot of elements and system elements that make the data center feel and act like a supercomputer. It's not like a magic wand. I mean, take us through the importance of that because there's a lot of design involved. There's a lot of long game kind of thinking.
Dion Harris
>> Sure. Well, when you think about concepts like the data center or the cloud, great abstract concepts, but they all run on hardware. There's networking, there's compute, and there's storage. And so when you think about AI, there's really two operations that you're trying to do. There's training and developing the model, and there's inference. When you get to the inference, which is essentially how you extract the value out of AI in terms of using it and deploying it in applications, the key thing about inference is when you understand to deploy it at scale efficiently, you need to make sure you can understand not just the type of model you're running, but obviously the profile of it, like when the model needs to be available, how are the user access patterns affecting the demand on the infrastructure, but then also on a per user basis, you need to make sure that when the person goes to issue their query, the response comes back in a reasonable timeframe. And then you need to do that not for one user, but millions of users. And so that's why when Jensen described our Dynamo software, it's really a software package that orchestrates across all the layers of the infrastructure, across the compute, the networking, and the storage. And in order to do that, you have to have just general awareness and understanding, okay, where this query came in, where does the data exist in terms of being able to service that and generate tokens to support that, but then you also need to understand, okay, how long should I keep it in memory versus move it to storage, and how do I make sure that that transfer of data is seamless? And so that's really, like I said, when you think about NVIDIA, we build GPUs and we build networking, but the software that orchestrates that user experience is really what creates the value. 
And that's why when we describe a lot of these X factors in terms of performance improvements from generation to generation, some of it lies in the hardware and the die and chip design. Some of it lies in the actual networking elements, but a big portion also resides in the software that helps all these systems run faster and more efficiently.>> I think it's notable, obviously we know this, that NVIDIA and OpenAI have a relationship. Their advantage as an application is a direct result of the software, CUDA and the other software they're running, so they get a competitive advantage. I want to take that to kind of a vertical. So as theCUBE, we were covering tech. We know we love infrastructure, so we can cover that all day long. From three in the morning to whatever, we'll talk about AI infrastructure.
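The "keep it in memory versus move it to storage" decision Harris describes for orchestration software like Dynamo can be caricatured with a tiny two-tier cache. This is a toy sketch of the idea, not NVIDIA's actual policy: a bounded hot tier for recently used per-conversation KV state, with least-recently-used entries demoted to a slower tier.

```python
from collections import OrderedDict

# Toy two-tier KV-state cache: a fixed-size "hot" tier (fast memory)
# and an unbounded "cold" tier (slower storage). Least-recently-used
# entries are demoted when the hot tier overflows.
# Illustrative only; real inference schedulers are far more involved.

class KVCacheTier:
    def __init__(self, hot_capacity: int):
        self.hot = OrderedDict()   # fast memory, bounded
        self.cold = {}             # slower storage tier
        self.hot_capacity = hot_capacity

    def access(self, conversation_id: str) -> str:
        """Return which tier served the request, promoting on a cold hit."""
        if conversation_id in self.hot:
            self.hot.move_to_end(conversation_id)   # refresh recency
            return "hot"
        tier = "cold" if conversation_id in self.cold else "miss"
        self.cold.pop(conversation_id, None)
        self.hot[conversation_id] = True            # promote / admit
        if len(self.hot) > self.hot_capacity:       # demote the LRU entry
            evicted, _ = self.hot.popitem(last=False)
            self.cold[evicted] = True
        return tier

cache = KVCacheTier(hot_capacity=2)
print([cache.access(c) for c in ["a", "b", "a", "c", "b"]])
```

The interesting part is the trade each demotion makes: a future request for that conversation pays a storage fetch instead of forcing the factory to recompute its KV state from scratch.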
Dion Harris
>> I'm here for it.>> But you see AI now go mainstream. It's in every vertical. Retail's an AI show now, NRF. You got life sciences, the discoveries that are happening with the AI factories. You're starting to see an enablement there. This week, Climate Week is here as part of the UN, and the streets are pretty much closed in New York, as you found out with the taxis. Climate we cover from a sustainability standpoint because the data center is a sustainability conversation.
Dion Harris
>> Absolutely.>> Sure. But there's also an emergent factor of entrepreneurs working on large-scale problems that were ungettable five years ago because they have data, they can store the data, they can crunch the data, they can actually use NVIDIA to do that because it's available. So you're starting to see tech nerds doing really hard problems. Figure out-
Dion Harris
>> Shout-out to the tech nerds. Right.>> Let's index the entire grid of the atmosphere.
Dion Harris
>> Yes.>> Like what?
Dion Harris
>> Yes.>> Take me through what you're seeing there because you're looking at these solutions, you're seeing this pop up where this market's starting to develop. Why and how and what are some of the things that you're seeing on the progression of the new way?
Dion Harris
>> So first of all, at NVIDIA, we truly believe that if a problem isn't hard, we don't want to expend resources on it. We'll let someone else tackle it. And there's no greater problem or challenge than being able to understand, simulate, predict, and model our environment because things are constantly changing. There's so many physical properties to manage and predict. And so when we look at our core portfolio of technologies, we thought this is a great opportunity to kind of bring those technologies to bear. So we talk about accelerated computing and just being able to simulate and model a lot of the first principles based dynamics that goes into understanding the climate and environment. We talk about AI, which is another new technique which is used to create emulation models that can accelerate and actually make that whole prediction process much more seamless, much more available and sort of efficient in terms of being able to really understand those different dynamics. And then third, we have our digital twin technology, which is about how do you represent complex systems? How do you bring all these different data types together, whether it's observed data through satellites or it's sensor data or it's simulation data? How do you bring all those into one complex system that you can then model and use as a visualization technique that also is being used to share and collaborate across those data sets? And so we have this effort called Earth-2, which is really about taking those core foundational technologies and enabling the tech ecosystem, those tech nerds you talked about, to really harness the power of AI, harness the power of digital twins.>> Explain Earth-2 for us, because I think... So is it a compute center or they have to go procure massive amounts of-
Dion Harris
>> Yeah. So Earth-2 is really sort of a platform shift and we basically, like I said, let's take these core technologies that we've developed and have been working on for a number of years now and expose them to the broader tech ecosystem. And so we do that not just by creating products and selling them, but actually developing AI surrogate models. So we had a model called FourCastNet that we just released that really does AI surrogate modeling for weather forecasting. We had another model that's called CorrDiff, which takes output data, super-resolves it so that you can get much more localized sort of interpretation of that data set. Then we also have another model called Climate in a Bottle, which takes not just forecast data, which is weather, but also looks at climate scale data and does that same sort of hyper-resolution, super-resolution, which makes the data much more actionable for a fraction of the cost.>> So scientists, companies-
Dion Harris
>> Absolutely.... >> entrepreneurs.
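Harris's description of CorrDiff, super-resolving coarse output into a much more localized data set, can be illustrated at its simplest with plain interpolation. To be clear, this is only a stand-in: CorrDiff itself is a generative model that adds physically plausible fine structure, which interpolation alone cannot do. The sketch just shows the coarse-to-fine resolution change.

```python
# Minimal coarse-to-fine sketch: take a coarse weather field and
# produce a finer grid via bilinear interpolation. A toy stand-in
# for super-resolution, shown only to illustrate the resolution step.

def bilinear_upsample(field, factor):
    """Upsample a 2D grid (list of lists) by an integer factor."""
    rows, cols = len(field), len(field[0])
    out = []
    for i in range(rows * factor):
        y = i / factor                       # position in coarse grid
        y0 = min(int(y), rows - 1)
        y1 = min(y0 + 1, rows - 1)
        fy = y - y0
        row = []
        for j in range(cols * factor):
            x = j / factor
            x0 = min(int(x), cols - 1)
            x1 = min(x0 + 1, cols - 1)
            fx = x - x0
            top = field[y0][x0] * (1 - fx) + field[y0][x1] * fx
            bot = field[y1][x0] * (1 - fx) + field[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out

coarse = [[10.0, 20.0],
          [30.0, 40.0]]                 # e.g. temperature on a 2x2 grid
fine = bilinear_upsample(coarse, 2)     # 4x4 finer grid
print(fine[0])
```

Where interpolation can only smooth between the coarse values, a generative super-resolution model is trained to fill in realistic local detail, which is what makes the output actionable at city scale.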
Dion Harris
>> And when you think about the impacts of weather, it's one of those things that goes unnoticed, but every time you take a plane, you need to understand weather patterns and how it's going to be affecting that plane route. Every time you ship a package, you need to understand what's happening in the oceanic atmosphere. Every time you're doing any sort of shipping and logistics, disaster recovery, agriculture, whenever you're understanding how your overall weather conditions are going to be for a given season, that dictates which crops are going to be successful. So we're here at the New York Stock Exchange. That obviously has significant effects on the commodities market. So weather is one of those things that is so ubiquitous and wide-reaching in terms of the impact, we saw it as a huge problem that we could contribute these technologies to help drive some efficiency.>> And that was what I was saying about the instrumentation of data. It's there, but it's not attainable. Right now it is. So talk about the model, because at GTC, I was very fascinated by the physical AI story with NVIDIA because the thesis was, "Hey, the physical world's connecting with digital." We're seeing that play out in financial institutions with crypto and blockchain. Old and new are coming together as one first-party set of data.
Dion Harris
>> Sure.>> The other thing was is that you guys were showing that you have a lot of synthetic data and you're making the data smarter, but you're offering it up.
Dion Harris
>> Yes.>> This is what Earth-2's doing. Is that right?
Dion Harris
>> Yes.>> You guys are adding value and saying, "Hey, we've done some stuff. Put more data in, real data or synthetic data."
Dion Harris
>> Yeah. So Earth-2 is one instantiation of that where we're basically creating these different technology platforms and we're working across the entire global ecosystem with ECMWF, with NOAA. We've been doing a lot of work with the Weather Channel and Weather Company, I'm sorry. So it's really recognizing that this problem by definition is a global scale problem, so going back to the initial point we talked about in terms of ecosystem. We're trying to bring that entire climate-tech ecosystem together to build solutions on that platform that can then sort of advance it. And so making a lot of the data available, like you mentioned, making the models available and open and accessible is a lot of the core functions of Earth-2. But when you talk about getting into the physical AI challenge, that's where, like I said, as we've been talking about AI, most of what we've been describing has been virtual AI, whether it's chat bots, whether it's virtual implementation of AI, but as AI goes into the physical world, it's what we call a three-computer problem, meaning that you have to train the AI to understand and build a model of just the general world itself, but then you also have to create a training ground where AI can learn how to interact with the physical environment, learning things like gravity, learning things like object permanence. All these elements that you learn through interacting with the world, you have to teach the physical AI to do that. And then once you deploy it, there's some edge device, there's some humanoid robot, there's some self-driving car, there's some autonomous medical device arm. Those elements are the other part where AI gets implemented physically, and so that's where we're seeing a ton of opportunity there as well.>> Yeah. Dion, one of the hottest content series we have is AI Infrastructure, which is AI Factories now, here at the NYSE Wired program that we do. The other one is Robotics. 
Robotics is getting a shot in the arm, literally, with AI because now that's a true opportunity to put high performance compute either in the robot, whether it's manufacturing or humanoid, and talk to a bigger cloud or a bigger data center in real time. This is demoed, and you guys are very bullish on this, as we are. We love robotics. We think with open source and synthetic data and simulation, the acceleration for solutions is going to be faster. So the question is, as you're in charge of HPC, high performance computing, which is like supercomputing that's inside the industry, but also AI solutions, as these new solutions come out, what are you seeing as the progression? Is there a pattern around the innovation? Is there a use case they pick up? Do I have to go all in end to end? I interviewed a company that's working with NVIDIA that has an autonomous vehicle L4 from the ground up. NVIDIA's powering the whole thing in one of your chips. It doesn't even talk to the internet. It's basically a rolling AI factory. So you're starting to see, it's not an obvious pattern, but what do you see? Because you have to look at the solutions. I'm sure there's headroom that I'm not seeing.
Dion Harris
>> Well, what I'll say is I think there's massive connectivity between these different approaches and markets that we're looking at in terms of going from virtual to physical AI, and let me just sort of tease this out a little bit. You talk about self-driving cars. First, you need to build the model, and that happens in a data center. You need to basically help your computer understand the rules of the road, how to interact with certain types of obstructions, and that's all a part of just training the core foundational knowledge. Then you need to help simulate that to do it millions of times, because a lot of what you want to do is make sure that you can give it an environment so that they can learn without having to learn on the road. We have lots of regulations that prohibit you from doing that, and that's why we have a platform called Omniverse, which is solely around having digital world, virtual worlds that allow you to create these training grounds. And we actually open-sourced our Cosmos platform, which basically makes that available to a number of different developers in that ecosystem to help train and develop those digital models for physical AI. And then like you said, the end point, which is the car, like building a self-contained system that can have all the redundancy, have all the safety requirements, have all the performance required to execute all these models and all this learning that it's been trained on, that's where NVIDIA really shines because we're able to take our core data center skill set and apply it to the model building, the world building, and then the actual last mile inference in the real time.>> We could do an hour just on that solution.
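The train-simulate-deploy flow Harris lays out for physical AI can be sketched end to end in miniature. Everything below is illustrative: the "vehicle", its single braking parameter, and the random-obstacle environment are invented for the example, and real pipelines use Omniverse-scale simulation rather than a ten-line loop.

```python
import random

# Toy version of the pipeline Harris outlines for physical AI:
# (1) define a policy, (2) refine it across many simulated episodes so
# it never has to learn on the road, (3) deploy the frozen policy.
# Entirely illustrative; not how any real AV stack is trained.

random.seed(0)

def simulate_episode(threshold: float) -> bool:
    """One simulated trial: the 'vehicle' succeeds if its braking
    threshold handles a randomly sampled obstacle distance."""
    obstacle = random.uniform(0.0, 1.0)
    return obstacle > threshold   # success when we brake early enough

def train_in_simulation(episodes: int) -> float:
    """Pick the braking threshold with the best simulated success rate."""
    best, best_rate = 0.5, -1.0
    for threshold in [i / 10 for i in range(1, 10)]:
        wins = sum(simulate_episode(threshold) for _ in range(episodes))
        rate = wins / episodes
        if rate > best_rate:
            best, best_rate = threshold, rate
    return best

# Stage 2 at toy scale: a real system runs millions of episodes.
policy = train_in_simulation(1000)
print("deployed threshold:", policy)
```

The structural point survives the simplification: the expensive search happens in the data center, against a simulator, and only the finished policy ships to the edge device.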
Dion Harris
>> For sure. For sure.>> It's so fast, and you guys are making it so much faster. Huge fan. Everyone kind of knows that. I want to get into something that we talked about before we came on camera. You have a background in the grid energy, and then also software has been a key component for NVIDIA. So sustainability software knobs to manage sustainability, is there a way to manage that on the fly with maybe some AI? But also, the energy side's bounding the problem. So power interconnect, your systems are built, you got InfiniBand here, you've got scale up, scale out, now scale across, different use cases, but together they work as a system. Talk about the power component and specifically the grid. Is there going to be work there? How do you see that happening? And then how do I manage my sustainability equation?
Dion Harris
>> Well, it's interesting. When you talk about AI, it represents both a challenge and an opportunity, a challenge in that it requires a lot of energy as we're building out the data centers, and you think about all the growth that's being projected in terms of CapEx spend around the data center, and therefore all the power required to support that, that's the challenge. But the opportunity, in fact, there was a recent report that we published earlier this week with CSIS... I mispronounced that at first... that will help us really understand how different energy generation and utility grid transmission and management organizations are going to harness the power of AI to help be a part of the solution. And so when you think about what happens, and actually I spent five years at the regulated utility in California, it's called PG&E, you recognize that there are significant challenges in terms of, one, bringing new load to the grid, and there's a reason why that is, because you have to make sure that you can maintain the reliability and the security of the grid. So you want to be somewhat cautious in the absence of data and tools that help you simulate and model the grid in a more sophisticated way, with the granularity to ensure that you can ramp production on and off for data centers while bringing that load on, but also maintaining that reliability. So there's a ton of solutions that are being brought to bear. There's a company called Utilidata that we've been working with that instruments smart meters, that instruments the transformers themselves, to really understand the load at a granular level so that you can more granularly monitor and manage that overall network performance. There's also companies like Emerald AI, which is looking at how do you manage the efficiency of the data center itself in terms of getting power in and out. 
And so we've been very sort of hands-on in working with that broader ecosystem, not just of what's happening in the data center, but we call it sort of from chip to grid, not just looking at the chips, but looking at how it impacts the downstream power and cooling within the data center, but also in terms of managing and operating the grid itself.>> Well, it's really good you guys have that software because at the UN Climate Week, the big conversation is the future of the planet. Obviously-
Dion Harris
>> That's right.... >> data centers are a big part of it, but AI can help too, right? So yes, it's AI for the planet, planet for AI kind of thing going on. So congratulations on that. I'm glad you guys are doing what you're doing. I guess my final question for you is, okay, what are you optimizing for? Your job is to put AI solutions out there. It's what everyone wants.
Dion Harris
>> Sure.>> High-performance computing. To me, that's just the data center. That's AI factories. Check.
Dion Harris
>> Absolutely.>> AI factories is good. I like that direction. But a lot of people are really wanting to intersect with this. What's your goals? Share some of your objectives for the next year. What do you hope to accomplish?
Dion Harris
>> Well, we have a very ambitious vision for what AI can deliver. And in order to do that, we have things that we can control that we need to really deliver on, which is all the products that we're bringing to market, all the incredible innovation we're doing at the core chip level, at the networking level that we talked about, being able to use our scale up rack architecture like NVLink, scale across and using Spectrum-X and our other capabilities there. So we have a very ambitious roadmap that we have to execute on. So that's sort of first and foremost. And we think in doing that, it'll deliver more performance, more efficiency for your power-constrained data centers, and that will help sort of ease that burden. But then the other responsibility I think that we feel as being sort of the stewards of this sort of AI movement is making sure the entire ecosystem is ready, and then making sure that we are engaging with not just the data center builders and designers, but also all of the MEP partners who are building mechanical, electrical and plumbing solutions to outfit it, to make sure that as we create our new rack architectures, that their bus bars are ready, that their power whips can support the power density, that their CDUs have the proper pipe diameter to support all the fluids that's required to cool a lot of these systems, and so understanding that we have a unique perspective in that we see the models that are coming. We're working with all the advanced model developers like OpenAI and Mistral and others, and we also understand the products. Obviously we're building them, so that gives us unique advantage to help drive a lot of those downstream requirements. And so I think that's really our core objective is to make sure that the entire ecosystem is ready and therefore we can do it in the most responsible and efficient way possible.>> Awesome. I do want to touch on one more thing, if you don't mind.
Dion Harris
>> Sure.>> Because we didn't get to the liquid cooling conversation, something that you're close to.
Dion Harris
>> Sure.>> Very important piece of this-
Dion Harris
>> Yes.... >> is the power and the cooling.
Dion Harris
>> Yes.>> Just quick highlight on liquid cooling and the importance of it. What's the state of the art for NVIDIA? What's in it for the customer? How are they using it? What's the design criteria requirements?
Dion Harris
>> So I always like to start out by first describing what's happening with the application. First, we understand what's happening with AI models. We've always known that large training projects require data center scale systems, but what we're also seeing now is that inference workloads are no longer run on a single GPU, or even a single server. You have multi-server, multi-rack deployments of inference. So to the extent that we can get these compute engines to talk to each other better, that drives more performance and efficiency. If you start with that as the key requirement, we look at our architecture, start with a clean sheet of paper, and say, "Okay, what do we need to do?" First, we need to stay on copper as long as possible, because you lose energy every time you convert from electrical to optical. That's where our scale-up rack architecture comes into play: we have what we call NVLink, which allows you to scale to over 72 GPUs today and beyond, so that is a requirement. Then the next downstream effect of that is: how are you going to cool it? Typically, data centers have been optimized around 20-kilowatt racks, whereas our current Blackwell architecture is anywhere from 120 to 130 kilowatts in a single rack. If you're going to drive that compute density, liquid cooling becomes really important, right? So that's the way we think about it: first, what does the application require? How can you meet the requirements and serve that application efficiently? Then, what does that mean for our architecture? As a result, we've landed on direct liquid cooling as a key technology to enable that.
And that's why we're working with all of the major CDU builders and providers like Vertiv, like Schneider Electric, CoolIT, Cooler Master, you name it, because we recognize that that core technology needs to be there in order to deliver the performance and the efficiency that we're trying to get to.>> And Dion, what's great is that the solutions and the AI native applications are coming out fast. You guys are enabling that.
Dion Harris
>> For sure.>> Keep working at it and we love what you guys do. Love the company. Again, watching the success of NVIDIA is like watching the changing of the guard in the computing industry because the enablement is massive.
Dion Harris
>> Sure.>> So again, all verticals, all industries, not just apps and tech, all industries.
Dion Harris
>> It's been a fun ride. And like I said, I often say that NVIDIA has been a 25-year overnight success. When CUDA arrived in 2006, it had no obvious value, but we understood that general-purpose computing would be a thing for GPUs, and now we're seeing all the different use cases. Gaming was first, it extended into a lot of the scientific applications, AI came in 2012, and now we kind of see this->> And that multiplier now applies to all neural networks.
Dion Harris
>> Exactly.>> Again, this is a beautiful thing because that's called the long game.
Dion Harris
>> Exactly.>> So you guys have played... This video actually circulating. I just saw some great videos of Jensen going back years and years ago.
Dion Harris
>> Yes.>> A lot younger. And you actually have the same message. It hasn't really changed.
Dion Harris
>> It hasn't changed one bit. I mean, it all started with the advent of accelerated computing, putting a GPU with a CPU, and recognizing that parallel programming was going to be the way forward for all types of applications, whether scientific applications or database processing. AI came along and became a killer app. Video processing is an obvious one. So we really approach each domain the same way. We basically say, "Let's see how we can add value for developers by giving them software and solutions that make it easy to adopt accelerated computing," and once you find a developer community that really sees value in it, that drives the growth.>> And I think the AI factory and the physical AI vision and mission align with that too. It's not like it's a special version for retail.
Dion Harris
>> Exactly.>> Data's data.
Dion Harris
>> Yeah. Absolutely.>> You can't run an AI factory without a data factory.
Dion Harris
>> Yes. Yes. And so, I mean, I think that's one of the things, like I said: the fact that we have such a fungible platform that can be used to serve databases, to run AI, to run CFD simulations, that really lies in our software. All the work we've invested in our CUDA acceleration libraries, that is really the magic of the platform. In other words, you can't just hand someone a GPU and a network connection and say, "Go run your quantum circuit simulation.">> It's software. Software is awesome with AI.
Dion Harris
>> That's it.>> Yeah. Thank you for coming out.
Dion Harris
>> Thank you so much, John.>> Another hour with NVIDIA here inside theCUBE. Again, the physical AI factory equation is bringing the physical and digital worlds together: tokens are the output of the factory; software gets smarter, faster and cheaper to write; spend more money, make more money, Jensen's law, we love that. Of course, we're doing our part. I'm John Furrier with Dave Vellante here at our NYSE Studios, part of the NYSE Wired community, and of course, theCUBE. Thanks for watching.