Ramin Hasani, co-founder and Chief Executive Officer of Liquid AI, discusses groundbreaking advancements in data centers and artificial intelligence technology. This conversation is part of theCUBE's NYSE Wired series, hosted by John Furrier, co-founder and co-CEO of SiliconANGLE Media. Liquid AI's pioneering work in neural networks and edge technologies highlights the evolving landscape of AI factories shaping future data centers.
In this episode, Ramin Hasani shares insights on Liquid AI's origins and mission, exploring the company's liquid neural networks, born from research at MIT's Computer Science and Artificial Intelligence Lab. Hasani and his fellow co-founders, including notable figures from MIT, are developing adaptable AI systems that deliver powerful intelligence in compact forms, aimed at revolutionizing edge devices and robotics. The conversation also touches on the work of theCUBE Research analysts, who examine the company's contributions to AI advancement.
Key takeaways from the discussion include the importance of adaptability and efficiency in AI systems, as emphasized by Hasani. He highlights the capability of small foundation models called nanos, which can match the performance of larger models while offering low-latency, privacy-sensitive computing. The conversation also addresses future architectures in which hybrid solutions will bridge on-device intelligence with cloud-based systems, in line with insights from industry experts and analysts.
Ramin Hasani, Liquid AI
Clips from this interview:
Transforming AI: Ramin Hasani of Liquid AI on Revolutionary Liquid Neural Networks for Edge Devices
Transforming AI: How Algorithmic Innovation Cuts Computation Costs and Unleashes Generative Capabilities in Language, Vision, and Audio
Liquid AI's aim to become foundational software for physical AI applications
Emphasis on the trend of decentralized AI processing at the edge
Insights on improving AI training and inference capabilities in real environments
In this conversation from theCUBE + NYSE Wired: AI Factories – Data Centers of the Future, Liquid AI co-founder and Chief Executive Officer Ramin Hasani joins theCUBE's John Furrier to unpack how efficient, adaptable AI is reshaping the edge, and why that matters for next-gen enterprise infrastructure. Hasani traces Liquid AI's roots out of MIT CSAIL and explains "liquid neural networks," a biologically inspired class of models designed for adaptability after learning. The discussion explores why small, capable models at the edge complement hyperscale training.
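For the technically curious: the "liquid" behavior comes from neurons whose effective time constants shift with their inputs, so the dynamics keep adapting after training. Below is a minimal, illustrative sketch of one liquid time-constant (LTC) neuron update, following the general form published by Hasani and colleagues; the forward-Euler solver, the scalar state, and all constants here are simplifying assumptions for intuition, not Liquid AI's production code.

```python
import numpy as np

def ltc_step(x, I, dt, tau, w, b, A):
    """One forward-Euler step of a liquid time-constant (LTC) neuron.

    dx/dt = -[1/tau + f(x, I)] * x + f(x, I) * A
    The effective time constant, tau / (1 + tau * f(x, I)), shifts with the
    input, which is what makes the dynamics 'liquid' (input-adaptive).
    """
    f = 1.0 / (1.0 + np.exp(-(w[0] * x + w[1] * I + b)))  # gating nonlinearity
    dx = -(1.0 / tau + f) * x + f * A
    return x + dt * dx

# Drive one neuron with a step input and watch its state adapt.
x, tau, w, b, A = 0.0, 1.0, np.array([0.5, 1.5]), -1.0, 2.0
for t in range(100):
    I = 0.0 if t < 50 else 1.0          # input steps up halfway through
    x = ltc_step(x, I, dt=0.05, tau=tau, w=w, b=b, A=A)
print(f"final state: {x:.3f}")
```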
Keep Exploring
What is the origin story of this startup and what is its focus?
What advancements were made in utilizing small technologies for AI, and how might these technologies be scaled to understand various human communication modalities?
What is the most important focus for individuals or companies at the present moment regarding technological advancements in AI?
What are some potential ways to deploy AI at a large scale while ensuring accessibility and privacy, particularly in environments with limited internet access?
What are the key factors for developing adaptive computation systems in the future?
>> Hello, I'm John Furrier, host of theCUBE, here in theCUBE's NYSE studios in New York. Of course, we have Palo Alto connecting Silicon Valley, and Wall Street is part of our NYSE Wired and theCUBE program and community, always featuring the entrepreneurs making it happen. We've got Ramin Hasani, co-founder and CEO of Liquid AI, a hot startup that's really hitting a nerve in the neural network scene, but really setting the table for what the edge will look like. And as innovation hits the scene, you're going to see a lot more innovation around algorithms and software dealing with models. Ramin, thanks for coming on theCUBE. Appreciate it.
Ramin Hasani
>> Thanks for having me.>> So at NYSE Wired, one of the things we've been focused on is a community approach to what the experts are doing, what's happening in the scene. You guys jump off the page because we covered you on siliconangle.com, our site. But also, you're hitting an area where we see the puck going. And that is, as the centralized resources like hyperscalers and neoclouds are spending all the CapEx, obviously for training, you start to see the conversation shift. In fact, I was having a public debate with another analyst around large language models and small language models. It was kind of like a cage match: "Large language models are the only thing that matters." I'm like, "Well, small's good too, but you can slice and dice."
But really, what it means is that as those centralized, key resources get built out, the enterprise, real business, has to operate with devices at the edge. So, we see a trend toward the edge, which points to some of the things you're working on. So, let's get into it. What do you guys do? How were you formed? Tell us the origination story, how it all came together, and why this focus?
Ramin Hasani
>> Absolutely. So, we are a two-and-a-half-year-old startup. We spun out of MIT, where we had been working at the Computer Science and Artificial Intelligence Lab. There were four co-founders, including Professor Daniela Rus, who is the director of CSAIL at MIT. For a decade we focused on how we can bring intelligence into confined or small spaces, which is basically devices. For us at MIT, the device was robotics. So, when we were doing our research, we were thinking about how we can get inspiration from nature and physics to build a completely new algorithm that doesn't sacrifice the quality of intelligence when we are controlling robots, but actually brings the computation significantly lower, so that we can have the best of intelligence on devices. This was my PhD thesis, which I defended in 2020; it started in Vienna, was completed at MIT, and the research continued at MIT. We invented a new class of neural networks. We call them liquid neural networks, liquid for flexibility. They were inspired, as I mentioned, by biology, so that the systems stay adaptable even after learning. If you have adaptable computation on the edge, not continual learning systems, but more adaptable ones, that means they can hold more knowledge and compress more knowledge into confined spaces. So, that was the whole thing. We started controlling robots: flying drones, self-driving cars, and also fixed-wing vehicles. Then we started looking into how we can do predictive machine learning. This was part of our research. We dug into the fundamental properties of this technology that make it better than what came before in vision, intelligence and machine learning. We made a bunch of breakthroughs and came up with ideas about how to take these small technologies further. For example, we showed that with a handful of neurons, 19 neurons, you can drive a car. That was very, very small, and it became a Nature Machine Intelligence paper. Then, building on top of what we had done, we thought: imagine if you scale this technology. Can we evolve these systems to understand language, vision and audio, the data modalities humans communicate in? So, this realm of generative AI. With that in mind, we started building efficient and very capable general-purpose AI systems at every scale, designing from first principles for the form of compute. It matters where the intelligence goes, and that was the thesis of Liquid AI. Yeah. Two and a half years ago we started in Cambridge, Massachusetts, our first office, and our second US office was in San Francisco. We are 70 people, and we have an international subsidiary in Japan, in Tokyo. So, we have been serving->> So you're a distributed team, basically....
Ramin Hasani
>> Yes.>> And you're global.
Ramin Hasani
>> Yes.>> And so, the breakthrough obviously was, you came in from the robotics angle, and MIT started killing it on robotics, which we've been covering too. But take me through the mindset of when that breakthrough happened, because the world's like, "Large language models." Again, that debate I was mentioning. Most of the general public, even informed people, would be like, "Oh, large is everything."
Ramin Hasani
>> Yes.>> Small is just as important because now you have smaller devices, but also as the compute and XPUs and some of the hardware configurations and software mature, and look at what CUDA's done, we think that's going to move to the edge too. You're going to have the ability for these small factories, these AI devices, to be like mini factories. It's distributed computing. But robotics and agents kind of go together too. So, you've got the agent wave, and then that leads to the physical AI world, right? So, these are the trends we're seeing.
Ramin Hasani
>> Absolutely.>> Today agents are hyped up, but AI infrastructure, that's where the action is. Agents come in next and then ultimately physical AI. You agree with that?
Ramin Hasani
>> 100%.>> All right. So, if you believe that, then what's next? What's the most important thing right now that people should pay attention to?
Ramin Hasani
>> In fact, when we started the company, my vision was to become the software layer for physical AI. Imagine you have a ubiquitous type of software while the NVIDIAs and AMDs of the world are building the hardware landscape of physical AI, where you can power agentic AI and bring robotics and AI systems all around us, right? So, what we thought is: can we bring a technology that is general-purpose enough, efficient enough and basically ubiquitous enough that we can put it on any hardware? Users come in, they define a use case, and with that use case and the place where they want to host this intelligence, be it a coffee machine, a mobile phone, a laptop, a satellite or anywhere outside of a data center, we should be able to host that. So, becoming the software layer of physical AI. That's what the thesis was.>> Yeah.
Ramin Hasani
>> And we are seeing, obviously, that scale enabled this new realm of intelligence and the attention to artificial intelligence, because we've been working on artificial intelligence for the last 70 years.>> The road to superintelligence is coming fast, right?
Ramin Hasani
>> Right. Yes. And then, the thing we started realizing is that solving intelligence, solving the general intelligence problem, is a multidimensional problem. It's not just scale. What are you scaling? It's a matter of scaling data, scaling algorithms and scaling methods. So many things go into this process. When we started designing these generally intelligent systems, we started from scratch. What should the architecture be? These AI systems got really good because we figured out an architecture called transformers, right? Everything is built on top of transformers. Transformers are brilliant. Why? Because they're one structure. They're matrix multiplications, and you can build matrices from millions of parameters to trillions of parameters. It's just matrix multiplication, and that enables many different form factors. That is the beauty of transformers. But on the other side, they also consume a lot of energy, right? The more information an AI system processes, the more computation you need, quadratically, actually. What we thought is: can we match the expressivity and simplicity of transformers while looking into all the other mathematical operators that are available? We didn't want to hand-tweak our way to an alternative architecture. We wanted to do it systematically. We built an AI that designs AI. In house, we designed an AI system to explore all the mathematical functions that are available. And we told the AI, "Look, I want you to design an algorithm that can understand language, vision and audio, but at the same time run on a Qualcomm chip, an AMD chip, an NVIDIA chip." Basically, define those criteria and achieve the highest performance. Then we let the AI search through the space and find the ultimate architecture that enables that kind of result. Surprisingly, we found that there are smaller instances of models that can perform as well as the larger instances. This algorithmic approach allowed us not to just blindly scale these models, but to really figure out what performance is fundamentally achievable at a given capacity.>> And with a certain intelligence level.
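As a rough, back-of-the-envelope illustration of the quadratic-versus-linear point Hasani is making, the sketch below compares how per-layer compute grows with context length for standard self-attention versus a linear-time recurrent alternative. The cost formulas are standard simplifications for intuition, not measurements of Liquid AI's models.

```python
# Rough per-layer cost comparison (multiply-accumulates), assuming hidden
# size d and context length n. Self-attention scores cost ~n^2 * d, while a
# recurrent/state-space style layer costs ~n * d^2 (one state update per token).
def attention_cost(n: int, d: int) -> int:
    return n * n * d          # pairwise token interactions

def recurrent_cost(n: int, d: int) -> int:
    return n * d * d          # one fixed-size state update per token

d = 1024
for n in (1_000, 10_000, 100_000):
    ratio = attention_cost(n, d) / recurrent_cost(n, d)
    print(f"context {n:>7}: attention/recurrent cost ratio = {ratio:,.1f}x")
# Past n = d tokens, attention's quadratic term dominates; the longer the
# context, the bigger the win for linear-time architectures.
```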
Ramin Hasani
>> Correct.>> And having that kind of intelligence at scale, you've done both. It's interesting, you mentioned scaling. I've given some talks with Bill Tai; Brian Bauman and I have talked about matrix multiplication, I think with Bill Tai. It's just matrices, like in high school. This is where the GPUs come in, and I think that's where you see NVIDIA get the wins here, so that's one. In the old computer science days when you built software, the expression was garbage in, garbage out. The same can apply to scale. You can scale garbage too, right?
Ramin Hasani
>> Absolutely.>> So, scaling is not the one thing. So, I want to ask you about the scaling laws involved and some of the things you guys do. And the other thing I want to ask you is, last year at NVIDIA GTC in San Jose, Jensen Huang was on stage and said, "KV cache is the operating system for AI factories." And I'm like, well, that's not an operating system, that's networking. So, networking is the new OS, and that's basically what he said. And we believe that too. So, networking as a fundamental OS. That's counterintuitive if you're a CS person, but not really if you think about it. So how do managing models at scale, big and small, and the role of networking play into Liquid's impact, and into how people want to design hardware?
Ramin Hasani
>> Absolutely. Well, high-bandwidth memory, that's the thing that is actually the holy grail right now. Memory is extremely important, as you think about it, and the speed of communication for long-context memory is, again, the crucial component of any future intelligence system you want to build. So obviously, Jensen's talking about that kind of memory.>> More tokens, maybe. Spend more, save more, or make more.
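For readers unfamiliar with the KV cache John references, here is a minimal decode-time sketch of the idea: cache each generated token's key and value so past tokens never need recomputing, at the cost of memory that grows with context length, which is why high-bandwidth memory matters. The single-head setup and shapes are simplifying assumptions.

```python
import numpy as np

d = 64                      # head dimension
K_cache = np.empty((0, d))  # keys for all previously decoded tokens
V_cache = np.empty((0, d))

def decode_step(q, k, v):
    """Append this token's key/value, then attend over the whole cache.
    Caching avoids recomputing K/V for past tokens, at the price of memory
    that grows with context length -- hence the premium on fast, large memory."""
    global K_cache, V_cache
    K_cache = np.vstack([K_cache, k])
    V_cache = np.vstack([V_cache, v])
    scores = K_cache @ q / np.sqrt(d)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V_cache

for _ in range(5):  # five decode steps with random projections
    out = decode_step(np.random.randn(d), np.random.randn(d), np.random.randn(d))
print("context length cached:", K_cache.shape[0])
```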
Ramin Hasani
>> Yeah. It's brilliant.>> Call that Jensen's law. Tongue in cheek, but it's true though. You may need more tokens.
Ramin Hasani
>> Absolutely. And then, you can be smart about tokens and the consumption of tokens as well. So, what we thought is: all right, you need memory. Let's think deeply about the computation that goes into memory at larger scale. The challenge is always scale, and as I mentioned, scale has multiple dimensions. One scale is parameters, the number of parameters. Another is the scale of data you can process at inference, at test time. The bottleneck is always the algorithm that becomes radically expensive. Can we reduce this cost? When you saw the DeepSeek moment and all these other things, they were all about efficiency.>> Yeah, innovation. That was actually innovative.
Ramin Hasani
>> Absolutely, because we really want to reduce the cost. Why? Because at the end of the day, what we want to enable is reliable, long-context memory, interaction with the data the user has, and satisfying a data job a person has to perform much more efficiently. And we built Liquid AI on the foundations of efficiency. In fact, I would say efficiency is our DNA, because we're thinking about algorithms across the stack, and when I say stack, I mean the stack of foundation model development. When we build models, the economics of Liquid is that the more information you want to process, the cheaper it gets. Very similar to how Jensen does it.>> Would it be safe to say that Liquid AI is AI for AI? Or how would you describe it?
Ramin Hasani
>> I would say a foundation model for devices. Let's say you want to bring intelligent systems, ChatGPT-like experiences, multimodal systems that perform very reliably with high performance on specialized applications, and you want to serve them immediately across your entire fleet of cars. Imagine you're an automotive company and you want to build in-car intelligence. You cannot rely on the cloud. You have to have intelligence onboard. What is onboard? On a car, there is usually a small little computer. So, you want to be able to bring all these amazing generative AI experiences, as good as the ChatGPTs of the world, onto that car.>> Yeah.
Ramin Hasani
>> This is where Liquid comes in. Imagine you want to power Apple Intelligence, the device portion, or Samsung intelligence, the same thing. You want to have smart TVs, smart devices in your home. Imagine places where you don't have access to the internet, places where privacy becomes very, very concerning, and you want immediate access to intelligence. You're talking about sovereign AI. Imagine you want to deploy AI at global scale, say for all the citizens of your country. You want to enable AI, but you don't want to pay billions of dollars first. What you want to do is bring it in top down. Everybody has access to a mobile phone, a tablet or whatever. And we can run Liquid foundation models on Raspberry Pis. So, if you can run them on Raspberry Pis, that means you can bring intelligence on the edge for everybody.>> And that changes the whole equation. By the way, DeepSeek, one of the things I like about DeepSeek is it's a telltale sign, an indicator that you can use old-school stuff. I mean, they basically were-
Ramin Hasani
>> Yeah.... >> reverse engineering CUDA, and they didn't have access to the big GPUs, so they used the old ones. But it speaks to the bubble problem, because there's no bubble if you can reuse H100s, or GB200s, whatever's out there can be reused. I want to ask about architecture because you're bringing up some really cool points that we're watching. If you go back, say, 10 years ago, or even eight years ago, the cloud was great. Cloud computing and SaaS, and the cloud was viewed in all technical circles as a horizontally scalable system.
Ramin Hasani
>> Yes.>> You don't have to buy a data center; you go to the cloud and use higher-level services. It's horizontally scalable. If you look at what's going on now with what you guys are doing and what we were just talking about, the network's now the horizontal scale and you're vertically integrating into the cloud. Do you believe that to be the new architecture? Because that flips the script: now you can say, "Hey, I'm going to horizontally scale across devices," and then vertically integrate into the cloud for other stuff, whether that's Azure, AWS, Google or Oracle, whoever.
Ramin Hasani
>> Absolutely.>> You believe that?
Ramin Hasani
>> I think so, because if you think about it, there's always a circular story about going to the cloud and then coming back to devices. And I think this story is on repeat right now. So, I feel like we are entering the phase where, eventually, the world is going to be a hybrid solution.>> Yeah, distributed computing.
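To make the hybrid picture concrete, here is a minimal routing sketch: a hypothetical policy (the thresholds, request flags and model names are illustrative assumptions, not anything Liquid AI has described) that keeps privacy- and latency-sensitive requests on-device and escalates the rest to a cloud model.

```python
from dataclasses import dataclass

@dataclass
class Request:
    prompt: str
    contains_private_data: bool   # e.g., PII detected upstream
    latency_budget_ms: int        # how fast the caller needs an answer

def route(req: Request) -> str:
    """Hypothetical edge/cloud router: on-device when privacy or latency
    demands it, cloud when the task can afford the round trip."""
    if req.contains_private_data or req.latency_budget_ms < 100:
        return "on-device-small-model"   # placeholder name
    return "cloud-frontier-model"        # placeholder name

print(route(Request("summarize my medical record", True, 2000)))       # on-device
print(route(Request("draft a market research report", False, 60000)))  # cloud
```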
Ramin Hasani
>> Exactly. And I think we are getting there. The edge is improving a lot, and computers, as you saw with the little machines NVIDIA announced, and your Mac Studios, are amazing. And you see that.>> I mean, you're already seeing it. You've got the Thor silicon from NVIDIA, the Prometheus chip from Amazon. These point to the fact that the devices are going to be terrestrial and in space. This is the new edge. What's your vision on that? Because if the devices get faster, there'll probably be new hardware reference implementations. That means what we know today as a device, a wifi access point, will change. We all have phones; we're going to have pins or whatever they're calling these devices.
Ramin Hasani
>> Yeah.>> We're going to have things-
Ramin Hasani
>> Glasses.... >> Glasses, I mean, personal area networks are coming, so retail's impacted, healthcare will be impacted. There's the inbound data coming off the edge; right now inference is the hot topic, but no one's talking about training. I mean, that's new data coming in, and most of the AI factory or physical AI work is synthetic data. So, how do you view that? How does your mind think about that? Because the possibility is an intelligent edge, true intelligence, not like the industrial manufacturing use case, which is great by the way. If an intelligent edge comes, you need training and inference, because you need AI to figure out what to do on that device.
Ramin Hasani
>> Absolutely.>> So, what's your thought on that?
Ramin Hasani
>> So, think about it like this. We're talking about training and inference; the common denominator of the future is adaptability. You want adaptive computation on devices. If you can enable something like that, that's great. This is where we got started. With liquid neural networks, the idea was: can we have continuously adaptable systems that are much more resilient to the input data they receive, and able to perform in novel environments or on novel data the same way they perform on the training data they have seen? So, this feeds very much into what we're building for the future. There are innovations across software and innovations across hardware required to enable that adaptive form of computation. What we believe is that co-development between the two, software-hardware co-development, is absolutely essential. We're working very closely with AMD, and we're working with all the other providers, the Sonys and Qualcomms of the world. With everyone we talk to, the ideas are basically about enabling that kind of future. And I see the future of computation, the future of chips, going in the direction of hosting more adaptive forms of computation. These adaptive forms of computation are, if you think about it, basically being able to train on an incoming series of data as you perform the forward computation. So, you can perform backward computation, or recursive self-improvement, performing those kinds of operations, and that is the future.>> And that's real data by the way, too. That's not synthetic data. That's going to have huge value.
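As a toy illustration of "training on incoming data as you perform the forward computation," here is a minimal online-learning sketch in PyTorch. The single-layer model and the choice of one gradient step per incoming sample are simplifying assumptions for illustration, not Liquid AI's method.

```python
import torch

# Tiny regression model standing in for an on-device network.
model = torch.nn.Linear(4, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = torch.nn.MSELoss()

def process_stream_sample(x: torch.Tensor, y: torch.Tensor) -> float:
    """Serve a prediction (forward pass), then adapt on the same sample
    (one backward pass), so the model keeps learning from real edge data."""
    pred = model(x)              # inference result the device would act on
    loss = loss_fn(pred, y)      # feedback signal observed after the fact
    opt.zero_grad()
    loss.backward()              # lightweight on-device update
    opt.step()
    return loss.item()

# Simulated stream whose underlying target the model adapts toward.
true_w = torch.tensor([[0.5, -1.0, 2.0, 0.1]])
for step in range(200):
    x = torch.randn(1, 4)
    y = x @ true_w.T
    loss = process_stream_sample(x, y)
print(f"loss after adaptation: {loss:.4f}")
```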
Ramin Hasani
>> Yes.>> All right. Ramin, you guys are really in a good spot. I like what you guys are doing. It matches the direction we see too. Let's zoom out. Let's talk about where you are today, because you're in pole position. What's going on with the momentum? Can you share some stats on where the momentum is, some of the deals you're doing? What's going on with the company?
Ramin Hasani
>> Yeah, absolutely. I mean, it has been incredible, because I think everyone is starting to realize how much you can get out of small models. Recently, on the product side, we released a series of small foundation models, which we call nanos. These are liquid foundation models, LFMs. They're very tiny. Imagine a 300-million-parameter model that performs as well as GPT-5 on specialized applications. That's what we are enabling today.>> So, smaller is better for specialized applications.
Ramin Hasani
>> Exactly. You see, the challenge with small foundation models was always quality. When you make the system small, it can fit inside the memory of a device. But then it has to be fast, low latency. The biggest challenge is: how much intelligence can I get out of this system? How intelligent is this system? Let's say you want to do document summarization privately on a device; you can do that. Let's say you want to identify personally identifiable information, PII, and mask out that information; you can do that. Let's say you want to build agentic AI with a full loop of multiple models working together to book your travel or call an Uber for you. All of these involve privacy-sensitive information. So, what do you want to do? You want to do the computation directly on the device.>> Yeah. And have policy on that too.
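For a flavor of the on-device use cases Hasani lists, here is a minimal sketch that masks obvious PII with regexes and then summarizes the redacted text with a small local model via the Hugging Face transformers pipeline. The regex patterns are deliberately simplistic and the model id is a placeholder, so treat this as an illustration of the pattern, not a production pipeline or Liquid AI's implementation.

```python
import re
from transformers import pipeline  # pip install transformers

# Deliberately simple PII patterns; real systems use trained detectors.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def mask_pii(text: str) -> str:
    """Replace matched PII spans with typed placeholders before any
    further processing, so raw identifiers never leave the device."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

doc = "Contact Jane at jane.doe@example.com or +1 (555) 010-2345 about the Q3 report."
redacted = mask_pii(doc)

# Placeholder model id; substitute any small local summarization model.
summarizer = pipeline("summarization", model="local-small-summarizer")
print(summarizer(redacted, max_length=40)[0]["summary_text"])
```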
Ramin Hasani
>> Correct.>> That's where the chips come in.
Ramin Hasani
>> Absolutely. So, for all of those use cases I mentioned, we're building these foundation models that can process audio, text and vision, plus signals, other types of signals. We're also working with financial time series, because from an architecture point of view, our models can process any data into anything. That's our specialty. We see a lot of traction in two industry clusters. One is OEMs, original equipment manufacturers, which means consumer electronics, automotive and all those places. The other is financial services and e-commerce. In financial services, what is very fascinating about small models is that they can be extremely low latency. Today we have models that can go sub-10-millisecond in activation. So, imagine a bulk of text coming in and how fast you can complete it. Okay?>> And then, you still talk to the large language models and cache it.
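To ground the sub-10-millisecond claim in something checkable, here is a minimal latency-measurement sketch for a small PyTorch model. The toy architecture and batch shape are assumptions; the point is the measurement pattern (warm-up, many timed iterations, percentile reporting), not the specific numbers.

```python
import time
import torch

# Toy stand-in for a small on-device model.
model = torch.nn.Sequential(
    torch.nn.Linear(512, 512), torch.nn.ReLU(), torch.nn.Linear(512, 512)
).eval()
x = torch.randn(1, 512)

with torch.no_grad():
    for _ in range(10):          # warm-up: exclude one-time setup costs
        model(x)
    times = []
    for _ in range(200):
        t0 = time.perf_counter()
        model(x)
        times.append((time.perf_counter() - t0) * 1000)  # ms

times.sort()
print(f"p50 latency: {times[len(times)//2]:.2f} ms")
print(f"p99 latency: {times[int(len(times)*0.99)]:.2f} ms")
```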
Ramin Hasani
>> Absolutely.>> I mean, this is where the hardware comes in. Ramin, we ran out of time. I'd definitely like to have you back. I think you're in an area that bridges the agentic world to the AI factory, physical AI world, robotics, every use case. I mean, the disruption, the business model transformation. Again, the new architectures are coming. It's counterintuitive, but the cloud's vertically scaling. They might not like that, but that's what they are.
Ramin Hasani
>> Yeah. It's a resource. I would tell you that the future is going to be a hybrid solution, where the most sophisticated applications of AI, let's say an AI scientist, go to the cloud, and the tasks you want to perform reliably, the privacy-sensitive, latency-sensitive and efficient implementations of these AIs, stay on the device, with orchestration between the cloud and the device.>> Well, all of that matches what we're seeing on our research side. I really appreciate you sharing your insight. Congratulations on the funding success, a big series A. We'll be chatting further. Thanks for coming on.
Ramin Hasani
>> Absolutely. Thank you very much.>> I'm John Furrier with theCUBE, here at theCUBE's NYSE studios, breaking down all the leaders in AI factories, the future of the data center. And the edge obviously is a data center too. The devices are getting smaller, faster, cheaper, more intelligent, and as the models need to perform at the edge with real data, you're going to start to see a whole other architecture. Liquid AI is one of the companies that's looking good off the tee, as they say. I'm John Furrier. Thanks for watching.