Steve Tuck of Oxide Computer Company appears at the New York Stock Exchange for the AI Factories series with NYSE Wired and theCUBE. Tuck describes how Oxide Computer Company designs integrated on-prem cloud computers for artificial intelligence workloads. Drawing on decades of hardware, cloud and software experience, he explains the company's co-designed hardware and orchestration stack, developer-friendly APIs, deployment-velocity improvements and the operational model that aims to bring hyperscaler-class efficiency and control to enterprise data centers.
Tuck emphasizes that on-prem cloud computing is resurging because organizations require sovereignty, better economics and operational control for AI deployments. He cites efficiency gains (reducing non-computational power overhead from roughly 25% to about 1.2% on Oxide Computer Company's platform), highlights supply chain and power constraints, and identifies target customers including the Fortune 2000, regulated industries and sovereign cloud initiatives as the company scales manufacturing and services. The discussion addresses data center modernization, AI infrastructure, energy efficiency and edge computing strategies for enterprise deployments.
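The overhead figures above can be put in concrete terms. This is a rough illustration only: the percentages (25% traditional, 1.2% Oxide) come from the interview, while the rack power budget and the helper function are assumptions for the sake of the example.

```python
# Worked example of the power-overhead figures cited above. The percentages
# (25% traditional, 1.2% Oxide) come from the interview; the rack power
# budget and the helper function are assumptions for illustration only.

def usable_power_kw(rack_power_kw: float, overhead_fraction: float) -> float:
    """Power left for computation after non-computational overhead (fans, etc.)."""
    return rack_power_kw * (1.0 - overhead_fraction)

rack_kw = 15.0                                # assumed rack power budget
traditional = usable_power_kw(rack_kw, 0.25)  # 25% overhead -> 11.25 kW usable
oxide = usable_power_kw(rack_kw, 0.012)       # 1.2% overhead -> ~14.82 kW usable
print(f"traditional: {traditional:.2f} kW usable, oxide: {oxide:.2f} kW usable")
```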
Steve Tuck, Oxide Computer Company
In this interview from theCUBE + NYSE Wired: AI Factories - Data Centers of the Future, Steve Tuck, chief executive officer and co-founder of Oxide Computer Company, joins theCUBE and NYSE Wired's Gemma Allen to discuss why on-premises infrastructure needs a complete architectural rethink to keep pace with the AI era. Tuck explains how his experience scaling cloud computing at Joyent under Samsung exposed the cracks that emerge when infrastructure is assembled from disparate hardware suppliers. Oxide's answer is a purpose-built cloud computer: a fully integrated system.
What problem are you trying to solve?
Who are your target customers, and why are some organizations moving from public cloud back to on-premises or private cloud solutions?
Where is Oxide being deployed, and what are the main constraints (for example, power efficiency and component shortages like DDR5/DRAM) affecting AI/data-center buildouts?
Why was Oxide started?
Why did you raise a $200 million Series C shortly after your Series B, even though you didn't financially need the capital?
>> Welcome back to theCUBE Studio here at the New York Stock Exchange. This is our AI Factories series with NYSE Wired, and joining me now is Steve Tuck, CEO and co-founder of Oxide Computer Company. Welcome, Steve.
Steve Tuck
>> Thank you. This is incredible.
Gemma Allen
>> So you have had a career that spans hardware, software, cloud. Founded Oxide in 2019, had a very interesting year from the investment perspective, living the dream, really, from the perspective of a typical tech founder. Raised a Series B and C in the space of, what, three months?
Steve Tuck
>> Yes, pretty tightly coupled.
Gemma Allen
>> Talk to me about the gap in the market you saw and Oxide is solving for.
Steve Tuck
>> Sure. So, yes, started out in more of the traditional enterprise hardware side at Dell for 10 years, then spent 10 years at a cloud computing company, Joyent, beginning to appreciate that software teams wanted to meet infrastructure at a higher level of abstraction, this thing called cloud computing back in 2009, 2010, and we built a cloud computing company. It was acquired by Samsung. We had to go grow to Samsung scale, which is, if you haven't experienced, it is intimidating, and began to realize that when you're building cloud computing from a disparate set of parts of multiple hardware vendors, you're doing all of your own software, cracks start to emerge in the architecture and they get really exposed at scale. And so as they say, that scar tissue-based company formation is what led us to Oxide. And to your question of the problem that we are striving to solve for, we believe deeply that cloud computing is the future of computing, not this notion of renting someone else's capacity, but an architectural construct. A software developer should not have to know about a switch or a server or a piece of storage. They hit an API and they get elastic compute and storage and networking services. And the operations teams of this infrastructure have completely redesigned what computing looks like behind their data center walls.
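Tuck's abstraction argument, that a developer hits an API and receives elastic compute, storage and networking without knowing about any switch, server or disk, can be sketched roughly as follows. The request shape and field names below are hypothetical illustrations, not Oxide's actual API:

```python
import json

# Hypothetical instance request a developer might submit to a cloud
# control plane: resources are described logically, never as a specific
# physical server, switch, or drive.
instance_request = {
    "name": "web-01",
    "ncpus": 4,        # vCPUs drawn from the rack's pooled compute
    "memory_gib": 16,  # memory from the same elastic pool
    "disk_gib": 100,   # storage service, not a named physical disk
}

# In practice this would be POSTed to the control plane; serializing it
# here just shows the level of abstraction the developer works at.
payload = json.dumps(instance_request)
print(payload)
```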
Gemma Allen
>> So you're all in on-prem?
Steve Tuck
>> And we think that experience needs to be brought to the on-prem market, that today still very much looks like, when I was at Dell, a rack-and-stack approach of servers and storage and networking. And we wanted to build, in product form, cloud computers that could be easily deployed on prem.
Gemma Allen
>> So if you think about the world of cloud computing, the on-prem, public cloud, private cloud debacle of 10, 15 years ago, it's almost comical now to look back on some of the narratives that were so prevalent at that time, right? In some respects, we're in the same world and narrative hype noise/signal scenario with AI. But what we do know about on-prem is that there has always been an element of sovereignty, autonomy, control, and in this world of AI, it seems to me as though there is an increasing need for certain verticals in particular to maintain ownership.
Steve Tuck
>> There really is, and I love, I mean, the narratives of that era, where first of all, public cloud computing was this experiment, right? It was not ready for production workloads. Would it ever be? And then you saw that shift in narrative to public cloud is going to be the only way that infrastructure is consumed, and lo and behold, the Fortune 2000 on-prem operators would say, "Wait a minute. In the future, I can't imagine more than 50% of my infrastructure running in a service provider model." And so I think, yeah, you've got this now bounceback of on-prem is cool again. Hardware is cool again.
Gemma Allen
>> It's funny, because in the world of AI actually, and we're seeing this in a couple of spaces, boring is safe and safe was cool again.
Steve Tuck
>> Right.
Gemma Allen
>> And that is the reality.
Steve Tuck
>> Totally. And so I think the trap that we need to avoid is not reshuffling the deck again, the same deck from 10 years ago, which is what is the new treatment of the same pieces of the architectural puzzle, that we can reconfigure with a new name as there's this surge in need for modernization on prem, and AI is shining a really bright light on it when it comes to things like energy efficiency and security and orchestration. The old way of doing things is vastly inferior to what these leading AI application deployments need, and that was what was a big enough problem for us to start the company in the first place. And it turns out, at the end of the day, designing hardware and software together is where you can achieve these outcomes around efficiency.
Gemma Allen
>> Well, I guess all eyes are on CapEx spend, and there's no room for inefficiency in this model, right? Or for waste. So talk to me about some of the customers, like the ideal buyer persona for Oxide. Are we talking about folks who don't want to have to go down this scrappy route of meddling with hyperscalers and different feature sets of their product offering? Are we talking about folks who just want the service delivered and run efficiently with an element of reassurance? Who's the ultimate buyer?
Steve Tuck
>> Yeah. If you think about the hallmark of public cloud computing and elastic compute, I mean, the target customer was really all of general-purpose computing. If you think about Amazon early days, the beauty of EC2 is before public cloud computing, you had to preordain what your hardware config was going to be for your software. "My software is memory-intensive or compute-intensive or IOPS-intensive, and so this is the hardware configuration I need and God help me if it changes." Cloud computing ushered in this more elastic world, in which it didn't matter if I was launching new applications or the profile of my application changed. Month to month, year to year, this pool of resources could be reconstituted in software in real time to be able to support that. And so the short answer is we're serving certainly larger enterprises, because those are companies that have on-premises infrastructure investments, and will as far as the eye can see. Think Fortune 2000 and emerging next-generation enterprises. But what's driving adoption and conversations are a bit of bounceback from the public cloud. I think there was a rush into public cloud, and the economics didn't quite line up. Architecturally very valuable, economically a little challenging, and so you have a refactoring of, "I want that cloud computing experience, but I need better economics. I want to actually have CapEx and depreciate, and I want control and ownership of a certain subset of data." And the sovereignty piece is a big one.
Gemma Allen
>> Yeah, for sure.
Steve Tuck
>> There's just certain subsets of data that I'm not going to put in someone else's service or in someone else's walls. I shouldn't have to give up cloud computing capabilities to do that.
Gemma Allen
>> So you see a move, a shift back to on-prem by some of the larger industries like banking, perhaps, where there is definitely a certain element of risk that we cannot ignore, especially in the agentic era, right, where you have systems talking to each other and executing on tasks autonomously. I mean, you need to ensure that you have control at least of the footprint, but behind that, the orchestration layer perhaps. Who else are you targeting? Is it like are you looking at this geographically? We have a lot of folks in too who talk a lot about sovereign cloud from a geographical perspective.
Steve Tuck
>> Sovereign cloud is a big one. I think it's following that same pattern of we started in the U.S. with regulated industry, financial services, public sector, the energy sector, places where people have data sets that they want to retain, but they do need advancements in cloud and AI. And now you're seeing that pattern in sovereign cloud, where these initiatives are, "Keep all of the data within our borders," and, "How can we have the controls and the validation that not only is the data secure, but the system that the data is running on is secure?"
And that's been another interesting through line for whether it is frontier model companies or financial services companies or governments, is there's a re-analysis of that whole-stack security rather than just the outside-in threats. How do we understand the kinds of systems and especially firmware that is running on the system that is hosting our most secure data? And that has been an important element of this redesign of the stack. You have to be able to do it for efficiency, but also for security.
Gemma Allen
>> And broadly speaking, are these Fortune 200 companies or so bringing this in house from the perspective of ownership from a facilities perspective? Are we seeing like the rise of these huge, huge data center players, this 2PL model that was so common for so long in public and private, right? But where's the trend going?
Steve Tuck
>> Yeah, I think it's being deployed everywhere. It's data centers that companies own, it's co-location facilities, it's at the edge. It is some of these neoclouds that are emerging that are building out the power and the capacity. And I think another consistent theme is this reexamination of power and power efficiency, because we're out of power and everyone is discussing all of these ambitious AI projects, that the first limiter that's being discussed is the fact that we don't have enough power for all this stuff. And rather than figuring out how to produce more power ... which we also should, we're vastly behind in power production ... there's a reexamination of the fact that, back to where we were 10 years ago and where we are today in a lot of on-prem data center buildouts, you've got wildly energy-inefficient designs. On average, a rack with your traditional 1U, 2U servers and storage and networking, 25% of the power disappears to spin fans, so you have three-quarters of your capacity before you even get started. The hyperscalers, it's closer to 1.5, 2%. With Oxide, we've got it down to 1.2%, so you can get this massive improvement in energy efficiency. And then the next shortage is components, DDR5, DRAM. Like the memory industry, the storage industry.
Gemma Allen
>> We saw what happened in Korea this week, right, from the perspective of energy and memory colliding.
Steve Tuck
>> That's right.
Gemma Allen
>> And the supply chain impacts to the U.S. are huge. It's a fascinating time, when you think about they say they have three weeks of supply in Korea. It's like 70%, I think, of the global supply chain. It's crazy to think about the capacity constraints that are just right there, like a day away.
Steve Tuck
>> And so I think that just serves even more to focus on how can we do more with less. You can't just leave hundreds and hundreds of racks that are running inefficiently at 25% power overhead, and then a utilization rate that on average is 25% at best. So you've just got these ... I can't remember where I read it, but someone was talking about how data centers are eating the world and they're running empty. These assets are vastly underutilized, which is where you need good orchestration software in your hardware.
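The compounding Tuck describes, roughly 25% of rack power lost to overhead and roughly 25% average utilization of what remains, can be put in illustrative numbers. The percentages are from the interview; the arithmetic below is a sketch, not a measurement:

```python
# Combine the ~25% power overhead with the ~25% average utilization that
# Tuck cites to estimate the fraction of rack power doing useful work.
power_overhead = 0.25  # fraction of rack power lost to fans and overhead
utilization = 0.25     # average fraction of remaining capacity in use

useful_fraction = (1 - power_overhead) * utilization
print(f"useful fraction of rack power: {useful_fraction:.4f}")  # 0.1875
```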
Gemma Allen
>> I think in some respects, the on-prem world has somewhat of a marketing challenge there, right? Because there was certainly a belief for a long time. I used to sell public sector at Microsoft, and the view was very much that on-prem is less efficient, it's more costly, you're going to have more wastage, but you have more control and ownership, right?
Steve Tuck
>> Right.
Gemma Allen
>> You're actually countering that in some respects at Oxide, and we have to kind of, I guess, figure out how those two narratives meet.
Steve Tuck
>> Yeah. I mean, actually I think it is not a fault of the on-prem operators, but I actually think it is not far off to say that historically it has been inefficient and a laggard in some ways, and it's because of just the ecosystem. You have to assemble a kit car of five to six to seven vendors to get in that rack that you're deploying, or 50 or 100 racks, and you have to have an operational army to be able to integrate and assemble and stand it up. It may take many, many months from when the boxes land on the data center shipping floor. And meanwhile, the public cloud hyperscalers, if you go on their data centers, it's a totally different world.
Gemma Allen
>> It's fully integrated.
Steve Tuck
>> The design center for the computer is not a 1U box, it's a rack, and networking is designed in and it is vastly more energy efficient, and it comes with all the software that you need to run a cloud. And so why we started Oxide was why shouldn't on-premises have those same innovative capabilities, where instead of a 90-day deployment, it takes two hours?
Gemma Allen
>> Wow.
Steve Tuck
>> Instead of traditional virtualization that is ticket-based development, it is the end of an API, everything elastic, developer-friendly, and that you can get some of the same operational efficiencies that the biggest technology companies in the world enjoy for themselves to be able to improve efficiency, improve margins, improve economics. But it took us a good four and a half years just to get the first products to market, because we were taking on a lot.
Gemma Allen
>> And let's not forget cost creep in virtualization too, right? We won't name anyone in particular, but those P&L line items are pretty mega, so they're certainly ripe for opportunity for sure. So obviously other folks see that too, because you've just had a monolithic raise, really, from the perspective of time and pace. Two raises in three months.
Steve Tuck
>> Yes.
Gemma Allen
>> Talk to us. Give me the story here.
Steve Tuck
>> Yeah. So we raised a hundred-million-dollar Series B several months ago, and again, we'd been deploying products into enterprise markets and customers had begun adopting the platform, and then we're just starting to hit that scale inflection point. And then we really, really hit a scale inflection point, and we went from a pretty good pace of sales and operations to, for the last seven months, our manufacturing operations team can't build them fast enough. And then we're fighting through supply chain issues like everybody else, and making sure that we can be out far enough ahead for the largest projects that our customers want to go build. And so we had this commercial momentum, and that always draws investors right back in, to, "Can we please?" The conversations always start with, "You really need to raise more money to secure the future of the company." And it's like, "Well, I mean, you want to own more of the company. It's fine. It's good. We want you to."
Gemma Allen
>> Let's call a spade a spade.
Steve Tuck
>> "Yeah, we want you to." And at the same time, though, they were also right. I mean, if we had been able to see a little further into the future and know that some of these supply chain things were coming. We're very happy to be as capitalized as we are, but we raised a $200 million Series C pretty quickly thereafter. And I think the biggest reason we did it, because we financially didn't need to do it, was over the last five years, the questions have been how does it work, and how is it different than what I'm doing today and how can I migrate to it. And as we've answered all those questions for customers, the last question was, "Are you going to be here in five years? Are you going to be here in ten years?"
I just mentioned that I was at Joyent, acquired by Samsung. If you haven't heard of Joyent Public Cloud, that's for good reason today. And these customers are making big strategic bets that are five, ten, fifteen-year bets, and we want to be able to say without blinking, "Yes, we're going to be here in 10 years and 20 years and 30 years," and the capital is part of making sure that we have that future to go execute on that.
Gemma Allen
>> I mean, it's an interesting market to compete in, right? Because from the hyperscaler perspective, you raise a good point. They run their own operations so clinically, right? But that doesn't necessarily often translate into the user opportunity or the buyer opportunity, especially for certain-sized-market players. It can be clunky, it can be scrappy, it's a lot of overhead. So for sure, they're competing on all fronts, but the user experience or the buyer experience might not necessarily always be the optimum option, right? So I'm certainly interested to see how this goes for you, Steve.
Steve Tuck
>> Yeah. I think the next chapter of it has very much been in the AI space, because I think among on-prem being cool again, compute is cool again, general-purpose compute, because people are now really talking about how much general-purpose computing wraps around these GPU buildouts. And whether you're doing reinforcement learning or a lot of the agentic stuff, if you use ChatGPT or Claude and it says, "Researching," or, "Searching the web," that's all being done on CPU cores. And all of that wrapper for the end-to-end pipeline for these AI projects requires the kind of orchestration that cloud computing affords. And so we are, again, squarely focused on the on-prem space. The public cloud hyperscalers are doing a terrific job offering these as rental services, and we want to give businesses the ability to have access to that same capability when they can purchase it, deploy it, depreciate it, control all access to it, so that no administrator, no third party, has any access to their control plane. That's the sovereignty that comes along with owning your cloud computer.
Gemma Allen
>> Taking mission control back in house.
Steve Tuck
>> That's right.
Gemma Allen
>> So tell me, next six to twelve months ahead, are you hiring? I'm sure you're looking at the global footprint for this, and especially from a sales opportunity perspective in sovereign cloud, what does the next 12 months look like ideally?
Steve Tuck
>> Yeah. We're hiring as quickly as we possibly can. I think 12 months ago, we had three or four reqs that were open online. I think today we've got 20 and hiring multiple people in each one of these roles, and so that cuts across manufacturing operations, supply chain management, logistics, warranty. We're building out another set of layered services for our customers, whether it's deployment, data center services and support services, and then of course engineering and product. So we are definitely building out the team from a go-to-market perspective, expanding into new countries, and making sure that we can support these sovereign cloud efforts that are underway. The frontier model projects are wildly fast-paced, but it's been great to see some of the sovereign cloud efforts are actually moving much more quickly than government projects have moved in the past. And that's where we get excited, because I think one of the things that we bring is that velocity, is that ability to cut project times that are measured in years down to quarters. So definitely expanding in a bunch of regions globally.
Gemma Allen
>> Well, check out Ireland, Steve.
Steve Tuck
>> We will.
Gemma Allen
>> We've got a great climate. We've got a very wet climate for the data center world, right? Which is why so many large hyperscalers have such a large data center footprint there.
Steve Tuck
>> Love Ireland.
Gemma Allen
>> If you love the rain.
Steve Tuck
>> Our future is to make sure that we're a global provider in every market, and so we'll be building and shipping new products and services. We have a whole bunch of new software features that are coming this year, and it's a nice treat for customers because they don't have to do anything to get them. They just show up in an update.
Gemma Allen
>> Yeah. Amazing.
Steve Tuck
>> There's no license contract.
Gemma Allen
>> Yeah. You're not going to get some vendor-lock scenario on the back of any win.
Steve Tuck
>> No. We have a contrarian belief that when you buy a computer, that it should come with all the software that you need to run it, and there shouldn't just be these complicated licensing constructs that you have to manage along with. So no, it's just shipping more, shipping as fast as we can. We have our next-gen platform that we'll be announcing very soon, which is new hardware, new software, fits in existing racks that customers use, so a lot to build.
Gemma Allen
>> An exciting space, and welcome back to theCUBE. I know you're a long-term friend of John and Dave, who are in Barcelona right now, so sorry to miss you.
Steve Tuck
>> It's okay.
Gemma Allen
>> Hope we have you back again soon.
Steve Tuck
>> No, love theCUBE. This is theCUBE all grown up. I mean, this is amazing.
Gemma Allen
>> TheCUBE is growing up now. Thanks so much for coming on the show.
Steve Tuck
>> All right. Thank you very much.
Gemma Allen
>> I'm Gemma Allen, coming to you from theCUBE Studio here at the New York Stock Exchange. This is AI Factories, one of our segments with NYSE Wired. Thanks so much for watching.