In this interview from theCUBE + NYSE Wired: AI Factories - Data Centers of the Future, Greg Matson, senior vice president and head of marketing and products at Solidigm, joins theCUBE's John Furrier to discuss how high-capacity storage has become the defining asset class of the AI factory era. Matson explains why storage — once an afterthought in compute architectures — now sits front and center in every major AI deployment, driven by the relentless data demands of inference and agentic workloads. He details how Solidigm's 122-terabyte SSD, still the highest-capacity drive shipping to the AI industry, is functioning as a direct memory extension for GPU clusters — enabling persistent context through KV cache infrastructure and keeping GPUs productive rather than idle.
The conversation also explores the structural scale of demand, with Matson noting that AI infrastructure requires 25 to 35 exabytes of storage per gigawatt of data center capacity. Multiplied against the hundreds of gigawatts announced through 2030, that figure signals a tectonic and permanent shift in the storage market. He highlights Solidigm's partnerships with VAST Data, CoreWeave and major hyperscalers, while pointing to neoclouds as a growth driver that analysts project could represent half of the GPU-as-a-service market by 2030. The discussion extends to the edge, where hospitals, universities and distributed AI deployments face the same space and power constraints as hyperscalers — making dense, high-capacity storage equally critical across the spectrum. From reinventing internal processes with an AI-first mindset to scaling storage infrastructure across environments where every rack unit and watt of power counts, Matson provides a practical roadmap for how organizations can maximize ROI from AI investment rather than treating infrastructure spend as capital expenditure alone.
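The per-gigawatt figure lends itself to a quick back-of-envelope check. The sketch below multiplies the quoted 25 to 35 exabytes per gigawatt against a hypothetical 200-gigawatt build-out (a stand-in for the "hundreds of gigawatts announced through 2030", not a number from the interview) and restates the result as a count of 122-terabyte drives.

```python
# Back-of-envelope check of the figures quoted above: 25 to 35 exabytes
# of storage per gigawatt of data center capacity. The 200 GW build-out
# below is a hypothetical stand-in for the "hundreds of gigawatts
# announced through 2030", not a number from the interview.
EB_PER_GW_LOW, EB_PER_GW_HIGH = 25, 35   # exabytes per gigawatt (quoted range)
ANNOUNCED_GW = 200                        # hypothetical build-out through 2030

low_eb = EB_PER_GW_LOW * ANNOUNCED_GW     # total exabytes, low end
high_eb = EB_PER_GW_HIGH * ANNOUNCED_GW   # total exabytes, high end

# Restate the low end as a count of 122 TB drives.
# 1 EB = 1,000,000 TB in the decimal units drive capacities are marketed in.
DRIVE_TB = 122
drives_low = low_eb * 1_000_000 // DRIVE_TB

print(f"{low_eb}-{high_eb} EB implied by a {ANNOUNCED_GW} GW build-out")
print(f"≈ {drives_low:,} drives of {DRIVE_TB} TB at the low end")
```

Even at the low end of the range, a build-out of that scale implies tens of millions of the highest-capacity drives shipping today, which is the "tectonic and permanent shift" the interview describes.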
Greg Matson, Solidigm
This interview examines artificial intelligence factories and the growing role of high-capacity storage in modern data centers. Greg Matson of Solidigm, senior vice president of strategic planning and marketing, brings deep experience in high-capacity solid-state drive product strategy and market development. Matson discusses Solidigm's 122-terabyte solid-state drive and upcoming higher-capacity form factors, and explains the evolving role of storage as an extension of graphics processing unit memory. He highlights integration points between storage and compute for AI workloads.
theCUBE Research produced the segment, with John Furrier of theCUBE guiding the conversation through neoclouds, hyperscalers, partnerships and edge deployments. Matson outlines demand forecasts, architectural implications and market dynamics for storage in AI infrastructure. He notes Solidigm could sell twice as much product today and estimates 25 to 35 exabytes of storage per gigawatt of data center capacity. He positions storage as a primary architectural asset that drives return on investment when combined with software platforms such as VAST and with close graphics processing unit integration through key-value cache and memory extension. Analysts emphasize designing for the future and edge-ready form factors for next-generation data centers.
Greg Matson
SVP, Strategic Planning & Marketing, Solidigm
John Furrier
Co-Founder & Co-CEO, SiliconANGLE Media, Inc.
HOST
>> Hello, I'm John Furrier with theCUBE. Here at theCUBE's NYSE studios. Of course, we have our Palo Alto studio connecting Silicon Valley and Wall Street. It's part of our NYSE Wired brand and community. This is our AI Factory series, where we talk to the leaders who are making it happen and bringing in the future of these AI factories that are going to enable the AI-native application revolution. Certainly the agentic wave is upon us. Greg Matson's here. He's the senior vice president, head of marketing and products at Solidigm. Greg, great to see you again, CUBE alumni. Welcome back to our set here at the NYSE.
Greg Matson
>> Love to be here, John. It's super exciting to be in the NYSE.
John Furrier
>> So, you're friends with everyone these days because you have the most important product that everybody wants. It's in tight supply. The memory business is what's really powering a lot of the GPUs and the XPUs and all that compute. NVIDIA's success, Arm's success, AMD's success: all these semiconductor companies that have that compute that's powering the AI revolution need storage, and you guys are leading the way.
Greg Matson
>> Well, who would have thought? GPUs have been in the headlines for the last couple of years, and no one thought about storage along the way. Today, it's the hottest commodity, I think in... I don't want to say commodity, actually, but the hottest product-
John Furrier
>> The asset. It's an asset class.
Greg Matson
>> It's a huge asset class. With the increase in compute power, led by NVIDIA and others, it's really putting huge demands on storage infrastructure and creating a massive amount of demand for us.
John Furrier
>> The impact of storage, obviously, in every conference I've been to that's been on AI infrastructure, storage gets closer and closer to the crown jewels, and the systems are getting denser, tighter, engineered supercomputers. Basically, the NVIDIA racks and all the clusters that we're seeing are designed architecturally to be optimized for high-velocity, low-latency data transfer. This is where storage is key. Now, the expensive storage is HBM, high-bandwidth memory, but then now you're starting to see storage being re-architected. Take us through what's the key driver to the success of Solidigm right now? Is it the fact that you got the form factor and the architecture? Take us through why this is so hot right now.
Greg Matson
>> There's multiple levels to this. There's storage that is attached to the GPUs, so that's driving very high-performance, liquid-cooled storage that feeds GPUs directly, and there's network-attached storage, which is very high-capacity, space-efficient, power-efficient storage. We're the leader in high-capacity storage for AI. We were first to market with our 122-terabyte SSD about a year and a half ago, and we're still the primary shipper of that capacity to the AI industry. But what's happened over the past two years is GPUs eat data. Basically, AI is all based on data. It sucks in as much data as you can possibly have. You're doing inference, you're creating information, and when you're doing an inference run, you need data immediately accessible. And so, what's happened is that you need context, or more context, to be stored, and you're blowing out of the HBM and the DRAM in terms of capacity. NVIDIA has actually announced a whole new context memory extension reference design, so that storage now is an extension of the memory within the GPU.
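The mechanism Matson describes, context spilling out of HBM and DRAM into SSD-backed KV cache so it can be restored instead of recomputed, can be sketched in miniature. The class, capacity numbers and session names below are purely illustrative assumptions, not NVIDIA's reference design or Solidigm's implementation; a real KV cache holds attention key/value tensors, not strings.

```python
# Minimal sketch of KV-cache tiering: when the fast tier fills, the
# least-recently-used session's cache spills to a persistent tier and can
# be pulled back later instead of being recomputed on the GPU. All names
# and sizes are illustrative, not any vendor's actual design.
from collections import OrderedDict

class TieredKVCache:
    def __init__(self, mem_capacity: int):
        self.mem = OrderedDict()  # fast tier (stands in for HBM/DRAM)
        self.ssd = {}             # persistent tier (stands in for the SSD)
        self.mem_capacity = mem_capacity
        self.recomputes = 0       # cache misses that would cost GPU time

    def put(self, session_id: str, kv_blocks: list) -> None:
        self.mem[session_id] = kv_blocks
        self.mem.move_to_end(session_id)          # mark most recently used
        while len(self.mem) > self.mem_capacity:
            victim, blocks = self.mem.popitem(last=False)  # evict LRU...
            self.ssd[victim] = blocks                      # ...to the SSD tier

    def get(self, session_id: str):
        if session_id in self.mem:                # hot: served from memory
            self.mem.move_to_end(session_id)
            return self.mem[session_id]
        if session_id in self.ssd:                # warm: restored from SSD
            self.put(session_id, self.ssd.pop(session_id))
            return self.mem[session_id]
        self.recomputes += 1                      # cold: GPU must recompute
        return None

cache = TieredKVCache(mem_capacity=2)
cache.put("chat-a", ["k1", "v1"])
cache.put("chat-b", ["k2", "v2"])
cache.put("chat-c", ["k3", "v3"])            # evicts chat-a to the SSD tier
restored = cache.get("chat-a")               # comes back without a recompute
```

In the sketch, an evicted session survives in the persistent tier, so a returning session restores its context rather than incrementing the recompute counter; that avoided recompute is the idle-GPU time the conversation refers to.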
John Furrier
>> Yeah. I love this topic. It's something we've been talking about on theCUBE for two years now, and it's actually happening. We've all seen it. If we've been online and used ChatGPT or any one of the big frontier models, we've all seen the experience where sometimes it's smarter than others. It remembers things, or it has chat history. We've all seen it when it didn't work. Well, it redoes the same thing. That's the memory working. This is not just the compute and GPUs. The memory plays a significant role, has to infer, does that... It's getting better and better. In the past six months alone, the models that have been coming out have been phenomenal.
Greg Matson
>> Yeah.
John Furrier
>> That's powered by storage.
Greg Matson
>> Powered by storage, absolutely. Imagine you're in ChatGPT, you're writing code and you've loaded your code base in there. And then, all of a sudden, it disappears, and you have to recompute that. Well, your GPUs are idle at that time. That's why this whole KV cache infrastructure and context memory extension phenomenon has happened: now you can actually store that context. It's persistent, and you can go right back to your task if you take it offline.
John Furrier
>> NVIDIA's done an amazing job; you pointed that out. When the KV cache got announced two years ago, I was like, "That's going to be the magic..." It was. That takes us to the changing landscape. First of all, your form factor is phenomenal. I've covered it on theCUBE many times; people have seen the videos. You got terabytes this big. What's the biggest capacity, smallest form factor you have right now?
Greg Matson
>> So, right now, 122 terabytes is the largest capacity we have. We're developing the next one, twice as big, and even the one after that, four times as big, which you'll see soon.
John Furrier
>> Smaller, faster, cheaper, as they say, has always been a nice model in semiconductors. But the impact on the ecosystem is phenomenal. I interviewed Renen at VAST, the founder, and these neoclouds are building out... You're starting to see the role of software, and VAST and DDN and others, they talk about this OS. They treat storage not as like, "I'm in the storage category." They call it a data platform. Whatever they call it, some call it a pipeline or whatever they want to call it. But the bottom line is it's software. It's an operating system. And VAST actually uses that word, operating system. That changes the game too. Talk about that impact, because you're enabling, and the AI infrastructure's enabling, a new paradigm in storage, which will affect the neoclouds and the hyperscalers.
Greg Matson
>> We've had a longstanding partnership with VAST, since the beginning. They built their operating system on top of our highest-capacity SSDs in the market. That combination of the magic of their software and their AI operating system combined with our high-capacity SSDs has really been a phenomenon, especially in the AI area, and adopted super widely in the neoclouds and AI factories.
John Furrier
>> The impact of that goes up the stack to the applications. We're seeing a huge amount of new kinds of services. I was talking with Anand, who runs the AI competency lab here at the NYSE and ICE, the parent company. They're seeing services they've never seen before. They can now... "Okay, you're in the data services business." All kinds of applications. Share some of the high-level applications that are being enabled by this dynamic you're creating?
Greg Matson
>> Well, if you think about high-frequency trading and messaging, data obviously creates money here, right? We've been partnering with NYSE to deploy super high-capacity storage in their newest data platform. It's been very exciting for us.
John Furrier
>> Yeah, they've been very progressive. They got the Polymarket deal. Data's changing. Talk about the landscape from your standpoint. Obviously, you're pedaling as fast as you can to meet a huge demand curve. Talk about supply and demand. What's the current state of the demand? I know it's high, but scope the order of magnitude of the demand curve.
Greg Matson
>> The demand is like something we've never seen. To be honest, the storage market has been pressed for supply like something that's never happened before. It's kind of a tectonic and permanent shift in the ecosystem because this whole AI era, again, needs data, data is stored in storage. And for us, we can sell twice as much easily today and I don't see it stopping. We need somewhere in the order of 25 to 35 exabytes of storage per gigawatt of data center capacity. Think about the hundreds of gigawatts that have been announced through, say, 2030. It's a revolution in the storage industry and it's not something that's going to go away anytime-
John Furrier
>> Talk about your relationship with the neoclouds. Obviously, you guys have good relationships with all the suppliers, the people who put the AI factories together, from the manufacturers and the assemblers, but also the neoclouds have emerged on the scene. Of course, you've got AWS, Google Cloud, Azure, Oracle, you name it, every cloud and hyperscaler that's out there, they're buying. But now you have the neoclouds. You mentioned CoreWeave before we came on camera. I just wrote a post about a company called Argentum, which has got this new model around financing. This is an explosive area, and sovereign cloud and sovereign AI are only going to accelerate more build-out. This neocloud seems to be a hot area.
Greg Matson
>> Hugely hot. And, of course, we're partners with the hyperscalers that you mentioned and deploying AI storage in a big way there, but we're also focused on neoclouds because we do see that they can be a growth driver. In fact, some analysts predict that they might be as much as half of the GPU-as-a-service market by like 2030. And so, we have to be part of that ecosystem. We're partnered very closely with CoreWeave, for example, which you mentioned, and many others in the market, and globally, both in Europe and the Middle East, and of course the US.
John Furrier
>> Well, since you mentioned it, how is the build-out going in Europe and the Middle East? Obviously, we see a lot of activity there. Still smoking hot? No turning back at this point? They're still on full throttle?
Greg Matson
>> No turning back. They're still catching up to the US. The US is leading the AI build-out from a neocloud perspective, but the evolution and innovation in Europe and the Middle East is definitely hot.
John Furrier
>> Yeah, and I'm sure you get this a lot because we see it on theCUBE. The AI-native world's exploding, seeing the developers, even at NVIDIA, they rarely talk about consumerization, but OpenClaw is a big part of the NVIDIA GTC keynote, highlighting what agents will look like. Again, NVIDIA's interest there was to show the world like, "Look, we're enabling this wave of agents." And we saw the coding with agents happening in the enterprise. We think agents are going to come right back in. So, the question comes in, what's the ROI in all this?
There's a lot of capital being deployed. You guys are making a lot of money on that. Of course, you're a key ingredient in the architecture. The role of the CFO has come into the mix. The ROI question has come up at the C-suite because tokens are going to be doing work. It's an economic value proposition, not an IT CapEx spend only. What's your view on this? I know you have an affinity for the business side of it here, running product at Solidigm. What's your take on this ROI... How should people frame this? What is the frame? How should we think about it?
Greg Matson
>> Well, it's an evolution of... Actually, I'd probably say a revolution of your business. You can't just say, "Give everyone ChatGPT or Copilot," and expect something to happen. It really is. It starts with people and processes. First, you have to figure out your processes and reinvent them with an AI-first mindset. Then, you can get tremendous benefit in terms of productivity per person. The way we look at it is we're not trying to reduce people or save money; we're trying to increase output. And from our perspective, it's shrinking software development timelines so we can get our products out sooner for our customers. Even chip design, we're putting that in place in a big way.
John Furrier
>> I'm sure there's a lot on the roadmap. You can't tell me, but I'll try to ask anyway. Outside of the demand curve, what's the biggest change since we last spoke last year? We're almost a year since you were on theCUBE. Obviously, the demand and the supply, obvious. What other factors have changed the most in the industry, in your opinion? What's your perspective?
Greg Matson
>> Well, from a storage perspective, it's that storage is now in everyone's line of sight, right? It was an afterthought. In traditional compute, storage was necessary, but not the first thing you thought of. And even at the beginning of AI, especially in the training era, storage was not thought of. Now, in the inference and agentic era, where we need data at hand, immediately available to the GPUs, that has changed everyone's thinking about storage. You heard Jensen say it at GTC: "Storage just got a promotion." So, for us, it's now the key ingredient to having a super-efficient, highly productive... and getting your best ROI out of AI.
John Furrier
>> I was in the room, I saw the slide; it had all the storage manufacturers up there. I said to Dave Vellante, "Dave, who will be standing in five years?" Because there will be a shakeout. Not everyone will cross that chasm. There's a mindset involved in this new era. You just highlighted it. Storage is not an afterthought anymore; it's a primary asset in the build of the architecture. It is front and center. The top architects, the top deployers, the top builders and operators want to know the storage. So, what is that mindset that is needed to compete and win in the future?
Greg Matson
>> Well, you have to think about the future because we haven't even started. Anyone listening to this: how much agentic activity is happening in your company? It's barely scratching the surface. You have to design for the future, and you need high-capacity, high-performance, power-efficient storage to make sure that your big investments in AI, both from a people-and-process perspective and from an infrastructure perspective, pay off big.
John Furrier
>> Well, Greg, it's great to see you guys because I'm a big fan of Solidigm. Your product's exceptional. Everyone talks about it because it's high quality. It's smaller. It's faster. The form factor is amazing. You're making it bigger. I got ahead of myself at Mobile World Congress, or MWC as they now call it: I published a HyperConverged Edge report, basically saying that there's going to be an AI factory at the edge, and that's going to be a smaller form factor. But the edge providers, like the telecoms and the carriers, they all have infrastructure, they've got towers, they've got central offices, you can put boxes in there and systems in there. Maybe a little too early, but that's coming.
Greg Matson
>> We're seeing the same challenges that you have at the hyperscalers, the very biggest AI deployers in the world right now, same thing. You could even go down to a hospital and they have space constraints, they have power constraints, but they need big data. You could dramatically improve patient outcomes if you have a massive amount of storage in a small space. We're actually seeing the exact same thing across the spectrum of customers.
John Furrier
>> Just to give validation to that, not to reaffirm my own thesis, but I think everyone would agree: Amazon's rebooting Outposts, Google's now selling distributed cloud for these use cases. That feels like a data center. It is a data center. It's a factory.
Greg Matson
>> It is.
John Furrier
>> And it is constrained by footprint. So, premium space and form factor matter the most. What would you say when someone says, "Hey, we can't afford to put the space here"? Do you anticipate drives being smaller, faster and cheap enough to handle the edge?
Greg Matson
>> Absolutely. We're focused on the edge. We've been partnering with research institutions, for example, colleges, universities, hospitals, and deploying incredibly dense storage in very small form factors. Our drive might be 60 or 122 terabytes, but you put a series of those drives in a 2U server and it's a massive amount of storage for an institution, and it takes up almost no space. It's a 2U server.
John Furrier
>> A telephone closet back in the day.
Greg Matson
>> Yeah.
John Furrier
>> It's funny. I was talking to the NVIDIA guys a couple of years ago and I saw the demo of the operating room of the future and it was cameras, sensors, and they had all this data. They ran a digital twin of it in Omniverse, and they had a simulation of everything that could go wrong with the surgery. You had other doctors coming in remotely and robotics doing the surgery. That's very Star Trek, Star Wars-like.
Greg Matson
>> Right.
John Furrier
>> Think about the data processing for that. You've got computer vision, you've got sensors. That's not going to go backhaul. That's got to stay local.
Greg Matson
>> It's got to stay local.
John Furrier
>> So, they might bring models on-prem, so there's a lot of distributed computing going on here.
Greg Matson
>> Yeah, because your storage needs to be close to your compute, right? Not in all cases does it have to be immediately there, but your latency is important. And having that data, and you described a very high-pressure environment, the operating floor and-
John Furrier
>> Well, the agentic infrastructure is booming right now. You've seen the control plane. The AI-native developer community's on fire. The Linux Foundation just started the Agentic AI Foundation, which is going to sit on top of the Cloud Native Computing Foundation, which we're a part of. We expect this to be a very active market to take advantage of the memory, the reasoning, all those things that are needed to have that storage close to the process. And right next to it, that's what you guys do. Congratulations and thanks for coming on theCUBE.
Greg Matson
>> Oh, it's been a pleasure, John.
John Furrier
>> All right.
Greg Matson
>> Thank you for having me.
John Furrier
>> I'm John Furrier. This is breaking down the AI factories. This is like an under-the-hood series, because there is so much activity going on under the hood, inside these factories, making it go faster. You're starting to see it in the results of the big foundation models and the cloud providers and the neoclouds, the new kinds of clouds; they're optimizing their systems to make it go faster and be smarter and put intelligence into all the applications. This is the big trend, and Solidigm's leading the way. I'm John Furrier. Doing our part to share the data with you. Thanks for watching.