In this interview from theCUBE + NYSE Wired: AI Factories - Data Centers of the Future, Adam Lewis, head of engineering at SandboxAQ, joins Pranav Gokhale, chief technology officer and co-founder of Infleqtion, Anuj Jaiswal, chief product officer at Fortanix, and Amit Sinha, chief executive officer of DigiCert, to talk with theCUBE's John Furrier about how quantum computing is crossing the threshold from theoretical promise to product reality across AI, cryptography and material science. Gokhale showcases Infleqtion's neutral atom quantum computer — boasting a commercial-system world record of 1,600 qubits — set to be demonstrated at GTC alongside NVIDIA's GB200 GPUs. Sinha highlights that 40% of the top 100 websites have already adopted quantum-safe key encapsulation, yet enterprises face a sweeping migration challenge he likens to Y2K times 10.
Additionally, Jaiswal details the "harvest now, decrypt later" threat facing enterprises whose aging data center cryptography remains vulnerable to future quantum attacks, explaining how Fortanix guides organizations through the full transition to post-quantum standards finalized by NIST. Lewis unpacks SandboxAQ's approach to Large Quantitative Models, or LQMs — purpose-built AI that incorporates quantum mechanics and physics-based modeling to solve hard scientific problems in drug discovery and material science that conventional large language models cannot reach on their own. The panel underscores a pivotal architectural shift: QPUs functioning as co-processors alongside GPUs to accelerate workloads from molecular simulation to encryption research. From designing better batteries and solar cells with hybrid quantum-classical compute to safeguarding enterprise infrastructure against a looming cryptographic upheaval, the discussion maps out why quantum is no longer a distant frontier but an immediate engineering priority.
Erwan Menard, Crusoe
In this interview from theCUBE + NYSE Wired: AI Factories – Data Centers of the Future, Erwan Menard, senior vice president of engineering at Crusoe, joins theCUBE's John Furrier to discuss how the neocloud is delivering AI infrastructure from electrons to tokens — spanning gigawatt-scale training facilities to modular edge inference. Menard unveils Crusoe's new Edge Zone product, a containerized modular data center called Spark that delivers half a megawatt of compute for low-latency inference wherever demand requires it.
>> Welcome back everyone to theCUBE's Palo Alto Studio, I'm John Furrier with Dave Vellante hosting an all-day pregame coverage of GTC, and also the industry around AI factories and how the future of AI infrastructure continues to accelerate. Many panel discussions. This fireside chat is with Crusoe. They're leading the wave in AI build-out and innovative approaches. Variety of stories we've done on theCUBE on that, but this topic really is about AI agents in demand, AI software, gigawatt-scale infrastructure. Erwan Menard is here, SVP of Engineering, Crusoe. Great to see you again. Last time you were on theCUBE was 2014. I interviewed you in San Francisco. Boy, lots changed.
Erwan Menard
>> I think we do better now, John.
John Furrier
>> Yeah. We're more distinguished, more experienced. We're accelerated.
Erwan Menard
>> Thanks for having me.
John Furrier
>> We've been following you guys for a while now. Great success story. A lot of activity. You guys are really knee-deep in building out next-generation infrastructure for AI. It's well documented on the internet. People can go search. A lot of great content out there, but it really hits at the AI factory vision. That's been talked about now for two years, since Jensen said it. We've been following it from before. Great. We're building out these big AI factories, but also there's AI factory services.
Erwan Menard
>> Yes.
John Furrier
>> We just came back from Mobile World Congress, now called MWC. The telcos are trying to figure out what their opportunities are. They have the footprint. They have the energy. They have the cabinets, buildings. So we're seeing AI factories at the edge, a hyper-converged network edge that's going to need to do a ton of inference. So it's very clear: training, check the box. Now, we're onto inference.
Erwan Menard
>> Yes.
John Furrier
>> And all this value creation unlock that's happening. Give us an update on what you're working on at Crusoe to bring more action to the table.
Erwan Menard
>> Yeah. Well, so Crusoe, neocloud, what is pretty unique is that we go from electrons to tokens. We source energy, we develop data centers. We serve infrastructure as a service, and more recently we launched inference, so we serve tokens. I think one of the pieces of news for us at GTC is a new product called Edge Zone, meaning that I can build for you a mini AI factory wherever you need it, close to your demand. And so to your point, in a case of inference at the edge, to get that extra latency improvement, we can locate that AI factory close to your users, and that factory is a modular data center we manufacture ourselves.
John Furrier
>> One of the things when we talked about doing this session, we've been talking about AI factories. The title of this one is Data Centers of the Future with an S, because it's not just one data center. It's distributed computing, it's hybrid cloud, whatever you want to talk about. It's hybrid cloud, which is just distributed computing. Now, you have these AI nodes and each node is injecting intelligence. So no matter what it is in the network, this is foundational. Explain from your standpoint how you talk about that, because this is something that the industry is now seeing for the first time. We expect to hear a ton this week more about this.
Erwan Menard
>> Yeah. Well, I think first of all, gigawatt-scale data centers are super important to get the right economies of scale. We've built a number of them. And what's important there is to design for this AI era. The design of the data center is completely revisited just because of the density of energy. Five years ago, you would have designed for, let's say, 15 kilowatts per rack energy density. You take GB200, you're already at 120 kilowatts per rack, and it's going to go higher from there with Vera Rubin. So you need to design those data centers well. And there is a lot of value at the edge. That's why we build the modular data center, we call it Spark. It's a container where you get half a megawatt. So gigawatt scale, half a megawatt. I can put 10 of those together, 20 of those together. I can build 5, 10 megawatts, and that's going to be super helpful when you know you need to inference the model and have all the data saved in a specific location. So think of healthcare, think of retail, manufacturing, obviously, governmental use cases as well. And so you need to be able to serve both ways, because you want to get the best performance, super low latency at the edge, but you also want to be compliant with requirements of geography-specific deployment.
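The rack-density and site-sizing numbers Menard cites can be sanity-checked with simple arithmetic. The sketch below uses the figures from the interview (15 kW and 120 kW per rack, half a megawatt per Spark container); the rack counts per container are a rough power-budget illustration, not Crusoe's actual layout.

```python
# Back-of-the-envelope capacity math using the densities cited above.
LEGACY_RACK_KW = 15       # typical rack density about five years ago
GB200_RACK_KW = 120       # GB200-era rack density cited in the interview
SPARK_CONTAINER_KW = 500  # half a megawatt per modular Spark container

# How many racks fit in one container's power budget?
legacy_racks = SPARK_CONTAINER_KW // LEGACY_RACK_KW  # 33 legacy racks
gb200_racks = SPARK_CONTAINER_KW // GB200_RACK_KW    # 4 GB200-class racks

# Grouping containers scales a site linearly, as Menard describes.
site_mw_10 = 10 * SPARK_CONTAINER_KW / 1000  # 5.0 MW from 10 containers
site_mw_20 = 20 * SPARK_CONTAINER_KW / 1000  # 10.0 MW from 20 containers
```

The drop from 33 racks to 4 within the same power envelope is why he says the data center design is "completely revisited" for this era.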
John Furrier
>> The gigawatt in a smaller footprint talks about the capabilities, but tailored to the footprint size. Talk about how that works and what makes that work, because you have intelligence. So let's just say it's the edge of the network. It could be a hospital, or it could be a telco tower managing all the spectrum and slices of spectrum. What makes it work? Because it's almost counterintuitive. At a gigawatt, how do I shrink that down? So you're not shrinking anything, you're just packaging it for the unique needs of the edge node. Is that right? Explain that.
Erwan Menard
>> Yeah. So basically, you're going to train the big model in the gigawatt data center, and you can also inference from there, but you can choose to take the model weights, so the much smaller file. You don't have all the data you used for training. It's a much smaller piece of work, and you have it inferenced at the edge, closer to the user. And for that, you need fewer GPUs. So you take advantage of the fact that for inference, you need less infrastructure, and you specialize the whole software stack on top of that hardware to excel at inference and use the least possible hardware to serve as many users as you can.
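Menard's point, that only the trained weights travel to the edge while the training corpus stays behind, can be sketched as a size comparison. Every number below is a hypothetical placeholder chosen to illustrate the ratio, not a measurement of any real model.

```python
# Hypothetical artifact sizes in GB: only the weights ship to the edge.
TRAINING_ASSETS_GB = {
    "training_corpus": 50_000,  # raw data used for training; stays put
    "optimizer_state": 280,     # training-only state; stays put
    "model_weights": 140,       # the only artifact the edge node needs
}

def edge_payload_gb(assets: dict) -> int:
    """Only the model weights travel to the edge node."""
    return assets["model_weights"]

def fraction_shipped(assets: dict) -> float:
    """Share of the total training footprint that actually ships."""
    return edge_payload_gb(assets) / sum(assets.values())
```

Under these illustrative numbers, well under 1% of the training footprint has to move, which is what makes the half-megawatt edge container viable.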
John Furrier
>> And this works because the factories talk to each other?
Erwan Menard
>> Yes.
John Furrier
>> Infrastructure plural.
Erwan Menard
>> It's an infrastructure plural. You have the network. So for example, I can imagine a case where there are a few models that are going to serve 90% of what the users need in that geography. I put them in that modular data center, Spark by Crusoe Cloud, my Edge Zone, but I can still call to the larger cloud for the request that's going to need a model that is hosted elsewhere. So this is data centers plural, but it's about specializing the infrastructure to the-
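The routing pattern Menard describes, serve the handful of locally hosted models from the Edge Zone and fall back to the larger cloud for everything else, fits in a few lines. The model names and endpoints here are hypothetical placeholders, not Crusoe APIs.

```python
# Sketch of edge-first routing: local models serve the common case,
# everything else falls back to the central cloud.
LOCAL_MODELS = {"chat-small", "embed-base"}  # hosted in the Edge Zone
EDGE_ENDPOINT = "https://edge.example.internal/v1"
CLOUD_ENDPOINT = "https://cloud.example.internal/v1"

def route(model_name: str) -> str:
    """Return the endpoint that should serve a request for this model."""
    if model_name in LOCAL_MODELS:
        return EDGE_ENDPOINT   # low-latency local path, the ~90% case
    return CLOUD_ENDPOINT      # the larger cloud hosts the long tail
```

The interesting design choice is what goes in `LOCAL_MODELS`: pick the few models that cover most of the geography's demand and the edge site stays small.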
John Furrier
>> Make it intelligent.
Erwan Menard
>> Yes.
John Furrier
>> Let's get smart about what to send over the network, when to infer, where to do what, when and where. All right. So let's get to some of the hard news with GTC and NVIDIA. Talk about what you guys are doing within NVIDIA. What's announced?
Erwan Menard
>> Well, we're super happy to be Nemotron 3 launch partners. So we have the Nemotron 3 Super. We have those models available on our inference service. We call it Intelligence Foundry on Crusoe Cloud. That's one. We're very, very happy to contribute to Dynamo. So we're all in on inference. We launched our inference service a few months ago, very successful. Dynamo is a fantastic project. We are contributing a tokenizer to the industry. And we see that piece of software saving you sometimes 30x the time to tokenize data you need to bring to training and then inference. So that's a nice contribution we're very happy to make. We're going to be early adopters of the Vera CPU that's coming soon from NVIDIA. Very, very interesting to mix with the GPUs for reinforcement learning. And then DSX, the standard to build AI factories, which we are fully embracing.
John Furrier
>> And that Spark Zone also works well with Vera because of the footprint requirements. You might not need a lot of GPUs; you don't need a Vera Rubin rack in a smaller footprint. You can use a little bit of this, a little bit of that, a little bit of ...
Erwan Menard
>> Yeah, there is definitely some interest in using the CPU as well. I think today actually, we deploy with the Blackwell generation. That's just awesome. We're very, very effective. If you recall, Blackwell gave that improvement of performance compared to Hopper for inference specifically. And that's why we're deploying the first Edge Zones with Blackwell.
John Furrier
>> Yeah, really. I mean, still worth it. I mean, the price of hardware traditionally used to go down a lot. Now, with the value of the software, it stays higher value because you've got software. And I think the open source piece becomes super critical; we're seeing a lot more innovation.
Erwan Menard
>> Yes.
John Furrier
>> Talk about that piece of it because the Nemotron 3, this is like a huge, I won't say a huge change of direction. NVIDIA's always kind of been pro innovation, but now the ecosystem's emerging super fast. And their ecosystem wasn't huge. Go back 10 years, it was a gaming and hardware ecosystem. Now, it's full.
Erwan Menard
>> It's full. I mean, we work with them on the entire cloud stack. There's a lot of really interesting open source software in the DGX stack. And then for inference, looking at Dynamo. So we have our own inference engine. We serve tokens and we move all the parts under the hood as innovation comes. And so we take in Dynamo a number of really interesting pieces. We contribute to Dynamo. We have our own in-house KV cache as an example. We call it Crusoe MemoryAlloy. It's important to be able to mix all this innovation. I'm so fascinated. We are now three years after the ChatGPT moment, three years plus, and we are still seeing an incredible pace of innovation. So as a service provider, I want to take advantage of everything happening to give the best performance and price per token to the customer.
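Menard mentions Crusoe's in-house KV cache, MemoryAlloy. The underlying technique is standard in transformer inference: keep the key/value vectors of tokens already processed so each new token attends over cached state instead of recomputing the whole sequence. A minimal sketch of that bookkeeping, illustrating the general idea rather than Crusoe's implementation:

```python
import math

class KVCache:
    """Append-only store of per-token key/value vectors."""
    def __init__(self):
        self.keys = []    # one key vector per token seen so far
        self.values = []  # one value vector per token seen so far

    def append(self, k, v):
        self.keys.append(k)
        self.values.append(v)

    def __len__(self):
        return len(self.keys)

def attend(cache, query):
    """Dot-product attention of one new query over all cached entries."""
    scores = [sum(q * k for q, k in zip(query, key)) for key in cache.keys]
    peak = max(scores)
    exps = [math.exp(s - peak) for s in scores]  # numerically stable softmax
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(cache.values[0])
    return [sum(w * v[i] for w, v in zip(weights, cache.values))
            for i in range(dim)]
```

Each generated token costs one `append` plus one `attend` over cached state; production caches add paging, eviction, and sharing across requests, which is where products like MemoryAlloy differentiate.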
John Furrier
>> As head of engineering, you see it firsthand. I want to get your perspective on this because I've been a big fan of the radical co-design principles of NVIDIA. And the way they're doing it's not a cliché. They actually mean it. And when you're engineering these large-scale systems, I mean, a data center is a system, it's an operating system. It's a full-on power plant of compute and all kinds of components. It's a whole supercomputer. What does that mean to you? Because as someone who's partnered with NVIDIA, how does that co-design work? Take us through. You don't have to reveal any confidential secrets, but talk about the philosophy of extreme co-design and why that works.
Erwan Menard
>> I mean, so you go through the basics of partnering early, meaning that we get to see the products early on, we get to give feedback on the roadmap and so on. But I think the co-design is completely augmented by the fact that they bring all this software. Like the DGX software stack and the ability we have to talk with people who actually manage internal cloud services within NVIDIA. So they contribute software, they've used it themselves, and then we can talk together on how to make it better for a public cloud like us. This is where there is a ton of value. The hardware is there. They show you early, you want to have the samples. I don't take that for granted, but that's table stakes. The software piece on top is where I think they make a huge difference.
John Furrier
>> Explain for folks watching, because I think this is a good opportunity to get a little mixture-of-experts action going here with you. The benefit of the DGX software stack. Why is DGX so important for NVIDIA? How would you answer that?
Erwan Menard
>> I think it's about serving the demand. I mean, at the end of the day, we are looking at a situation where there is still so much more demand than we can serve. You were talking about the pricing of hardware. You can look at H100 indexes out there and it's there, right? It's not a Crusoe topic. It's an industry topic. And why is that? Because we have so much demand. So personally, the way I look at NVIDIA's approach to this is that they want to help us be successful, because if we are successful, demand gets served, research advances, use cases get embraced at scale, and that's beneficial for the whole ecosystem.
John Furrier
>> Yeah. I mean, I really like the philosophy from Jensen on stage. Get the adoption going, magic happens. And really, this is a historical example of a rising tide that floats everyone's boat.
Erwan Menard
>> Absolutely.
John Furrier
>> Talk about Crusoe Cloud, because I think this is an area where you're starting to see a lot more. I know you're heading that up too. Cloud services at hyperscale, you mentioned scale, are a huge advantage, and this is only going to get bigger in our view, at least on theCUBE Research. Talk about the Crusoe Cloud. What's it about? How does it work? What are some of the highlights, momentum, and key services?
Erwan Menard
>> So Crusoe Cloud is infrastructure as a service: you can get virtualized clusters or managed Kubernetes clusters. Actually, we launched the managed Kubernetes flavor a year ago, March 2025, and we are now at 52% of the whole traffic in the cloud under that service, meaning that AI engineers see how we can save them time because we have clusters that are self-healing. We have a command center simplifying your life.
John Furrier
>> And the go-to spot for Kubernetes.
Erwan Menard
>> Yes.
John Furrier
>> Which is the substrate for all the AI infrastructure.
Erwan Menard
>> Absolutely. And what to me is the epiphany, because we're working with very advanced customers. People like Cursor, as an example, are clients of ours, and they push the limit on everything. So when you manage to save them time with this managed service that abstracts some of the complexity, it's a clear signal that you're doing something helpful. So that service now is more than half of the business. That's for infrastructure as a service. And then we have the inference service where you can get the popular models. We talked about Nemotron 3 Super. That's now in the Intelligence Foundry, the little model catalog on Crusoe Cloud. But the most important part is bring your own model. So people who've trained a model from the ground up, or post-trained and improved an open model with their data, they give it to us and we play it back with an SLA on latency and throughput, and this becomes the brain of the agent-
John Furrier
>> In their secure environment?
Erwan Menard
>> Absolutely.
John Furrier
>> Not like out in the wild?
Erwan Menard
>> No, it's a dedicated cluster if they want to. We can go as far as that. We can say, "The hardware this is running on is yours and yours only." And we take a commitment with an SLA on the performance. So we save them the complexity of managing all the hardware. We take care of this and they can focus on the agent, which the model is powering.
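An SLA on latency like the one Menard describes is typically expressed as a percentile bound, for example "p99 latency under some budget". A minimal sketch of checking sampled latencies against such a bound; the 200 ms default is an illustrative figure, not Crusoe's actual SLA terms.

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile (p in 0..100) of a list of latency samples."""
    ranked = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ranked)))
    return ranked[rank - 1]

def meets_sla(latencies_ms, p99_budget_ms=200.0):
    """True if the observed p99 latency is within the committed budget."""
    return percentile(latencies_ms, 99) <= p99_budget_ms
```

Percentile bounds are preferred over averages for SLAs because a handful of slow requests can hide behind a healthy mean while still hurting the agent the model is powering.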
John Furrier
>> I love having Chase on. You guys at Crusoe are really pushing the envelope. You're talking about Cursor. You guys are also pushing the envelope. What are some of the things that you're pushing? Could you share any, I won't say near-death experiences, but war stories of pushing stuff to the limit, and also how AI is working for you guys? Because one of the common themes with successful folks we talk to is they're also client zero of their own products.
Erwan Menard
>> Yes.
John Furrier
>> They're driving everything first before they even bring it to a customer. What are some of the things you can share?
Erwan Menard
>> I'll give you an anecdote. So this week, we are releasing a service for people who want to do inference with us that allows them to tune the model. So you have a little workflow, you bring your data, you tune your model, you make it yours. Internally, who's the most excited person about it? The head of legal. He wants to be a dogfooder, as we say, an early tester, taking the corpus of contracts they're working on day in, day out, taking a very good open source model, deriving a model that is Crusoe-legal specific and starting to work with the help of that model. This is not a new use case; we've talked about this for a couple of years. What I love is the fact that a head of legal would volunteer to be a dogfooder, an early tester, on something like this, because they see the value. And we were 300 people at the end of 2024, 1,200 at the end of 2025. We're going to be 2,500 at the end of this year. The whole company needs to scale, and AI is amazingly helpful to HR, to legal, and of course to our developers who are writing code.
John Furrier
>> Yeah, you bring up a good point. I see this example all the time. It's either underserved markets like a legal vertical, in this case he's legal, but we'll put him in that category. But also fast-growing companies where you have gaps that you need to fill, and AI is accelerating value. So that saves on hiring, that saves on just figuring out how to fill that gap. But a lot of underserved professions like the legal profession, they're technically not viewed as early adopters. So the fact that he's leaning in saying, "I will drive this."
Erwan Menard
>> I guess the legal guys-
John Furrier
>> He's got burning demand.
Erwan Menard
>> The legal guys at Crusoe are, which is to me an epiphany. A recruiter I work with on a bunch of roles in engineering and product has built five agents that accelerate the whole process: listening to the interviewers, improving the job description, all of this flow of hiring somebody being accelerated by five agents helping the recruiter. This is the type of people we want at Crusoe. This is the type of experiment we're making.
John Furrier
>> All right. So we actually got you here. Put a plug in for your engineering team and the company. So you guys are on a rapid-growth rocket ship. What's the culture like there? What's it like working there? Is it a lot of hard problems? Is it let chaos reign, or rein in the chaos? How would you describe the Crusoe culture?
Erwan Menard
>> Well, it's a lot of hard problems. I mean, when you build infrastructure and you keep growing it at exponential scale, there is always a problem to be solved. And so we like to say think like a mountaineer, probably because our founders come from Colorado and a mountaineering background, but the point being, aspire for something big, but keep the humility of planning for every possible scenario. And I really like that tension between ambition and humility and just getting it done. We need a highly available cloud so that customers are happy. We save them time with products like this managed Kubernetes and so on. And that way we get the reward that we're doing something that's helpful and we can continue investing. This whole notion of aspiring to solve a big problem, training and serving AI at scale, with humility, I think that's pretty much our culture.
John Furrier
>> Well, great to have you back on theCUBE and love the venture, love the new business opportunities. I love the AI infrastructure build out. A final question, what's your focus this week? What do you hope to get done? How are you going to attack the GTC fire hose of information, meetings? What's your objective?
Erwan Menard
>> No, the calendar's pretty full. What I like is that we are a neocloud that goes from electron sourcing to supplying tokens and everything in the middle. And that's what GTC is about this year. We're going to talk about all of this, sourcing energy, building the data center, building the cloud services, the software stack that saves you time. Our entire spectrum is present this week, so we're going to learn as much as we can this week.
John Furrier
>> Well, we love what you're doing. You guys are the AI factory example in our mind, so congratulations. Thanks for coming on.
Erwan Menard
>> Thank you, John.
John Furrier
>> All right, I'm John Furrier for the GTC Pregame show, theCUBE and the NYSE Wired program, a CUBE Original and also an open community. Brian Baumann, Dave Vellante, myself, Gemma, Alan, Liam are all here and the whole CUBE team bringing you Sunday special coverage. Thanks for watching.