In this SC25 interview, theCUBE’s Dave Vellante and Savannah Peterson sit down with Glenn Dekhayser from Equinix and Alan Bumgarner from Solidigm to discuss the critical infrastructure required to support high-performance computing and AI. The group moves beyond the hype of GPU shortages to analyze the "warehouse-style computer," emphasizing the symbiotic relationship between massive compute power and the storage required to feed it. Bumgarner explains the logistical challenges of transforming raw, "unpretty" data into the clean datasets necessary for accurate stochastic gradient descent calculations, while Dekhayser outlines Equinix’s hybrid strategy. He details how enterprises can utilize interconnection to leverage the best of hyperscalers and neoclouds while maintaining data sovereignty and governance.
The conversation also challenges the myth that AI adoption requires an immediate, all-in migration to the cloud. Dekhayser advocates for building "AI factories" that prioritize data curation and pipeline management on-premise, allowing organizations to export data for processing without getting locked into specific platforms. To close the segment, the guests offer advice to students and new entrants in the HPC space, stressing the importance of maintaining a generalist worldview. They encourage the next generation to look beyond the code to understand the physical realities – heat, power, and hardware – where "stuff gets real" in the data center.
Glenn Dekhayser, Equinix & Alan Bumgarner, Solidigm
Clips from this segment:
Building On-Premises AI Stacks: Addressing Data Control with Co-Location and Neocloud Solutions for Modern Data Management Challenges
Equinix's role in providing flexible interconnection for various computing needs and environments
The need for clean, organized data before implementing AI models and computing processes
Importance of establishing sovereign data locations to avoid vendor lock-in and egress costs
Engaging a wide perspective in tech education, emphasizing overlap between different technology disciplines
Glenn Dekhayser, Equinix & Alan Bumgarner, Solidigm
Glenn Dekhayser
Global Principal Technologist, Equinix
Alan Bumgarner
Director and AI Technologist, Solidigm
Savannah Peterson
>> Good afternoon HPC fans, and welcome back to lovely St. Louis, Missouri. We're here midway through day one of our three days of coverage here on theCUBE at SC25. My name's Savannah Peterson, bringing you all the best and brightest and nerdiest today with Dave Vellante. That was a cool segment we just did.
Dave Vellante
>> Yeah, I mean, there's just
the applications of high-performance computing. It's just fascinating. I mean,
just mind-boggling actually.
Savannah Peterson
>> Yeah. Well, and there's
so much synergy now between the high-performance
computing community, scientists, other people doing cool stuff. You got to have relationships
like data and storage and data centers all coming
together, which is exactly what our next guests are
going to tell us about. Glenn and Alan, thank you
so much for being here.
Glenn Dekhayser
>> Thanks for having us.
- Hey.
Savannah Peterson
>> Glenn, you must really dig it up here.
Alan Bumgarner
>> This is our second time today.
Alan Bumgarner
>> I know. It's kind of new
for me too, so I'm excited.
Savannah Peterson
>> Well, you've done great.
Did you secretly just want to sit in Gary's seat?
Alan Bumgarner
>> Yes. Yeah, because I mean, how many times do you get
to interview a legend.
Savannah Peterson
>> Right. Oh no, I know. I know. I was absolutely honored to
be a part of that situation. So everyone is talking about
two things they haven't always talked about in the technology hype cycle. We're actually talking about storage. We're actually talking about
data centers. It's gone beyond. We need the AI ready infrastructure to make the magic happen. So I'm curious, Alan, what the relationship is like and what had you bring
lovely Glenn to us today? Why is this such an important tension? Not tension, collaboration.
Alan Bumgarner
>> If you think about what
everybody's trying to do today, and everybody's trying to
populate, give as much power as they can to this great
big graphics cluster so they can run these stochastic gradient descent calculations and determine the
future of the world with AI. But when you break down
the mechanics of how all of this stuff really operates
inside of a data center, it really turns your data center into a warehouse style computer. When you have a very large
graphics cluster and it has to... You can't feed it air, so
you have to feed it data. And you have this big network
that's connected to a lot of storage where all your data is pretty and it's been cleaned and
it's nice and it's sitting-
Savannah Peterson
>> Pretty data. Who doesn't like that? >> Yeah, your pretty data.
Dave Vellante
>> Not everybody has it. Most people don't.
Alan Bumgarner
>> So then you have to run this
command that pulls it all in.
Alan Bumgarner
>> And once you get all this data in and you can run all these calculations, you converge model layers one at a time in a very tail-latency-sensitive manner. The more power you can give to this thing doing those calculations, the
faster you can get it done, the more accurate you can make your model, the better the fine-tuning. All of the things that make
these models not hallucinate or do things that you want them to do becomes more important. And so you have to have a lot of data, and you have to have enough power to give to this thing over here. And so the less power
you have to use over here and the more efficient it becomes, then naturally, the better the calculations come, the more accurate your models become, and all of those things happen. So it's a very close relationship between what your storage array can
do to keep your data clean and in a proper global
namespace to pull it over to make this model more
accurate, to do the things that you were trying to achieve.
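Alan's point about feeding stochastic gradient descent with a steady stream of clean data can be illustrated with a toy sketch. Everything here is an illustrative assumption, not anything from the interview: a one-parameter linear model, a made-up dataset, and an arbitrary learning rate.

```python
import random

# Toy dataset drawn from y = 3x, so SGD should recover the slope 3.0.
data = [(x, 3.0 * x) for x in [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]]

def sgd(data, lr=0.05, steps=200, seed=0):
    """Minimal stochastic gradient descent for y = w * x.

    Each step uses ONE randomly drawn sample; that is the 'stochastic'
    part, and it is why the cluster has to be fed data, not air.
    """
    rng = random.Random(seed)
    w = 0.0
    for _ in range(steps):
        x, y = rng.choice(data)          # draw one sample
        grad = 2.0 * (w * x - y) * x     # d/dw of (w*x - y)^2
        w -= lr * grad                   # step against the gradient
    return w

w = sgd(data)
print(round(w, 2))  # converges to the true slope, 3.0
```

Dirty data (duplicates, mislabeled pairs) biases exactly this per-sample gradient, which is why the cleaning Alan describes has to happen before the cluster ever sees the data.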
Dave Vellante
>> So Glenn, six, nine months
ago, a lot of discussions with organizations,
particularly in finance, they were saying, we are
building our own on-prem AI stack because we don't want to just move all the data to the cloud. We got plenty of data up
there. We're using the cloud extensively for AI. But it's going to be too
expensive to move all this data, and we want to have sort of
control over our own destiny. So we're building our own stack. The problem is there aren't
a lot of solutions out there, so we got to build our own stack. Today what I'm hearing from many of those, not all of them... JPMC can hire 1,000 AI engineers
and build this stuff out. They've got the chops to do that. But many enterprises saying,
well, we looked at it, we have to retrofit our data centers. They're really designed for today. We're sort of rethinking that. So we're thinking about
co-location facilities or neocloud. So I'm trying to understand where you fit now in this
new era between, okay, you've got hyperscalers,
you've got the neoclouds, you've got Equinix, who's got
great facilities expertise. How should we be thinking
about Equinix today?
Glenn Dekhayser
>> When it comes to AI,
the way that enterprises, organizations of any size
are going to accomplish this, the word is and not or. You're not going to go
in the cloud or on-prem or to a neocloud or to the edge. And every use case you
do will look different. You might use a different model, which may imply a different provider, which may require different
network, different data. So what we're seeing
enterprises start to coalesce around is some best practices. So Equinix's role in that best practice is
allowing a customer to, in a very agile way, interconnect
to all of the things, the edges, the neoclouds,
the hyperscalers, their own locations, different regionally and allow for all of
the sovereign governance that you implied in your question. But what's very interesting
from a data perspective, this event here it's about HPC,
we talk about big clusters and lots of power and liquid cooling and
all that great stuff. And look, organizations
are going to get there. Certainly neoclouds are going to be all liquid cooled, right? They're going to have
specialized facilities that all they're going to have is GPUs and they're going to be optimized. Their data centers will
be optimized for that. Very, very difficult for an enterprise to have their own data center that's like that. It just isn't worth it, not just the capital expenditure, but the operation of that data center. It doesn't make sense for
them to do themselves. But there's going to be a
need for them to have some of this processing power
in a place that's sovereign so they can do their own models. They don't want to necessarily
do that to a neocloud. Maybe they can't move the data. So like I said, there's an and, and all these different
use cases are going to imply different requirements. So what you're going to end up with is companies using GPU
resources in a hyperscaler to start your POC, perhaps going to neocloud, running your pilot. Maybe you start running production there. But the one thing that's
repetitive across all these use cases is that you need
to acquire the data. Where are you getting that from? All of your edges, all of your
IOT devices, all your logs, all your customers. You got to acquire all that
data, you got to manage that data in its raw ugly
format, the not clean format-
Savannah Peterson
>> Yeah. Not pretty. >> Not pretty.
- Pre-makeup, pre-mani.
Glenn Dekhayser
>> Not the pretty format, the unpretty data.
Alan Bumgarner
>> Soon to be pretty
- Right.
Glenn Dekhayser
>> And Nvidia's got all sorts of great software, and a lot of companies have really good software, to help curate this and get it into these formats.
Glenn Dekhayser
>> And there's processes and known best practices
for getting it there. But that data pipeline where
you put that data pipeline, that's the repetitive part
regardless of the use case. So that becomes a very strategic thing that the companies can do first as they're getting into this journey. And then start to use all these resources and you start to see this real federated or what we're calling distributed AI with all the interconnection
that we're enabling. And so this is where we
believe enterprises are going, and that's how we've really
optimized our network to accomplish that. >> That very much aligns with the work
Dave Vellante
>> that we're doing at theCUBE Research. In fact, Jackie today posited
that the data center is going to become actually that: a data center where you store data. You don't necessarily
load it up with GPUs, use that existing facility
to put your data in. How is your data because
you've got that facility, and then use capabilities
that say Equinix has, or like you said, neoclouds and clouds, and that's where you're
going to do the intense sort of GPU work, as you say, everywhere.
Glenn Dekhayser
>> Yeah. So what Equinix
is really trying to do is to provide those things that
customers need to consume but may not want to do themselves
or can't do themselves. So whether it's a liquid
cooled environment for GPUs or whether it's something
as basic as tape libraries as a service, providing
a sovereign S3 archive for all this AI data, which really is a timely
thing given the new shortage of shingled drives that's coming out. So I got to store all this data, I've got all this raw unpretty data or these models, these
checkpoints and training. Where am I going to put this
stuff if I need to keep it? Because I can't get more drives. Everybody got rid of their tape libraries because they replace it with
these high-capacity drives. Oops, I can't buy any more of
those and now they're full. What do I do? Well, I've
got to evacuate that data. So it's these kinds of
things that we're trying to provide to be consumed, these building blocks for solutions, and interconnection's a big part of that. Some of these other
services are part of that. To help customers solve the
problems that they're going to face as they go down this AI path in environments that weren't architected for it originally.
Dave Vellante
>> And you can do that economically
because you get scale. >> Oh, yeah.
- Yeah, yeah. Absolutely.
Savannah Peterson
>> Well, and you guys are
the OGs of doing that.
Savannah Peterson
>> So you're definitely the right partner to play in that game. I'm curious, because there's so much uncertainty right now when it comes to people determining their solutions. Is AI going to be good for us? There's a lot of questions in
our industry, as well as a lot of really great hard
work that's happening, particularly at shows like this. What's one myth that you wish you could just
completely eradicate about? And I'm going to ask
you about data centers and I'm going to ask you about storage because I think we're in a place and even the conversations
we've had already today, everything that's old is new again. But also we can do things
differently now than we could before and we don't necessarily have some of the same constraints that we used to. Alan I'm going to start
with you since it's your second round with me today.
Alan Bumgarner
>> So the question was one
thing that I could eliminate?
Savannah Peterson
>> If you could just magically
educate everyone on one thing about storage right
now, what would that be, for all your potential customers
who might be listening?
Alan Bumgarner
>> Oh, I have two. So I have to choose. >> You can have two. I'm really
Savannah Peterson
>> generous today. I'm in a good mood. >> 1 and 1A.
- Yeah.
Savannah Peterson
>> I started with Gary this morning,
Alan Bumgarner
>> but I'll tell you, here's one
Alan Bumgarner
>> thing that I would love to eliminate. I was in New York with Gary
probably at the beginning of the year, and he just happened to be the audience on a
panel that I was speaking at. And I was on a compute
panel, which was very rare for a storage guy to
be on a compute panel. And when they got to me,
I asked the audience, so how many people in the audience
think their data's clean? And one person raised their
hand, and that was Gary.
Savannah Peterson
>> Or they don't have any data.
- Not right.
Alan Bumgarner
>> He runs the Los Alamos
National Lab super computer.
Glenn Dekhayser
>> No, but he's not lying.
- Very clean data for reasons.
Alan Bumgarner
>> But the thing that I see all the time
in storage is I have... And now there's AI models
that will go do it. You can go and look at
all of the data inside of your environment and find duplicates, and you can find bad files and you can do all of
these things with it. And now you've got these pretty
object stores where I have as much metadata per object as I want to. I can manipulate all of these things. And I think there's this
great big growing argument on how do you keep your data,
how do you keep it clean? How do you make it as compact as you can? And what are the things
that you want to go do to make it ready to be used? And I think that you'll see a lot of people in environments right now before they even get to the
point where Glenn can ingest all of their stuff and let them use these
sovereign AI data centers that Equinix builds so eloquently,
is you have to go fix all of those problems because you don't want to upload duplicates, you don't want to spend all the money to
do all of these things. For me, if it was just an easy way to do that on ingest and vectorize it immediately, I think that would be what I'd do.
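Alan's "easy way to do that on ingest" can be sketched with the cheapest first filter for duplicates: a content hash over each object before it is uploaded. This is a minimal illustrative sketch, not Solidigm or Equinix tooling; the file names and byte payloads are made up, and real pipelines would add near-duplicate detection and bad-file checks on top.

```python
import hashlib

def ingest(objects):
    """Hash each object's bytes and skip exact duplicates before upload.

    objects: iterable of (name, payload_bytes) pairs.
    Returns (kept_names, duplicate_pairs).
    """
    seen = {}                 # digest -> name of the first object with it
    kept, dupes = [], []
    for name, payload in objects:
        digest = hashlib.sha256(payload).hexdigest()
        if digest in seen:
            dupes.append((name, seen[digest]))  # duplicate of earlier object
        else:
            seen[digest] = name
            kept.append(name)
    return kept, dupes

objects = [
    ("report_v1.csv", b"a,b\n1,2\n"),
    ("report_copy.csv", b"a,b\n1,2\n"),   # byte-for-byte duplicate
    ("report_v2.csv", b"a,b\n1,3\n"),
]
kept, dupes = ingest(objects)
print(kept)   # ['report_v1.csv', 'report_v2.csv']
print(dupes)  # [('report_copy.csv', 'report_v1.csv')]
```

Flagging the duplicate at ingest means it never consumes upload bandwidth, storage, or a training epoch, which is the cost Alan is pointing at.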
Savannah Peterson
>> I love that. Glenn, what's something you wish more people knew?
Glenn Dekhayser
>> I think the myth, that we believe anyway, is that-
Dave Vellante
>> Just us.
- ...
Glenn Dekhayser
>> you should go all cloud with your AI endeavors from the beginning and then move it out later. Because I'm not saying
don't use the cloud. Cloud has an absolute imperative purpose and should be used in
those first use cases. But if you don't think upfront about establishing a sovereign
place for that data to be, you're going to run the risk of getting locked into a
platform that's very difficult to get out of later
from a data perspective, especially if you start using interfaces and APIs that are opinionated. So that's the problem is
that getting the data out, everybody focuses on the egress costs and sure that's painful and eventually that
will be the main driver of you probably doing what
we're telling you to do upfront. But doing it upfront,
building the beginnings of your AI factory, it
doesn't need any GPUs. That's the myth. You can start an AI factory
doing your data curation and pipeline management because
that's the beginning of it. You can send that data to
wherever the GPUs you want to use are and have at it, right? That's the flexibility. But you start there because
you're starting sovereign and you're now in a position
of flexibility and leverage.
Dave Vellante
>> Wait, explain that further.
So you're saying you can start without the accelerated
computing piece of it. >> Correct.
- But with a vision that you're going
Dave Vellante
>> to eventually add that in. Is that right, or not necessarily? >> Well, no.
Glenn Dekhayser
>> You're going to use it,
but you don't need it in that sovereign space right away. When you're starting your first use cases, you're experimenting,
you're trying what models and whatever, but your data is here, you can project that data. Most of the storage companies that are out there today
have the logic built in to project data to mirror
it, replicate it in very consistent ways to
different cloud platforms. You're looking at the Dells
and NetApps, the Pures of the world, they all do this. This is table stakes. And even if they don't, or you want to use a platform that doesn't have that, there are third-party tools that can move that data in a consistent
way up to the cloud. So move it to the cloud, use those GPUs. Move it to the neocloud, use those GPUs. Here's the good news. You've
got your sovereign copy. There's no egress. You just
delete it out of the cloud. So you never have to worry about egress ever if you have that plan and start the right way.
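The pattern Glenn describes, a sovereign system of record whose cloud copies are disposable, fits in a few lines. The in-memory dictionaries below are stand-ins for an on-prem store and any hyperscaler or neocloud bucket; every name here is hypothetical and purely illustrative.

```python
# Sovereign-copy pattern: the on-prem store is the system of record;
# cloud copies are projections you delete when done, so nothing ever
# has to be egressed back. All names are illustrative.

sovereign_store = {"train_set.parquet": b"...curated data..."}
cloud = {}  # stand-in for any hyperscaler or neocloud bucket

def project_to_cloud(key):
    # Replicate (don't move) the sovereign object outward to the GPUs.
    cloud[key] = sovereign_store[key]

def release_cloud_copy(key):
    # Done training: delete the remote copy instead of pulling it back,
    # so there is no egress charge and no lock-in.
    del cloud[key]

project_to_cloud("train_set.parquet")
# ... run training or fine-tuning against the cloud copy ...
release_cloud_copy("train_set.parquet")

print("train_set.parquet" in sovereign_store)  # True: record of truth intact
print(len(cloud))                              # 0: nothing left to egress
```

The design choice is that data only ever flows outward; the same projection can target a hyperscaler today and a neocloud tomorrow, which is the leverage Glenn mentions.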
Dave Vellante
>> So you talked about AI
factories a little bit today. What is an AI factory to you? I mean to us it's like you're
producing intelligence. And how is it different from
the traditional data center and how do you guys,
maybe it's not migrate, but balance the traditional workloads and the accelerated workloads? >> It's a great question because
I was at GTC DC a couple
Glenn Dekhayser
>> of weeks back, and Jensen defined an AI factory as a factory that creates, what? Tokens. >> Creates tokens, yeah.
- And I think
Glenn Dekhayser
>> that's one perspective
if you're in neocloud,
because yes, you're
literally measuring it by tokens, how much power you're putting in and how many tokens you're getting out. That's one kind of AI factory. I think enterprises see it a little differently. They don't care about
the tokens, they care about the business outcomes. That's all they care about. So that's why an AI factory,
that's why when I say can start with just the storage, that's what I mean. You might add GPUs as
you want to do training and inference either here or perhaps we've got 277
data centers in 77 countries now, 77 metros and 37 countries now. So running inference, some inference is not latency sensitive. It can be run in a central place. A lot of it is latency sensitive. It's not all about chatbots. >> No.
- So any kind of video or real time stuff.
Glenn Dekhayser
>> So having that distributed
inference is extremely important.
Savannah Peterson
>> Being able to deliver
GPU power at these edges, you don't need huge super pods for that, you're looking at smaller AI factories. And the floor here has so many... It is really cool to watch so
many distributed AI companies that don't know they're distributed AI companies. They're doing distributed inference, distributed training, distributed RAG.
Savannah Peterson
>> You are absolutely right. >> Even companies like Redis
that I saw last Friday
Glenn Dekhayser
>> that can do caching and
then distributed caching. The opportunities for
distributing information and distributing this different
stages of inference around to make things very efficient. There's so many opportunities
to make things a lot faster and a lot more efficient across the board. But also to create
those business outcomes. It's not just about, oh, I'm
going to go get a super pod and see what model I can create. Some people need to do that,
pharmaceuticals, some people who need that bespoke private model. But most enterprises,
they want the outcome. They'll take a foundation model or an open source model,
they'll post-train it, and then they'll build
a very sophisticated RAG because you can't put that temporal private data into a model. Can't do that. It doesn't come out. So you need to be able to get things out and manage that data. This is classic IT.
It's what it really is. This is stuff we've been doing
for a long time with ERP, with data warehousing,
it's just a new model. And so you need really performant storage. This is where I see storage really cleaning up in this world. Being able to deliver that
kind of high throughput data to local inference at the edge. It's going to be, I think a
great play as this all unfolds. >> That's where the data is.
- Data is everywhere.
Alan Bumgarner
>> Data is everywhere.
- Data definitely is everywhere.
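Glenn's RAG point, that private, temporal data stays outside the model and is retrieved at query time, can be sketched with a toy retriever. The bag-of-words scoring and the two documents below are illustrative assumptions; a real system would use a learned embedding model and a vector store rather than word counts.

```python
from collections import Counter
import math

# Toy private corpus that must NOT be baked into model weights.
docs = {
    "policy.txt": "egress fees apply when moving data out of the cloud",
    "runbook.txt": "restart the inference service after each model update",
}

def embed(text):
    """Bag-of-words vector; a stand-in for a learned embedding model."""
    return Counter(text.lower().split())

def norm(v):
    return math.sqrt(sum(x * x for x in v.values()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)  # missing keys count as 0
    n = norm(a) * norm(b)
    return dot / n if n else 0.0

def retrieve(query):
    """Return the most relevant private document to prepend to the prompt."""
    q = embed(query)
    return max(docs, key=lambda name: cosine(q, embed(docs[name])))

print(retrieve("cost of moving data out of the cloud"))  # policy.txt
```

Because the answer is grounded in whatever the retriever returns today, updating the corpus updates the system's knowledge instantly, with no retraining, which is exactly why RAG suits the temporal data Glenn says "doesn't come out" of a model.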
Savannah Peterson
>> Glenn, I totally agree with you.
Dave Vellante
>> I'm glad you brought up that Jensen point because, like you, I slightly disagreed, and that was not my personal definition. And I think that-
Dave Vellante
>> It's narrow. It is narrow.
Savannah Peterson
>> It's narrow.
- I mean it's huge, but it's narrow.
Dave Vellante
>> It's based on your perspective.
Savannah Peterson
>> Yeah. I say this with so much love, it's a classic tech nerd response. People aren't spending money for tokens. They're spending money
to be more successful. They're spending money
to go deliver the thing or build the thing or
solve, cure the cancer. They're not spending money because... Most people aren't
thinking about it that way. The tech community
thinks about it that way, but I mean, he's in front of
the entire public sector there at GTC, it's just kind of a
very different conversation. All right, last question for
you guys to close this out. And I'm going to have
to spice it up, Alan, because I asked you my
usual last question.
Glenn Dekhayser
>> You're in trouble, man.
- All right.
Savannah Peterson
>> We're going to take it a fun direction.
Savannah Peterson
>> What would you tell a new person to our community... Community has been a bit of a theme today, which is really nice. And someone who's a student
at one of these universities or someone who's just starting to get involved in our
space, doesn't matter. They're age agnostic. What would your advice
be to them about coming to play in the HPC AI space right now? >> Oh, man. I wanted that question.
Glenn Dekhayser
>> Both get it.
- I just had this conversation.
Alan Bumgarner
>> Oh, you did
Savannah Peterson
>> just have this conversation.
Well, what'd you say?
Alan Bumgarner
>> I had a student from Stanford
and one from Colorado State
and one from somewhere else, I can't remember. But the interesting
discussion that we had is we were looking at do you want to be in systems administration or do you want to be in chip design? And the fundamental difference between those two is
really not that far apart. Because one of the
things that I do in part of chip design when we work with that team internally is I have to go look at all the
workloads that are out there and I have to understand all the system administration that's happening. And it's this one big kind of
ecosystem that you really have to have kind of a wide view
to do some very narrow things. And so the advice that I
tried to give this team was when you're thinking about
how this works over here for system administration
versus what am I going to use from this knowledge
to make a chip someday, three years from now, that's going to do something very specific. If you can have a view,
a wide view, of the world and think about it that way, and then pick which discipline speaks to your heart the most. But it doesn't matter because
even the person over here in system administration needs
to understand how that SSD or the chip side of it works because they have to administer it. So it's kind of connected. And my advice was make sure
you have options when you're walking through that stage in
your life when you're picking what you want to be when you grow up. But don't narrow it down too much.
Savannah Peterson
>> And realize that maybe there's more overlap than you might think.
Alan Bumgarner
>> There's a lot more than you think. Don't let somebody peg
you into a specialty. That's not how the real world works.
Savannah Peterson
>> As a proud generalist,
this is music to my ears. >> Hear, hear. - Yeah, yeah. Hear, hear.
Savannah Peterson
>> Glenn, what would your advice be?
Glenn Dekhayser
>> All right. So this
isn't Equinix marketing,
when someone asks me who's not
familiar with who Equinix is, I like to tell them that
Equinix is where stuff, I don't use stuff, gets real. And the reason I say this is
I've met software engineers, brilliant kids who are, some
of them work for hyperscalers, some of them work in the financial world, who have never seen a physical server, who have never seen a data center, who don't understand power, who don't understand the
stuff that they write-
Savannah Peterson
>> Haven't felt the heat.
Glenn Dekhayser
>> They have no idea-
- The wind.
Alan Bumgarner
>> Yeah. Quite literally.
- Like the ramifications, the
Glenn Dekhayser
>> consequences of the code
that they're writing of
Savannah Peterson
>> what the physical manifestation of that is and where it runs. And so very often I do tours
for these kinds of folks, I'll bring them to the data center and I'll give them a tour, and you see their eyes light up, they can't believe trillions and trillions of dollars of economic activity going on and it just looks like fans
humming and water flowing and whatever and lots of blinky lights. But it's important to understand that as you're getting into this business, because there's so much
more than just sitting in front of a laptop and coding. Now, it's not that it's
not fun. I love it. I've done my share of it. It's fun. You type the command and the computer does something that you told it to. And how many things in this
world do what you tell them to? So that's a powerful feeling. Exactly. But the reality is they had
no idea what had to happen for that to be real, for that to happen because they don't see it. And I think if they saw
it and we taught them and we brought them more into that world, there'd be a greater appreciation
for what they were doing and perhaps a better
utilization, better architecture, better engineering out of the whole thing because really we'd like to throw around the words
architecture engineers in the IT world a lot. But we really are building a big machine, the software is part of
that, but it's a big machine and it needs to be architected like a machine and run like a machine. So I think that's something I would love to have more prevalent in the knowledge base of
these kids coming out.
Savannah Peterson
>> Well, thanks to this interview today, there will be more
prevalence of that, Glenn. Thank you guys both so much.
This was a real fun one.
Glenn Dekhayser
>> Appreciate it. Thanks.
- Thanks, guys.
Savannah Peterson
>> Thank you.
Alan Bumgarner
>> Yeah, good to see you again, Glenn. And good to be here with you, Dave, at one of the nerdier, but more fun events that we get to do every year.
Savannah Peterson
>> I hope you're all having as much fun as we are here in St. Louis, Missouri at Supercomputing 2025. My name is Savannah Peterson. You're watching theCUBE, the leading source for enterprise tech news.