In this Future of Data Platforms Summit interview, Sam Newnam, senior director of AI Solutions at Hammerspace, joins theCUBE’s Rob Strechay to unpack how enterprises can overcome the chaos of unstructured data in the AI era. Newnam explains how Hammerspace’s approach, built on global namespaces, automation and pipeline acceleration, helps organizations organize, move and activate data at file-level granularity across hybrid, multi-cloud environments.
The discussion explores the urgent challenges facing data and AI teams as they attempt to power agentic applications with distributed, protocol-agnostic datasets. Newnam highlights how Hammerspace leverages open standards, Linux kernel innovation and metadata-rich governance to simplify hybrid cloud adoption, avoid costly forklift migrations and improve ROI by enabling enterprises to use existing infrastructure.
Key themes include accelerating AI pipelines by 50%, reducing TCO by cutting unnecessary data duplication and future-proofing for the zettabyte-scale data growth driving physical and agentic AI applications. Newnam also outlines how Hammerspace addresses governance with single-point-of-access models and auditability, ensuring secure and efficient data collaboration across global teams.
Ed Beauvais, HPE
Director, Product Management, AI & Cloud Data Infrastructure, HPE
In this episode of the Future of Data Platforms Summit, Ed Beauvais, director of product management, AI and unstructured data at HPE, joins theCUBE’s Rob Strechay to discuss the critical infrastructure requirements for scaling enterprise AI. Beauvais addresses key industry challenges highlighted in recent survey data, specifically the need for high-throughput, ultra-low latency data platforms to keep GPUs saturated. He details how HPE is infusing RDMA support throughout the entire data pipeline – from ingest to inference – and leveraging GreenLake to provide ...
Keep Exploring
- What was the main challenge identified by respondents in the survey regarding data platforms for AI?
- What initiatives and solutions does HPE have to help organizations overcome data silos?
- What are the challenges associated with data silos, and how can HPE's solutions, such as data fabric, help address them?
>> Hello and welcome back to the Future of Data Platforms Summit: Update Edition. In this episode, I'm joined by Ed Beauvais, who's the Director of Product Management, AI, and unstructured data at HPE. Welcome in, Ed.

>> Hey, Rob. Great to see you again.

>> Yeah. It's like we were together just six months ago, and there have been a lot of announcements since. You had your Discover event over in Barcelona, and it's been a lot of fun. I also saw you in Jensen's keynote, the NVIDIA keynote, at CES, where they talked about what they're doing around caching for inference and really looking forward. Great stuff. As part of this whole endeavor in the summit, we've been running a survey, and in that survey we saw that scaling AI, and particularly the data platforms for AI, was cited as a primary challenge by 65% of the respondents to that study. What can you tell us about the need for ultra-low latency and high throughput in AI data platforms?

>> Great question, Rob. As you mentioned Jensen: certainly we're focused on partnering with NVIDIA; absolutely, they're a key partner for us. But one of the things we hear from our enterprise customers is that it's critical to keep the GPUs busy. One of the areas we're investing in is RDMA support, and we believe customers want to infuse RDMA throughout the entire data pipeline. That's not just the processing at the end, but ingest, working with the ecosystem of the data pipeline. To really provide and process inference at scale, you need to rethink how we're doing data pipelines end to end.

>> Yeah, I would agree. And part of this study, which we'll put up a little later, is around RDMA and open formats; being able to do that was super important and something all 436 organizations surveyed were really looking for.
I will take another step back here for a second and say a big part of 2026 is going to be making the data platform stack less complicated and easier to manage. In fact, 34% of the people who responded reported that data silos continue to pose a significant challenge. How do you see HPE helping organizations overcome this data silo challenge?

>> Yeah. We want to help customers, and I think HPE is uniquely positioned to do this. We've got a number of great initiatives and solutions you can leverage for addressing data silos, and one of the key ones is data fabric. Our product design principle has always been openness, openness to data. When you think about the intelligence of data: if a critical piece of business information is sitting in a silo, or it's a critical piece of information you're not even aware of, that's risk to the business. You might be making decisions without all that data. So data silos are a key issue. We've got a great solution in our data fabric offering that lets us eliminate those silos: regardless of protocol, regardless of file or object, you can get access to that data. And that's critical for AI, so that you can make the best decisions possible.

>> Yeah, you hit on a great point: openness is also rated very highly. I think 87% said they were looking for open parts of their data platform, so I love that part of it as well. There also seems to be a resurgence of openness in data platforms, with, as I said, 87% of organizations feeling that open data formats are important to reducing lock-in. How do you see this influencing the products you're bringing to those organizations?

>> Yeah. Openness has always been a core product design point and strategy. We want to make sure customers can get access to their data and, more importantly, their metadata.
And I think what you'll see in 2026 is more organizations using technology like MCP, the Model Context Protocol, for data discovery. That's certainly one aspect of it. One of the things we're doing in our product is making that metadata available in an open table format. Open table formats are critical because now you can take that metadata and run SQL against it, and when you have MCP, you can discover it. So think about the idea: what if we could federate all of that metadata, run SQL against it, find the data, and get exactly what we need? These are the capabilities we will have and can enable with our product line, with the X10000.

>> Yeah. I think the MCP part is huge for organizations, and being able to get at the metadata in an open table format through it is huge, because, again, you get into how you bring all of this together. As you use agentic methods on top, they can call into MCP and understand that stuff. I think it really is such a great thing. One of the other things, taking a step back a little bit: in the study, hybrid is gaining momentum, with 67% of data stored in either cloud or a hybrid environment and 33% still holding strong on premises. How do you look at helping organizations move into this kind of hybrid model?

>> Yeah. I think at HPE we're uniquely positioned to help customers in a hybrid world. We've got a great platform with GreenLake. If we think about it, customers want the flexibility to leverage the cloud and then also to bring that data back on premises. Whether that's a sovereignty issue, a governance issue, or a compliance issue, all those things are critical. So I think HPE is uniquely positioned in that regard with the GreenLake platform, with a common interface. If you need cloud, you can burst there; if you need on premises, you can burst there. And it also has to be easy to manage and simple to scale.
And I think those are the design principles we focused on.

>> Yeah. And that's been part of it from the start with GreenLake. I've followed it for quite some time, since my time over there, and when you start to look at where you're going, you see that people are continuing, on the data platform side, to embrace the cloud operating model on prem, hybrid, and in the cloud.

>> Yeah, I would agree. And I think what we've seen is that if you're an enterprise customer, you've got it all. You have everything, but what you might not have is a common way of managing it. So one of the things we want to look at is: as you bring on technology, are you making the burden easier or harder? With the GreenLake platform, we want to make it easier, and that's why we have a great, I would say, interoperability story.

>> Yeah, I would agree. As we take a step back and look at what organizations are looking for, it's really: how do they use this? Because silos are not going away. How do you use things like the fabric to integrate? How do you bring in other things like MCP and open table formats, which is huge? But let's take a step back even further: is there anything organizations really should think about and consider from a data platform perspective that we haven't covered at this point?

>> Yeah. Well, I think one key aspect of any evaluation, when you're looking at a vendor, is to think about a couple of things. One, where are they going? At HPE, we've got a massive investment in innovation, and I think you saw in our announcement in Barcelona that we talked about investing in our own IP. This is critical because we want feedback directly from customers and want them to have the ability to influence our roadmap, so that we can build what customers need. The other aspect of it is innovation, and that's core at HPE.
And we've got some great teams, like our HPE Labs team, that really help us stay at the forefront of innovation. And I think the third thing to think about is who those vendors are partnered with, right? We're partnering with the leaders in the industry, whether that's from a hardware perspective, a software perspective, or an AI and GPU perspective; certainly that's NVIDIA. So we've got a great set of partners, a great set of customers, and innovation that we can bring. I think the question customers should ask themselves, at least in the unstructured data space, is: are you building for the future of unstructured data? And is your vendor doing all the things they can to make sure you're well prepared? Because in 2026, and we've seen it in 2025, this is an industry and a space that is rapidly evolving.

>> Yeah, I couldn't agree more. I think when you start to look at how organizations are going to move forward with their data platform strategy, it's best of breed. But what they also said is that they don't have the skill sets; that was number three on their challenges list. The things that you're bringing there, and we've talked about this even in the last conversation, being able to bring automation into the stack, into the actual data layer, have to be a huge piece for them as well, so that as they look toward the future, they can have AI built in.

>> Yeah. And I think we're going to see a lot more of this, with lots of alternatives in the market talking about how to make it easy and how to make it simple to scale. And as we talked about at the beginning of our conversation, Rob, we want to look end to end across those data pipelines, and I think we're uniquely positioned to make it easier for customers to get to that value.
And I think that's where we're focused and where we want to invest our innovation: how can we be AI-centric, how can we maximize the value of that unstructured data, and how can we make that easy for customers? And then we want to serve that up so that customers can make great decisions, whether they're building new applications or need to respond faster to their customers. We want to help, and that's what we're focused on.

>> So one of the things that we see, Ed, is that organizations are really looking to get more and more value out of what they purchase, to really get to the ROI of AI. They want things to serve more than one purpose, more than one function. How is HPE designing your products to help support that?

>> Yeah, that's a great question, Rob. With the X10000, we're focused on not just handling a single use case. With our products and offerings, we really want to help customers in three core areas. One, obviously, is AI; we've talked about that a lot. The second area is analytics: things like data lakes, and whether we can do processing of unstructured data at scale. And the third area, which I think is a key area as well, is cyber resiliency and recovery. With our architecture, with all flash, we can help customers recover faster than they've ever been able to before. So I think the key aspect of getting a massive ROI is having a platform that's not limited to a single use case. That's how we've thought about it, and that's how we want to provide value to our customers.

>> No, I love that. And it's really been fantastic diving in on this, because I think, again, there's going to be so much more to talk about over the course of 2026. So thanks for coming on, Ed.

>> Great. Thanks, Rob.

>> And thank you for watching this episode of the Future of Data Platforms Summit: Update Edition. We'll be back with more.
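The metadata-discovery pattern discussed in the interview, exposing file metadata in an open table format and running SQL against it so that agents (via MCP) can find exactly the data they need, can be sketched in miniature as follows. This is an illustration only, not HPE's implementation: the table name, columns, and sample rows are invented, and an in-memory SQLite database stands in for a real open-table catalog engine such as an Apache Iceberg metadata table.

```python
import sqlite3

# Hypothetical federated metadata rows. In a real deployment these would
# come from an open table format catalog, not hand-built tuples.
metadata = [
    ("/data/site-a/train/img_0001.jpg", "site-a", "jpeg", 204800),
    ("/data/site-b/logs/app.log",       "site-b", "text", 512000),
    ("/data/site-a/train/img_0002.jpg", "site-a", "jpeg", 198656),
]

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE file_metadata "
    "(path TEXT, site TEXT, format TEXT, size_bytes INTEGER)"
)
conn.executemany("INSERT INTO file_metadata VALUES (?, ?, ?, ?)", metadata)

# "Find the data, get exactly what we need": select only the files an AI
# pipeline wants, by querying metadata rather than scanning storage itself.
rows = conn.execute(
    "SELECT path, size_bytes FROM file_metadata "
    "WHERE site = 'site-a' AND format = 'jpeg' ORDER BY path"
).fetchall()
for path, size in rows:
    print(path, size)
```

In the agentic scenario Beauvais and Strechay describe, an MCP server would expose a query tool wrapping this kind of SQL interface, letting agents discover and filter datasets on demand instead of crawling storage.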