Gemma Allen of theCUBE moderates a conversation with Said Ouissal and Padraig Stapleton of ZEDEDA that takes place at NVIDIA GTC 2026. The discussion examines ZEDEDA's Edge Intelligence platform and real-world edge artificial intelligence (AI) deployments. Ouissal and Stapleton draw on experience deploying the EVE operating system (EVE OS) across industrial, energy, retail, maritime and automotive environments, and they discuss agentic AI, inference at the edge and NVIDIA IGX Thor capabilities. theCUBE Research frames the discussion.
Ouissal asserts that inference is moving to edge locations and that NVIDIA Thor enables running powerful large language models (LLMs) and vision language models (VLMs) on-site. Stapleton emphasizes the open-source EVE OS and platform flexibility to avoid vendor lock-in and accelerate production deployments.
Speakers illustrate customer examples that demonstrate immediate real-world impact, such as car wash conversational LLMs and Maersk vessels deploying on-site inference for operations and safety. Watch the full conversation to learn technical details on deployment architecture, security and performance optimization at the edge.
Said Ouissal & Padraig Stapleton, ZEDEDA
In this interview from the Nvidia GTC AI Conference and Expo in San Jose, Said Ouissal, chief executive officer and co-founder of Zededa, and Padraig Stapleton, chief product officer of Zededa, join theCUBE + NYSE Wired's Gemma Allen to discuss bringing cloud-grade AI inference to factories, ships and oil rigs through the company's newly launched Edge Intelligence platform. Ouissal reflects on Zededa's founding premise, that edge environments deserve the same application delivery experience as the cloud, and explains why the shift to agentic AI marks an inflection point for inference.
Keep Exploring
When and why was Zededa founded, and what problem did it aim to solve?
What will the emergence of the agentic era and the shift from model training to inference mean for Zededa's team and product roadmap?
What are the typical deployment timeframes for this technology, and how quickly can it be implemented?
Gemma Allen
>> Welcome back to theCUBE here on the ground at Nvidia GTC 2026 in San Jose. The energy is crazy, so much happening around us. I'm here at the Zededa booth, joined now by the company's CEO and co-founder, Said Ouissal, and chief product officer Padraig Stapleton. Welcome, guys.
Padraig Stapleton
>> Thank you. Nice to be here.
Said Ouissal
>> Welcome, thank you.
Gemma Allen
>> Thanks for having us here. I mean, wow, it's so loud, so much going on, right?
Padraig Stapleton
>> Yeah.
Said Ouissal
>> Love the energy. It's really great.
Gemma Allen
>> Well, you guys have been operating on the edge long before anyone even knew what that fully meant, right? I'm sure you'll argue some people still don't, so maybe let's start there. Talk to me a little bit about Zededa and what's happening in this very moment.
Said Ouissal
>> Sure, sure. Zededa started a while back, in 2017, to focus on edge computing. This was in a day and age where cloud was super popular and everybody loved the cloud, not only because you could rent compute, but also the way it changed how we build applications, how we deliver applications. We said early on at Zededa, "How can we take all the superpowers the cloud gives and bring them to the edge, and allow customers that need to deploy applications in oil and gas sites, in trucks, in factory lines, ships, whatever it is, to have the exact same experience as if they're using a cloud?" That's really what we created Zededa to do many years ago.
Gemma Allen
>> Bring the data and leverage it from the perspective of the device, really, right? Happening right there.
Said Ouissal
>> Correct.
Gemma Allen
>> Some of the themes we heard this morning, and I mean, we've been hearing them for a while now from Jensen. He talked a lot about the agentic era, right? This is no longer about training models. This is about execution and an inference era that perhaps we haven't really witnessed before. What will that mean for the team at Zededa and your product roadmap?
Said Ouissal
>> Yeah, so today we announced Edge Intelligence platform. It's basically a platform that makes it very easy to deploy agents and models at the edge, building on the same foundation that we've already been deploying in many customers. To your point, I think as Jensen said, we're at an inflection point for inference, and it's all about now enabling inference anywhere, not just in the cloud, but also in these factories, in these production lines, these stores, these ships. We really want to help our large customers and enterprises to accomplish the same thing that we do today for applications as you shift to agents.
Padraig Stapleton
>> I think that's key, right? I think there's a real coming together of a lot of different components that's making it, now is the time, so everything from the hardware platforms we've seen from Nvidia, some of the software tools. All those are now truly enabling our customers in manufacturing, oil and gas, retail, to actually start deploying these types of solutions in production.
Gemma Allen
>> The developments around Nvidia, IGX Thor, what does that mean from your perspective? Is that a competitive advantage, threat? How do you weigh that?
Padraig Stapleton
>> I think it's really good because what Thor is bringing to the edge, it's bringing a capacity and a compute power that was never available before. One of the things that we have been working on and we're showing at our booth and our event tonight is that with Thor, you can now run a very powerful LLM, VLM model at the edge in an industrial location. You can marry that with agentic AI software and capability, which allows you to run different types of use cases, everything from inspection, on widgets going down the line, to safety type of applications on an oil rig or a platform. Thor basically enables a lot of new applications at the edge, and so I think it's a great thing for the industry.
Gemma Allen
>> Fantastic. Talk to me about some typical customer use cases and examples for Zededa. You guys started in the oil industry, right? That was the first problem you saw, but those problems are changing.
Said Ouissal
>> Yeah, no, I mean, so if you think about the oil and gas industry, it's been an industry that's been data driven since its early days, because there's a lot of capital needed to drill for oil, transport, and everything else, shifting obviously to more and more renewable energy. Before you spend millions of dollars to go and explore or get oil out or whatever it is, you've got to do research. You've got to get your data out, so they were one of the earliest ones that said, "Hey, we've got lots of data being generated at the edge. How can we analyze and process the data rapidly?" AI is a fabulous way to analyze large amounts of data, especially when it's unstructured, so we've seen them really being an early adopter of our platform, deploying AI models before anybody else. Customers like SLB, formerly Schlumberger, one of the large oil and gas services companies, leveraged us across their operations, but we have other customers that actually are operators or owners of oil and gas. Having said that, we right now are deployed across a variety of different use cases, industrial automation, we're running in retail stores, we're running on Maersk's vessels, the big container ships. We're running there, providing computing and AI capabilities, so really edge is quite broad and quite across the different verticals.
Gemma Allen
>> Maybe just to come in there for a second, I mean, in terms of the edge, right, it feels as though the hyperscalers have never really had the same, made their mark in the space, for understandable reasons, right? Their system is fully integrated. It's end to end. You need to be on AWS or Azure or whatever it is to ensure that you get the value across the board, right? They're not worried about necessarily devices on the edge. How do you think that's going to go from the perspective of their future roadmaps? Do you think they're going to try and start eating some of your lunch in the space? What do you see happening?
Padraig Stapleton
>> Well, I'm sure there will be PowerPoints to that effect, right, but the reality is always something different, right? That's what we've seen in the past, okay? I think the challenge is, at the edge, that we have done a good job, I think, addressing, is the edge is diverse. Use cases are diverse. Hardware, equipment to deploy are diverse. The environment you're deploying is diverse, so that really goes against what the hyperscalers have built, which is a very homogenous built out solution. It's in a data center, same hardware, et cetera, servicing, so I think it's diversity of the edge, diversity of use cases, that I think, as long as we continue to innovate and drive value for our customers, I think we will be fine against the competition.
Gemma Allen
>> We hear a lot right now about the SaaS-pocalypse. It's a term that is creating a lot of fear and inertia in tech. You guys are in an interesting position, because although it's a SaaS model, you're coming at it from a different vantage point, right? Talk to me a little bit about how you think about the competitive dynamics of that.
Said Ouissal
>> Yeah, so I mean, a lot of news right now going on around that. I think software is changing rapidly. I think any software as a service is going to become an agent, and that's how we think about it as well. As part of today's launch, we're actually also building agentic capabilities into our platform, so you can talk to our platform, you can ask it to onboard nodes, you can ask it how your edge is doing, you can even ask it to create an edge AI agent that it will deploy for you. We think that that's really where the industry's going to go, is more and more of these agentic capabilities will be created in the platform itself, and I think, yeah, if you're trying to stick with the old SaaS model, I think that's going to be a lot harder.
Padraig Stapleton
>> Yeah. I think that, and the fact that what really happens, our customers run on EVE. They run their businesses on top of EVE, so whether that's an oil and gas platform, whether it's a manufacturing, automotive, et cetera, they're running their core businesses on top, and so that's, for them, as long as we keep bringing value and we keep their businesses up and running, I think we have a really strong position. I think we will evolve. As Said said, we'll bring some of the agentic type software to the cloud and to the agents at the edge, but I think it's the fact that we're enabling them to do things with their business today that they could not do five years ago, I think is what's going to stand to us.
Gemma Allen
>> Speaking of enablement, there's a great line. I can't coin it, but, "The best companies in the world don't have great customers, they've got great hostages," right? Right now, we're at a point where people talk about vendor lock all the time. It seems, though, a lot of companies are on a mission to solve it. How do you guys think about that? What do you see actually happening in industry, tactically?
Padraig Stapleton
>> Yeah, I think from day one, as we mentioned a few minutes ago, we open sourced EVE OS, because we wanted that to be available to everyone to either develop on top of, adopt, or to build upon and build controllers. I think what we do, because of the open source nature of EVE and the fact that EVE can run across any diverse type hardware, be that like an ARM, Nvidia, x86, et cetera, that allows our customers to have an open network, an open ecosystem where they can pick and choose the hardware vendors, but have the same consistent orchestration built on top of our software, using our software on top of it. It allows them to evolve. As their needs evolve, as the use case evolves, maybe a different hardware vendor, they can continue to do that, but continue to manage it in a secure, scalable way using our solution.
Gemma Allen
>> How complex is the reverse engineering question in that space? Are you seeing a lot of cases whereby customers and use cases are just struggling to be able to reverse engineer out of some of these line of business applications or different structural implementations?
Said Ouissal
>> Yeah, no, we built our platforms so that our customers could, rather than building, quote-unquote, infrastructure and plumbing, they could focus on business outcomes. What we are seeing with our customers is they love the fact that we take care of the underlying architecture and infrastructure, so they can actually put all their investments into building great apps to improve industrial automation or improve their oil and gas drilling business, or improve their retail operations. I think that's what customers will continue to seek for, is, "If you solve all my problems and you're anticipating them a couple steps ahead, I can focus on actually solving my business problems." Just like the cloud has done it for many of these customers, they expect us to think at the edge.
Gemma Allen
>> From the perspective of Zededa, where is the growth happening? Are you moving into new geographical locations, new types of industries all the time? Give us some examples.
Said Ouissal
>> Yeah, no, our roots were early in energy and industrial. That's kind of where we really wanted to go in. Not the easiest verticals to go into because you're going into large operations, existing operations, lots of legacy that you have to deal with, also on the software side, unfortunately, but nowadays, we're in eight, nine verticals. We're deployed on ships, we're deployed in retail stores, we're in vehicles. We're seeing a great use case around Starlink, so people that are connecting things now with Starlink, they need edge compute nodes, so that's been a really great boom for us, to see the growth. I think all in all, more of these verticals. In addition, we are expanding in the Middle East. Actually, a year ago, we set up our Middle East office in Abu Dhabi. We have operations in Saudi Arabia, because we're seeing them really rapidly adopt AI and new technologies to make an impact, so we're excited to be part of that.
Gemma Allen
>> Wow. In terms of what's ahead, we're here at Nvidia GTC. It's a lot happening on the ground. What are your thoughts for the rest of this week, and then, what's ahead for the rest of 2026 for you guys?
Padraig Stapleton
>> Well, I think rest of this week, obviously our product launch today where we're basically rolling out our new edge intelligence platform, which I think really dovetails well with what Nvidia are announcing this week. I think for the rest of the year is that it will be taking that platform and helping our customers deploy more and more AI in their businesses. We have two products that we're launching today. One is an edge AI appliance, and the other is our edge labs. What that's doing is actually taking the knowledge that we have built and our experience around deploying AI at the edge and packaging it and bringing it to our customers to accelerate their adoption. Because, what we need to do is help our customers go up the learning curve and actually start to deploy this in production. I think that's what the focus for us from this launch will be for the rest of the year.
Gemma Allen
>> Final, final question. What are the deployment timeframes? How quickly can this happen?
Said Ouissal
>> We already have customers live. We have a customer that builds car washes, of all things, and these are deployed around the US, and they're bringing AI to the car wash: predictive maintenance, and when you drive up, it can detect the model of the car using AI and tune the car wash correctly. It scans the license plate, so if you're enrolled with your license plate, it'll know who you are and immediately give you the ability to use credits or give you an offer. They're even running a conversational LLM in the pedestal, so you can talk to your car wash rather than hitting buttons. They've rolled this out already at hundreds of car washes, going to 10,000, so that's just one example. We have customers in manufacturing that build cars, one of the largest car manufacturers in the world. They're using AI right now for quality inspection control running on Zededa. This is well beyond the art of the possible, well beyond what could happen. It is happening, and the winners, I think, are the ones that go in early and try to amass this technology and put it in their environments.
Padraig Stapleton
>> Just as numbers, we have, we're deployed in over 100 countries at the moment.
Gemma Allen
>> Wow.
Padraig Stapleton
>> As well as on all the oceans with Maersk, so we're everywhere you go.
Gemma Allen
>> Wow. Well, it's a fascinating company and a fascinating time. Thank you so much for joining us on theCUBE.
Padraig Stapleton
>> Thank you.
Said Ouissal
>> Thank you so much as well. Appreciate it.
Gemma Allen
>> I'm Gemma Allen, here on the floor at Nvidia GTC in San Jose, talking to these great guys at Zededa. Thanks so much for watching. Stay tuned.