theCUBE + NYSE Wired: Physical AI & Robotics Leaders
Clips:
Building a Trust Layer: Ensuring Security and Reliability in AI and Autonomous Agents for Robust Data Infrastructure
Enhancing Trust in AI Transactions: Opaque's Role in Confidential Solutions and Security Compliance Through Cryptographic Guarantees
Mention of hardware manufacturers and hyperscalers adopting confidential computing capabilities.
Ensuring Safe AI Deployment: Standards, Privacy, and Trust Layers in Enterprise Agentic Systems
Aaron Fulkerson, chief executive officer at Opaque Systems Inc., joins theCUBE’s Dave Vellante and John Furrier during theCUBE + NYSE Wired: Robotics & AI Infrastructure Leaders 2025 event to explore the future of confidential AI and infrastructure trust. The conversation addresses the growing demand for verifiable security in AI environments, particularly with the advent of autonomous agents.
Fulkerson outlines how Opaque builds encrypted AI stacks to protect sensitive data in training and inference workflows. From trust layers to threat models, the...
Keep Exploring
What is the role of a trust layer in the context of the internet, and why is it considered a necessary component?
What are some key developments in confidential AI capabilities by major technology companies, and how does Opaque contribute to that space?
What guarantees do microchip manufacturers provide in relation to confidential computing and how does it differ from confidential AI?
What is the difference between private AI and confidential AI?
>> Welcome back, everyone, to theCUBE here in our Palo Alto Studios. I'm John Furrier, host of theCUBE, with Dave Vellante, my co-host, also co-founder. Dave, 16 years doing theCUBE, interviewing leaders here. Robotics and AI infrastructure is the focus. Three-day event, of course, with the NYSE Wired community. We had the Rosewood event yesterday. Aaron Fulkerson is here. CEO of Opaque, CUBE alumni, good friend. Great to see you. Thanks for coming on. Appreciate it.
Aaron Fulkerson
>> Hey, wonderful to see you guys again. It's always a pleasure.
John Furrier
>> So the leaders are all here talking and the focus has been, okay, chips, software, a little bit of geography, sovereign cloud conversation, but the wave of agents coming is clear. Security, the role of data is huge. You guys are in the middle of it. Talk about what you guys are working on now and how you fit in there because security is everywhere. Security is a data problem. Securing the LLMs. We had... Our last guest here making a lot of business from Scale AI because they went with Meta. Little trust issue there. So there's a lot of trust, quality problems, evaluation. What's your focus on that?
Aaron Fulkerson
>> The trust layer is a missing component of the internet that Vint Cerf's been talking about for 30 years. The reality is that one of the things he said he regretted was that he wished he'd built a cryptographic layer into the network which allows you to have an identity and trust layer. During this first phase of the internet, which is a human internet, it was manageable. You could have organized bad actors but we could buffer them out of the systems because humans operate at a rate-limited human speed. But with agents, it's very different. When we're talking about agents, we're looking at these things that are self-replicating and they have human-like capabilities. But quite different from humans, they operate at machine speed. So that creates a whole new demand for a trust layer in our infrastructure. It's not a survivable gap anymore. It has to get built in.
John Furrier
>> Yeah. I mean, trust has been kicked around as an abstraction, not as a primitive or first principle. What are customers doing to deal with this right now? Because the change management piece is a big conversation we're getting involved in, but also the structural changes happening in the industry. What are they doing?
Aaron Fulkerson
>> So there's a lot that's been done over the last 10 years to build primitives that allow you to build a trust fabric that's globally scalable across the network. In fact, you've seen all of the largest technology companies build this as part of a confidential AI stack. So Apple's got it with their private cloud compute, Meta has it with their private processing for WhatsApp that they just announced in April-ish. Microsoft's been a real leader in building confidential AI capabilities internally for their internal services so-
John Furrier
>> Amazon.
Aaron Fulkerson
>> Amazon's got it with Nitro Enclaves. So they've provided the primitives to do this, there's just new awareness that because of this demand from AI agents that suddenly everybody's clamoring for it and it's exactly what Opaque does. What Opaque allows you to do is to deploy what all of the biggest technology companies have built around confidential agents or confidential trust layer where you have cryptographic guarantees that are provable. That, one, this is in fact the agent that I expect and it's got verifiable integrity from a hardware attestation to, two, during the processing, it's processed in an encrypted environment with cryptographic guarantees. And then, three, post facto, you have an attested audit trail signed by the hardware. And we do this for ServiceNow, we do it for Encore Capital, we do it for a bunch of enterprise companies but what it allows them to do is to have a turnkey deployment for confidential AI that previously was only accessible to the world's largest technology companies.
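[Editor's note: the three guarantees Fulkerson describes (attested agent identity before dispatch, processing under cryptographic protection, and a hardware-signed audit trail afterward) can be sketched in miniature. This is an illustrative sketch only, not Opaque's actual API; HMAC over a simulated hardware root key stands in for real TEE attestation and enclave signing, and all function names are hypothetical.]

```python
import hashlib
import hmac
import json

# Stand-in for a key fused into the silicon by the chip manufacturer.
HW_ROOT_KEY = b"simulated-hardware-root-of-trust"

def attest(agent_code: bytes) -> str:
    """1) Hardware-style attestation: a signed measurement of the agent."""
    measurement = hashlib.sha256(agent_code).hexdigest()
    sig = hmac.new(HW_ROOT_KEY, measurement.encode(), hashlib.sha256).hexdigest()
    return json.dumps({"measurement": measurement, "sig": sig})

def verify_attestation(quote: str, expected_code: bytes) -> bool:
    """Check both the signature and that the measurement matches the agent we expect."""
    q = json.loads(quote)
    expected = hashlib.sha256(expected_code).hexdigest()
    good_sig = hmac.new(HW_ROOT_KEY, q["measurement"].encode(), hashlib.sha256).hexdigest()
    return q["measurement"] == expected and hmac.compare_digest(q["sig"], good_sig)

def run_in_enclave(data: bytes) -> tuple:
    """2) Processing inside the (simulated) encrypted environment,
    3) followed by a hardware-signed audit record of what was processed."""
    result = data.upper()  # stand-in for the real workload
    audit = hmac.new(HW_ROOT_KEY, b"processed:" + hashlib.sha256(data).digest(),
                     hashlib.sha256).hexdigest()
    return result, audit

agent = b"def summarize(doc): ..."
quote = attest(agent)
assert verify_attestation(quote, agent)            # 1) verified agent identity
result, audit_sig = run_in_enclave(b"claim #1234") # 2) + 3) process and sign trail
```

In a real deployment, the signing key never leaves the CPU or GPU and the quote is verified against the manufacturer's certificate chain rather than a shared secret.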
Dave Vellante
>> So I think about Nitro Enclaves. We, at one point, called it like a secret weapon and then of course other hyperscalers followed suit. You saw VMware with Project Monterey. Supposedly that's still alive but you haven't heard much about it. So now, you're filling that gap for any non-hyperscale environment or do you live alongside the hyperscale environment?
Aaron Fulkerson
>> No, we deploy inside the hyperscalers as well or on-premise. The thing that's changed is 10 years ago, Intel came out with a new kind of microchip called SGX and it's called Confidential Computing. It was pioneered with Azure and Intel and also the UC Berkeley RISELab where Opaque spun out of. But since then, it's... Every microchip manufacturer offers these confidential guarantees. NVIDIA released it with H100s-
Dave Vellante
>> ARM.
Aaron Fulkerson
>> ARM has it. AMD, NVIDIA, any of the major chip manufacturers have these confidential capabilities and all of the hyperscalers now expose primitives for you to build a confidential stack. But there's a big difference between having confidential computing which is kind of like tires on a car versus a car which I'm using an analogy to describe confidential AI. So it takes the properties of the microchips and exposes these guarantees through hardware attestation and key management handoff to provide verifiable guarantees that create a trust layer.
Dave Vellante
>> Yeah. You're building value on top of that.
Aaron Fulkerson
>> Yeah. Absolutely. So you look at Meta with the private processing. Perfect example, there was a good paper published about this, where you can now run LLMs confidentially on your WhatsApp messages where they can prove to you that they cannot see your messages, but you get the benefit of running LLMs. So the same is true for ServiceNow or insurance providers that we have as customers where they want to take advantage of, let's say, confidential RAG. We just made an announcement this week at the Confidential Computing Summit which is an event that we host. There was a lot of overlap between your guys' event as well as our Confidential Computing Summit. AMD, NVIDIA, ARM, Intel, Microsoft, they were all there making product announcements. Anthropic made product announcements around confidential AI on Tuesday as well. Anyway, the point of this is that you can have guarantees that you cannot see the information even though you're using AI features.
John Furrier
>> Describe the difference between private AI and confidential AI because I love this topic because confidential computing's been around. We've seen storage and compute separate. That creates advantages. This is data AI confidentiality here.
Aaron Fulkerson
>> Well, it's-
John Furrier
>> What-...
Aaron Fulkerson
>> guarantees around the model weights are kept private and it's also guarantees around data policy, regulatory compliance, and that your data is kept private. So private AI generally means that you're hosting it yourself within your own environment so that's great. Cohere and others are doing this. They'll brand it around private or secure AI. Confidential is an entirely different set of guarantees. If you think about what's happening in the market today is you're seeing people who are building agentic workflows around discrete internal use cases that create optimization around routine decision making or tasks. Well, okay, there's some sensitive data flowing across these agentic systems that are probabilistic. If it's just a simple workflow, maybe that's all right even if it's sensitive. In some spaces like ServiceNow, insurance, and financial services, they're like, even that, I have to have guarantees. I have to have provability that my data isn't leaked but where we're going is much bigger than that. Each one of these workflows is getting chained together into composite agentic systems. Well, how do you protect your data in that situation and have it provable? And then as this becomes an agentic web where there's going to be far more traffic on the network from agents than there will be from humans, how do you know that your data's kept private and there's not a malicious actor or maybe an unintentional leak somewhere in that chain? Without these guarantees, you would never know.
John Furrier
>> Yeah. And that's why the rise of OLTP data is... Because there's transactions happening with the agents, so there's so much activity, traffic. It's interesting because now when you look at what we've been riffing on our last Cube pod, Dave and I always like to argue about these trends, we were talking about the fact that, will agents have SLAs? And the answer is yes. Yeah. They will.
Aaron Fulkerson
>> Yeah. They'll be cryptographically enforced and provable both before, during, and after, is how it's going to work and we're already seeing this a lot-
John Furrier
>> Yeah. And you guys are the bridge because this is where I think this is really-
Aaron Fulkerson
>> It takes a whole ecosystem. There's no way Opaque can build the trust layer for the internet.
John Furrier
>> Explain how we get to a SLA. Because soon, these will be working on behalf of tasks that will be paid for at some point or paid for in some form.
Aaron Fulkerson
>> So it takes a whole ecosystem, and the ecosystem has to involve, obviously, the hardware innovators. It has to involve the hyperscalers. It has to involve the frontier model labs. It has to involve independent software vendors that are building. It has to involve the enterprise deployers, Accenture and McKinsey. All of these people were represented at the Confidential Computing Summit. So it was the CTO of McKinsey. It was the global data lead for Accenture. There was every microchip manufacturer, every hyperscaler. DeepMind was there. Anthropic was there. Everybody's coalescing around reference architectures that provide you these cryptographic and provable guarantees and that's what it's going to take. Everybody's-
John Furrier
>> That's a great trend, by the way, Aaron. This is a great trend because in all these ways, de facto standards have to emerge.
Aaron Fulkerson
>> Absolutely.
John Furrier
>> What are you seeing on that? Are people rallying around certain things? Can you share more on that?
Aaron Fulkerson
>> Yeah-
Dave Vellante
>> Yeah. Are these de facto or are they de jure standards?
Aaron Fulkerson
>> So right now what's happening is... They're not our standards, right? They're not Opaque's. We have our own reference implementation that we've done with a bunch of large enterprise companies. Accenture's got theirs. They've adopted Opaque's because Accenture's an investor in Opaque and they're doing multiple deployments using our reference architectures. It's slightly different from what Azure's doing which is slightly different from what Google Cloud's doing. Anthropic just announced this week a reference architecture as well. So everybody kind of has their own reference architecture and what we were discussing at the summit is, "Okay. We need to make sure that we can take this zero trust chain from the hardware out to an MCP endpoint or an A2A endpoint. Have a registry that you can look up against to verify the integrity of an agent you're interoperating with. And we've got to have standards around all of this." So everybody's coming together, literally this week, and we've made a commitment over the next 12 months, "All right. Let's start building out reference architectures that we agree upon and putting them out in the open-
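[Editor's note: the registry lookup Fulkerson describes, checking an agent's attested integrity before interoperating with it over an MCP or A2A endpoint, can be sketched roughly as below. The registry contents, endpoint names, and measurement scheme are invented for illustration; a real registry would hold hardware-signed measurements, not bare hashes.]

```python
import hashlib

# Hypothetical registry mapping agent endpoints to their expected
# code measurements, published by whoever operates the trust layer.
REGISTRY = {
    "agent.claims-triage.example": hashlib.sha256(b"claims-triage-v3").hexdigest(),
}

def verify_agent(endpoint: str, attested_measurement: str) -> bool:
    """Interoperate with the peer agent only if the measurement it
    attests to matches the registry entry for its endpoint."""
    expected = REGISTRY.get(endpoint)
    return expected is not None and expected == attested_measurement

good = hashlib.sha256(b"claims-triage-v3").hexdigest()
assert verify_agent("agent.claims-triage.example", good)       # known-good agent
assert not verify_agent("agent.claims-triage.example", "bad")  # tampered agent refused
```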
Dave Vellante
>> Which part of the value chain is responsible for the mathematical provability?
Aaron Fulkerson
>> The root of trust comes from the actual microchip. So the root of trust is bound within the CPU or GPU that has a trusted execution environment. Baked into the silicon is an encryption key. And then each user's data, they hold their own encryption key and there's a confidential mesh effectively that allows you to orchestrate the handoff of these keys into the encrypted cache inside the silicon where the data gets processed with these guarantees.
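[Editor's note: that key handoff can be sketched as follows. The user holds their own data key and releases it only to an enclave whose hardware-bound identity verifies. This is a hypothetical sketch: HMAC over a simulated silicon-fused key stands in for real TEE attestation, and XOR with a derived wrapping key stands in for proper asymmetric key wrap.]

```python
import hashlib
import hmac
import secrets

# Stand-in for the encryption key baked into the silicon at manufacture.
SILICON_FUSED_KEY = b"simulated-key-baked-into-the-chip"

def enclave_identity(enclave_image: bytes) -> bytes:
    """The chip signs a measurement of the enclave it is actually running."""
    return hmac.new(SILICON_FUSED_KEY, enclave_image, hashlib.sha256).digest()

def release_key(user_key: bytes, identity: bytes, trusted_image: bytes) -> bytes:
    """User-side policy: hand the data key over only to a verified enclave."""
    expected = hmac.new(SILICON_FUSED_KEY, trusted_image, hashlib.sha256).digest()
    if not hmac.compare_digest(identity, expected):
        raise PermissionError("enclave failed attestation; key withheld")
    wrap = hashlib.sha256(identity + b"wrap").digest()
    return bytes(a ^ b for a, b in zip(user_key, wrap))  # wrapped key

def enclave_unwrap(wrapped: bytes, identity: bytes) -> bytes:
    """Inside the enclave: recover the user's key and process the data."""
    wrap = hashlib.sha256(identity + b"wrap").digest()
    return bytes(a ^ b for a, b in zip(wrapped, wrap))

user_key = secrets.token_bytes(32)                      # the user holds this
image = b"confidential-rag-enclave-v1"
ident = enclave_identity(image)                         # hardware-signed identity
wrapped = release_key(user_key, ident, trusted_image=image)
assert enclave_unwrap(wrapped, ident) == user_key       # key arrives only in the enclave
```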
Dave Vellante
>> Right. Got it.
Aaron Fulkerson
>> So it takes the whole ecosystem but the root of trust begins with the hardware.
John Furrier
>> I have to ask the question because obviously AI infrastructure leaders have been here all week. Overhead, latency performance, how is that addressed? Can you talk about some of the-
Aaron Fulkerson
>> Sure. So right-...
John Furrier
>> or it's a non-issue?
Aaron Fulkerson
>> So right now, on the overhead, NVIDIA, Intel, AMD, ARM, they all have published papers on this, as have Google and Azure. They're posting performance hits of single digit percentages. So it's less than 10%. Certain workloads, it's a little bit more than 10%. Everybody claims it's going to get down to 1 to 2% performance overhead. Now, you might think, "Well, what about the performance hit?" Well, do you guys remember when we'd go to a conference and get on the public wifi and log in to our email or bank accounts without an SSL cert on the network? That was only like 12 or 15 years ago. It was insanity, right?
John Furrier
>> Yeah. Plain text flying around.
Aaron Fulkerson
>> There was all this plain text flying around on the network and then you're at an open source conference or a black hat conference. People are like, "Hey, this is your login and password." The same thing is going to happen. We know that with HTTPS, there's a 1 or 2% performance hit. There's going to be a couple-
John Furrier
>> Yeah. You got to deal with it. Anyway, the hardware's getting smarter too so-
Aaron Fulkerson
>> Absolutely-...
John Furrier
>> so we're seeing a little bit of step function and the gain on the performance. This is why we had the chip conversation.
Dave Vellante
>> It's de minimis and we've certainly seen, with virtualization, it got there. You know-
John Furrier
>> Yeah. Every trend-...
Dave Vellante
>> VMware.
Aaron Fulkerson
>> Yep. But this is just going to be the standard.
John Furrier
>> It's minuscule compared to the value.
Aaron Fulkerson
>> Yeah. Absolutely correct. Yeah.
John Furrier
>> Because SLAs, this is something that's not what's talked about in agents but it's coming fast.
Aaron Fulkerson
>> I mean, here's the reality. You guys know this, the listeners know this, is we have a global crisis of trust and it's across every institution, every system, every infrastructure, whether it's technological, governmental, or societal. And us technologists have helped contribute to this global crisis of trust. The reality is it's not going to get better until it gets a little bit worse. AI is going to make this worse. So we have to solve this problem of having a network trust layer. It has to be solved-
Dave Vellante
>> And that's why everybody's collaborating because they know this is a-
Aaron Fulkerson
>> That's what everybody's collaborating-
Dave Vellante
>> It's a deal breaker-
Aaron Fulkerson
>> Absolutely-
John Furrier
>> They have to. Well, they have to because if they don't, then everyone loses. And we've seen this in many ways. I mean, we saw it in open systems around network protocols. So I think the consensus has to get there because at the end of the day, if you look at AI, the speed is so fast in terms of velocity, of the market-
Aaron Fulkerson
>> Human-like capabilities at machine speed. You can have a single agent in one hour accomplish what an entire group of people couldn't accomplish in a year. It can take down a network. It can take down a power grid. It can cause a nation-state event. So this trust layer is actively being built across all of the vendors-
John Furrier
>> People love agents. They love the promise of it but they also have fear. And the confidence is not there, the enthusiasm there, but no one wants agents running around shipping code into production. They want to know that it's trusted. It's been delegated the task. It's being managed. This is what we're hearing about evaluations.
Aaron Fulkerson
>> Well, Jason from Anthropic announced at our summit on Wednesday, so yesterday. He said 60, 65% of the code at Anthropic is written by Claude. He projects that by the end of the year, it will be 90 to 95% of the code at Anthropic will be written by Claude. So it's an exciting time we live in but it's also one that... Oh, by the way, another thing. They dropped, on Wednesday, Jason announced it at the summit, I'm sure it was announced at other places, an ASL-3 model. So that is AI Safety Level 3, which has a life... They base it off of nuclear, cyber, and biological capabilities. And they're particularly concerned about its biological capabilities and how it can augment and create some really dangerous things. So they've built into the model some guardrails to prevent this from happening. But Anthropic and DeepMind and everybody who's transparent are deeply concerned about the capabilities of these agents. So again, it just goes back to we have to have a trust layer because we have to be able to put some... Not just assurances or legal contracts around this or some governmental regulation, that's not trust, that's hope. You have to have it baked into the fabric of the network, into the hardware.
John Furrier
>> Awesome. What's going on with Opaque? Talk about the conference. Give a quick plug for what you guys are working on. Great mission, totally endorsed here. What's going on?
Aaron Fulkerson
>> Well, we did a product announcement around a new version of our platform that allows you to do confidential agents for RAG. It's a bunch of pre-built workflows where you can call against other back-office systems that are commonly sensitive or confidential enterprise data. And then, use that as part of an agentic workflow for RAG. And then, we also announced that we joined the Cisco Outshift Agency Foundation where they're building an open stack for agents. And I know, Dave, we saw each other at the Cisco Live event. We were talking about this earlier. This seems like a really ripe opportunity for Cisco to be incredibly relevant because you have to build this into the network.
Dave Vellante
>> Totally. They're bringing AI to the network, they're bringing their security to the network, and Jeetu is leading. He's a builder.
Aaron Fulkerson
>> But this is-
Dave Vellante
>> My takeaway here, this is not a niche. This is-
John Furrier
>> Mainstream....
Dave Vellante
>> compulsory for mainstream enterprises and for any AI pipeline.
Aaron Fulkerson
>> It's absolutely correct, yes. Agree.
John Furrier
>> Aaron, we're really grateful for you to come on your busy day. Thanks for coming by and sharing. Confidential AI will be standard. It has to be. And again, we support you and the industry. Thanks for putting that together. Appreciate it.
Aaron Fulkerson
>> Thank you both for having me. It's always a pleasure to chat with you both.
John Furrier
>> Awesome. TheCUBE here with the NYSE Wired community. I'm John Furrier, Dave Vellante. Thanks for watching.