In this interview straight from the New York Stock Exchange, theCUBE’s John Furrier sits down with Arcade.dev co-founder and CTO Sam Partee, and Harrison Chase, co-founder and chief executive officer of LangChain, for a wide-ranging conversation on the rise of AI-native development. From early pilots to production-scale systems, the discussion looks at how advances in large language models are reshaping software engineering, lowering barriers for developers and accelerating the shift from tools to intelligent, agent-driven platforms.
Behind this shift? An emergent infrastructure stack spanning observability, security boundaries, authorization and error recovery. As enterprises move from prototypes to mission-critical deployments, the guests explore how platforms such as LangChain and Arcade are enabling safer integrations with enterprise systems, cloud environments and sensitive data. Topics range from coding agents and sandboxed execution to MCP standards, audit logging and compliance. Together, Partee and Chase outline what it will take to build trustworthy, scalable AI systems in a world where agents increasingly work alongside – and sometimes ahead of – their human counterparts.
Sam Partee, Arcade.dev & Harrison Chase, LangChain
Sam Partee
Co-Founder & CTO, Arcade.dev
Harrison Chase
Co-Founder and CEO, LangChain
Keep Exploring
Is the new agent-based programming model, along with the tools and supporting systems for building LLM agents, considered infrastructure?
What is LangChain, how has it evolved, and why does the company focus on observability for AI agents?
How can AI agents (e.g., a ChatGPT-like model) be authorized to perform actions on a user's behalf, such as sending email, and how are protocols like Anthropic's Model Context Protocol used to securely integrate LLMs with enterprise or cloud systems?
What's a good way to get started learning to build agents, and to use coding agents to help build software?
Why does the blog post break tooling quality into four cross-cutting concerns (agent experience, security boundaries, error-guided recovery and tool composition), why does that breakdown matter for securing tools and agents and decomposing requirements, and how is it expected to evolve?
How can AI agents be monetized (for example through advertising or payments), and what are the associated opportunities and risks, especially around context, memory and privacy?
John Furrier
>> Welcome back. I'm John Furrier, host of theCUBE, here at our NYSE CUBE Studios. Of course, we have our Palo Alto studio connecting Wall Street and Silicon Valley. This is our NYSE Wired CUBE original program and community presenting the Mixture of Experts. We've got two experts here in the house on all aspects of development, agents, AI, generative AI. It should be a great conversation: where we're seeing the advancement, where the value's being created, what developers are honing in on, and how it's becoming easier to use and to put into production, which is the goal everybody wants. We've got Sam Partee, co-founder and CTO of Arcade.dev, a very well-known company for its tools. We'll get into that. And Harrison Chase, co-founder and CEO of LangChain, both pioneering and blazing the trail on all things data and, now, agents. Agent infrastructure is now mainstream. Guys, welcome to theCUBE.
Sam Partee
>> Oh, thanks for having me. Great to be here.
John Furrier
>> We were talking before we came on camera. Most notably, you guys have been digging into this, I won't say before it was fashionable; it was definitely fashionable when gen AI hit. We saw the value of data, the low-hanging fruit, search, the things you're doing, how it was going beyond APIs. So first I want to ask you guys where you see the progress relative to where it was, say, just two years ago: the adoption, the progress bar, and what's the state of the art right now?
Sam Partee
>> Wow.
Harrison Chase
>> Yeah. I mean, I can take a stab. I think LangChain came out about a month before ChatGPT. And a lot of people wanted to take ChatGPT and build it for their data, their APIs, their whatever. And there was this really simple pattern that people wanted to do, which was take the LLM and run it in a loop and have it call tools and have it do things. And three years ago and two years ago, and even one year ago, the models just weren't good enough to do that. And now the models are good enough to do that. And I think that's unlocking a lot of things. And that's where I think a lot of the real progress has been made where I think a lot of the infrastructure that we've been building over the past few years, you can now actually use more reliably and more effectively because the models are just good enough to run in a loop and call tools, which sounds really simple, but it's actually super powerful.
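The pattern Chase describes, an LLM run in a loop that calls tools until it can answer, can be sketched in a few lines of Python. This is an illustrative toy, not any framework's API: the model here is a stub standing in for a real LLM call, and every name (`run_agent`, `get_weather`) is invented.

```python
# Minimal agent loop: ask the model, execute any tool call it requests,
# feed the result back as context, repeat until it returns a final answer.

def stub_model(messages, tools):
    """Pretend LLM: requests the weather tool once, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "get_weather", "args": {"city": "Aspen"}}
    return {"answer": f"Conditions: {messages[-1]['content']}"}

def get_weather(city):
    return f"no snow in {city}"  # canned result for the sketch

TOOLS = {"get_weather": get_weather}

def run_agent(user_input, model=stub_model, max_steps=5):
    messages = [{"role": "user", "content": user_input}]
    for _ in range(max_steps):  # bound the loop; agents can spin forever
        step = model(messages, TOOLS)
        if "answer" in step:
            return step["answer"]
        result = TOOLS[step["tool"]](**step["args"])
        messages.append({"role": "tool", "content": result})
    raise RuntimeError("agent exceeded step budget")
```

The loop really is that simple; what changed, per Chase, is that models are now reliable enough to drive it.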
John Furrier
>> What's your opinion on this?
Sam Partee
>> Well, he's exactly right. I think the bar is also lowered for the common person. So an average person that is non-technical can come in and benefit greatly from using something like Opus or one of these models from OpenAI that really know exactly the intention of what you're doing and then can use the infrastructure that we're talking about to go out in the world and do your job for you without you really even knowing how to code.
John Furrier
>> One of the things that I found interesting in observing the agent and AI space was when you look at the cloud growth. Go back to, say, the 2012 timeframe: everyone used AWS. If you were a startup, the list, we know the list: Dropbox, all the names we know. And then it got complicated; they got the enterprise. So the cloud-native world grew: Kubernetes, the hardened infrastructure, getting that SRE-like scale. But then all of a sudden, in comes ChatGPT, you guys came in. It almost was like a leapfrog. It was a whole other cast of characters coming in and saying, "Well, I'm not a cloud-native guy or gal, because I didn't really want to go deep, but I know how to use cloud." But it was knowledge of infrastructure. And then that just grew. And then with the models coming in off the top, you had this new kind of class of developer. An infrastructure-like thinker. Systems thinking, not just coding, because that's getting easier. What's your guys' reaction to that? What's your observation? Do you see it similarly? And does that change the makeup of who's building this agentic layer?
Sam Partee
>> Well, for the first time, we've purposefully introduced programs that may not do the thing we want them to do, because that allows them to generalize to a number of different use cases, right? And so this is a totally different type of programming and mental model that you have to go into it with. If you're using LangChain, you can walk right through and have something guide you through the steps: okay, this is a different type of error handling and infrastructure you need to use now in order to benefit from the generality these models are capable of. And so it's a different style of thinking entirely for the person creating these types of programs.
John Furrier
>> But you consider it infrastructure?
Sam Partee
>> Oh yeah, absolutely. I mean, it's just different now. It's more thinking-oriented.
John Furrier
>> Yeah.
Harrison Chase
>> Yeah. And maybe internally we talk about some of the startups being built as gen AI-native, which basically means that their products are agents, and that makes up a big part of what they're doing. And I think that does require... On one hand, there's a lot of the same infrastructure that's needed, because it's still software running in the cloud, but there's other infrastructure as well. And so these AI-native companies, and to your point, what it looks like, the people building the agents, this absolutely is different. And I think we're starting to see more and more that what you're doing when you're building agents isn't so much building software as trying to align this agent or LLM with the way that you would do things as a human. And that's usually some combination of prompting. And so then, great, who's best to do that? Whoever the subject matter expert is. And then some combination of tools, and you can use things like Arcade to make those robust and secure. And then you have people building tools, you have people building prompts, and it all comes together.
John Furrier
>> It's a great innovation cycle. And I wanted to call that out because I think people get confused. Cloud-native was a generational... It's not gone away because clearly, production environments-
Sam Partee
>> We couldn't do this without it....
John Furrier
>> will run on it, but you don't need to be a cloud-native guru or expert to run AI-native. So AI-native is the way, that's what I'm calling it. People are calling it that, so that's cool. But I guess my next question is for the folks that don't know what you guys do. Take a minute to explain LangChain and Arcade, because I think it's super important.
Sam Partee
>> Yeah. You want to go?
Harrison Chase
>> Yeah, I can start. So we started with LangChain as an open source framework, an open source framework for building these types of agents. We were talking a little before: our open source framework actually has about four different evolutions, because the space moves so fast. And so a big part of what we do, about half the company, is on these open source frameworks. The other half of the company is really focused on what we call observability plus plus. When we're talking about these new gen AI-native companies and the agents that power them, there are two interesting things. One, the input space to these agents is much larger than in software. In software, you've got little buttons you can click. In agents, you have chat boxes; you can type literally anything, usually mostly anything, into there. And the other thing is that the cloud-native infrastructure is usually pretty robust, and LLMs are actually not. A, they're non-deterministic: even if you pass the same thing in, you might get different results. And B, they're really sensitive to small changes in the prompt. All that means is, basically, you don't actually know what an agent does until you actually run it. And so we were building open source frameworks for building these agents, and then we built a platform for observing these agents. And that observability, different than software observability, powers a whole set of other things like evals and human annotations and product analytics over your agent. So those are the two things that we do.
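The observability Chase describes, capturing every model and tool call so you can see what the agent saw, can be sketched as a simple trace recorder. This is a hypothetical sketch, not LangSmith's actual API; real platforms add evals, annotations, and analytics on top of traces like these.

```python
# Sketch of agent tracing: wrap each step so its inputs, output, and
# timing are recorded. In a real system TRACE would ship to a collector.
import time
from functools import wraps

TRACE = []

def traced(step_name):
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            out = fn(*args, **kwargs)
            TRACE.append({
                "step": step_name,
                "inputs": {"args": args, "kwargs": kwargs},
                "output": out,
                "seconds": time.perf_counter() - start,
            })
            return out
        return wrapper
    return decorator

@traced("tool:search")
def search(query):
    return f"results for {query!r}"   # stand-in for a real tool call

@traced("llm:answer")
def answer(context):
    return f"Based on {context}, here is an answer."  # stand-in LLM
```

Because the input space is unbounded and the model is non-deterministic, a record like this is often the only way to know what an agent actually did on a given run.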
John Furrier
>> Yeah. And we'll dig into some of those progressions. Arcade, you guys are doing great. Talk about what you're working on.
Sam Partee
>> Yeah. Well, obviously we're a partner of LangChain. We hook in there to really help people. If you want to simply send an email from a model like ChatGPT, you need it to log in as you, right? You have to have that agent be able to authorize and go perform those actions as you. And you see this becoming more and more popular with the advent of things like the Model Context Protocol, MCP, from Anthropic. We use that protocol to help agents authorize and perform actions on your behalf, and do so in a safe way, so that when an agent performs some kind of action, you know that it did so with the exact set of permissions and responsibilities that you wanted it to have.
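The authorization flow Partee describes, an agent acting only with permissions the user explicitly granted, can be sketched as a scope check before each tool call. All names and scopes below are illustrative, not Arcade's real API, which runs full OAuth flows against the actual providers.

```python
# Toy model of delegated authorization: an agent may only invoke a tool
# if the user has granted a permission scoped to that exact capability.

class AuthorizationError(Exception):
    pass

GRANTS = {}  # (user, scope) -> granted

def grant(user, scope):
    """User explicitly approves the agent for one capability."""
    GRANTS[(user, scope)] = True

def send_email(user, to, body):
    # Check the scoped grant before acting; a real system would also
    # exchange this for a provider token and audit-log the action.
    if not GRANTS.get((user, "gmail.send")):
        raise AuthorizationError(f"{user} has not granted gmail.send")
    return f"sent as {user} to {to}"
```

The point of the pattern is that the permission is per user and per action, so an audit log can later show exactly who authorized what.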
John Furrier
>> What I like about the AI-native wave is all the optimization, the mindset, the intellect it's going into, how do I leverage the LLM and connect it to the enterprise systems?
Sam Partee
>> Yeah.
John Furrier
>> Whether that's an enterprise company or a bank or a cloud environment that's got hard and higher level services. I mean, you would agree that's the key to that.
Sam Partee
>> Absolutely. I mean, last night I was working on a skill to do an AWS deploy and it was with Harrison's new deep agents that just released. And using that, I can essentially say, "I want to take this application and deploy it on AWS." Arcade allows me to do the connection to AWS and the skill with deep agents handles all of the local different calls that are needed to orchestrate using Arcade to go and do that. And it is pretty amazing how quickly you can get something up in the cloud and running like that.
John Furrier
>> Yeah. We were talking before we came on camera about how easy it is to code. You were talking about some things Claude's doing and how it makes it so much easier. There's a lot of folks who want to jump in, either coming out of retirement if they have a systems background (half my friends are saying, "I'm getting back in the game") or the young guns coming up. You're seeing just a slew of talent. But there's a whole category of IT guys and cloud guys who are like, "Okay, wait a minute, how do I get started?" I mean, basic stuff. So take us through how I would engage and start developing if I haven't coded in 20 years. How do I get my hands on it? How do I code?
Sam Partee
>> Good question.
John Furrier
>> I mean, what's the steps?
Harrison Chase
>> So I mean, one of the things that I'd probably recommend doing is just using a coding agent out of the box. And I think this is useful for two reasons. One, coding agents are really, really good now. You can describe what you want them to do in natural language, and they will do it. And so that will help you build software. But I also think it helps you learn how to build agents, if your goal is to build an agent. We see that coding kind of leads the charge in terms of where agents are going in the future. You saw this when coding agents adopted MCP, and MCP became the standard. And so I think by using coding agents, you'll get a sense for what prompting does, because you'll write a Claude.md file or an Agents.md file. You'll get a sense for how to use tools and connect to Arcade, because you can use Arcade as MCP tools in a coding agent. So just using coding agents is dual purpose; it solves two things.
John Furrier
>> What's your take on advice to me, the learner?
Sam Partee
>> Well said by Harrison. I would say go out and fail on something, iterate, go try to have an objective just like he's saying, try to use a coding agent to do it. Try to use even something as simple as like download a Claude-like interface or something like that where you can just type into the text box, see what happens. And as you grow, like he's saying, remember those prompts that worked. Remember the structure that enabled you to do that call, to go and perform the action that you wanted to have. And you'll eventually have a corpus of different things that-
John Furrier
>> How about integrations? Do I have to get my GitHub? What's the security? How do I integrate into my code? How would I... Just standard stuff.
Sam Partee
>> Well, right now, it's a lot of local connections. People are keeping auth tokens on their laptops, credentials that grant permanent access to the internet as that user, sitting right there on the machine. And enterprises really don't love that. If you want to go and deploy an AI app at scale, you want the actions your agent takes as you to be not only observed, but performed with the permission set that you give it. And so Arcade is purpose-built to help enterprises do that.
Harrison Chase
>> And on that note, actually, right now a lot of the coding agents you'll run locally. And I think one of the big shifts we see coming is that a lot of those will start to move to the cloud. Sandboxes are a new, popular infrastructure thing that is coming up. And the whole reason is: yeah, great, you can run a coding agent on your laptop. What if you want to run two of them in parallel? You can do things with Git worktrees and the like, but do you really want it to have access to everything on your laptop? Probably not. So these secure environments for running agents are going to be a massive part of them scaling.
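The sandbox idea Chase raises can be sketched by running agent-generated code in a separate process with a time limit and an isolated scratch directory. This only shows the shape of the thing: production sandboxes (containers, microVMs) also isolate network, filesystem, and memory.

```python
# Minimal sandbox sketch: execute untrusted, agent-generated code in a
# subprocess with a time budget and its own scratch directory.
import subprocess
import sys
import tempfile

def run_in_sandbox(code: str, timeout: float = 5.0):
    with tempfile.TemporaryDirectory() as scratch:
        proc = subprocess.run(
            [sys.executable, "-I", "-c", code],  # -I: isolated Python mode
            cwd=scratch,              # agent writes only into scratch space
            capture_output=True,
            text=True,
            timeout=timeout,          # kill runaway loops
        )
        return proc.returncode, proc.stdout, proc.stderr
```

A process boundary like this is also what makes it safe to run several agents in parallel, since none of them share the developer's working tree.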
John Furrier
>> Yeah. One of the things I like about what you guys are working on at Arcade.dev: you're like what I call the operating system for agents, with the tool set and the patterns you guys are putting out. It feels like, as an operating environment, you call it a runtime, we're kind of coming into that OS of agents. How do you manage them? They're behaving in ways, they're learning, they're reasoning, they're inferring. So like, shit's hitting the fan on this. We saw ClawdBot, we saw Moltbook. I call that the ChatGPT moment, or actually the DeepSeek moment for agents, because it's like, okay, it went a little meme-y on selling your humans, but I think that was a powerful signal that this is happening, that could happen. And a lot of people, like Stan was saying, "Wow, this is really working." Now, there's the dark side of that: "Oh, it got out of control." But I think that's just what it was. I actually liked it. I thought it was phenomenal. But it does bring up the question: they're working.
Sam Partee
>> Yeah.
John Furrier
>> They're working while you're sleeping.
Sam Partee
>> It's really hard not to observe those kinds of agents. Something like OpenClaw is an open source project that's very young, right? And it does a lot. And you can tell that, given its early state, even though it's very cool and honestly does a lot for you (it's one of the first things that connects a lot of different applications), being able to observe what it's doing is something you have to have in the enterprise in order to make a sale there. You have to be able to put it all in one place: audit logs. You have to be able to see everything that it's doing. And we actually have joint customers that use the tracing side of LangSmith and the audit logging and authorization capabilities of Arcade. With those, you get comfortable in the enterprise doing the OpenClaw-like deployment, which, as you said, frankly is kind of a moment happening right now.
Harrison Chase
>> And maybe building on that, I think one of the things about agents is that it's a system around the model. And anything with a system takes a while to build; there are many different parts of that system, and parts of that stack are evolving and becoming clearer. What LangChain and Arcade do together is a great example of that: in order to make OpenClaw, A, better, but also, B, more enterprise-friendly, there's a lot. And it's also not just going to be us two up here solving that. There is this stack that is starting to come up.
John Furrier
>> And I think it's not a one-time thing. I mean, DeepSeek had other things that looked just like it. So to me, it was a wake-up call to the mainstream: this is real, get on board or get out of the way. It validates the AI-native infrastructure, because these things could run wild. And in cloud-native, a lot of work went into observability, tracing, all the things you're mentioning. So you've got a similar parallel. Again, it's not cloud-native, but what cloud-native went through, we're starting to see with AI-native. So I guess my question to you guys is, what is the AI-native version of all the plumbing services that happened in cloud-native? What are the keys? I mean, MCP's an obvious one, but you're starting to see other things with your patterns.
Sam Partee
>> Sure.
John Furrier
>> Yeah. Take me through that.
Sam Partee
>> Compare MCP to HTTPS, right? That's the protocol that's been there every time you go to a website; you don't think about it, right? We think about MCP right now. We won't in the future. It will be so far abstracted away that you won't think about it, the same way you don't worry about the S at the end of HTTPS, because you know it's secure when you go to that website you've been to a hundred times. We're building a stack to make sure that the stack agents need, which is very different from your typical cloud-native stack, gives every customer that same confidence that they're secure all the time.
John Furrier
>> You talk in your blog post, I wrote this down, about four cross-cutting concerns of tooling quality: agent experience, security boundaries, error-guided recovery and tool composition. Explain why you're breaking it down that way. I mean, maybe you got it wrong, but generally it's directionally accurate from what I could tell. Why is that important? And as you start slicing the salami and decomposing what the requirements are, how do you see that piece of it evolving?
Sam Partee
>> Yeah. Well, I mean, look, we were building tools over two years ago, when MCP didn't exist. We have a corpus of talent on our team right now with tool-building knowledge. And tools, again, these things that agents are capable of using, for a long time lived inside of orchestration frameworks, and now they have a large corpus of dedicated actions that they're responsible for. You need to be able to reason about and understand what they're going to be able to do, not only at the agent level but at the infrastructure level: what access they're going to have, what file permissions, all of these different things. Breaking them down into composable patterns helps you, as the, let's say, English-language programmer, reason about them. And so that's why we launched those tool patterns on Monday.
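One way to picture the four concerns as composable patterns is to make each one an explicit field on a tool definition, so it can be reasoned about and audited separately. This is a hypothetical sketch, not Arcade's actual schema.

```python
# Hypothetical tool descriptor that surfaces the four cross-cutting
# concerns (agent experience, security boundaries, error-guided
# recovery, tool composition) as separate, auditable fields.
from dataclasses import dataclass, field

@dataclass
class ToolSpec:
    name: str
    description: str        # agent experience: what the model reads
    required_scopes: list   # security boundary: permissions the tool needs
    retryable_errors: list  # error-guided recovery: what the agent may retry
    composes_with: list = field(default_factory=list)  # tool composition

deploy = ToolSpec(
    name="aws_deploy",
    description="Deploy an application to AWS.",
    required_scopes=["aws:deploy"],
    retryable_errors=["Throttled", "Timeout"],
    composes_with=["git_clone", "docker_build"],
)
```

Declaring scopes and retry behavior up front is what lets both the agent and the infrastructure layer reason about a tool before it ever runs.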
John Furrier
>> Harrison, what's your take on the requirement, where we are now? What's the traction points that you're seeing developers hone in on in terms of core building blocks or blockers they need to overcome?
Harrison Chase
>> Yeah. I mean, maybe just being a bit pedantic, but I think the core parts are: you've got the model. Data is important, so usually some sort of data store that supports semantic search; that's been a part of the stack for a few years now. We see observability being a part of it, because the biggest... We ran a survey a few months ago of agent builders, and the biggest blocker was quality of the application. Some of that is solved by better models, but a lot of it is solved by better context engineering. And that gets into some of the error handling that Sam's writing about. The errors that tools throw, those are context, and you need to see what the LLM sees. And so that's observability. Tools are clearly an important part; all agents basically call tools. That's where Arcade comes in. I think we see that code execution is becoming a part, so sandboxes, I would argue, are a new part of this stack. Other than that, this stack is so rapidly evolving, but maybe I'll call out two things that I think are interesting but not part of the core stack yet. One, I saw a demo of some ads integrations. One of the big issues with running agents is that, unlike software, where the marginal cost to run it is very slim, if not zero, agents cost a lot of money to run. So if you can subsidize some of that with ads, as OpenAI is exploring, that's interesting. There aren't that many examples yet, but you see some of it. There's Amp; Amp is a coding agent, and they have a free tier powered by ads. That's actually really interesting. And then the other thing would be payments. Stripe and Google have each announced things, and OpenAI as well; they're all doing things in the payments space. Nothing's really a standard yet, but when agents can buy things, that's interesting.
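Chase's point that tool errors are context can be sketched directly: instead of letting a failed tool call crash the run, the error text goes back into the message list so the model can see it, adjust, and retry. All names below are illustrative.

```python
# Error-guided recovery sketch: a failed tool call becomes a message the
# model can read and react to, rather than an exception that kills the run.

def call_tool_with_recovery(tool, args, messages):
    try:
        result = tool(**args)
        messages.append({"role": "tool", "content": result})
    except Exception as exc:  # surface the error to the model as context
        messages.append({
            "role": "tool",
            "content": f"ERROR: {type(exc).__name__}: {exc}. "
                       "Adjust the arguments and try again.",
        })
    return messages

def fetch(url):
    # Stand-in tool with a deliberate failure mode.
    if not url.startswith("https://"):
        raise ValueError("only https URLs are allowed")
    return f"contents of {url}"
```

Because the model only ever sees the message list, a well-worded error message is effectively a prompt that steers the next attempt.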
John Furrier
>> Yeah. I was riffing on theCUBE Pod on Friday about the ad model. I was pro-OpenAI doing ads, and we were debating it, and I said, "Google AdWords didn't look like banner ads. It was contextual." The behavior with Search: the context was a link that was relevant to what you typed in. Very basic.
Sam Partee
>> Yeah. Interesting.
John Furrier
>> That created billions of dollars. I mean, these models have massive context. So the role of the agent is the ad model. The intent, everything's there. So it's not going to be a click or a banner ad. Maybe it is in the short term, but it's going to know things. It's going to have reasoning, saying, "Hey, you're going skiing this week and there's no snow in Aspen, so you probably won't ski much. Here are some restaurants to go to."
Sam Partee
>> There are some security problems, though. Context isn't just one big window, right? You have to be sure that when you were filling out that form earlier, that PDF didn't happen to be in the folder that one of these agents had access to, the one that ended up having your Social Security number or something in it. There's a lot that goes into the maintenance of that context window, and especially making sure that that context window has what you want in it.
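One defense against the scenario Partee describes is a scrubbing pass before any document enters the context window. The sketch below uses a single regex for US-SSN-shaped strings purely for illustration; real pipelines use dedicated PII detection, allow-lists, and per-source access controls.

```python
# Toy PII scrub applied before text enters an agent's context window.
import re

SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # US SSN-shaped strings only

def scrub(text: str) -> str:
    return SSN_RE.sub("[REDACTED-SSN]", text)

def add_to_context(context: list, document: str) -> list:
    # Every document passes through the scrubber on its way in.
    context.append(scrub(document))
    return context
```

Redaction is a last line of defense; the stronger control is never granting the agent access to that folder in the first place.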
John Furrier
>> And if it still has memory.
Sam Partee
>> Yeah, exactly.
Harrison Chase
>> I was actually going to bring that up as another building block that I completely forgot about. You're talking about ads in the context of... Ads in the context of a chat is one thing, if you can see everything there, but ads in the context of a memory system that knows what you've done over the past year? And again, memory's very early on, so this would be another thing we-
Sam Partee
>> Yeah, it's still early.
Harrison Chase
>> Yes.
John Furrier
>> We should do a deep dive on that for sure.
Harrison Chase
>> It's the Wild West of memory out there right now. But ads for a system that has memory, that's really interesting, because it can remember and then serve you ads proactively, and that's different.
John Furrier
>> I love this area. I think what you guys are doing is really great work for the industry, but it's also intellectually, and from a tech perspective, kind of intoxicating. You think about the magnitude.
Harrison Chase
>> It's fun.
Sam Partee
>> It is. It's fun.
John Furrier
>> The magnitude of what's going on. Hard problems to solve, a systems kind of mindset. It's not just code away, look what I built, get it on the App Store. It's a system. And it reminds me of the cloud days. I remember 2012: all startups did it, but it really broke away when it solved the compliance and security problem, the CIA cloud.
Sam Partee
>> No one wanted to use S3 until it was certified this way and that, right?
John Furrier
>> It's unsecure. Don't put your stuff in the cloud, but every-
Sam Partee
>> But look what happened.
John Furrier
>> I'm not going to buy a data center. I'm going to start Dropbox and Twitter in the cloud, right? Whatever, all those things happened. But now it's similar with AI. As we start getting into production, security and compliance is an issue. It's not classic security, like CrowdStrike doing threat detection and endpoint protection. It's a whole other security pattern. How do you guys view the compliance and security conversation, not as a category, but as a direct issue for the AI-native infrastructure? How would you tackle that?
Sam Partee
>> I'll tell you, first off, one thing that's interesting about the cloud-native trend you're talking about that's different: that trend went to the cloud, right? And a lot of, especially larger enterprises, really want a lot of this data in-house. And so it's important, and we have joint customers that do this, they want everything: their vector database, their agents, their tool execution environment, their MCP environment. They want that all inside their VPC, deployed within their own environment, so that they know exactly where that data is going, and they know exactly who asked for it and when they asked for it. Those kinds of things we're just basically now getting to with the infrastructure layer. I mean, we've been building in this space for some time now, and it was largely a solo developer on their laptop until basically recently, the past year and a half or so. But then again, the models only recently got good enough to make it worth doing it like that.
John Furrier
>> Sam, you bring up a good point because all the definitions of what we talk about in the past, observability, applications, all are changed. I mean, if you're going to go on-prem, because that's where the crown jewels are, the data and the workflows, the proprietary information and the value for the customer, that's not a move to the cloud. So okay, that changes the paradigm for certain things, including security.
Harrison Chase
>> I think also on security, there's maybe like three ways that we think about it, and it's still early on. And I'd actually argue that some of the stuff that you guys are doing is the most important, in terms of auth and giving kind of like proper permissions.
John Furrier
>> Like what? Give an example.
Harrison Chase
>> Like making sure that if I'm connected to a Gmail tool, I can only see my emails. It sounds really basic, but it's pretty easy to set that up in the wrong way. And the reason that matters... well, it's kind of related to, I shouldn't be able to see Sam's emails, but it's also that prompt injection is a thing. And so if I'm talking to something that's exposed to the external world-
Sam Partee
>> Exactly.
Harrison Chase
>> This is a different thing. So this also gets into another thing with tools, but basically, you should assume that whoever you're exposing the agent to has the same permissions as whatever tools you give the agent, because prompt injection is very, very hard to just completely engineer away. And so I think-
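Chase's permission rule, that whoever can talk to the agent effectively holds the permissions of its tools, is why per-user credential scoping matters. A minimal sketch of the idea follows; all names, and the in-memory mailbox standing in for a real mail API, are hypothetical, and a real deployment would pass a short-lived, user-scoped OAuth token to the actual backend:

```python
from dataclasses import dataclass

# Sketch of "auth pass-through": every tool call carries the end user's
# own credential, so the agent can only ever act with that user's
# permissions -- never with a shared, over-privileged service account.

@dataclass
class UserContext:
    user_id: str
    token: str  # short-lived, user-scoped token (assumed, not validated here)

def read_inbox(ctx: UserContext, query: str) -> list[str]:
    """Tool implementation: the backend would be called with ctx.token,
    so it can only return mail the token's owner is allowed to see."""
    # Simulated per-user mailboxes, to show the isolation property.
    mailboxes = {
        "alice": ["welcome to the team", "q3 planning"],
        "bob": ["expense report approved"],
    }
    return [m for m in mailboxes.get(ctx.user_id, []) if query in m]

# Each user's agent session gets its own context; there is no path from
# Alice's session to Bob's mail, because no shared credential exists.
alice = UserContext(user_id="alice", token="tok-alice")
bob = UserContext(user_id="bob", token="tok-bob")

print(read_inbox(alice, ""))   # only Alice's mail
print(read_inbox(bob, ""))     # only Bob's mail
```

The design choice being illustrated: the tool never decides *whose* data to fetch; the credential does, which is exactly what makes a misconfigured shared token dangerous.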
John Furrier
>> And it's possible. It's plausible, very plausible.
Harrison Chase
>> Yeah, super plausible. And so I think, A, having auth pass-through, but also having an audit log of all the tool calls, everything around when the agent connects to the outside world, which happens through tools. That's a big point. A second point we think about is the observability traces themselves. This is why we think observability is different. We think those observability traces can and will power security sweeps: hey, is there prompt injection happening? Are there different kinds of attacks happening? So observability powers that. And the third place I'd call out where we're seeing it is with code sandboxes, because you're often writing and running untrusted code. There are a bunch of different angles there, but when we think about sandboxes, one of the things we think about is how we can do that in a secure manner.
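The audit-log point can be sketched in a few lines: wrap each tool so every call records who invoked what, giving later security sweeps something to scan. This is an illustrative sketch under assumed names, not Arcade's or LangChain's actual implementation:

```python
import time
from typing import Any, Callable

# Every tool call -- the agent's only contact with the outside world --
# leaves a record that security sweeps (e.g. prompt-injection detection)
# can scan after the fact.

AUDIT_LOG: list[dict[str, Any]] = []

def audited(tool_name: str, fn: Callable[..., Any]) -> Callable[..., Any]:
    """Wrap a tool so each invocation is logged with caller, tool, and args."""
    def wrapper(user_id: str, *args: Any) -> Any:
        entry = {
            "ts": time.time(),
            "user": user_id,
            "tool": tool_name,
            "args": args,
        }
        AUDIT_LOG.append(entry)   # record before executing the side effect
        result = fn(*args)
        entry["ok"] = True
        return result
    return wrapper

# A hypothetical tool, wrapped at registration time.
send_email = audited("send_email", lambda to, body: f"sent to {to}")

send_email("alice", "bob@example.com", "status update")

# A sweep can now replay the log looking for suspicious patterns,
# e.g. a burst of sends to addresses the user never contacted before.
sends = [e for e in AUDIT_LOG if e["tool"] == "send_email"]
print(len(sends))  # 1
```

Logging before execution, then marking success, means even calls that crash mid-flight still appear in the trail, which is what makes the log useful for forensics rather than just metrics.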
John Furrier
>> You know what I love about this topic? I love plumbing conversations. I love networking too, but it's not networking. It's engineering and software. It's like the confluence of all three is happening. You're going to lay down the pipes with Gmail, you better make sure it's hardened. But if you do it right, you're good, right? So you've got to get these things right. You can't screw this up.
Sam Partee
>> No, you can't. I mean, like you said, it's plumbing. And for a long time, I have to admit, it was some dirty plumbing that we had to do, and it was hard. It was not the same type of OAuth or SAML or whatever type of auth that you were used to. It wasn't the same in this kind of environment, where there's a new intermediary to be responsible for.
John Furrier
>> Guys, really appreciate it. We can go another hour. Love this conversation. We certainly need to pick it up. Sam, Harrison, great to have you on.
Sam Partee
>> Thanks for having me.
John Furrier
>> Put a plug in for what you're working on for the last minute we have. Talk about what's going on with the company, what you guys are trying to do, what do you optimize, what's your focus? We'll start with you. Go ahead.
Sam Partee
>> Yeah. Well, Arcade.dev, if you haven't checked it out, you can use it in any of Claude, ChatGPT, et cetera, put it in Cursor and you'll see what we've been talking about today. You can go and do everything from reading your email to opening up pull requests and back. There are tons of different actions in the marketplace that you can go and download and use today, and we're only adding more. So if you're using MCP, come use Arcade.
John Furrier
>> Harrison, put a plug in for you guys.
Harrison Chase
>> We have two big focuses. One's on the open source side. The newest thing there is deep agents: basically, how can you build Claude Code for your domain? That's our focus there. Open source, model-agnostic, everything. Completely separately, observability is a super large focus of ours. It's completely separate from all of our open source frameworks because we think it's so important, and we're really focused on what it takes to enable AI in the enterprise. We think observability is a big enabler of that.
John Furrier
>> Yeah. And that'll bring a lot of confidence to deployments.
Harrison Chase
>> Yeah. 100%.
Sam Partee
>> Absolutely.
John Furrier
>> Thanks for coming on theCUBE. Appreciate it.
Harrison Chase
>> Thank you.
Sam Partee
>> Thank you very much.
John Furrier
>> All right. We've got the mixture of experts, so you can't get any better than that. If you're looking at agent infrastructure, it's a whole other category. It's a generational thing. AI-native builds on the success of cloud-native, but it's a whole other set of infrastructure, engineering, and coding, and it's all happening in real time in front of us. We're doing our best to keep up at theCUBE. I'm John Furrier. Thanks for watching.