Bruno Kurtic of Bedrock Data appears at RSAC 2026 for a discussion with Dave Vellante and Christophe Bertrand of theCUBE Research on data context and agent governance for secure artificial intelligence (AI) deployments. Kurtic explains why data context and metadata form the foundation for securing AI systems. They describe Bedrock Data's metadata lake architecture, the AI data bill of materials (DBOM), integrations across cloud and SaaS platforms, and how Bedrock Data complements platforms such as Snowflake.
Key takeaways include that data rather than models drives AI security and that metadata provides the missing layer, according to Kurtic. They recommend that enterprises discover, classify and contextualize data to reduce AI risk, deploy DBOM for payload transparency and govern agents through entitlement-aware controls such as Bedrock Data's MCP server. The hosts emphasize prioritizing foundational data governance before broad AI rollouts and discuss the market and architecture implications for data security, data privacy and compliance.
Subscribe for ongoing coverage and expert analysis on RSAC 2026, AI governance, metadata strategy and secure agent deployments.
Bruno Kurtic, Bedrock Data
In this interview from RSAC 2026 Conference, Bruno Kurtic, co-founder and chief executive officer of Bedrock Labs, joins theCUBE's Dave Vellante and Christophe Bertrand to discuss why data context and governance, not the models themselves, are the true foundation for secure AI deployment. Kurtic opens with the thesis that AI security is fundamentally a data problem: enterprises sitting on petabytes of fluid, unstructured data cannot safely deploy AI agents without first understanding what that data is, where it lives, who owns it, and how sensitive it is.
What information about enterprise data must be known before it can be used safely?
Why is data considered the core of enterprises, and why is the company called "Bedrock" (i.e., why must organizations get their data in order before rolling out AI)?
To what extent is this an issue of data versus metadata, and should data and metadata be handled separately or in a unified way?
What is Bedrock Data's approach to metadata management, and why is it building a metadata lake as the foundation of its platform?
Why did Snowflake invest in Bedrock Data, and how do Bedrock Data and Snowflake work together to help customers safely deploy AI on Snowflake (Cortex)?
>> And we are back, theCUBE's live wall-to-wall coverage of RSAC 2026. We're getting deep into day four now. It's been a great event. Probably 30,000 people here or more. I mean, of course, all the talk is AI and agentic, but the community comes together for the Super Bowl of security once a year here in San Francisco. I'm Dave Vellante with my co-host, Christophe Bertrand, who's been with me all week. John Oltsik also out doing his reporter's notebook and gathering all the data. Bruno Kurtic is here. He's a co-founder and CEO of Bedrock Data. Bruno, good to see you again.
Bruno Kurtic
>> It's great to be here again.
Dave Vellante
>> Saw you the other night at the Greylock event. It was a good gathering, a lot of high-powered people there, so it was good to see you. But let's start with Bedrock Data. I always like to ask founders and co-founders: why? Why'd you start the company?
Bruno Kurtic
>> Great question. I wanted to solve this problem of data management and data security, particularly because it is one of the rare data infrastructure problems that has not been solved yet. We've been kind of skating along, using different crutches to solve this problem. But fundamentally, in the age of AI, data is everything. Data is the fuel, and you need to feed the machine. You need to feed your AI systems to differentiate. And without proper control, you just incur way too much risk. So this was the time to solve this problem, and I'm super excited to do it.
Dave Vellante
>> You're talking to two data guys, so you're not going to get any debate there. But your fundamental thesis seems to be that AI security is not a model problem; the LLMs aren't going to solve it. It's a data and context problem. And also, of course, governance: you've got to start with governance. Context is an interesting topic these days. So explain your thesis.
Bruno Kurtic
>> So the thesis is the following. Every enterprise has monstrous amounts of data, right? We're talking petabytes and petabytes of data. The challenge with data is different than the challenge with infrastructure. Infrastructure is discrete: you can touch it, feel it, put your arms around it. Data is not. Data is fluid. It's unstructured. It grows exponentially. And context about data is: what is this data? Where does it live? Who owns it? What's in it? Is it sensitive? How did it get there? What's its lineage? All of these things are essential in understanding data such that you can safely use that data everywhere, including in AI systems.
Christophe Bertrand
>> Look, I mean, data management has been an issue since the days of the mainframe, literally. And it seems like we're making progress, but not that much progress. We still have a lot of siloed data. We still have a lot of PII or personal data flying around. And now we have agents potentially using data that they really shouldn't be using, and they can do so without consequence because they're not human. So is this a solvable problem? Is it better to try to solve everything before you engage in an AI project? Or should we try to fix some of the issues and sort of build the plane as we fly it when it comes to AI? What's your take?
Bruno Kurtic
>> Great question. To go back to your mainframe comment, there is a reason why we call them data centers. Intuitively, we've known for decades that data is at the core. It's not infrastructure. They're not server centers or network centers; they're data centers. Data is the crux of every enterprise. And I think the reason our company exists, and the majority of the drive for our business today, is that people intuitively understand that if we're going to roll out AI systems safely, without grinding to a halt because of governance, security, risk, compliance, or regulation, we have to get our data in order. The company is called Bedrock because you build your foundation on bedrock. So the whole premise is: I believe you must get your data in order as much as possible, discover it, classify it, understand it, contextualize it, and then you can roll out AI systems on top of it. Now, not everybody's doing it. I look forward to seeing what happens there, but I absolutely believe that it needs to be done and it can be done. It might not be done all the way, but it definitely can be done.
Dave Vellante
>> How much of this is a data problem versus a metadata problem? Do you think about those things differently? Do you think about those in a unified fashion?
Bruno Kurtic
>> Data is the source; metadata is the derivative, right? In order to build the picture of your data, you have to understand the data to construct the metadata. Metadata ultimately is what we're going to operate on. We're going to operate on questions like: "Hey, this data here, should I allow my AI agent to touch it or not? Or which columns of this table should I allow my agent to touch?" So they are separate, but they depend on each other. Metadata is, I think, the layer that's missing. Everybody's got lots of data. Most enterprises don't have the metadata about that data sufficiently clear to be able to use their data effectively and safely.
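Kurtic's column-level example, deciding from metadata alone which columns an agent may touch, can be sketched in a few lines. This is a hypothetical illustration; the names, classification levels, and clearance model here are our assumptions, not Bedrock Data's actual API.

```python
# Hypothetical sketch of a metadata-driven access check: the decision is
# made against column-level metadata, never against the data itself.
from dataclasses import dataclass

@dataclass
class ColumnMeta:
    name: str
    classification: str  # e.g. "public", "internal", "pii"

def allowed_columns(columns, agent_clearance):
    """Return the column names an agent may read, given its clearance level."""
    # Ordered sensitivity levels; anything at or below the agent's
    # clearance is permitted.
    levels = {"public": 0, "internal": 1, "pii": 2}
    max_level = levels[agent_clearance]
    return [c.name for c in columns if levels[c.classification] <= max_level]

table = [
    ColumnMeta("order_id", "public"),
    ColumnMeta("order_total", "internal"),
    ColumnMeta("customer_ssn", "pii"),
]
print(allowed_columns(table, "internal"))  # ['order_id', 'order_total']
```

The point of the sketch is that the check never reads the rows themselves; once classifications exist in a metadata layer, the same policy can apply across every store that layer covers.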
Dave Vellante
>> So how do you handle that? Do you have a purpose-built metadata catalog? Do you have a knowledge graph? Take us inside the architecture.
Bruno Kurtic
>> So I spoke on theCUBE maybe a year or 18 months ago about this. What we build is a metadata lake; that's the foundation of our platform. I think of the metadata lake as a core component in the data supply chain that does not exist to date. Data catalogs and MDM vendors have done some of that work, but doing it at scale, covering all structured, unstructured, and semi-structured data across your on-prem, cloud, and SaaS environments, is difficult. And so our job at Bedrock is to crawl the environments, peer into all datasets and data stores, understand each piece of data, and construct this metadata lake that ultimately serves as a source of context to all downstream processes.
Christophe Bertrand
>> I have a quick follow-up on that. I certainly understand the on-prem and maybe controlled environments, whether private cloud; I get that. I'm less sure about the SaaS environments. They're all very different. They all have different APIs. Essentially, most SaaS applications are collections of big databases with a bunch of semi-structured data. Thinking of Salesforce, for example: anytime you try to get into Salesforce, you end up using API calls, which go against your quota. So you can end up with consumption issues there. How do you overcome that, especially with the multiplicity of SaaS applications that have all sorts of different access APIs, if you will?
Bruno Kurtic
>> Yeah. SaaS and cloud are to some degree similar and to some degree different, right? In the cloud, you operate on quota; you're charged for compute, you're charged for network, and all these things. In SaaS, it's different. Sometimes you're charged by user, by data volume, whatnot. Ultimately, the majority of datasets in SaaS live in a handful of core systems, and you mentioned some of them: it's Microsoft 365, it's Google Drive, it's Box, it's Salesforce, it's Confluence, and things like that. And so the customers we work with care about those systems where the majority of the data lives. It happens to be a handful of systems, and then there's a long tail. They all have different APIs, they all have different ways of charging, but ultimately one has to get a picture of those documents. Without understanding what's in each document, what is shared, what is not shared, what's sensitive, it's difficult to govern it. It's difficult to use DLP if you don't do that, all of these things. And so the long game for us is that we have to have integrations with every system. We have to understand every API where sensitive data lives, and we roll those out continuously. We know that that's the ground game: we have to integrate with all the systems where data lives.
Christophe Bertrand
>> And you can do that in Salesforce without affecting governor limits, for example. I mean, that's really my question here. That's the biggest pain in the neck.
Bruno Kurtic
>> And Salesforce is one thing; then you've got things like Microsoft 365, which has severe throttling on its APIs. Why? Because those calls consume compute. So we have to operate within those boundaries and limitations of those systems, in such a way that we don't impact customers' environments. And we don't.
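The throttling constraint Kurtic describes is typically handled with backoff on rate-limit responses. Below is a minimal sketch of a throttle-aware crawl loop, assuming an API that signals throttling with HTTP 429; the `fetch` callable and its `(status, body)` return shape are hypothetical stand-ins, not any vendor's real client.

```python
# Hypothetical sketch of throttle-aware crawling: respect a SaaS API's
# rate limit by backing off exponentially on HTTP 429, so that scanning
# never degrades the customer's environment. `fetch` is a stub interface.
import time

def crawl(pages, fetch, max_retries=5, base_delay=1.0):
    """Fetch each page, retrying with exponential backoff when throttled."""
    results = []
    for page in pages:
        delay = base_delay
        for _attempt in range(max_retries):
            status, body = fetch(page)
            if status == 429:          # throttled: wait, then retry
                time.sleep(delay)
                delay *= 2             # exponential backoff
                continue
            results.append(body)
            break
        else:
            raise RuntimeError(f"gave up on {page} after {max_retries} tries")
    return results
```

Real connectors would also honor `Retry-After` headers and per-tenant quotas, but the shape is the same: the crawler, not the customer's workload, absorbs the delay.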
Dave Vellante
>> Snowflake participated in your last round; they made an investment in the company. I have a couple of questions around that. What is your relationship with Snowflake? You saw Ben Horowitz and Ali Ghodsi were here this week making noise about a thing they called Lake Watch. If Snowflake does it, Databricks has to do it, and vice versa. So I'm curious what you think of that, how you partner with Snowflake, what kind of products you envision together, and how you differentiate from others in the market who are positioning as a next-gen SIEM. How do you position it? If you could help us understand all that.
Bruno Kurtic
>> Sure. Snowflake is an extremely important source of data for the enterprise. In many situations it is the lake of all important enterprise data. And Snowflake's strategy involves being the core AI system for the enterprise, because if you sit on all of that data, you might as well put it to good use. The relationship between us and Snowflake is that we already help Snowflake customers at scale manage the security of that data: understand what's in that data, classify it, trigger masking policies, all of those things. Snowflake invested in Bedrock Data because we are very complementary to their customer base. Snowflake's Cortex is their core AI platform, and in order for their customers to safely roll out AI applications on top of Cortex, they need to understand what's in their data and they need to govern it. So the combination of Bedrock Data and Snowflake is that we provide visibility into both the data and the agents, and allow Snowflake customers to build guardrails to safely roll out AI agents on top of Snowflake. We also inform the Snowflake Horizon catalog about datasets outside of Snowflake, to give Snowflake visibility into datasets that don't yet live in Snowflake and sort of live in other places in the enterprise.
Dave Vellante
>> Okay. So you're tightly integrated with Horizon. What about Polaris? What are your thoughts on the open source activity that's going on in the world of catalogs? Everybody seems to want it, but they don't really know how it's going to be governed and managed. It's sort of the wild west in the quest for open, which is really not an outcome in itself. But anyway, I'd love your thoughts on that.
Bruno Kurtic
>> It's a deep question; we could spend a lot of time on that. I would say that our perspective on catalogs in general, and on AI and data, is that we currently coexist with a lot of catalogs in our customer base, whether they're open source catalogs or proprietary catalogs. And our job in those enterprises is to inform those catalogs about data context they don't have autonomously, right? Find what they don't know, find what they don't see, understand the data at a deep level, and then feed deeper knowledge into those catalogs so that the enterprises that rely on them can use them, whether those are open source technologies or closed source technologies like Unity Catalog or others.
Dave Vellante
>> So just to follow up, because it seems to me the market is pressuring this. I mean, the beauty of Snowflake is that when you're inside of Snowflake, everything works. It's managed, it's safe. You go outside of Snowflake and, okay, you're kind of on your own. Now, I've talked to Benoit Dageville a lot about this: essentially they're bringing much of the capability of Horizon to their open source offering. But it seems to me that partners like you are critical in that regard, to be able to maintain that richness both inside and outside. It seems inevitable that that capability is going to migrate to open source, and you've effectively just said you have to work with all of them.
Bruno Kurtic
>> Whether it's open source or closed source, you have the same problem. You can deploy open source technology in house, inside or outside Snowflake. Ultimately, data changes all the time, rapidly and at scale. Developers copy production datasets into lower environments to do development. All of this stuff flows. There are ETL systems everywhere; agents and MCP servers surface data everywhere. No matter what system you use, you have to understand what's on the ground.
Dave Vellante
>> The other part of my question was trying to understand how you differentiate from what I think others are calling a next-gen SIEM. I mean, that's what Databricks is sort of looking at, and I think Elastic is looking at something similar. How do you position relative to some of those?
Bruno Kurtic
>> Great question. We're actually not a SIEM. What we do is for every SIEM, including the SIEM that I built at Sumo Logic, right? We had a SIEM, we had a SOAR, we had all those technologies. One of the key problems for all of those technologies, including the new Databricks offering and other SIEM tools out there, is that they don't have the data risk context. The one big problem with any SIEM is that it fires a lot of alerts, and a statistic that I know from my previous company was that only 15% of alerts can be addressed by any SOC team. So our job in the context of a SIEM is to feed the SIEM the context of: hey, this particular identity you're detecting generating some alert has a high blast radius, with access to sensitive data, so that alert gets prioritized. This piece of infrastructure where something malicious is happening houses sensitive data, versus one that does not. So our job is to deliver that context that nobody else has, so that every downstream tool does a better job.
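The enrichment Kurtic describes, ranking SIEM alerts by the data risk behind each identity, can be sketched as a simple sort over context the SIEM itself lacks. All field names here are hypothetical illustrations of the idea, not Bedrock Data's schema or any SIEM's API.

```python
# Hypothetical sketch: enrich SIEM alerts with data-risk context so a SOC
# team, able to address only a fraction of alerts, triages the ones tied
# to sensitive data and large blast radii first.
def prioritize(alerts, identity_context):
    """Sort alerts so identities with sensitive access and large blast
    radii come first.

    identity_context maps identity -> {"sensitive": bool,
    "blast_radius": number of sensitive datasets reachable}.
    """
    def score(alert):
        ctx = identity_context.get(alert["identity"], {})
        return (ctx.get("sensitive", False), ctx.get("blast_radius", 0))
    return sorted(alerts, key=score, reverse=True)

alerts = [
    {"id": 1, "identity": "svc-backup"},
    {"id": 2, "identity": "svc-payments"},
]
context = {
    "svc-backup": {"sensitive": False, "blast_radius": 2},
    "svc-payments": {"sensitive": True, "blast_radius": 40},
}
print([a["id"] for a in prioritize(alerts, context)])  # [2, 1]
```

The design choice worth noting is that the SIEM's detection logic is untouched; only the ordering changes, driven by metadata supplied from outside.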
Dave Vellante
>> Thank you.
Christophe Bertrand
>> So do you think that by leveraging your technology in combination with a data lake, or just data in general, you can make agents compliant, meaning they cannot access data that they shouldn't? If there's PII, they can't touch it, for example. Can you enforce rules like that?
Bruno Kurtic
>> Yeah. So the metadata lake that we build has multiple properties relevant here. One is we actually unpack the full entitlement chain, so we know which identities, through which paths, have access to which datasets. We also know which datasets are sensitive. And agents perform work on behalf of users: an agent can have an uber identity, or it can take on the identity of the user it operates on behalf of. We have actually released our own MCP server for this very purpose. We want agents that are accessing data stores to be able to query, through our MCP server, and understand what they are allowed to use on behalf of the users and identities they operate under. So we are actually interested in proposing an industry way of governing agents in an autonomous fashion, because we have the context of what they should be able to do, right? The industry is not ready for that quite yet, right? But we are constructing a system that we believe will be an autonomous agent governance mechanism.
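The "full entitlement chain" idea, an agent inheriting the entitlements of the user it acts for, reduces to reachability in a grant graph. The sketch below is a hypothetical illustration; the grant data, function names, and two-level user-to-group-to-dataset model are assumptions for clarity, not how Bedrock Data's MCP server actually answers such queries.

```python
# Hypothetical sketch of an entitlement-chain check: an agent acting on
# behalf of a user may touch a dataset only if some path of grants
# connects that user to the dataset.
GRANTS = {
    "alice": {"finance-group"},              # user  -> groups
    "finance-group": {"ledger", "payroll"},  # group -> datasets
}

def entitled(principal, dataset, grants, seen=None):
    """Walk the grant graph from principal; True if dataset is reachable."""
    seen = seen if seen is not None else set()
    for target in grants.get(principal, ()):
        if target == dataset:
            return True
        if target not in seen:
            seen.add(target)                 # avoid cycles in the graph
            if entitled(target, dataset, grants, seen):
                return True
    return False

def agent_may_access(agent_user, dataset):
    """The agent inherits the entitlements of the user it acts for."""
    return entitled(agent_user, dataset, GRANTS)

print(agent_may_access("alice", "payroll"))  # True
print(agent_may_access("bob", "payroll"))    # False
```

An MCP-style service would expose a check like `agent_may_access` as a tool the agent calls before touching a data store, which is the "ask me before you do" pattern named in the next exchange.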
Christophe Bertrand
>> So literally a data cop?
Bruno Kurtic
>> Yeah, that's right. Ask me before you do.
Dave Vellante
>> You have this concept of an AI data bill of materials. What is an AI data BOM?
Bruno Kurtic
>> DBOM riffs off of SBOM, right? DBOM is essentially a deep inspection of a payload that AI systems can access. It could be your M365 SharePoint. It could be a RAG repository you built for yourself. But the job of DBOM, which is a purpose-built capability for AI on top of our data platform, is to tell the engineers and architects of that system what exactly agents can access inside of these payloads. And it doesn't just sit at the level of atomic classification. It's not just PII, PHI, and things like that. It also has a full taxonomy, meaning a business-context taxonomy: this is financial data, these are offer letters, these are financial transactions. It can also tell the system whether there are things inside the data that could bias the models, things like socioeconomic status, so that when you're building your AI systems, you really understand what risks you will incur when they go live.
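Following the description above, a DBOM entry would carry three layers per payload: atomic classifications, business taxonomy, and bias-relevant attributes. The record below is a hypothetical sketch of that shape; every field name, path, and value is illustrative, not Bedrock Data's actual DBOM schema.

```python
# Hypothetical sketch of what a DBOM entry for one AI payload might
# carry: atomic classifications, a business-context taxonomy, attributes
# that could bias a model, and lineage. Illustrative only.
dbom_entry = {
    "payload": "s3://corp-rag/offers/",           # what agents can access
    "classifications": ["PII", "financial"],      # atomic data classes found
    "taxonomy": ["HR", "offer letters"],          # business-context labels
    "bias_attributes": ["socioeconomic status"],  # model-bias risks
    "lineage": "exported from HRIS 2026-03-01",   # how the data got there
}

def summarize(entry):
    """One-line risk summary an architect could scan before go-live."""
    return (f"{entry['payload']}: {len(entry['classifications'])} sensitive "
            f"class(es), {len(entry['bias_attributes'])} bias attribute(s)")

print(summarize(dbom_entry))
```

The SBOM analogy holds in the usage, too: just as an SBOM is reviewed before shipping software, a record like this would be reviewed before an agent goes live against the payload.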
Dave Vellante
>> Okay. So I guess my last question is: as an enterprise, you've got to know what your agents have access to, how they're getting there, and what the entitlements are. Ultimately, these things are going to determine what your business risk is. Is it fair to say that if you're not governing AI this way, you're just kind of hoping that nothing breaks, and-
Bruno Kurtic
>> Hope's not a strategy.
Dave Vellante
>> Hope's not a strategy. And so what's your advice to enterprises to really sort of firm that up?
Bruno Kurtic
>> We just actually released a pretty strong point of view on this. We released an update to our ARGUS AI, which is our agentic technology on top of our data platform. And as part of that release, we defined what we think the AI risk surface looks like. The AI risk surface really is a triad: it's data sensitivity, entitlements and exposure, and agent capabilities. Those three things construct a formula that defines your level of AI risk surface. And each one of those points in the triangle can be managed to reduce that risk surface. So currently, my belief is that a lot of enterprises are building and rolling out AI on a shaky foundation, and ultimately I think everybody needs to take a step back and put some brakes in the system so they can go faster. I do think that ultimately slowing down until you get the foundation right will allow you to move faster in the future.
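The triad can be made concrete with a toy scoring function. The multiplicative form below is purely our assumption for illustration (it captures the stated property that shrinking any one leg shrinks the whole surface); Kurtic does not specify the actual formula, and these scores are hypothetical.

```python
# Hypothetical sketch of the risk-surface triad as a formula: data
# sensitivity, entitlements/exposure, and agent capabilities, each
# normalized to [0, 1] and multiplied, so reducing any one factor
# reduces the overall risk surface. The multiplicative form is an
# assumption, not a published Bedrock Data formula.
def risk_surface(sensitivity, exposure, capability):
    for v in (sensitivity, exposure, capability):
        if not 0.0 <= v <= 1.0:
            raise ValueError("each factor must be in [0, 1]")
    return sensitivity * exposure * capability

# A low-capability (read-only) agent over broadly exposed sensitive data:
print(round(risk_surface(0.9, 0.8, 0.2), 3))  # 0.144
```

The useful property of any such formula is the one Kurtic names: each point of the triangle is an independent lever, so tightening entitlements alone lowers risk even if the data stays sensitive and the agent stays capable.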
Dave Vellante
>> Okay. I lied, that was not my last question. So you just did your Series A. Customer traction: how would you describe that?
Bruno Kurtic
>> It's going great. Our customer base is growing. We just finished our first full year of selling. We've been out of stealth for almost two years; I think we came out at RSA two years ago. And we are adding customers in large enterprises. We just closed a Fortune 150 enterprise on Monday. Financial services, health tech, technology companies, large-scale multinationals: that's what our customer base looks like.
Dave Vellante
>> So you've got your product market fit. You've been probably living off of product led growth. Are you scaling go to market now?
Bruno Kurtic
>> We are. We have a full framework for our go-to-market team now: a head of sales, head of marketing, head of sales ops, head of business development, and a head of customer success. So we have a full organization scaling. We have lots of sellers peppered around North America. So yeah, we're growing. And if people are interested in joining us, reach out to me.
Dave Vellante
>> That's great. What are you looking for? I mean, what does your ideal go-to-market pro look like?
Bruno Kurtic
>> We're looking for people who are excited to basically solve a problem for the enterprise that is very difficult, upon which rests the next decade of enterprise re-platforming into AI. And I think that's super exciting and I think there's a lot of demand for that today.
Dave Vellante
>> Excellent. Bruno, thanks so much for coming back on theCUBE. We'd love to have you back to track your progress. Good luck.
Bruno Kurtic
>> Appreciate it. Thank you very much.
Dave Vellante
>> You bet. All right. And thank you for watching. This is Dave Vellante for Christophe Bertrand. We're mid-morning here on day four of RSAC 2026. We'll be right back; you're watching theCUBE.