In this AI and Data Trust CUBE Conversations segment, theCUBE’s Rob Strechay sits down with Jay Limburn, chief product officer of Ataccama, to unpack the company’s just-launched Agentic Data Trust platform and why data trust is now the prerequisite for enterprise AI at scale. Limburn explains how Ataccama unifies catalog, data quality, lineage, pipeline observability and reference data into a single platform – and layers in a true agent (beyond a basic co-pilot) that plans and executes multi-step work from a simple prompt. The discussion walks through how the agent profiles and classifies tables, documents columns, applies quality checks, flags anomalies, corrects discrepancies and generates business-ready reports – while keeping human-in-the-loop oversight and governance guardrails front and center for regulated industries.
The conversation further explores how Ataccama positions data trust as the foundation for sustainable AI programs, including an MCP integration that lets enterprise agents (e.g., Claude, GPT) tap a “trust layer” so they operate only on explainable, high-quality data. Use cases span data-hungry sectors such as financial services, insurance, manufacturing and pharma, with examples like autonomous credit risk assessment, fraud claims detection and quality assurance. Echoing insights from Ataccama’s Data Trust Report, Limburn notes that organizations racing ahead with AI often do so on shaky governance foundations, making leadership buy-in, cross-functional alignment and culture change essential to avoid failure modes and realize autonomous data operations.
>> Hello and welcome to this CUBE Conversation. I'm Rob Strechay, managing director and principal analyst with theCUBE Research. Today we're diving into one of the most critical shifts in enterprise data: the rise of agentic systems. Ataccama just launched its Agentic Data Trust platform, which is designed to deliver trusted AI-ready data at scale, autonomously. We will unpack how it works, why it matters now, and what it means for the future of autonomous data operations. To help me unpack this, I'd like to welcome Jay Limburn, who's the chief product officer for Ataccama. Hey, welcome in, Jay.
Jay Limburn
>> Hey, Rob. Good to see you. How are you?
Rob Strechay
>> I'm good, I'm good. I think this is so exciting to me because I think data is, it's basically what makes AI go. I mean, it is like the most important, and there's nothing bigger than trust when it comes to data. So let's start with the big news here. What is the Agentic Data Trust platform, and how does it change the game for delivering trusted data to AI?
Jay Limburn
>> Yeah, I mean, you kind of said it, right? To do AI in this big push towards AI, it really starts with the data. And so with our agentic data trust platform, we've built an integrated platform that brings together all of the different parts that you need to provide trust to your data, so things like knowing where your data is through a catalog, the data quality to make sure you've got the most accurate, most understood data available to you, understand where it's come from, maybe with the lineage, understand how your pipelines are operating by being able to observe those pipelines, understand all that kind of maps into your reference data. So bringing together all these different kind of data management concepts into a unified platform with a purpose then to deliver trust into those AI initiatives, making those things kind of string together really, really well. And then on top of that, this is a complex space, right? Many people in this space have been doing this stuff a lot of years. It takes a long time to learn how to use the different technologies in this space. We've really tried to take advantage of this shift towards the use of agentic technologies. And so not only is it an integrated data trust platform using our own AI agentic capabilities, we've then built an AI agent that simplifies and automates a huge amount of the work across that platform, ultimately to make it simpler to use, to deliver more value more quickly, and then allow you to focus on delivering those higher value outcomes from those AI initiatives with that trusted data that you're now able to use.
Rob Strechay
>> Yeah. You hit the nail on the head of what I was going to ask next, which was the fact that agentic is a loaded term, I mean, to put it mildly. In your view, what makes Ataccama's approach truly agentic? And I say this in the kindest things to people out there, but it goes beyond just a smarter co-pilot, because that's to me not agentic. So why don't you kind of unravel that for the folks?
Jay Limburn
>> Yeah, I mean, there's so much confusion in the market on this one, right? In terms of co-pilots, agentics. Suddenly everything's agentic. Everything's an agent, just because effectively it's got an interface over an LLM. In reality though, those differences between the co-pilots and the agents are pretty dramatic when it comes to the level of automation and autonomy that can be offered. There's a lot of companies out there that are claiming that things are agentic, when in reality, they are just co-pilots. Co-pilots are great, right? They're a great kind of introduction to AI. They are essential to help humans perform a task faster. There's been huge amounts of implementations, really cool implementations of co-pilots that help to automate some of the behaviors, but it still requires a lot of human interaction. It requires you to instruct the agent or the co-pilot how to perform. Agents are a step further beyond the entry into co-pilots. Agents are much more autonomous. They're able to break down a complex problem and then figure out autonomously the best steps to take to solve that problem and execute those steps on behalf of users. Co-pilots are great, but they still require you to instruct them and tell them how to solve the problem. What we've built in Ataccama is a true agentic platform whereby you can give it abstract commands, you can ask it to go and solve a complex problem, and it will then go through that thinking process, document and explain that thought process, and then deliver that task for you, which is really helpful when you're trying to deal with the complexities of dealing with data. It's hard, and the true agentic path really helps you to solve that.
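The plan-then-execute distinction Limburn draws can be sketched in a few lines. Everything below is illustrative, not Ataccama code: a co-pilot runs exactly the step it is told, while an agent decomposes an abstract goal into a plan, executes each step itself, and records its reasoning trace.

```python
# Illustrative sketch of "co-pilot vs. agent" -- all function and key
# names here are hypothetical, invented for this example.

def copilot_step(instruction: str, data: dict) -> dict:
    """A co-pilot executes exactly the step a human spells out."""
    if instruction == "count_rows":
        return {"rows": len(data["records"])}
    raise ValueError(f"needs an explicit instruction, got: {instruction}")

def plan(goal: str) -> list[str]:
    """A stub planner: break an abstract goal into concrete steps."""
    if goal == "assess this table":
        return ["profile", "classify", "check_quality", "report"]
    return []

def agent_run(goal: str, data: dict) -> dict:
    """An agent plans, executes every step itself, and logs its reasoning."""
    trace, results = [], {}
    for step in plan(goal):
        trace.append(f"executing: {step}")
        if step == "profile":
            results["rows"] = len(data["records"])
        elif step == "classify":
            results["columns"] = sorted(data["records"][0].keys())
        elif step == "check_quality":
            results["nulls"] = sum(
                1 for r in data["records"] for v in r.values() if v is None
            )
        elif step == "report":
            results["summary"] = (
                f"{results['rows']} rows, {len(results['columns'])} columns, "
                f"{results['nulls']} null values"
            )
    return {"results": results, "trace": trace}

table = {"records": [{"id": 1, "dob": "1990-01-01"}, {"id": 2, "dob": None}]}
out = agent_run("assess this table", table)
print(out["results"]["summary"])  # 2 rows, 2 columns, 1 null values
```

The co-pilot needs one instruction per action; the agent takes a single abstract goal and does the decomposition itself, which is the difference the conversation is pointing at.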
Rob Strechay
>> Yeah, I couldn't agree more. I think having started life as a DBA and got a lot of gray hair now, I can tell you that data has not gotten any simpler, but in fact, it's gotten a lot harder. And I really love how you're positioning your agent as a really autonomous co-worker. It's not really replacing the human, but it's really that co-worker assistant that can really help. And let's break that down a little bit further. How exactly does it detect, fix, and continuously improve the data quality? And what are you seeing as the impact, or what customers are seeing as the impact so far?
Jay Limburn
>> Yeah, I mean, like I said, it allows you to be a little bit more abstract with what it is you need to get done, the tasks that you need to deliver from your data, and then it will figure out that information, form that plan and execute those steps. But let's say we're looking at a large table, complex with multiple columns, and it's using basically the technical metadata for it. So you are just seeing the kind of abstract column names of that information. That doesn't mean anything to most people outside of a DBA, right? But if we're looking at that data, we can actually use our agent and we can say, "Hey, describe this table to me, improve its quality, and generate me a report out of that data." That's all I need to do. I don't care that I'm just looking at technical metadata of this table. And so at that point, the agent will start to work its magic and it will figure out how to profile and classify what's in that data, so it will know what it is inside that data. The next part of the plan, it might say, "Right, based on the content in this, I'm going to document some of those columns so that we can visualize that in a business-friendly way." Maybe it will pull in information from the corporate glossary of terms if they exist. And then having figured out what the data is automatically, it'll then start to apply quality checks against that data and try and figure out, wow, these look like dates of birth, therefore we know what a valid date of birth should look like, so let's apply some rules against it. Where are the anomalies that exist inside that data? Can I start to correct some of those discrepancies and improve that? And where are the flags that I need to raise in this data where things don't look right, but I'm not sure? Maybe I need to raise them up.
At the end of that process, the agent would've done all the hard work to take what was a technical description of a table of information, and then give me a well-curated, clean set of quality data that tells me where the quality dimensions are for that data and whether I can trust that data, and then generate a report or give me something that I can share across my business or put into a dashboard. All of that based on effectively one prompt, which was describe this data to me and give me a report, and it's done all that automatically. And I think that's where the power of these true agents really comes in, because it truly does automate how you can use that data and automate the tasks associated with it to get you to a higher value outcome, so you can focus on delivering value at the other end of it.
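As a concrete sketch of the single-prompt workflow just described, here is a minimal, hypothetical Python version: profile a column, classify it as a date of birth, apply a validity rule, flag anomalies, and emit a small report. The regex pattern, classification threshold, and column name are illustrative assumptions, not Ataccama's actual checks.

```python
# Hypothetical profile -> classify -> quality-check -> report pipeline.
import re
from datetime import date

DOB_PATTERN = re.compile(r"^\d{4}-\d{2}-\d{2}$")

def classify_column(values):
    """Profile the values: if most look like ISO dates, call it a date of birth."""
    hits = sum(1 for v in values if v and DOB_PATTERN.match(v))
    return "date_of_birth" if hits / len(values) >= 0.5 else "unknown"

def check_dob(value):
    """Quality rule: a plausible date of birth is a real ISO-8601 date in the past."""
    if value is None or not DOB_PATTERN.match(value):
        return "invalid_format"
    y, m, d = map(int, value.split("-"))
    try:
        parsed = date(y, m, d)
    except ValueError:          # e.g. February 30th
        return "invalid_date"
    return "ok" if parsed < date.today() else "in_future"

def assess(column_name, values):
    """The 'one prompt' outcome: classification, validity rate, flagged rows."""
    flags = [(i, check_dob(v)) for i, v in enumerate(values)]
    bad = [(i, f) for i, f in flags if f != "ok"]
    return {
        "column": column_name,
        "classified_as": classify_column(values),
        "valid_pct": round(100 * (len(values) - len(bad)) / len(values)),
        "anomalies": bad,
    }

report = assess("col_17", ["1990-01-01", "2099-12-31", "1985-02-30", None])
print(report)
```

In a real platform the classification would come from learned profiling rather than one regex, but the shape is the same: infer what the column is, then apply the rules that follow from that inference and flag what doesn't fit.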
Rob Strechay
>> Yeah. I like how you're describing it, because you're really describing it as a self-improving loop of data trust, which is so important. Can you explain how that loop works in the real world?
Jay Limburn
>> Yeah, look, a good thing about AI is it learns and improves, right? It's always getting smarter. We've seen this with machine learning for years. There's nothing new there. But the difference here is, rather than just a small piece of machine learning that does something to automatically classify data and improves over time, this is about chaining together multiple steps to deliver really, really complex tasks at the end, and then they improve over time. I think one of the important pieces of this, though, is that everyone's scared about AI and AI being too autonomous. It's really important that you still make sure that you have that oversight of the data, that human-in-the-loop piece as well. The purpose of agents is to automate, but we at Ataccama believe it's still very much important that you maintain that control around it, especially in some of the regulated industries that we work with. Many of our customers are in highly regulated industries, and so you still have to make sure there are guardrails in place around how you are applying agents against data so that you can ultimately maximize the value of the automation, but still make sure it operates in those guardrails. So that life cycle is great. It's about automating, it's about improving the use of AI, but you've still got to make sure you've got those guardrails in place from a governance and regulatory perspective.
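The guardrail idea described here can be illustrated with a minimal, hypothetical approval gate: low-risk fixes are applied autonomously, while actions classed as high-risk are queued for a human reviewer. The risk categories and action names are assumptions for illustration, not Ataccama's policy model.

```python
# Hypothetical human-in-the-loop gate over agent-proposed actions.
HIGH_RISK_ACTIONS = {"delete_rows", "overwrite_column"}

def execute_with_guardrails(proposed_actions):
    """Split an agent's proposed actions into auto-applied and human-review queues."""
    applied, pending_review = [], []
    for action in proposed_actions:
        if action["type"] in HIGH_RISK_ACTIONS:
            pending_review.append(action)   # escalate to a human reviewer
        else:
            applied.append(action)          # safe to automate
    return applied, pending_review

actions = [
    {"type": "standardize_format", "column": "dob"},
    {"type": "delete_rows", "reason": "suspected duplicates"},
]
applied, pending = execute_with_guardrails(actions)
print(len(applied), len(pending))  # 1 1
```

The point is the split itself: automation handles the bulk of the work, while anything destructive or regulated stays behind an explicit human approval, which is how the oversight Limburn describes stays in the loop.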
Rob Strechay
>> Well, you brought it up, so let's dive in a little bit and understand what industries are really benefiting most now, and what use cases are you seeing the fastest ROI and why?
Jay Limburn
>> Yeah, I mean, the good thing about being in the data business is it kind of transcends different industries. Any industry that works with data, which pretty much means it's uniform across all industries because everyone is working with data, right? But it is true of the data-hungry industries, financial services, insurance, manufacturing, pharmaceuticals, yeah, addressing use cases like autonomous credit risk assessments, doing more complex detection of fraudulent claims, maybe in the insurance space, quality assurance. We've kind of found that it's biggest in those data-hungry industries, and in those key use cases where we're starting to see a lot more of that kind of agentic capability relying on trusted outcomes in data, it's really starting to take off. It takes time just to get started and take the steps to move forward with that, but once they've defined what level of quality they need to implement inside those use cases, it really does seem to take off and help those organizations move forward with really delivering AI.
Rob Strechay
>> Yeah, I couldn't agree more. I think, again, when you look at the data-hungry industries, there are so many, again, like you guys go horizontally across different industries, and I think because everybody's got data, and every industry is actually becoming more and more regulated, especially as AI becomes regulated around the world. But you also introduced another way that kind of helps simplify some of the interactions with the system, which is the MCP server to connect the governed data to models like Claude and GPT. How does this ensure those AI systems only operate on trusted, explainable data?
Jay Limburn
>> Yeah, we are really excited about this one, because what this enables is it takes data quality out of the data teams and delivers the value directly to the business. So fast-forward a few years, we believe there's going to be this kind of battle that takes place for the consumer, the modern operating system, and it's going to be fueled by those enterprise AI agents, Snowflake, Microsoft, Google. You've mentioned some of them. So there's going to be this big battle, but the outcomes from those agents are going to rely on data. No one's going to use those agents if they're hallucinating, if they're delivering results that are inaccurate, if they're not trusted. And so with our MCP integration, which is an agent-to-agent protocol, it actually allows those enterprise agents now to plug into our trust layer. And so that means that when the agents are asking questions of their data, with our trust layer there, we're able to expose our trust brain to ensure that the data they're operating against is the most accurate, most trusted data, the best data to answer those questions and respond to the interactions from those end users. And like I said, it allows us then to take data quality from the data domain and actually make it a driver and an outcome of the effectiveness of the enterprise AI initiatives, which is obviously hugely exciting for us.
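As a hedged illustration of the trust-layer idea (not Ataccama's MCP implementation, and with made-up dataset names, scores, and function names), an enterprise agent might consult a trust-check tool before answering from a dataset and refuse when the quality score falls below a threshold:

```python
# Hypothetical trust-layer gate an MCP server could expose to enterprise agents.
TRUST_REGISTRY = {  # invented scores a trust layer might maintain per dataset
    "customers_curated": {"quality_score": 0.97, "lineage_known": True},
    "legacy_dump":       {"quality_score": 0.41, "lineage_known": False},
}

def trust_check(dataset: str, min_score: float = 0.9) -> dict:
    """The tool call: report whether a dataset is trusted enough to answer from."""
    meta = TRUST_REGISTRY.get(dataset, {"quality_score": 0.0, "lineage_known": False})
    trusted = meta["quality_score"] >= min_score and meta["lineage_known"]
    return {"dataset": dataset, "trusted": trusted, **meta}

def agent_answer(question: str, dataset: str) -> str:
    """Enterprise agent: only answer from data the trust layer vouches for."""
    verdict = trust_check(dataset)
    if not verdict["trusted"]:
        return (f"Refusing to answer from '{dataset}': "
                f"quality_score={verdict['quality_score']}")
    return f"Answering '{question}' from trusted dataset '{dataset}'"

print(agent_answer("top customers by revenue?", "customers_curated"))
print(agent_answer("top customers by revenue?", "legacy_dump"))
```

In a real MCP setup the check would be a tool the server advertises and the agent invokes over the protocol; the sketch just shows the gating logic that makes the agent's answers explainable, since every answer either comes from a vouched-for dataset or carries the reason for refusal.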
Rob Strechay
>> Yeah, I agree. I think that to me is huge, when you can use trusted data and you make those connections. MCP is definitely becoming the lingua franca for all of these different agents and models to talk to each other and really be self-describing in that way and self-enabling. I love it. We talk about it a lot, but as we keep continuing down and take a step back, how does this shift really reshape the future of data operations, and what's next as we head into the world of fully autonomous systems going down that path?
Jay Limburn
>> Yeah, I mean, I often say this: what a great time to be in data, right? For the data professionals that we all are, it's incredible, and there's so much amazing innovation happening around us. And the thing that I'm really excited about is that all of this exciting stuff around AI quality and the trust in the data is fundamental to it. And so all data, whether it's structured data or unstructured data, we are right at the center now of those AI-ready data architectures. And increasingly, it's not about companies asking the question of what data do we have? It's now more about what data do we trust? And so being able to be at the heart of that is obviously really exciting for me. And we're increasingly getting to a world where agents are figuring out these complex problems and taking actions in areas like I mentioned: fraudulent transactions, monitoring critical infrastructure, improving supply chain resilience. These all need to operate on trusted data, and it's the companies that can trust the data in the right way that are going to be able to really roll out AI initiatives across those use cases. So it really is about the ones that are able to be successful there, and that requires them to make sure that they actually understand and can trust that data. They're the ones that are going to win, and the ones that don't take that approach, I think, are going to struggle to be successful.
Rob Strechay
>> We violently agree on this. I can tell you, of course, if you're not taking advantage of your data, that's a real problem, and the fact that it needs to be trusted is key. The lineage, everything, it's such a complex thing. And the toil for these data engineers, I love this. I love you bringing, again, true agentic to data trust with this platform. I think it's really key, and I'm excited for this and excited to see how this keeps going. So thank you for coming on board, Jay, and thanks for introducing it.
Jay Limburn
>> Awesome. Thanks for having me, Rob.
Rob Strechay
>> And thank you for joining us on this CUBE Conversation on theCUBE, the leader in analysis and news.