In this keynote analysis from AWS re:Invent 2025, theCUBE’s John Furrier joins analysts Paul Nashawaty, Zeus Kerravala and Sarbjeet Johal to unpack how Amazon is redefining cloud infrastructure through the lens of agentic AI. The panel breaks down Matt Garman’s declaration that "agents are the new cloud," exploring key announcements surrounding the Nova model family, AgentCore and Amazon Bedrock. The discussion highlights AWS’ strategic pivot from merely abstracting infrastructure complexity to abstracting work itself, effectively bridging the gap between professional coders and "citizen developers" while unifying the experience for builders at every level.
The conversation digs deeper into the practical realities of enterprise AI adoption, emphasizing the critical role of security, governance and compliance in moving from proof-of-concept to production. Kerravala, Johal and Nashawaty analyze AWS’ vertically integrated approach – spanning from custom silicon like Trainium and Inferentia to the application layer – and how this full-stack strategy allows customers to train models on proprietary data with improved price-performance. The group also debates the evolving competitive landscape, noting how AWS is equipping organizations to build autonomous, long-running agents that function as teammates rather than just tools.
Christine Yen, Honeycomb.io
Addressing Modern Software Complexity: The Urgent Need for Enhanced Observability in System Monitoring and Real-Time Issue Resolution
The term "observability" reflects the truth about software behavior, rather than specific technologies.
AI adoption is correlated with increased software release instability, necessitating stronger observability tools.
Honeycomb has launched a private cloud and is embracing the OpenTelemetry standard for metrics.
The Honeycomb Canvas tool simplifies user interactions with observability tools through natural language queries.
Organizations often misinterpret observability as merely logging and metrics rather than comprehensive understanding.
In this interview during theCUBE's coverage of AWS re:Invent, Christine Yen, chief executive officer of Honeycomb.io, sits down with theCUBE’s Dave Vellante to discuss why observability is emerging as the critical "trust fabric" for AI-driven software development. Yen explains that as AI coding assistants accelerate development velocity and increase the volume of code in production, the potential for instability and "unknown unknowns" rises significantly. She argues that traditional monitoring, which often minimizes data collection to control costs, cannot keep up.
>> The complexity of modern software has outpaced the limits of traditional monitoring. Estimates indicate that engineers spend 50% of their incident time hunting for breadcrumbs rather than fixing problems. The old model simply can't keep up in an autonomous environment, and this is why observability has become increasingly important. Logs, metrics, traces: they shouldn't be stovepipe pillars; rather, they should be inputs to a real-time analysis system that helps teams understand why things behave the way they do. As AI-generated code and agents enter the tech stack, the ability to explain system behavior becomes the difference between predictability and utter chaos. Industry data suggests that around 70% of outages involve distributed dependencies that monitoring just can't keep up with or even surface. High-fidelity telemetry can cut mean time to resolution by 5x or even 10x, but most legacy tools are still trying to minimize data collection just to control costs. Teams using modern observability platforms tell us they see 40% faster debugging and significantly reduced alert volume. So, this is the environment where Honeycomb thrives. Unified traces, high-cardinality data, OTel-native workflows and sub-second speed to insight are the capabilities the company touts as its differentiation, especially important as complexity and AI converge. Welcome to our special AWS re:Invent Ecosystem program. Today, we're going to break down why observability is becoming a trust fabric for modern systems, what teams need to do to adopt it effectively, and how Honeycomb is positioning itself for the AI era. Let's get into it with Christine Yen, CEO of Honeycomb. Christine, welcome. Thanks for coming on theCUBE.
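The high-cardinality slicing mentioned in the intro can be sketched in a few lines. This is an illustrative, dependency-free example; the event fields and values are invented for the sketch. It shows why ad hoc grouping by an attribute like `user_id`, which pre-aggregated metrics can't answer after the fact, shortens debugging:

```python
from collections import Counter

# Toy event stream: in a real system these would be wide, structured
# telemetry events (traces/logs) with many high-cardinality attributes.
events = [
    {"user_id": "u1", "region": "us-east", "status": 200},
    {"user_id": "u2", "region": "us-east", "status": 500},
    {"user_id": "u2", "region": "us-east", "status": 500},
    {"user_id": "u3", "region": "eu-west", "status": 200},
    {"user_id": "u2", "region": "us-east", "status": 500},
]

def error_breakdown(events, attribute):
    """Count errors grouped by any attribute, even a high-cardinality
    one like user_id -- the kind of ad hoc slicing that pre-aggregated
    dashboards can't answer after the fact."""
    errors = Counter(e[attribute] for e in events if e["status"] >= 500)
    return errors.most_common()

print(error_breakdown(events, "user_id"))  # [('u2', 3)]
print(error_breakdown(events, "region"))   # [('us-east', 3)]
```

The same raw events answer both "which user?" and "which region?", which is the point of keeping high-fidelity data instead of pre-aggregating it away.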
Christine Yen
>> Awesome. Thank you so much for having me.
Dave Vellante
>> You bet. All right. So, you heard my upfront spiel, how come observability and trust are now so important and prominently back in the spotlight?
Christine Yen
>> I loved your spiel. I think the simplest and first answer here is that with the advent of AI assistants and a lot of the code-generation capabilities that have burst on the scene over the last couple of years, as teams adopt those technologies, they realize, to put it simply, that the more code they have, the more problems they tend to have. As much as we would like to believe that code that we write is bulletproof and right the first time, inevitably, there are going to be interactions, user inputs, things that the engineers or AIs didn't predict. And as more code is being pushed to production, as more code is being pushed live and users have to interact with it, the need for that code to be reliable and predictable and basically do what the engineers expected, that problem just gets bigger and more important, not any less with AI's involvement.
Dave Vellante
>> So, when you have increasingly AI writing code, taking agency, what does trust actually even mean today?
Christine Yen
>> I mean, I think that's a great question. And every tool out there claims to take some autonomy away from the human actors, which sounds very violent, but I think there is a shift in who is making the decisions and where you draw that line. The determinant of where that line can be drawn is the level of trust: what decisions are you going to make, what code are you going to put out there, what edge conditions are you considering? And when the conversation extends into topics like auto-investigation and auto-remediation, again, there's a question of trust around how much free rein I am going to give this agent. Do I trust that it has enough context on my system, on the parts or the challenges my software faces, to be able to make the right decisions for my users?
Dave Vellante
>> So, if I think about the dimensions of trust... Well, first of all, I have to have visibility on the entire end-to-end flow. I want to take accountability, but I'm not sure who's responsible and I want to have a reliable and predictable system. So, how are these dimensions evolving with AI when AI is writing and even executing code?
Christine Yen
>> Honestly, whether AI is writing the code or not, whether it's humans or not, there need to be feedback loops between what the code is doing in production and, frankly, what you think it is doing. Being able to validate a hypothesis, "I believe the code should be doing this, and it is instead doing that," is the basis of any investigation and the underpinning of reliable and predictable systems. When AI is executing code, or when that balance of autonomy has shifted and AI agents are doing even more autonomous things than just generating code that is run, it's important that a paper trail is left, so that humans can go in later and validate the decisions an AI made and the factors that go into future decisions. Observability plays a part in all of this, in building up that trust via feedback loops: I think this change is going to have this impact; it did have this impact. Okay, I'm building up trust that continuing on this loop will provide great results, or that if I change this parameter we'll get a different set of results.
Dave Vellante
>> So, the term observability seems to be standing the test of time. I remember when application performance management, APM, was the buzzword. So, how should we think about observability? Should we think of it as a control plane for agentic software? Is that overstating it? How should we think about observability?
Christine Yen
>> Whenever folks like to put very concrete and technical terms on something, like application performance monitoring, like logging analytics, honestly, I feel like that runs the risk of getting caught up in implementation details, rather than the goal someone is trying to achieve. And when I think about observability, I think it is simply the truth about what your software is doing. That's what people are trying to do when they're reaching for their tools. They're trying to figure out, "Why is my code returning this value instead of that value? Why is this user having this experience instead of that experience?" And ultimately, that's why, I think, observability, the term, has stood the test of time because all of the previous labels were reflections of a particular technology choice or implementation, instead of being focused on the goal at hand.
Dave Vellante
>> So, I like that, Christine; it's outcome-based, is essentially what you're saying. People might think, though, "Okay. Well, AI is going to simplify things. It's going to reduce the need for observability tools, per se." So, can you explain why AI-infused software development increases the need for observability rather than reducing it?
Christine Yen
>> Yeah. Well, first, you shouldn't just take my word for it. There are a number of really great industry reports. The DORA report for 2025 recently came out, and it has data linking increased adoption of AI coding assistants to decreased stability in the release process. So, for those teams, it's more code and decreased stability in the platform. On one level, it is as simple as: it is hard to write perfect code. It's hard to write perfect code for all use cases. And so, inevitably, whether it's humans writing it or AI writing it, there are going to be edge cases you didn't anticipate that will manifest as bugs. On a second level, even if there are not necessarily errors being pushed out, I'm sure there are engineering teams listening to this right now going, "Well, I write good code, so bugs aren't as much of a problem," and let's just say that's the case. Even then, AI assistants increase the velocity of code hitting production, code going live. And that increased velocity means a bigger blast radius when something lands, even if it was intentional. And so, I think that increased velocity or increased instability, in either case, means that you need to be more grounded in, "Okay, what just happened? Was it what I expected? If not, what do I do about it? How do I isolate it and how do I remediate it?"
Dave Vellante
>> So, what are the known unknowns when we're talking about AI generating big chunks of code and blind spots, if you will? And how should teams think about instrumenting for agentic?
Christine Yen
>> Yeah, I think that some of the known unknowns... this is a little bit abstract, but I like the analogy quite a bit. I wish I could attribute it directly; unfortunately, I don't remember the source. I heard recently someone say, "AI is like having an unlicensed 16-year-old behind the steering wheel, and observability is seatbelts and speed limits." You can still drive as fast as you want, but these are things meant to help keep you safe, to help you understand where the reasonable boundaries may be. And I think that when you're building either with AI, where you're leveraging coding assistants, or on top of AI, where you're leveraging LLM capabilities in your product, the nature of the work is very similar. You're writing software, you're trying to achieve some user outcome. The form factor with which we're doing it is changing. And so, you talked about agentic workflows. When I think about that, I think about long-running background jobs or chat sessions. With that, you have a lot of deep work happening in sequence, where it's almost like every unit of work is just that much deeper than before, because you're interacting with LLMs, you're interacting with the apparatus of an AI system. I think a lot of the practices we built up in the cloud era can evolve healthily into AI. In cloud, I really think of the last 10 years as being the rise of tracing, a way to complement logs and metrics as discrete data types to capture the more complex interactions that happen in distributed systems. When I think about a chat session or long-running background jobs, I think of a series of traces that are linked together. Again, we are just ratcheting up the complexity in our software systems and looking for tools and data types that can capture that complexity and reflect it back to us, so that we can figure out why something is misbehaving when it doesn't do what we expect.
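The "series of traces that are linked together" idea can be sketched without any vendor tooling. A real implementation would use OpenTelemetry span links; the `Trace` class and field names below are invented purely for illustration:

```python
import uuid
from dataclasses import dataclass, field

# Hypothetical minimal model of the idea described above: a long-running
# agent session is not one giant trace but a chain of traces, with each
# unit of work linked back to the one before it.

@dataclass
class Trace:
    name: str
    trace_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    links: list = field(default_factory=list)  # trace_ids of prior work

def record_session(steps):
    """Record each agent step as its own trace, linked to the previous
    step, so a human can later walk the whole paper trail."""
    traces = []
    prev = None
    for step in steps:
        t = Trace(name=step, links=[prev.trace_id] if prev else [])
        traces.append(t)
        prev = t
    return traces

session = record_session(["plan", "call_llm", "apply_patch", "verify"])
# Walking the links reconstructs the order of autonomous decisions.
for t in session:
    print(t.name, "->", t.links)
```

Following the links backward from the last trace recovers the full sequence of decisions, which is exactly the auditability that agentic workflows need.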
Dave Vellante
>> So, your analogy I think is a good one, and it reminds me... When I talk to really hardcore AI researchers, there's always two sides. There's the optimist and then there's some AI concern, and sometimes people put those in the camp of Luddites. But the folks that I'm referring to will point out that things like autonomous driving, full self-driving, they're not learning systems, at least not yet. And there's a reason why we don't license humans until they're 16 years old because there's a learning process. So, my question relates to the human in the loop. I know a lot of people concerned about job displacement, but my real question is where do humans fit? What do they do that these systems necessarily can't do when it comes to whether it's debugging, or architectural intuition, or just human judgment? Where do humans fit?
Christine Yen
>> First, I think the answer there is going to evolve over time, as we as an industry build up trust in these technologies we are starting to play with and as the technologies get better. There's a post on the Honeycomb blog that really tries to dig into this question, not to drive your viewers to a different medium. But one of the studies that blog post cited comes from the medical industry, where, similarly, folks have been trying for years to automate, or at least augment, what doctors do: diagnosis and the entire corpus of medical knowledge. And one of the things they found was that when doctors leaned on automation first and then audited the results, their diagnostic skills actually began to atrophy. Whereas when a doctor had a hypothesis and then backed it up with all the automation and tooling at their disposal, they were able to make better diagnoses without that atrophying effect. And it's a little bit of a stretch to tie medical diagnosis to triage of systems, but there are parallels. And I think there will always be an element of context and judgment and intuition that a human brings, because we, as an industry, still aren't perfect at making sure all the AIs have all the context. We're getting better at it. Every engineer is getting better at prompts, better at writing down assumptions and expectations before prompting whatever assistant to help write code, but there is always going to be dropped context. And I think humans are really good at holding all those pieces of knowledge in our brains and bringing them to bear.
Dave Vellante
>> And by the way, thank you for that. We're fine with driving people to your website. In fact, in prepping for this interview, I found some really good blogs; I think they were under Learn. And I think there was something in there that relates to my next question, which is the organizational one. Are teams thinking differently about how they should organize? In fact, I asked Marc Benioff about this. He wrote an article in the Wall Street Journal saying, "We're the last generation of managers who will be managing humans only." Well, what does that mean in your world? How should teams think differently about organizing around AI-assisted development workflows?
Christine Yen
>> I think this is a good question, and one where we certainly have a hypothesis, but the world is changing so quickly that who knows where we'll be in another five years? I think that with any automation there is an examination of the work that is actually valuable and inherent in the role, something that a human can bring. And how much of the work can be automated? How much of the work can be turned into a system, turned into a process and made repeatable? One could argue that, in many of our roles, we are seeking, AI aside, that repeatability and systematizing of our work. I do think I continue to be a human optimist. I think humans are going to continue bringing judgment and prioritization and curiosity that agents cannot. And so, as we all learn to bring agents into parts of our workflow and to identify the parts that can be extracted out, I don't know if I would agree with what Marc claimed about the overall composition of teams anytime soon.
Dave Vellante
>> I like to think of agents as really effective worker bees and that work-
Christine Yen
>> They're under a lot of guidance....
Dave Vellante
>> on my behalf. Yes, right? Well, let's get into some of the news. You got hard news around re:Invent. Take us through some of the latest announcements that you're excited about. What's new?
Christine Yen
>> Yeah, the big news is that Honeycomb is now available in a private cloud offering. When Honeycomb started, we knew this would be a path we could go down. And being a really small team of engineers, we knew we would rather solve the harder problem first, which is running a huge multi-tenant SaaS offering, and then figure out how to package it up for delivery in a private cloud version. That would ensure that what we delivered was battle-tested at scale, and that we really had to understand how to solve, again, the hardest problem first. So, that's really exciting. I think it opens up the power of the Honeycomb platform to a number of folks who have challenges adopting SaaS, either for governance and compliance reasons or simply those of scale. The second big announcement is that we are extending our platform to fully embrace the OpenTelemetry standard for metrics data. OpenTelemetry, for anyone who isn't familiar, is an industry standard that has really come into its own over the last four or five years, where a number of observability vendors have come together and agreed on a standard format, specs and conventions around telemetry data. Metrics is one of the newer data types to be formalized as part of the OpenTelemetry spec, and we are, I believe, one of the first vendors to fully embrace and be fully compatible with all of the pieces of supporting metrics in that platform. So, metrics are now a first-class citizen in Honeycomb, and we're really excited for folks to see what all of their telemetry together can look and feel like. And the last piece of our announcements is that our Honeycomb Canvas experience is now generally available. This is our copilot. Really, it's an incredible translation layer. If you think about observability being that truth about your software, when you're interacting with an observability tool, you're trying to get at that truth, and you have a question in your head like, "Why are errors increasing?" 
or "Why is latency concentrated in this part of the system?" or, "What is this user really experiencing?"
And the incredible part about Canvas is that it leverages all of the technologies we've become accustomed to in our consumer lives and applies them to an observability tool, where this class of tooling has historically felt very technical, maybe overwhelming, to someone new to a team. With Canvas, it is easy and natural to just phrase a question in English, or whatever language, honestly, and get back data answering your question.
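The "translation layer" described here can be illustrated with a deliberately crude sketch. Canvas itself uses an LLM to translate natural language into queries; this keyword-based stand-in, with invented query fields, only shows the shape of the input/output contract:

```python
# Illustrative only: map a natural-language question to a structured
# observability query. The field names (calculation, group_by, filters)
# are hypothetical, not Honeycomb's actual query schema.

def question_to_query(question):
    q = question.lower()
    query = {"calculation": "COUNT", "group_by": [], "filters": []}
    if "error" in q:
        query["filters"].append(("status", ">=", 500))
    if "latency" in q:
        query["calculation"] = "P99(duration_ms)"
    if "user" in q:
        query["group_by"].append("user_id")
    return query

print(question_to_query("Why are errors increasing?"))
# {'calculation': 'COUNT', 'group_by': [], 'filters': [('status', '>=', 500)]}
```

The value of the real product is that the translation handles arbitrary phrasing; the contract, question in, runnable query out, is the part this sketch captures.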
Dave Vellante
>> Okay, cool. So, Canvas just makes life a lot easier-
Christine Yen
>> A lot easier....
Dave Vellante
>> on the OpenTelemetry side? We have some data with our partner, ETR. They just did a recent survey on this, and roughly 80% of the sample said they are either evaluating or using OpenTelemetry. A small number said, "Extensively." A very large number, about 40%, said, "Yeah, partially." And then another huge chunk said, "No, not today, but we're evaluating it." So, again, almost 80% are in, in some way, shape or form. And I had a follow-up on the private cloud piece. We're seeing a lot of interest in going private, whether it's sovereign AI or bringing AI to the data. What are the details there? How are you implementing that private cloud for Honeycomb?
Christine Yen
>> Well, we actually have a couple of different deployment options. One is, I believe, self-hosted. The other is Honeycomb-managed, but in your cloud. So, this is really about folks who, again, want the SaaS experience, want the polish that comes when the software is being delivered to hundreds of customers at enormous scale, but need to ensure they have control of the data flow and can run it in their own environments.
Dave Vellante
>> Let me ask you a two-part question. Are there misconceptions that people should know about observability? And what should organizations and teams do to prepare for this AI onslaught and AI-operated systems?
Christine Yen
>> Boy, there are so many misconceptions. I'll try to pick one. Here's my top misconception. I think people tend to focus on specific types of telemetry. I can't count the number of times I have heard someone say, "I have observability. I have logs and metrics and traces." And I understand how we got here. For many people, observability is just the blind merger of logging, monitoring and APM tools of the past. And I think that, again, if you reframe observability to be understanding the truth of what's happening in your systems, it allows you and your team to focus on the questions that you're really trying to ask and get answered. And by using this mindset of questions, I have found that it allows people to separate themselves from the constraints that they believe certain telemetry types are held to. "Oh, well we can't ask this type of question, the system will break. We can't ask that type of question, it'll be too slow."
A lot of the technologies have changed. A lot of the underpinning ways we work with data have evolved over the last 10 or 20 years. And I think folks are selling themselves short when they treat observability as "what I'm used to." They can get a lot more value out of saying, "Well, I want to understand this customer experience," or, "I want to debug this type of situation," and phrasing their needs around that. As for the question you asked about what teams should do to prepare for AI-built and AI-operated systems: honestly, it is making sure they can close these feedback loops quickly and efficiently before introducing AI. Maybe they've already introduced AI and it's too late; that's fine. But really, think about this as feedback loops, as ways to make sure your mental model matches the reality of production, using the truth about what's happening in production. That is a capability your team feels confident in or does not feel confident in. It's not, "Do I have this type of telemetry or not?"
Dave Vellante
>> Yeah. Two great points. I would add: data is your friend. As I said upfront, a lot of organizations are trying to limit the amount of data. Data can actually become a profit driver, as opposed to a cost, in this whole equation. And to your point about being prepared, when you hear about the MIT study where 95% say they aren't getting value, when you dig into those cases, it's because they haven't got their data act in order, and observability is obviously a key part of that. Last question: re:Invent. Thoughts? What are you excited about? Thinking about past re:Invents, what should we look forward to this year?
Christine Yen
>> I mean, I am a huge re:Invent fan. I love the excitement and the crowds and the feeling that the landscape changes so quickly. The beautiful part about being in tech is that nothing stays quite the same and you always have to be on your toes. And so, I love being able to walk around and just see all the new capabilities and technologies that are available for us to play with. For Honeycomb in particular, really being able to bring these launches to the industry, being able to entertain conversations through Honeycomb Private Cloud that maybe had stalled in the past, to really show what observability can look like in this new era, that's what I'm looking forward to.
Dave Vellante
>> Awesome. Well, thanks for spending some time with us. Definitely go to Honeycomb.io, hit Learn. There's a ton of stuff in there, case studies, blogs, some really cool, really useful information. So, Christine, thanks again. Really appreciate your time.
Christine Yen
>> Thanks so much. I loved being here.
Dave Vellante
>> And thank you for watching our re:Invent Ecosystem coverage and preview. You're watching theCUBE. I'm Dave Vellante. We'll see you next time.