In this keynote analysis from AWS re:Invent 2025, theCUBE’s John Furrier joins analysts Paul Nashawaty, Zeus Kerravala and Sarbjeet Johal to unpack how Amazon is redefining cloud infrastructure through the lens of agentic AI. The panel breaks down Matt Garman’s declaration that "agents are the new cloud," exploring key announcements surrounding the Nova model family, AgentCore and Amazon Bedrock. The discussion highlights AWS’ strategic pivot from merely abstracting infrastructure complexity to abstracting work itself, effectively bridging the gap between professional coders and "citizen developers" while unifying the experience for builders at every level.
The conversation digs deeper into the practical realities of enterprise AI adoption, emphasizing the critical role of security, governance and compliance in moving from proof-of-concept to production. Kerravala, Johal and Nashawaty analyze AWS’ vertically integrated approach – spanning from custom silicon like Trainium and Inferentia to the application layer – and how this full-stack strategy allows customers to train models on proprietary data with improved price-performance. The group also debates the evolving competitive landscape, noting how AWS is equipping organizations to build autonomous, long-running agents that function as teammates rather than just tools.
Ken Exner, Elastic
Ken Exner, chief product officer at Elastic, joins Jackie McGuire, principal analyst at theCUBE Research, for an in-depth conversation during AWS re:Invent 2025. The discussion centers around developments in artificial intelligence (AI), particularly Elastic's innovations and the vital role of context engineering in advancing AI applications.
With a significant industry tenure, including 16 years at AWS and his current position as chief product officer at Elastic, Exner details his journey and contributions at Elastic.
>> Hello, CUBE Community, and welcome to theCUBE's coverage of re:Invent 2025. I'm Jackie McGuire, the principal analyst leading the security practice, and I am very lucky to be joined today by Ken Exner. He's the Chief Product Officer of Elastic. Ken, welcome.
Ken Exner
>> Hey Jackie. Good to be here.
Jackie McGuire
>> Hey, so we have not gotten a chance to do an interview together, but I am really excited, because I work with Elastic quite a bit and you guys have a whole bunch of really cool stuff going on right now that we're going to get to in a bit. But what I thought would be fun to start with, as I usually do, is I would love to know more about you, Ken, and how you got to Elastic. What your background is, and as we were talking about before, your meandering path to where you are now.
Ken Exner
>> All right. Well, I've been at Elastic for three and a half years, so I'm not new to Elastic. But before that... actually, this is very fitting, because this is a re:Invent coverage show.
Jackie McGuire
>> I was actually going to make the pun of, can you tell me how you've reinvented yourself? You never know which way that could go.
Ken Exner
>> I was at AWS for 16 years, so I've actually been to every single re:Invent. And if you do the math, it pretty much means I was there from the beginning of AWS. I was one of the very first employees in AWS, actually the second product manager that was hired, so I was there for the entire ride from the beginning. A lot of fun.
Jackie McGuire
>> Yeah, that's amazing. And so, now you're at Elastic and you're the Chief Product Officer, so you work on all kinds of things, but what is the most interesting thing that you're working on right now?
Ken Exner
>> What's not interesting? So as the Chief Product Officer, I'm responsible for product and engineering across the different portfolios that we have, which includes security products, observability products, and of course search and AI products as well. I think it's hard not to talk about the influence of AI on our product portfolio, and not only for the people that build on top of Elasticsearch, who are often using us as a search engine for retrieval when building AI applications. It's been fun seeing how we've been reinventing ourselves and reinventing the experience for observability and security, using AI to fundamentally change the experience for security and observability practitioners. So I'm excited about how much AI is changing how we work, changing how we build software, and changing the experience that we deliver to our practitioners.
Jackie McGuire
>> And I imagine observability has become significantly more important in the age of AI and especially agentic AI when you're talking about the ability for one instruction to cause 100, 200, several hundred agents to start doing things. The ability to track those things in, I was going to say real time, but I'm not even sure real time is fast enough to describe how you need to achieve observability with agents. But is that something that's become more prevalent the last couple of years as these things have sped up?
Ken Exner
>> Yeah, you're talking about how do we start thinking about agents as systems, as entities that we need to observe and protect. So yeah, we've been introducing LLM observability, so that you can observe the applications that you build, AI applications. So if you are building an agent, how do you observe that? How do you protect that? So yeah, we've been making sure that we support all the different LLMs and have the ability to do cost and token tracking, the ability to look at query logs and metrics, and look at the tracing for an AI application. So all the things that we've typically done for traditional observability, traditional security, we're extending that into the age of AI and agentic AI as well.
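The cost and token tracking Exner describes can be sketched as a small accounting layer around each LLM call. This is a minimal illustration, not Elastic's actual implementation; the model names and per-token prices are made up:

```python
# Minimal sketch of per-call token and cost tracking for LLM
# observability. Prices per 1K tokens are illustrative, not real rates.

from dataclasses import dataclass, field

PRICE_PER_1K = {"model-a": 0.002, "model-b": 0.01}  # hypothetical models


@dataclass
class LLMUsageTracker:
    records: list = field(default_factory=list)

    def record(self, model: str, prompt_tokens: int, completion_tokens: int) -> float:
        # Log one LLM call: total tokens and estimated cost.
        tokens = prompt_tokens + completion_tokens
        cost = tokens / 1000 * PRICE_PER_1K[model]
        self.records.append({"model": model, "tokens": tokens, "cost": cost})
        return cost

    def total_cost(self) -> float:
        return sum(r["cost"] for r in self.records)


tracker = LLMUsageTracker()
tracker.record("model-a", 1200, 300)  # 1500 tokens at $0.002/1K
tracker.record("model-b", 500, 500)   # 1000 tokens at $0.01/1K
print(round(tracker.total_cost(), 4))
```

In a production system these records would flow into the same logging, metrics, and tracing pipeline used for any other service, which is the extension of traditional observability Exner is describing.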
Jackie McGuire
>> Yeah, and you talk about that in terms of context engineering. So can you talk a little bit more about what context engineering is and how it applies to this kind of use case we're talking about?
Ken Exner
>> Sure. So I think context engineering is one of those very popular terms these days, and I think it's because people are starting to build agentic AI applications. And what they're realizing is that the most important part of building an agentic AI application is making sure that the LLMs have the right data: the right data to ground them on the right context, and to scope the actions of that agent. Now, traditionally there's been prompt engineering, and prompt engineering evolved into RAG, retrieval-augmented generation. But today there are a number of different techniques for how you can get the right data to an LLM, whether it's RAG, building tools and helping the LLM with tool selection, or memory systems. There are all kinds of techniques for getting the right data to the LLM so that it makes the right decisions, gives the right answers, and has the right context. And this entire field of getting the right data to an LLM is increasingly being called context engineering. I personally think it's the most important thing in building AI applications that work, that succeed, that do the right things. You have to have the right data. So I think you're going to hear the term context engineering a lot over the next year, because context engineering is vital to doing AI right.
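The retrieval-style grounding Exner mentions can be sketched in a few lines. This is a toy illustration in which a word-overlap score stands in for a real retrieval engine such as Elasticsearch; all names are hypothetical:

```python
# Toy sketch of retrieval-augmented prompting: score documents against
# the query, then prepend the best match as grounding context.

def score(query: str, doc: str) -> int:
    # Toy relevance score: count of shared lowercase words.
    return len(set(query.lower().split()) & set(doc.lower().split()))


def build_context(query: str, corpus: list[str], k: int = 1) -> str:
    # Retrieve the k most relevant documents for grounding.
    top = sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]
    return "\n".join(top)


def build_prompt(query: str, corpus: list[str]) -> str:
    # Instruct the LLM to answer only from the supplied context.
    context = build_context(query, corpus)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"


corpus = [
    "Elasticsearch indexes documents for fast retrieval.",
    "Tokyo is the capital of Japan.",
    "Agents call tools to act on external systems.",
]
print(build_prompt("How does Elasticsearch handle retrieval?", corpus))
```

A real system would replace `score` with semantic or hybrid search, but the shape is the same: retrieval narrows the corpus, and the prompt scopes the model to what was retrieved.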
Jackie McGuire
>> Yeah, and I think one of the things that has kind of occurred to me is that with context engineering, it's kind of like how different companies decide how to do their meetings. You have some companies that have a daily stand-up, where they meet every day or do it in Slack. You have some companies that don't talk to each other for a couple of months at a time. And so, I think everybody's kind of trying, to your point, to do that with these agents and figure out how much they need to do. But I think a lot of people, if you're not really deep in this world, don't realize that we're still talking about predictive text. I don't want to say it's just like the key-
Ken Exner
>> ....
Jackie McGuire
>> board at the top of your iPhone. Yeah, but if you don't give an agent the right context, they generally fill in the blanks, right?
Ken Exner
>> Yeah, an LLM is essentially a predictive system. It's predicting the next token, and it's doing this by a process of reasoning about what is the best answer for the next token. And in order to do that, it has to be scoped to the right data. So if you don't scope something, it's going to use its general foundational knowledge to provide an answer. But if you want to make sure that it's scoped to a particular corpus of data or that it understands who you are personally, understanding the personalization, the person it's talking to, you need to make sure it has that context. And you do this through context engineering, you give it the memory to understand this is the person I'm talking to, or you give it the grounding data in order to say, "This is the data I want you to base the answer on." All that is context engineering. It is giving context to an LLM, so that it can produce the right next token and a series of right next tokens.
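The memory side of context engineering Exner describes, remembering who the model is talking to and injecting that into the prompt, might look like this minimal sketch (hypothetical names, not a real product API):

```python
# Sketch of a memory system for context engineering: persist facts
# about a user and inject them into the prompt so the model's next
# tokens are scoped to that person rather than general knowledge.

class Memory:
    def __init__(self):
        self.facts: dict[str, list[str]] = {}

    def remember(self, user: str, fact: str) -> None:
        # Store a fact about a user for later prompt injection.
        self.facts.setdefault(user, []).append(fact)

    def context_for(self, user: str) -> str:
        # Render known facts as a bullet list for the prompt.
        return "\n".join(f"- {f}" for f in self.facts.get(user, []))


memory = Memory()
memory.remember("jackie", "Covers security as a principal analyst.")
prompt = (
    f"Known about this user:\n{memory.context_for('jackie')}\n\n"
    "Question: what should I watch at re:Invent?"
)
print(prompt)
```

The point of the sketch is the division of labor: the memory store decides what the model is told, and the model only predicts tokens conditioned on that supplied context.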
Jackie McGuire
>> As the Chief Product Officer, when you think about building products, Elastic has Elastic Agent Builder, which is this whole suite of capabilities that really accelerate how fast you can build agents and things like that. So how do you incorporate context engineering into Elastic Agent Builder to help people build those systems with the context that they need?
Ken Exner
>> Yeah. Well, first, Agent Builder, as its name suggests, helps you build agents. It is a very easy built-in capability within Elastic that lets you take any data that Elastic has, whether indexed directly or accessed through connectors, and automatically build agents on top of it. We actually provide an out-of-the-box conversational agent by default for any index in Elastic, which means you can start having a conversation with your data. You can actually start having a conversational experience with your data. It's a sample app: it takes a bunch of different MCP tools that we've built, takes a bunch of pre-built prompts that you can customize, and presents all these things to users so that they can then build their own agents. So it is a very, very easy way to get an agentic application on top of any custom data, because we give you one out of the box by default and then allow you to customize it.
Jackie McGuire
>> So you don't have to try building your own and end up giving away a car for a dollar.
Ken Exner
>> A perfect example of why you need context engineering, so your agents don't make mistakes. You can give it the right context to understand what the right price is for a car.
Jackie McGuire
>> Yeah, and you can also use that context for answer validation. So once you've produced an answer, you can make sure that it falls within the guidelines of that context.
Ken Exner
>> Yeah, well, a critical part of doing context engineering is evaluation: the observability and evaluation to make sure that you're producing the right results. So evals in context engineering are kind of like unit tests. They help you test the quality and efficacy of what you're doing. But you also have the equivalent of an integration test, which is LLM-as-judge, which tries to assess the entire answer and make sure that you have the most relevant, most accurate information. This is a big part of what we do at Elastic. We're kind of known for relevance; we've always been in the business of relevance. Traditional search was always about getting you the right 10 links, but in the age of AI, it's about making sure that you have the one best answer, or the one most accurate task that an agent is performing. And we do that through relevance. Relevance is partly about retrieval, but it's also partly about evaluation and testing, and making that a feedback loop.
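Exner's analogy, evals as unit tests and LLM-as-judge as an integration test, can be sketched as follows. The "judge" here is a word-overlap heuristic standing in for a real judging model; everything is illustrative:

```python
# Sketch of two eval styles: assertion-style evals (like unit tests)
# and a whole-answer judge (like an integration test). A real
# LLM-as-judge would call a model; here a heuristic stands in.

def eval_contains(answer: str, required: list[str]) -> bool:
    # Unit-test-style eval: every required fact must appear in the answer.
    return all(fact.lower() in answer.lower() for fact in required)


def judge_score(answer: str, reference: str) -> float:
    # Stand-in for LLM-as-judge: fraction of reference words covered.
    a = set(answer.lower().split())
    r = set(reference.lower().split())
    return len(a & r) / len(r) if r else 0.0


answer = "Elasticsearch retrieves relevant documents for grounding."
print(eval_contains(answer, ["elasticsearch", "grounding"]))
print(judge_score(answer, "Elasticsearch retrieves documents"))
```

Run against a fixed test set after every retrieval or prompt change, checks like these close the feedback loop Exner mentions: regressions in relevance show up the same way a failing test suite would.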
Jackie McGuire
>> Yeah, and I wanted to pivot a little bit. You had mentioned Model Context Protocol. So one of the things that keeps coming up over and over as I talk to people about AI, and specifically agentic AI, is this question of standards and how much standards matter. I think I heard Okta's Pete McKinnon say, "Standards make the internet work."
And so, how do you think about that? And this is just such a burgeoning field and we have so little clarity in a lot of areas, and everybody's running as fast as they can to keep up. Elastic is really in a pretty high profile dominant position, to your point, because you do have the ability to surface data in an incredibly accurate and relevant way. So how do you think about that in terms of standards and maybe leading the way with standards, or what kind of guidance you're looking for with regard to data?
Ken Exner
>> Yeah. Well, Elastic started from Elasticsearch, which is one of the most popular open source projects of all time. We care a lot about open source, we care a lot about open standards, and we're one of the leading contributors to OpenTelemetry. I think in the world of AI, MCP has had a huge impact on the evolution of agents. There are two things that really led the way to this agentic boom we had this year. One was the development of MCP, which created a standard interface, a standard protocol, for how you could express functions as an API. We call them tools. APIs and functions existed for a long time, but it wasn't until MCP defined a common interface, a common protocol, that you really started to see this acceleration of tool building. That was a huge part of this. The other part was that LLMs got really good at reasoning. So the combination of really good reasoning plus an open standard for tool definition led to this huge boom of agents. I think it matters a lot. And as you look forward, I think there are two camps starting to emerge. Some of the LLM providers are starting to build walled gardens and trying to make sure that everything stays within their walled garden. That is one approach. It's kind of like the Apple approach, or the AOL approach. AOL, for people who don't remember... I guess I'm old.
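The standard tool definition Exner credits MCP with can be illustrated by the shape the MCP specification uses for tool listings: a name, a description, and a JSON Schema `inputSchema`. The tool itself is hypothetical, echoing the car-pricing joke from earlier in the conversation:

```python
# Sketch of an MCP-style tool definition. The "name"/"description"/
# "inputSchema" fields follow the shape of the MCP spec's tool listing;
# the tool and validator below are illustrative, not a real server.

get_price_tool = {
    "name": "get_car_price",
    "description": "Look up the list price for a car model.",
    "inputSchema": {
        "type": "object",
        "properties": {"model": {"type": "string"}},
        "required": ["model"],
    },
}


def validate_call(tool: dict, args: dict) -> bool:
    # Minimal check that all schema-required arguments are present.
    required = tool["inputSchema"].get("required", [])
    return all(k in args for k in required)


print(validate_call(get_price_tool, {"model": "sedan"}))
print(validate_call(get_price_tool, {}))
```

Because every tool advertises itself in this one shape, any MCP client can discover and call tools from any server, which is the interoperability that Exner argues drove the acceleration of tool building.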
Jackie McGuire
>> I was always on NetZero, because we were poor.
Ken Exner
>> Apple is a good example. It's a walled garden; everything exists within their ecosystem. That is one approach. I like the open standards approach. I think MCP is a good example of this: it led to the proliferation of agents and agent-based technologies because it created an open standard. I want to bet on that. And in a world where you have open standards, I think we're well positioned as a standards- and open-source-loving company. We work across all the CSPs, we work with all the LLMs. It's kind of in our ethos. I think we're well positioned to take advantage of a world where standards bloom.
Jackie McGuire
>> Yeah, and I would be remiss as the security analyst if I didn't ask. One of the other areas where Elastic really leads the charge is in security. With MCP, one of the things we saw pretty rapidly was, oh, we actually need to secure these things. So how do you think about building secure products, so that people can actually trust the data they're putting into these models, trust the agents and the responses they're getting, and trust the MCP responses they're getting from MCP servers? I love interviewing Mike Nichols. He and I are real tight, and you guys really have kind of led the way on that in the last few years as well. So how do you think about that as you're building product?
Ken Exner
>> Yeah, Mike is on my team.
Jackie McGuire
>> Shout out Mike.
Ken Exner
>> Shout out to Mike. Mike is now the GM for our security solution, so congrats, Mike, on the promotion to GM. But I think agents are kind of like a new asset in the list of assets that a CISO and the office of the CISO need to think about. So we have a SIEM, we have a security analytics platform, and we need to start thinking about agents as assets that you need to protect like anything else. It's no longer just Kubernetes clusters and hosts and different systems. An agent is a system, an entity that you need to protect. So we view this as an extension of the security landscape that you already need to protect as an office of the CISO. You need to be thinking about all the different endpoints you have in your business, all the different hosts, and also all the agents, all the AI-based applications. To us, it's an extension of that portfolio.
Jackie McGuire
>> Awesome. And then let's say we're at re:Invent next year in 2026. Final question. How do you feel like Elastic will have reinvented itself again? So what things do you see coming for 2026 that you're excited about? How do you think Elastic is going to continue to contribute with Agent Builder and context engineering, and all of these different things? What do you think will be the big news? I know nothing public, of course, but what are you looking forward to in 2026?
Ken Exner
>> I think 2026 will be the year of context engineering. I really, really mean that. This year was the year of agents; everyone started moving toward agents. I think as people start building agents, they're going to realize how critical, how vital, context engineering is. It's what distinguishes successful AI projects from ones that fail. It distinguishes a well-functioning agent from one that operates on random data. So context, having relevant information, is going to matter. If I look forward a year from now, we will have gone through the year of context engineering, and Elastic will emerge as the leader in relevance and the leader in context engineering. We'll be helping people build agents and agentic applications on top of various LLMs and various CSPs in an open way, using open standards. So I'll be very happy to see more standards emerging and MCP maturing as a standard, starting to cover some of its gaps, like around auth, so that it can support real production agent use cases using context engineering from Elastic.
Jackie McGuire
>> Thanks Ken for being here. We really appreciate it. This is re:Invent 2025, and you're watching theCUBE, your leading source for tech news and analysis.