In this keynote analysis from AWS re:Invent 2025, theCUBE’s John Furrier joins analysts Paul Nashawaty, Zeus Kerravala and Sarbjeet Johal to unpack how Amazon is redefining cloud infrastructure through the lens of agentic AI. The panel breaks down Matt Garman’s declaration that "agents are the new cloud," exploring key announcements surrounding the Nova model family, AgentCore and Amazon Bedrock. The discussion highlights AWS’ strategic pivot from merely abstracting infrastructure complexity to abstracting work itself, effectively bridging the gap between professional coders and "citizen developers" while unifying the experience for builders at every level.
The conversation digs deeper into the practical realities of enterprise AI adoption, emphasizing the critical role of security, governance and compliance in moving from proof-of-concept to production. Kerravala, Johal and Nashawaty analyze AWS’ vertically integrated approach – spanning from custom silicon like Trainium and Inferentia to the application layer – and how this full-stack strategy allows customers to train models on proprietary data with improved price-performance. The group also debates the evolving competitive landscape, noting how AWS is equipping organizations to build autonomous, long-running agents that function as teammates rather than just tools.
Marc Brooker, AWS
In this interview during theCUBE's coverage of AWS re:Invent, Marc Brooker, vice president and distinguished engineer of agentic AI at AWS, joins theCUBE’s John Furrier to explore the infrastructure strategies powering the next generation of autonomous software. Brooker breaks down the transition from building agents locally with tools like Kiro to deploying them at enterprise scale, highlighting the challenge of connecting these dynamic systems to vast corporate data estates. He details the architecture of AgentCore, AWS's comprehensive environment for running agents at scale.
>> Hello, I'm John Furrier with theCUBE. We are here in Seattle, the AWS headquarters at re:Invent, getting a preview for re:Invent and all the action and all the news. And we have Marc Brooker here, VP and distinguished engineer, worked on a lot of the core projects on AWS, all those core building blocks and higher level services, and has a unique perspective into the agentic future. And we'll do a little bit of preview of re:Invent. Marc, great to have you. Thanks for allowing me to come into the home office here. It's a home game for you and an away game from theCUBE.
Marc Brooker
>> Yeah. Well, great to meet you, and just super excited to be talking about this agentic stuff today.
John Furrier
>> You've had visibility into a lot of the core AWS services, Lambda and others, database. Swami has been on multiple times talking about some of the greatness around how it's evolved, and we're really in a position now where the agentic wave and the hype is super high, but there is meat on the bone, there is low-hanging fruit, things are getting done. We're expecting some big news at re:Invent. But I want to get into some of the core things that you see that people should know about on the engineering and market side, connecting the dots between how all the cloud services connect to the agentic wave. We were talking about data, data feeding into large-scale infrastructure, pumping out tokens, connecting workflows, Kiro's shipping, so there's a lot of things rolling out. What's your take on this agent wave relative to deploying them, scaling them?
Marc Brooker
>> Yeah, so there are two big things going on here. One of them is building agents locally. Building an agent on my laptop has become very accessible, very accessible to normal developers. It's no longer something that requires a lot of science expertise. If you pick up an SDK like Strands, you build with an IDE like Kiro, building an agent is very accessible. And even for people without traditional software development skills, you can build an agent in this vibe coding mode, line by line, requirement by requirement. And then how do I get that into production? How do I get that into the cloud where I can run it at scale, I can run it across my whole enterprise, I can govern it, secure it, and so on? And then potentially most importantly, how do I get it connected to all of that data in my enterprise? All of those documents, all of those videos, all of this stuff that we've been building up. Our common data estate, this huge asset that we have, how do I attach agents to that so I can make the agents more powerful and make that data more accessible? And so that's the second big story.
John Furrier
>> Demystify the agent, because certainly, there's people saying, "Hey, I'm building an agent," but is it really an agent? What is an agent? And demystify that piece because they're all being fed by the data. In some cases, there's an AI approach with models, models are involved. Sometimes they can just be coded. Demystify what is an agent in AWS, what does that mean?
Marc Brooker
>> Well, in some sense, it doesn't matter. If I'm getting things done for my business, I'm getting things done with technology, the definition isn't particularly important. When I think about AI agents, I think an AI agent is a piece of technology, a piece of code that I can give a goal to, and it can work with a set of tools, ways of accessing data, and an AI model to work towards that goal. Discover the data it needs, learn the things it needs to do, have the effects on the world it needs to have, and then get to that goal that I explained to it. The difference with fixed-function software, the old mode of building software, is that there I would have to say, "Here's exactly how you get to that goal. Here's an explicit workflow. Here are the lines of code or the step functions or whatever." With agents, we're using AI models to do that planning, to figure out that path to achieving the goals. And that means you can give them more open-ended goals and you can give them more autonomy to go off and discover things and find facts and bring them together.
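The goal-tools-model loop Brooker describes can be sketched in miniature. This is a hypothetical illustration, not AWS code: `plan` is a hard-coded stand-in for a real model call, and the tool names are invented.

```python
# Minimal sketch of an agent loop: a goal, a set of tools, and a model
# that plans the next step. "plan" stands in for a real AI model call.

def plan(goal, facts):
    """Stand-in for the model: pick the next tool call toward the goal."""
    if "city" not in facts:
        return ("lookup_city", "user")      # discover data it still needs
    if "forecast" not in facts:
        return ("get_forecast", facts["city"])
    return ("done", f"Pack for {facts['forecast']} in {facts['city']}")

# Invented tools: each returns a (fact_name, fact_value) pair.
TOOLS = {
    "lookup_city": lambda who: ("city", "Seattle"),
    "get_forecast": lambda city: ("forecast", "rain"),
}

def run_agent(goal):
    facts = {}
    while True:
        action, arg = plan(goal, facts)
        if action == "done":
            return arg                       # goal reached, return the answer
        key, value = TOOLS[action](arg)      # use a tool, learn a new fact
        facts[key] = value

print(run_agent("help me pack for my trip"))
```

The point of the sketch is the shape, not the stubs: the workflow is never written down explicitly; the planning step decides it at runtime.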
John Furrier
>> So back in the old days when I first used Amazon, 2007, when it just started, 2008, 2007, it was really easy. I didn't want to buy a server. They didn't have custom domains at that time. I think, let's talk to Dave Brown about it. I think it was just long strings that said EC2. It was very simple. I knew what to do. I didn't want to buy a server, I just put the stuff on EC2 and S3, and they had queuing. The basic building blocks were there. I kind of understood it. For agents, there's so much going on at AWS. What is the infrastructure playbook? Do I just go in and do EC2? Is there a console? How do I start doing agents? What's the metaphor equivalent of EC2 using storage and compute? Is there an infrastructure standup for people to start doing if they have stuff already out there?
Marc Brooker
>> So AgentCore is the core of that picture, and that's where we have the runtime. It's the compute, it's where you're running that code that you've built in your agent, whether you've built up with Strands or with LangChain or whatever SDK, you've written your own code. AgentCore runtime is the serverless compute that is the core of where you run your agents. And then there's AgentCore Gateway, which is, "Hey, I want to connect to my other services. I want to connect to my data estate. I want to connect in tools via MCP. I want to connect those all into a single point where I can monitor them and govern them and so on."
Then we have AgentCore Memory, which is this kind of short-lived state primitive and the place where agents remember facts about users and facts about the work that they're doing. The simplest version of that is user preferences. You say to the agent, "I like meetings in the early morning." It's going to write that down, it's going to remember it for you, so you don't have to tell it every time. There's AgentCore Observability that plugs into CloudWatch, it plugs into the rest of AWS's observability services to make it so you can monitor those agents, know what they're doing, trace their execution. And then first-party tools, the AgentCore web use tool where you can ... not everything is accessible as a nice API; some things are hidden behind websites. How do we do that? Well, we use another model to go and click on those websites, get the data, build automation around those through the web. And then the code interpreter tool, which is this really emerging pattern where instead of agents doing everything by going through the LLM every time, we can do this much more efficient path of get the agent to write some code, and then use that code to do the heavy lifting of calling tools and summarizing data and so on. And then when there are these open-ended flexible things to do, you can go back to the model and spend the tokens, you can spend the time and have the model make those higher-level decisions rather than the byte-by-byte decisions.
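The code-interpreter pattern Brooker mentions — have the model emit a small program once, then let ordinary code execution do the heavy lifting instead of routing every step through the LLM — can be sketched like this. Everything here is illustrative: the generated source is hard-coded in place of a real model response, and `fetch_orders` is an invented tool.

```python
# Sketch of the code-interpreter pattern: the model writes one small
# program, and plain code execution does the tool calls and summarizing.
# The agent only goes back to the model for the higher-level decisions.

def fetch_orders(region):                    # invented tool
    return [120, 340, 95] if region == "us-west" else [80, 60]

# Hard-coded stand-in for source code a model might generate.
MODEL_GENERATED = """
orders = fetch_orders("us-west")             # tool calls in plain code
total = sum(orders)
result = {"total": total, "average": round(total / len(orders), 1)}
"""

def run_generated(source, tools):
    scope = dict(tools)                      # expose tools to the program
    exec(source, scope)                      # heavy lifting, zero LLM tokens
    return scope["result"]                   # summarized data for the model

print(run_generated(MODEL_GENERATED, {"fetch_orders": fetch_orders}))
```

The efficiency win Brooker describes is visible in the shape: three tool-and-arithmetic steps cost one model round trip rather than three.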
John Furrier
>> So the AgentCore is the console, basically, for agents?
Marc Brooker
>> AgentCore is the console for agents, it's the core building block, but then you can bring in anything else in AWS or anything else outside AWS. There are open protocols like MCP that allow you to connect to wherever your data is, whether it's in AWS, whether it's in one of our database services, whether it's in S3, or in a third-party software as a service.
John Furrier
>> So the things I wrote down to ask you about were the infrastructure, AgentCore. Thanks for mentioning that. The runtime and safety architecture, because everyone's chirping about safety. "It's off the rails, it's hallucinating. Agents have to be SLA-compliant." So there's a lot of governance built into this. Talk about the runtime, because essentially, agents are like apps, you're basically building an app, if you will. And that's not just my view. Take me through the safety piece, the runtime. It's executing things.
Marc Brooker
>> It's executing code.
John Furrier
>> It's like, I don't want to go get root access somewhere, so there's all this ... I know you guys think a lot about this.
Marc Brooker
>> Yeah, we sure do. And you mentioned the early days of EC2. We were building these virtual machines for customers, and then over time, we built those virtual machines in new ways that are smaller and finer-grained while still offering the same great security. And so if you look at AgentCore, every time you run your agent, every time you have a new session with an agent, a new conversation with an agent, it gets its own virtual machine, gets its own strong security boundary where that session with the agent runs. And what's great about that is that there's a huge amount of trust built in. You can say to that agent, "I'm going to give it my credentials. I'm going to give it that user's credentials. It can use those credentials to call those third-party tools. It doesn't have access to anything else. It doesn't have access to any data beyond what that user has. It doesn't have access to other tools. It can't write things down in a way that persists." And so you can build this box around your agent, where tools like the AgentCore Gateway really limit what an agent can do, and tools like Bedrock Guardrails limit what an agent can say. And then once you have those, well, the code running inside the box, you don't really care about it as much. It can't have any side effects. Instead of that security, which we're taking care of with virtualization and these very secure machine boundaries, you need to worry about what this agent can do and what this agent can say. And those are specific to your business. What data can it access, what tools can it access? Can it book air tickets? Yes or no? Those are the kinds of non-infrastructure decisions that you build into something like AgentCore.
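The "box" Brooker describes — tools and credentials scoped per session, enforced outside the agent's own code — can be sketched as a small gateway. The class and tool names below are hypothetical illustrations, not the AgentCore API.

```python
# Sketch of a gateway-style boundary: every tool call is checked against
# an allowlist scoped to this session, so the agent can only do what the
# box permits, regardless of what the code inside the box tries.

class ToolGateway:
    def __init__(self, allowed_tools, user):
        self.allowed = allowed_tools         # what this agent may do
        self.user = user                     # whose credentials it acts with

    def call(self, tool, fn, *args):
        if tool not in self.allowed:
            raise PermissionError(f"{tool} is outside this agent's box")
        return fn(self.user, *args)          # runs with the user's scope only

def read_calendar(user):                     # invented tool
    return f"{user}: 3 meetings today"

def delete_account(user):                    # invented tool, never reachable here
    return "deleted"

gw = ToolGateway(allowed_tools={"read_calendar"}, user="alice")
print(gw.call("read_calendar", read_calendar))   # permitted
try:
    gw.call("delete_account", delete_account)
except PermissionError as e:
    print("blocked:", e)
```

Because the check lives in the gateway, not in the agent, a hallucinating or compromised agent still cannot step outside the allowlist.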
John Furrier
>> And you guys enable that, you have the capability to enable that?
Marc Brooker
>> We enable that, right.
John Furrier
>> As it gets smarter, it's going to get better. You've been at Amazon for how many years?
Marc Brooker
>> Since 2008, so 17 and a bit years.
John Furrier
>> We've been covering re:Invent, this is, I think, our 12th year maybe at re:Invent, it gets better every year. I want to ask you about the culture, because you mentioned EC2, that was one of the key products. As you look at the culture today at AWS, what you guys are building, how would you talk about it on the AI side? I mean, pre-AI it was building blocks, primitives, higher-level services, and then an ecosystem built on top. Is it the same? Because in a way, you're almost abstracting away the old AWS, or the current AWS, and building a whole other level of services, infrastructure-like capabilities, almost like a new stack on top of that. How would you describe it?
Marc Brooker
>> Well, maybe I'd go even further back. If you look at this whole arc of the history of computing, there's been this constantly rising level of abstraction. I used to worry about machine code and assembly language and physical addresses and bits and bytes of memory. Most developers aren't worrying about those things and haven't worried about those things for 30-plus years. And so then we had the cloud, this great abstraction. You don't build your own data centers anymore. You don't rack and stack your own servers. And that brought a huge amount of value. And then serverless, I'm not patching my own kernel anymore, I'm not configuring instances, I'm not managing threads anymore, and so that continuing growth, and AI is the next step there.
And so if you look at something like Kiro, I can build an application without necessarily getting into the code level. I can talk about my business requirements, I can talk about my specification, I can bring that into AgentCore and have all of the infrastructure behind that.
John Furrier
>> You're a product manager at that point, you're a product manager at that point.
Marc Brooker
>> Yeah, kind of, more and more. But I think it is still this core engineering discipline, which has always been how do I solve problems for my customers and my business using technology? And now, well, it's exciting because suddenly I have more power to do that.
John Furrier
>> Okay, so question for you, because I mean, you put up two things I wanted to touch on, maybe we can probably come and do a follow-up on. But distributed computing paradigm has been around, and it continues to be a computer science principle, that's cloud and edge and everything in between. That will continue to abstract away. And then you mentioned engineering. Software engineering was the degree I got. It was called software engineering, not developer, it was engineering. So how would you describe the engineering work involved with Kiro and all these fun tools that make it easier to do crafty, cool, artisan-like coding to full engineering? What is the engineering discipline required in this era? Is it systems thinking around consequences? How would you frame that? And really, no one's really talking about this, but we've seen the systems thinking coming back. I mean, we're in a systems world, it's just easy to code now. You assume there's the worker bees and the agents behind the scenes making it happen. What is the system architecture thinking that engineers will engineer? What's your view on that?
Marc Brooker
>> So if you think about what is the work of software engineering, it has always been a mix of the essential complexity of building things. What am I building? What are the requirements? What does correct mean? What does secure mean? And then this inessential complexity of what is the right syntax for that SQL query? It matters only because it's an implementation detail, but it doesn't really matter to the outcome. And so as we've gone up the levels with things like Kiro's spec-driven development, we're getting closer to that essential complexity of what does correct mean, what does it mean for this application to do the right thing?
John Furrier
>> Versus, say, syntax?
Marc Brooker
>> Versus syntax. Which ultimately, I mean, as-
John Furrier
>> In the old days, you get a compiler error up, didn't compile, go back and fix the code syntax error.
Marc Brooker
>> Which, I mean, fundamentally at some level doesn't matter. What matters is the specification. And so you can see that with Kiro's spec-driven development, you can see that with the property-based testing that came in the Kiro GA last week, where now Kiro can generate tests that go off and generate huge numbers of their own test cases that can run against that specification and say, "Is this code right?" And then you get that systems thinking of not how do I move the bits and bytes to do this, but is this a database, is this a queue? How long should this data be there? How should this data be accessible? How should this data be access-controlled? And so we bring up this level of abstraction, but the core engineering principles haven't changed. It's about solving problems.
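The property-based testing idea can be shown with a stdlib-only sketch: rather than a few hand-picked cases, generate many random inputs and assert properties taken from the specification. Real tools automate the generation and shrinking; the function under test here is an invented example, not Kiro's output.

```python
# Hand-rolled property-based test: generate many random inputs and check
# that properties of the *specification* hold for all of them, instead of
# asserting a handful of hand-picked input/output pairs.

import random

def dedupe(items):
    """Code under test: remove duplicates, keep first-seen order."""
    seen, out = set(), []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

def check_property(trials=500, seed=0):
    rng = random.Random(seed)
    for _ in range(trials):
        xs = [rng.randint(0, 9) for _ in range(rng.randint(0, 20))]
        ys = dedupe(xs)
        # Properties come from the spec, not from the implementation:
        assert len(ys) == len(set(xs))           # no duplicates survive
        assert all(x in xs for x in ys)          # nothing invented
        assert ys == sorted(ys, key=xs.index)    # first-seen order kept
    return True

print(check_property())
```

This is the "what does correct mean" framing in miniature: the assertions describe the outcome, and the generator hunts for inputs that violate it.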
John Furrier
>> And agentic brings in new levels of engineering tasks, like what's the workflow look like? It's now going to have multiple dimensions of connections. I mean, it's still a large-scale problem. In fact, all the AI examples I've been following, there's more complexity because there's more things being automated.
Marc Brooker
>> Right. And there's more dynamism. And that is something that we are all learning as an industry to handle.
John Furrier
>> And autonomous makes this even more challenging to run autonomous anything. I think everyone thinks about a car or a robotaxi, I mean, the complexity, that was engineered clearly. And autonomous is going to be part of agentic. What's your view on autonomous agents? That's an engineering challenge.
Marc Brooker
>> And it really is. And that's where you get into that what are the bounds of what this agent can do? "Hey, I want this agent to go off and book a trip for me." When you say that, you don't mean open-ended. You're going to give it a budget, you're going to give it a timeframe, you're going to have these requirements as an end user. And then the infrastructure can enforce that everything the agent does is within the box set by those requirements. If you say, "My budget is $1,000," well, we can enforce at the infrastructure level that agent doesn't spend more than $1,000 with these boundaries like the AgentCore Gateway. And then you can give the agents a ton of autonomy because you've said, "Don't go outside this box. But within this box, be as creative as possible."
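The budget example Brooker gives can be sketched as enforcement at the infrastructure layer: the cap lives in code outside the model, so no amount of agent creativity can exceed it. The names below are hypothetical, not a real AgentCore interface.

```python
# Sketch of infrastructure-level budget enforcement: the spend cap is
# checked outside the model, so the agent is free to be creative inside
# the box but cannot step over the boundary set by the user.

class BudgetExceeded(Exception):
    pass

class BudgetedGateway:
    def __init__(self, budget):
        self.budget = budget
        self.spent = 0.0

    def book(self, item, price):
        if self.spent + price > self.budget:
            raise BudgetExceeded(f"{item} would exceed the ${self.budget} cap")
        self.spent += price                  # enforced here, not in the prompt
        return f"booked {item} for ${price}"

gw = BudgetedGateway(budget=1000)
print(gw.book("flight", 620))
print(gw.book("hotel", 300))
try:
    gw.book("upgrade", 200)                  # creative, but outside the box
except BudgetExceeded as e:
    print("denied:", e)
```

The design point is where the check lives: a prompt saying "stay under $1,000" is advice to the model, while a gateway check is a guarantee.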
John Furrier
>> Marc, thanks for spending the time and sharing all that data with us on theCUBE. Final question for folks: you're going to hear a lot at re:Invent about agents and the news. Going into 2026, what do you think are the most important things people should pay attention to, and what are some things they should pay attention to that no one will talk about? So, the most important stories and issues, and what's the outlier that might emerge? MCP came out of the woodwork, and that was the beautiful thing. What is going to be the top story, and then what's going to emerge?
Marc Brooker
>> I think there are two big things that are happening there. One of them is an emerging understanding of the importance of agent ops, this ability to understand all of the agents in my organization. What are they doing? How are they spending my money? How are they spending those tokens? What is the ROI there? Are they having success? And I think what we're going to see next year is many more organizations getting good at that process. You've built a ton of agents, how should you think about how they run? And then the other one, and you talked about MCP and its sudden emergence. Well, MCP is great. It's caused this explosion in tool use in the agent ecosystem, but it's also not particularly scalable. If I have 100 databases or 100 tools, 1,000 tools, MCP doesn't scale super well. And so I think the other big trend next year is going to be ways for agents to access their environment that are more scalable than MCP, maybe more governable than MCP. And maybe we'll still call those things MCP, and they will be an evolution of the protocol that we know and love today.
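One plausible shape of the "more scalable than MCP" idea is tool retrieval: with 1,000 tools you can't hand the model every schema, so select the few relevant to each request. This sketch scores by keyword overlap purely for illustration; a real system would more likely use embeddings, and all the tool names here are invented.

```python
# Sketch of scalable tool access: instead of exposing every tool schema
# to the model, retrieve only the top-k tools relevant to the request.
# Keyword overlap stands in for a real relevance model.

TOOLS = {
    "query_orders_db":  "query the orders database by customer and date",
    "send_invoice":     "send an invoice email to a customer",
    "restart_service":  "restart a backend service by name",
    "get_weather":      "get weather forecast of a city",
}

def select_tools(request, catalog, k=2):
    words = set(request.lower().split())
    ranked = sorted(
        catalog,
        key=lambda t: -len(words & set(catalog[t].split())),
    )
    return ranked[:k]                        # only these schemas reach the model

print(select_tools("query recent orders by customer", TOOLS))
```

With a catalog of 1,000 tools, the model's context only ever carries a handful of schemas per request, which is also a natural place to hang governance rules.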
John Furrier
>> And the scale is the key there, and intelligence, knowledge?
Marc Brooker
>> Right.
John Furrier
>> Thanks for coming on theCUBE, appreciate it. All right, great chat.
Marc Brooker
>> Thank you so much.
John Furrier
>> We'll do a deep dive another time. We can do an hour on this. I'm John Furrier here at theCUBE at AWS's headquarters at the re:Invent building on the re:Invent preview. Coming up, of course, agents are the future. They're going to build on top of the large scale infrastructure, the AI factories, and certainly at the edge. They're going to follow us all around and be part of our lives. Thanks for watching.