Ram Poornachandran of F5, vice president of artificial intelligence and architecture, and Gary Newe of F5, vice president of solution engineering, participate in a live conversation recorded at Google Cloud Next 2026. The discussion addresses agentic AI infrastructure, hybrid cloud deployments, security guardrails, observability and production readiness for enterprise environments.
They examine how organizations move from proof of concept to production for agentic AI, including AI-native process redesign, change management and the underlying substrate—compute, storage, networking and databases, containers and Kubernetes—that enables scalable agent deployments across heterogeneous hybrid cloud environments.
Poornachandran explains that businesses must redesign processes to be AI-native and invest in change management to achieve production readiness. They emphasize the need for robust guardrails, continuous agent testing and AI observability to capture intent, enforce access controls and provide auditable trails. Newe emphasizes integrating security, red teaming and AIOps into the stack and leveraging partner frameworks such as Google agent framework together with F5 Guardrails and Red Team solutions. They recommend a layered approach that combines platform capabilities, security automation and partner integrations to mitigate risk and accelerate deployment.
The conversation delivers practical guidance for enterprise architects, security leaders and platform teams responsible for cloud infrastructure, hybrid cloud strategy, Kubernetes operations and AI production readiness.
Forgot Password
Almost there!
We just sent you a verification email. Please verify your account to gain access to
Google Cloud Next 2026. If you don’t think you received an email check your
spam folder.
In order to sign in, enter the email address you used to registered for the event. Once completed, you will receive an email with a verification link. Open the link to automatically sign into the site.
Register for Google Cloud Next 2026
Please fill out the information below. You will receive an email with a verification link confirming your registration. Click the link to automatically sign into the site.
You’re almost there!
We just sent you a verification email. Please click the verification button in the email. Once your email address is verified, you will have full access to all event content for Google Cloud Next 2026.
I want my badge and interests to be visible to all attendees.
Checking this box will display your presense on the attendees list, view your profile and allow other attendees to contact you via 1-1 chat. Read the Privacy Policy. At any time, you can choose to disable this preference.
Select your Interests!
add
Upload your photo
Uploading..
OR
Connect via Twitter
Connect via Linkedin
EDIT PASSWORD
Share
Forgot Password
Almost there!
We just sent you a verification email. Please verify your account to gain access to
Google Cloud Next 2026. If you don’t think you received an email check your
spam folder.
In order to sign in, enter the email address you used to registered for the event. Once completed, you will receive an email with a verification link. Open the link to automatically sign into the site.
Sign in to gain access to Google Cloud Next 2026
Please sign in with LinkedIn to continue to Google Cloud Next 2026. Signing in with LinkedIn ensures a professional environment.
Are you sure you want to remove access rights for this user?
Details
Manage Access
email address
Community Invitation
Ram Poornachandran & Gary Newe, F5
Ram Poornachandran of F5, vice president of artificial intelligence and architecture, and Gary Newe of F5, vice president of solution engineering, participate in a live conversation recorded at Google Cloud Next 2026. The discussion addresses agentic AI infrastructure, hybrid cloud deployments, security guardrails, observability and production readiness for enterprise environments.
They examine how organizations move from proof of concept to production for agentic AI, including AI-native process redesign, change management and the underlying substrate—compute, storage, networking and databases, containers and Kubernetes—that enables scalable agent deployments across heterogeneous hybrid cloud environments.
Poornachandran explains that businesses must redesign processes to be AI-native and invest in change management to achieve production readiness. They emphasize the need for robust guardrails, continuous agent testing and AI observability to capture intent, enforce access controls and provide auditable trails. Newe emphasizes integrating security, red teaming and AIOps into the stack and leveraging partner frameworks such as Google agent framework together with F5 Guardrails and Red Team solutions. They recommend a layered approach that combines platform capabilities, security automation and partner integrations to mitigate risk and accelerate deployment.
The conversation delivers practical guidance for enterprise architects, security leaders and platform teams responsible for cloud infrastructure, hybrid cloud strategy, Kubernetes operations and AI production readiness.
In this interview from Google Cloud Next 2026, Ram Poornachandran, vice president of AI and architecture at F5, joins Gary Newe, vice president of solutions engineering at F5, to talk with theCUBE's John Furrier and co-host Alison Kosik about moving agentic AI from proof of concept into production-ready enterprise deployment. Poornachandran draws a sharp distinction between personal and process productivity, explaining that while individuals are adopting AI tools daily, real enterprise gains require redesigning business processes from the ground up — not bolt...Read more
exploreKeep Exploring
How should organizations integrate AI into their business processes to achieve enterprise-level productivity gains while managing change and workforce concerns?add
How does F5 fit into Google Cloud’s agent/AI stack, and how do its Guardrails and AI Red Team solutions help protect enterprise data from prompt injection or model hijacking?add
How should organizations approach and measure adoption of agentic AI when aiming to run AI workloads at scale in production?add
What playbook should an organization follow to enable, govern, and secure agentic AI workloads while balancing speed and risk?add
>> Welcome back to Google Cloud Next 26. We are live from Las Vegas. I'm Alison Kosik, alongside John Furrier. And now we're going to get into sort of the enterprise leadership perspective. And why do you think this is an important piece of the puzzle?
John Furrier
>> Because the full stack that Google announced today, which has moved from tools to operating systems for agentic, is all powered by AI infrastructure. That involves a lot of different things, the holy trinity of compute storage and networking and database. All that stuff's going to be substrate that agents will build on. So there's a lot of stuff under the hood. And this segment's going to be great because we're going to expand on that and see how it all plays out.
Alison Kosik
>> All right. Let's lift the hood with Ram Poornachandran. I just butchered your name. One more time. Ram Poornachandran.
Ram Poornachandran
>> That is it.
John Furrier
>> Got it.
Alison Kosik
>> There we go. Yay. You're the VP of AI and architecture, and Gary Newe, you're the VP of solutions engineering, both from F5. Welcome to TheCUBE.
Gary Newe
>> Thank you.
Alison Kosik
>> So, let me start, and either of you can jump in to the first question. There's just a lot of hype about gen AI productivity gains. From an enterprise leadership perspective, where do you think these deployments typically fall short when they try to scale?
Ram Poornachandran
>> Sure. I can jump in. So generally when we think about AI, you have two pieces, right? It's your personal productivity and also process productivity. Everybody has incorporated AI into your day-to-day, but that doesn't translate into big gain for an enterprise. So where we see big gains is, how do you incorporate AI into existing processes? How do you bring... We are clearly in the agentic era today. As part of that is, how do you deploy agents and incorporate into your process? The other angle to that is, AI cannot be like a bolt-on. You have to completely redesign your business processes, whether it's front office, back office, as to talking about being AI native. So we went from cloud native to now being AI native. So in this world, you have to redefine your process as AI native. And change management is also part of this. How do you bring your workforce along? There's a genuine fear, right? Is AI taking the jobs, right? Bringing your employees along this journey, telling them, "Don't fear AI." Experiment. Learn. There's giving them a lot of opportunities to learn and experiment, and creating that environment where they can actually experiment and thrive, learn on their own is important.
John Furrier
>> Google announced the full stack, I mentioned that on my little intro there, that's enabling this, that's accelerated. It used to be disruptive enablement when someone would lose, but in this market, everybody's winning. So it's almost like disruption and acceleration enablement, because now there's so much going on, and hybrid cloud distributed computing has won the game. Heterogeneous environments. We've all been there. In networking., You guys know this.
Ram Poornachandran
>> Yeah.
John Furrier
>> This is now standard.
Ram Poornachandran
>> Yep.
John Furrier
>> Kubernetes is solidified. Containers, or roles of that is super important. Take us through your thoughts on this, because this is where the action is right now. The agentic enablements, it's almost that control plane. And underneath that control plane is a lot of stuff. Networking, compute. There's going to be agents in the DevSecOps area. So things like security and compliance all have been worked on. So what's your guys' reaction to that? What's your opinion?
Gary Newe
>> I think we're at the stage now where we're seeing a lot of people move from the proof of concept stage into production. And I think everything you've said there, the underlying infrastructure, the underlying security, the underlying compliance, we don't know if they're ready yet. And I think a lot of people are finding out that they are not ready in some cases, and that's kind of causing some delays. So we're at this kind of incubation stage with all these new ideas, like Ram mentioned, we have to reinvent some of our processes, our business processes, to take advantage of these things. So there's a lot of work to do, and we have to build a solid foundation, I think, to build upon these.
John Furrier
>> Remember the, not to throw back some history here, but remember in the cloud native way, which actually still happened early days, when shift left came out?
Gary Newe
>> Oh, yeah.
John Furrier
>> Remember shift left? Get some security into the CICD pipeline? Okay, that made a lot of sense. That's just evolution. Security in agents is huge, because you see OpenClaw. It's certainly not ready for the enterprise. There's malware everywhere. But it shows the future. Take us through how you guys see that, because as you rethink agents to run in production at scale, you got to nail security.
Gary Newe
>> You do.
John Furrier
>> You got to have delegation. It's not just APIs. Agents will go across boundaries. How hard is that? Scope the problem statement. Because I think this is where people get confused. "Oh, agents, they'll just solve everything."
Ram Poornachandran
>> It's incredibly hard problem, right? It's an incredibly hard problem. So when you think about enterprise applications, it's not just the data. It's not getting data to the agents. It's getting data to the agents and applying proper rollback access security to it. Is the agent working on my behalf and is able to see only the data that I'm allowed to access? So, how do you bring that control layer, and how do you then have the agent take an action on behalf of you, and have the proper audit trail to say, this agent took an action on behalf of me and I can have that proper trace back. If I need to go in front of an auditor or something, I can say, "This agent took action because of this. Here is the audit trail." And also the accountability saying. "This agent took this action because this is the model it was trained on, this is the data it was trained on." So having that layer is very important.
John Furrier
>> Jensen Wong said a year and a half ago, maybe two years ago, "IT will be the HR department." Well, what he kind of meant was, kind of a tongue in cheek comment, but he wasn't wrong, actually. You're going to have tokens. They're going to be for work. The agents are doing work. But there's going to be a lot of, did they do the job? So HR would say, "Hey, performance review." That's basically getting at observability, and observability is not a new concept. But can you guys share your thoughts on this, because it's the same paradigm, but applied to a different situation. What is that thread? How do you connect the dots between observability that we know in networking and cloud native, and how does that translate into the agents?
Gary Newe
>> Well, I think that the lens you have to observe it through has to be a little bit different, because we can't look at the packets or the API calls. Because these systems are non-deterministic, you have to look at the intent, you have to look at how that intent changes based on the context, and you have to be able to build a picture around that. Like Ram said, you have to be able to audit why the agent made the decision, when the agent made a decision, and what the agent actually did. And to do that, then you need, like you said, you need this thread that connects everything together and puts some guardrails around it. And we believe that these guardrails are best suited at the inference there. Where the actual agents are making these decisions and interpreting this data, that's where we believe is kind of the place to secure this at scale.
John Furrier
>> And Ram, real time, too. This is not like, hey, not postmortem.
Ram Poornachandran
>> And the other important aspect is evaluations, right? So there's the whole thing of evaluations, it's agent testing, we call. It's just not a one-time activity. If you take an agent and you want to put it into production, it's not a one-time activity. You need to continuously monitor, is the agent trying to do the same thing? You don't want drift in the system, new data coming in, new models coming in, and the agent could not necessarily produce the effectiveness as when you start building it, can drift over a period of time. So that's why observability is important to say, is agent doing every day. And if it drifts, who does the agent call?
Alison Kosik
>> Sorry, he had me at intent. Stop for a second. Hold on one second. Seriously? Intent?
Gary Newe
>> Intent.
Alison Kosik
>> This is something that can be built in?
Gary Newe
>> Yeah.
Alison Kosik
>> Talk me through that. Sorry. This is, I'm just walking in on this conversation here.
Gary Newe
>> But you have to be able to understand what the goal is. So if in this new world for IT or HR, employees are going to be directors of agents, and they will have their agents, and then you have to ask an agent what you want it to do. The agent has to interpret your intent. It's not enough if you say, "Agent, create a PowerPoint." It has to say, "I need to create a PowerPoint based on Gary. Gary has this role. Gary uses this data in context. So I know he said this, but his intent is this. I'm going to do it based on this information."
Alison Kosik
>> How do you put up the guardrails, though? I mean, it seems like that's open to just tons of liability.
Gary Newe
>> It is. It is. So there's two things that I think are key, from what we're seeing from our customers, and even internally, ourselves. The first thing is continuous testing, and to be able to test these agents. And not just network testing, but again, test their intent, test the outcomes. Really, I guess we call it agentic warfare, but really kind of penetrate what they want to do, understand their thinking. Ask them the same thing five different ways so you understand then, roughly what you're going to get back. And then to take that knowledge and then put guardrails around where these decisions are being made. And the guardrails then will allow you to kind of really define, well, what can I ask this agent? What can this agent do within the context that it has? And what is this agent allow to output? Because if we don't do that, we'll have agents... There was...
John Furrier
>> Hitting databases, HR, payroll, root access to the servers.
Alison Kosik
>> Yeah.
Gary Newe
>> Or you see two agents argue against each other and run up a massive cloud bill, right?
Ram Poornachandran
>> And the other thing is-
John Furrier
>> Or agents that are adversary.
Gary Newe
>> Yes.
John Furrier
>> You can have malicious agents, espionage coming into the agent organization.
Alison Kosik
>> You just blew my mind.
John Furrier
>> I mean, warfare, I like the warfare methodology, because there is going to be fleets of agents, workers. So really kind of training them like an employee is really kind of like what you're kind of getting at.
Ram Poornachandran
>> And even when the agent, apart from the agent, actually from an employee perspective also, there's a lot of prompts that are happening, right? How do you actually protect your enterprise data from prompt injection or model hijacking, right? That's how our products come in really play into the ecosystem of AI security. We are super excited to announce the F5 Guardrails in our agent ecosystem, that announced today, as AI gateway ecosystem, right?
Gary Newe
>> Yeah. It's Google's Agent framework. Their Agent Cloud framework.
John Furrier
>> And what does it do?
Gary Newe
>> So, Google have built a framework that allows companies like F5 to insert their solutions in to make it easier for our customers. Because let's face it, look around, this is not an easy solution, and it takes an ecosystem to solve these problems. And so, working with partners like Google and a lot of other vendors around, we can come together, because we have to as an industry, at this time of massive evolution, come together and just help our customers really get the benefit of what we can see coming down the road.
John Furrier
>> Paint the picture of that, because I was going to ask that question. You jumped ahead of me on that one. Where does F5 fit into the puzzle piece of Google Cloud? Because you guys have a lot of different things. You got the announcement. Just paint the picture for us, where you guys fit in the stack, what role do you play, and why?
Gary Newe
>> So, our role in the stack with Google Cloud and with our customers who have hybrid cloud, because that's pretty much kind of what we see globally right now, is we help our customers secure and deliver their applications, and secure and deliver their agents and their AI-based applications. And we do that through integrations with Google Cloud natively and through Google Cloud Marketplace. We also do that, and we're doing this ourselves with our own implementation of Gemini, is using our own F5 AI Guardrail solution to enforce that control around what these agents and these systems can do, and to use F5 AI Red Team, which is the other solution, to continually test these solutions. And working with Google, we're making these available in the marketplace for our customers to be able to one click deploy and just use them very, very easily.
Ram Poornachandran
>> Yeah. And then we have the full stack, right? So when Gary talked about Red Teaming, we have Red Teaming and we have Remediate, and that actually flows into our Guardrails product. So it's a full integrated AI observability, or like protecting.
John Furrier
>> Yeah. It's like a trial run. It's like, okay, let's go figure out what they can do...
Gary Newe
>> Yeah, and see what happens.
John Furrier
>> And see what happens. It's the Red Teaming. Okay, let's simulate. Simulation's a huge concept right now. Well, we just had a great conversation with McKinsey on digital twins. On all levels of the stack. You got low levels. Okay, what's the hottest area for you guys right now, in terms of under the hood? Because obviously, you got all kinds of connectivity and connective tissue that's not just networking, pure networking, but it's networking paradigms. What's hot for F5 right now?
Ram Poornachandran
>> So internally, we started the agentic AI journey. So we actually started being a Google Gemini customer now. And we believe, like you said, how do we plan for our work source to actually embrace this new AI native way of working? That's all on the enterprise side. On the product side is, how can we be best customer internally, right? How can we take our products, Red Teaming, Guardrails, and integrate that with Gemini, and show this model way of being customer zero and enabling the organization. We have 6,500 employees. How do we make everybody a 2X person, right?
John Furrier
>> Yeah. And AI workloads are going to be running that. The number one question I get, or not question, but comment from practitioners, is our north star is to run AI workloads at scale in production.
Ram Poornachandran
>> Yeah.
John Furrier
>> Just, that's a north star. Everyone strives for that goal. What would be your advice to that? Because now we're starting in the first inning, whatever you want to call it, early days of agentic.
Ram Poornachandran
>> So, the goal is, how many agents each employee is going to have, right? And that is, it's an important metric to have. And once you have the agents for you, how many times the agents are getting activated? That tells you how often you're using it, how effectively you're using it. Then we can get into the outcome measurements. But right now in this agentic world, it's more about adoption first. Let's adopt it, right? How many agents can you build by yourself? How many agents the organization can build? We are in this race to build agents.
John Furrier
>> It sounds like a budget meeting. Okay, I got headcount over here. Well, your agents aren't really doing much work. Let's move them over to Gary's department.
Gary Newe
>> Retire these agents, right?
John Furrier
>> Get rid of those agents, kick them out. All kidding aside, but this is the new reality.
Ram Poornachandran
>> Yeah. How many agents? How many agents are we going to build?
Gary Newe
>> I think it's important to understand as well, you have to have a framework in place. And what we're seeing with our customers is that it's not just about how many agents do you deploy? Is my infrastructure up to it? Agents don't sleep. Workers take time off to take vacation. Agents go 24/7 all the time. Agents have access to all of your data. Can they get the data fast enough to make them productive? So there's underlying infrastructure questions, there's underlying policy questions, there's underlying kind of new tools questions that we have to answer as an industry. And we're working with a lot of our customers on these problems right now, and we're seeing tremendous growth in these areas.
Ram Poornachandran
>> Governance is super important. Like Gary said, governance is super important. We got to get that correct.
John Furrier
>> What's going to separate the leaders, final question, from the laggards? Because the old days was, let's see how it plays out. But the model shifted. You got to get in the game. You guys got the guardrails, great announcement, good to see that. So certainly, we see that happening, but what's the template? What's the best practice for not missing the boat, so to speak? At the same time, keeping up to speed on the velocity. I mean, I've heard people say, "I locked in on this model and all of a sudden a new model came out." So it's definitely a systems game, for sure.
Ram Poornachandran
>> So, the playbook is going to be, we got to have guardrails, right? So you have to have, sometimes go slow to go fast. We got to pave path for agent workloads. If you use these tools and technologies that are approved inside the corporation, you get like fast lane, right? If you want to play outside of those technologies that are not approved, you got to take a ticket, we got to do the evaluation, make sure security is good, we go through all the nine yards, and then enable those workloads. The other pieces, security, AI security and AI ops, is going to be foundational. Just like when we went from DevOps world to now this new agentic world, AI ops is going to be critical.
John Furrier
>> It's like AppSec reviews back in the old days, just applications reviewed, push it into production. That got faster.
Ram Poornachandran
>> Yep. We got to apply the same lessons to agentic world, too.
John Furrier
>> Yeah. Well guys, congratulations. And what do you think about the show?
Gary Newe
>> It's fantastic.
Ram Poornachandran
>> Fantastic.
John Furrier
>> Yeah. I mean, it's crowded, lines get in, lines around the corner.
Gary Newe
>> Yeah.
John Furrier
>> Google's Cloud Next is booming.
Ram Poornachandran
>> It is. It is. Yeah. It's gone... I've walked through the floor.
John Furrier
>> Yeah, it's awesome.
Ram Poornachandran
>> It takes multiple hours to walk through the floor.
Alison Kosik
>> Oh, it does. Yes.
John Furrier
>> Well, thanks for coming up. We appreciate it.
Alison Kosik
>> Thanks so much for your time.
Gary Newe
>> Thank you.
Ram Poornachandran
>> Thank you for having us.
Gary Newe
>> Thank you.
Alison Kosik
>> Got it. All right. You're watching TheCUBE, the leader in live technology coverage. We'll be right back.