In this interview from Google Cloud Next 2026, Waqas Ahmed, vice president of AI Engineering at OpenText, joins Yemi Falokun, global ISV partner solutions lead at Google Cloud, to talk with theCUBE's John Furrier about how decades of structured enterprise content are becoming the contextual intelligence layer that unlocks production-ready agentic AI at scale. Ahmed positions OpenText not as a storage platform but as the data context layer that makes autonomous enterprise workflows possible — feeding AI agents the right information at the right time while enforcing security policies, access controls and audit trails that make outcomes both reliable and explainable. Falokun underscores how Google Cloud's full-stack platform, from infrastructure to the Gemini Enterprise Agent Platform, provides the governed foundation that gives joint customers a secure environment in which to deploy those workflows at speed.
The conversation also explores how OpenText's Content Aviator and Aviator Studio are now integrated with the Gemini Enterprise Agent Platform through both A2A and MCP protocols, enabling customers to connect managed enterprise content to broader agent ecosystems without writing a line of code. Ahmed outlines a guiding principle — bring AI to the data, not data to the AI — explaining how first-party agents available on Google Cloud Marketplace enforce local governance and permissions while contributing intelligent context to multi-agent orchestrations. Falokun and Ahmed address private AI and data sovereignty, detailing how the partnership ensures all processing can remain within sovereign boundaries, with customer-managed encryption and regional data residency built in by default. From maturing interoperability protocols to agentic fleets operating in production, both guests provide a roadmap for how enterprises can move beyond experimentation to measurable productivity gains and demonstrable ROI.
Waqas Ahmed, OpenText & Yemi Falokun, Google Cloud
Waqas Ahmed
VP, AI Engineering, OpenText
Yemi Falokun
Global AI/ML Partner Solution Architect, Google Cloud
John Furrier
>> Welcome back to the Google Cloud Next live coverage with theCUBE. I'm John Furrier, your host. Coverage has been all about Google and its ecosystem, poised to be the first integrated agentic enterprise stack for applications and enterprises, where the infrastructure is optimized for agents, not just models, data becomes the contextual memory, and Gemini is the orchestration across everything. Of course, developers and enterprises will be jumping in and accelerating their gen AI efforts. We've got a great conversation here with Waqas Ahmed, VP of AI Engineering at OpenText, and Yemi Falokun, who's the Global ISV Partner Solutions Lead at Google Cloud. Gentlemen, thank you for coming on. Again, great show. You're starting to see the fruit coming off the tree here at Google Cloud Next. Yemi, it's been amazing. Waqas, thanks for coming on. You guys have quite the journey there. You're really in a good position, and you've been accelerating the GenAI piece. You've been leading that. But first, I've got a lot of topics, but set the table. Give us the new definition of OpenText. What are you guys doing? Where'd you come from and where are you today?
Waqas Ahmed
>> Thank you, John. Happy to be here. Thanks for the opportunity to share our experiences with you. OpenText is the global leader in information management. For decades, we have been helping the world's largest enterprises create, govern, secure, and exchange their information. We are probably best known for our document and content management systems, but we also help organizations with information that flows between companies, information they use to secure their enterprise against threats, information they use to create customer experiences, and information that IT organizations use to manage the internal enterprise. Pretty much any type of information, OpenText is there to manage it for you in a secure manner. Our AI journey has been interesting. From the get-go, we identified that the world's most powerful models were going to be insufficient on their own in the enterprise environment. What they need is the right context and the right data to get the right outcomes. And that's where OpenText comes into the picture. We have the context, we have the metadata, we have the information, and we help assemble and structure that in such a way that AI agents can make use of it and produce real outcomes for our customers.
John Furrier
>> Yeah, we talk about the partnership. I said on the open, the infrastructures continue to be invested in. We're seeing more and more news on faster, much more integrated systems, but they're optimized for agents, not just the models. And data becomes the contextual memory. As Waqas was saying, this has been the theme of the show. Talk about the relationship between OpenText and Google Cloud, because this is where the magic has been happening. We've been covering the marketplace all year, agents in action, and the people who have been integrating together have been leaders. Talk about the relationship with OpenText.
Yemi Falokun
>> Yeah, yeah. Absolutely. Thank you for having me. Great to be here as well. So I want to say that the generative AI journey started around May 2023, when OpenText began using the Vertex AI platform, now the Gemini Enterprise Agent Platform, to innovate and bring solutions to market. A simple example is Content Aviator. I think Waqas talked about decades of information, and that's the key piece: decades of information where users can just use Content Aviator to look at documents in a trusted and secure environment. Take a simple insurance claim, for example, where you can review all the information on a claim against the customer's policy and provide next steps targeted to that user, and be able to do it in their local language. That's just a simple use case. Now, fast-forward to where we are today. What they're doing now is taking the foundation they've built since 2023 into the agentic AI era. What does that mean? It means we're now deeply integrating the Gemini Enterprise Agent Platform so that we can allow our joint customers to deploy, like you mentioned, secure autonomous solutions at scale, leveraging those decades of information that they store on behalf of their users.
John Furrier
>> This is really a great example in a point in time in history where the ecosystem definition has evolved where the integration and the collaboration, it's not just API calls, there's a lot going on. The first topic I want to get to is agentic workflows. Okay. We've been talking about agents for a year, GenAI for a year in the enterprise, and there's been no strategy debate. Everyone wants to infuse GenAI and agents into their workforce. They see the value, and now they go, "Okay, how do I get this done?"
Okay. So you guys have the data. You've been doing this, you've been trusted with that at scale, and you have the workflows. Now with the agentic workflows, take us through how that plays out and what you're seeing. Because when you take the content management piece with Vertex, it's almost like the perfect intersection, the confluence of scale at a whole other level. You're going to have thousands of agents. Take us through the workflows and why they're so important in this enterprise content.
Waqas Ahmed
>> Absolutely, John. Enterprise information is not just files on a drive. It is organized, governed, tagged with context and metadata, and integrated with business applications and customer processes. So to wire that into the AI providers and LLMs, you have to be able to build that context so you are not flooding the LLMs with extra information, but you're giving them the right information at the right time and orchestrating across those processes and applications so that your outcomes are reliable as well as efficient and cheap to execute. And that's where OpenText comes in. We are building the data context layer, because we have been sitting on all these decades of information that customers have developed in our solutions. We can build that context and wire it into the LLM engines to actually build those end-to-end solutions. And that is the intersection, as you called out, John, where the world's best models and platform providers like Google intersect with our ability to understand, govern, and secure the data, find the right data set, feed that to the agentic solutions, and choreograph that across the processes to achieve success. So we are at a perfect intersection in this partnership to take the next step forward and put into execution what has been theory until now.
John Furrier
>> Yeah. And I love that example of the context. We've seen workflows before. People would statically put workflows together, and we'd code against them, but now context is non-deterministic. There's some determinism in there. Okay, fine. I'll give you that. But when you look at intelligence, what is the role of context? Why is context so important? And what happens when you don't have context?
Waqas Ahmed
>> Yeah, absolutely. Imagine you have an HR dataset in your system and you are implementing an automatic onboarding process. It is not enough for you to simply publish all the data to that agent and say, "Take action on it." You have to make sure that the security policies that the companies have been implementing for all these decades are not violated. The actions that are taken are in accordance with the company policies. The actions that are taken are deterministic enough, based on the guardrails, that the customers can replay the same conditions again and get the same outcomes. And there is visibility and audit tracking in the execution of those processes. Without context, an AI engine would have no way to reliably and consistently perform those actions. So not just aggregation of this data, but finding the right information in a secure way, information that's permitted to be available to that particular workflow at runtime, depending on whose behalf the workflow is running on, identifying those identities, all of that is what context engineering deals with. And without that, what you're going to get is answers that are either inaccurate, or unreliable in that you cannot reproduce them, or unexplainable in that you cannot trace how those actions were taken. And in the enterprise environment, as you would agree, John, you cannot have actions that do not conform to policies or cannot be traced back to actual drivers.
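[Editor's note: the permission-aware context assembly Ahmed describes can be sketched as a small filter that checks access controls and relevance before anything reaches a model. This is an illustrative sketch only, not OpenText's implementation; all document IDs, roles, and tags are hypothetical.]

```python
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    text: str
    allowed_roles: set  # roles permitted to read this document
    tags: set           # metadata used for relevance matching

def assemble_context(docs, user_roles, query_tags, max_docs=3):
    """Return only documents the caller may read, ranked by tag overlap.

    Security filtering happens first, so restricted content is never
    considered for relevance, let alone handed to an LLM.
    """
    visible = [d for d in docs if d.allowed_roles & user_roles]
    ranked = sorted(visible, key=lambda d: len(d.tags & query_tags), reverse=True)
    return [d for d in ranked if d.tags & query_tags][:max_docs]

corpus = [
    Document("hr-001", "Onboarding checklist ...", {"hr"}, {"onboarding"}),
    Document("fin-042", "Payroll bands ...", {"finance"}, {"onboarding", "payroll"}),
    Document("pub-007", "Company handbook ...", {"hr", "finance", "all"}, {"policy"}),
]

ctx = assemble_context(corpus, user_roles={"hr"}, query_tags={"onboarding"})
# Only hr-001 survives: pub-007 is readable but irrelevant,
# and fin-042 is relevant but not permitted for an "hr" caller.
```

Run on whose behalf the workflow executes: the same query with `user_roles={"finance"}` would instead return only `fin-042`, which is the runtime identity awareness Ahmed is pointing at.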
John Furrier
>> That's why I love the theme. I mentioned up top, data becomes the contextual memory, and that's a big topic in the agentic conversations. If you don't have the context, you're kind of dumbed down a bit. So the data becomes a holistic brain. You got short-term memory, long-term memory. It's kind of like a human brain, but it's got to work. Now, this brings up the theme here at Google, it seemed, because it's a better together story because what you guys are bringing to the table is key because you guys work with OpenText. So take us through the co-engineering, co-innovation with Google Cloud, and what does that turn into? Take us through that.
Waqas Ahmed
>> Sure. We started ... Sorry, go ahead, Yemi.
Yemi Falokun
>> Yeah, I was going to say, at Google Cloud, as you know, Waqas talked about some of what I would call the non-functional requirements, especially with a non-deterministic technology. That's why Google Cloud is probably one of the few vendors that has the full optimized stack, from the infrastructure to the models, to the tools, to the agentic platform. And this is what OpenText is using to implement that agentic platform and also provide the security, the governance, all the non-functional requirements that we need to provide for our joint customers. I sometimes call it the plumbing that is required so that our customers know that, to your point, they can build an agent that will take action in a secure environment that can be governed and audited within the platform that OpenText is bringing to the market using our services.
John Furrier
>> Waqas, weigh in on this, because I said in the opening too, it's from tools to operating systems. When I hear words like orchestration, scheduling, coordination, it sounds like an operating system. So take us through the co-engineering, the co-innovation with Google Cloud from your perspective, because you've got the data, you've got Aviator Studio from OpenText. How does it all come together?
Waqas Ahmed
>> Yep, absolutely. One key point that we often talk about is that to make AI successful in the enterprise environment, there is a lot of software architecture work that you have to do. You have to bring the AI intelligence, feed it the data, make sure that you can run this in a secure manner, make sure that you have observability systems where you can track progress, make sure that you have the necessary structures to explain the actions taken so that you can govern them, and make sure that you can keep it all secure. And Google has been able to provide us not just the intelligence in the form of the models, but also the underlying platform where we can use all these services to build an end-to-end functional system that is not only executing an agentic flow, but is also meeting the needs of audit and scale and visibility and explainability.
And using those capabilities, we have built our own Aviator Studio platform, which allows our customers to not just take advantage of the agents we provide within our own applications, but to come in and extend our capabilities by creating agents and agentic workflows of their own. And that allows them to take our agents, add their own, extend to their own enterprise, and build the end-to-end multi-application processes that they need within their enterprise environment.
John Furrier
>> So you allow them to work across the environment, not just be siloed or specific. That's what you're referring to.
Waqas Ahmed
>> Absolutely. And that's what we believe is going to be the agentic enterprise destination, because if you keep the agents locked up within their individual applications, all you have done is automated the existing applications for some efficiency, but that's not the true power of the agentic enterprise. The true power is in the choreography and orchestration across silos, across applications where data provides the intelligence and AI executes the actions to provide productivity, insights, and competitive advantage.
John Furrier
>> One of the themes at the show, and it's been in the industry too: it's beyond connectors, it's about intelligence. But MCP is a connector for many services across boundaries, and MCP has been great organic innovation. How is the MCP connector for Content Aviator going? How does it tie into the agent-build strategy of the Aviator AI strategy?
Waqas Ahmed
>> We have been a very strong proponent of that. We are going to make our agentic and AI offerings compatible with every ecosystem that we need to work in. We have partnered up with Google and made our Content Aviator, with other ones down the pipeline, available within the Gemini Enterprise environment and Google Workspace, offered through both A2A for agent-to-agent integration, as well as MCP-based data integration. A Gemini Enterprise customer, for example, can interact with the content that OpenText hosts for them through the agents and MCP integration that we have enabled, without doing any coding themselves. They can simply enable that connector and there you go. They have a richer ecosystem that OpenText participates in.
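[Editor's note: MCP messages use JSON-RPC 2.0 framing, so the "no coding" connector Ahmed describes ultimately exchanges requests like the one below. The tool name and arguments here are hypothetical; a real Content Aviator connector would define its own tool surface.]

```python
import json

def mcp_tool_call(request_id, tool_name, arguments):
    """Build an MCP tools/call request (MCP frames messages as JSON-RPC 2.0)."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# "search_documents" is an invented example tool, not a documented one.
req = mcp_tool_call(1, "search_documents", {"query": "claim 4417 policy terms"})
print(json.dumps(req, indent=2))
```

The point of the protocol is exactly what the interview emphasizes: the client never touches the data store directly, it only names a tool and arguments, and the server side enforces its own access rules before returning results.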
John Furrier
>> And that's going to help with developer choice too. I mean, this is one of the themes Google's been in play, even going back to the cloud native days, it's still one of the first principles of Google Cloud.
Waqas Ahmed
>> Exactly.
Yemi Falokun
>> Absolutely. Yeah, absolutely. At Google, we are model-agnostic, framework-agnostic. We even have platforms that allow you to run all the other frameworks and all the other models as well, because we just believe fundamentally that there isn't one model to rule them all. There are going to be models that are useful for specific use cases and work well for those use cases, so go use those models. But what we do within our platform, the Gemini Enterprise Agent Platform, is provide access to those models so you can use them in a secure environment and implement your use cases on top of that.
John Furrier
>> Well, the next topic I want to get into builds on that, because the Google Cloud Marketplace has been very successful, and I've been very bullish on marketplaces for over a decade. Now, they're AI marketplaces. So the notion of first-party agents on Google Cloud Marketplace has been discussed. First, define what is a first-party agent? What does that mean, and how does that change the game for folks who want to deploy agents? Because you mentioned it earlier, you provide agents so people can build more agents, multi-tiered, multi-party agents. What does first-party agent mean? What does it do to change the game?
Yemi Falokun
>> Yeah. When I talk about first-party agents, one of the things I want to point out is that we've been on this journey for about three and a half years, right? We've gone through RAG, we've gone through the semantic layer, but fundamentally, it's the folks that understand how to access the data they've been storing for decades, like OpenText. This data is structured, for lack of a better word. They know how they store the data, they know how to retrieve the data. So they are the ones now in a better position to build that semantic layer and provide the first-party agents that work with their data. And what we do with the marketplace, as you talked about, is enable those first-party agents so that our customers can discover them and use them with the least amount of friction possible. For example, even within Gemini Enterprise, you can bring up the agent gallery, see first-party agents from our partners, and request them, and that will go through some kind of provisioning process to make sure, again, there is a secure, controlled way to enable these agents for the business users that need them for their productivity activities.
John Furrier
>> Guys, weigh in on the first-party agents, because retrieval is a great core competence to come from. Certainly RAG was the low-hanging fruit two years ago, and that happened. Now you're getting into not just retrieval, you're getting into memory, you're getting into reasoning, you're getting into efficiencies that are coming in, you're seeing these agents. It's not about getting the first answer, it's about getting the right answer.
Waqas Ahmed
>> Exactly.
John Furrier
>> Take us through your view on this first-party agent and this idea of the shift from retrieval to a reasoning brain.
Waqas Ahmed
>> Yep. I think this has opened up an opportunity for us in the industry that didn't exist before. We believe that we should bring AI to the data, and not data to the AI. Because when you bring AI to the data, it means that you can still be in control of the permissions, the governance, the security, the access, and the relationships between the data, and the local intelligence can orchestrate across those entities and those stores to find the right answer, which can then be fed back to a larger choreography or a partner agent in that conversation. So bringing the first-party agents allows us to bring this intelligent context, and not raw data, in a secure manner to a larger ecosystem. This is something that had not been possible until now, because integrations were static and unable to traverse a larger data set without compromising security or guardrails. In this case, our intelligent agents can do the local enforcement of governance and policies and yet be able to interact with other ecosystem partners in a dynamic way.
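[Editor's note: the "bring AI to the data" pattern Ahmed describes — enforce policy locally, contribute derived context outward, never ship raw records — can be sketched as below. Record contents, field names, and the sharing policy are all invented for illustration.]

```python
# Hypothetical local record store; raw values never leave this module.
RECORDS = {
    "emp-17": {"name": "A. Rivera", "salary": 98000, "start_date": "2024-03-01"},
}

# Fields local policy allows the agent to contribute to external orchestrations.
SHAREABLE_FIELDS = {"start_date"}

def first_party_answer(record_id, question_fields):
    """Answer with derived context only: requested fields are filtered
    against local policy, and refusals are recorded for auditability."""
    record = RECORDS.get(record_id)
    if record is None:
        return {"record_id": record_id, "context": {}}
    allowed = question_fields & SHAREABLE_FIELDS
    denied = question_fields - SHAREABLE_FIELDS
    return {
        "record_id": record_id,
        "context": {f: record[f] for f in allowed},
        "withheld": sorted(denied),  # auditable trace of what was refused
    }

print(first_party_answer("emp-17", {"salary", "start_date"}))
# salary is withheld; only the permitted field reaches the wider orchestration.
```

The design choice mirrors the interview: a partner agent asking this first-party agent a question gets intelligent context plus an audit trail, while the restricted value stays inside the governed boundary.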
John Furrier
>> Yeah. And that brings the intelligence in. Well, I would be remiss if I didn't bring up private AI and specifically data sovereignty. We've moved past the cloud sovereignty conversation around GDPR and privacy. Now we're looking at privacy as well as governance, but there are also monetary advantages in these sovereign areas. Whether it's a country or a region, you can actually create value, not just protect data. So it's become a very nuanced point. Let's unpack private AI and data sovereignty. Waqas, let's start with you first. You've got the data.
Waqas Ahmed
>> We have a lot of customers who are very sensitive to privacy and sovereign boundaries. And we make sure that we are able to serve them where they are, once again bringing AI to the data. With our Google partnership, we support the ability to host the AI engines locally, adjacent to our applications, so the entire processing lifecycle can take place within their sovereign environments and no data leaves their premises. And I believe this also allows us to leverage our data strengths to offer our customers solutions that are tuned to their specific domain and their specific environments, without the concern that their sensitive domain data is leaving their environment. And Google has been a great partner in us pursuing that particular implementation pattern.
John Furrier
>> Yeah. Yemi, you want to weigh in here because Google has that footprint? This is an advantage.
Yemi Falokun
>> Absolutely. And from day one, we've always said we do not use OpenText's data or the end customer's data to train our models. That's been grounded in the way we work and in the platforms we offer to our customers. But more importantly, we have controls in our platform. For example, you can look at the customer-managed encryption feature, where you can encrypt the data yourself, just to give you that extra level of peace of mind. Everything we do is encrypted by default, but you can do that as well. What we also do is extend those controls to OpenText so they can provide services to their customers. We provide regional services, as Waqas mentioned, ensuring that personal information stays local to that region, because there are compliance and regulatory reasons for our joint customers. And we've always collaborated with OpenText, understanding the requirements of their customers and figuring out how we can implement solutions to meet those customers' needs. Not only implementing, but also proving to the customers that we have controls in place that they can see, so they understand that we have the private AI and the data sovereignty they're looking for.
John Furrier
>> Well, gentlemen, it's been a great show so far. We've had a lot of data on performance at the AI infrastructure level, from the chips to the software around them, enabling a very rapid acceleration of agents and agentic infrastructure in the enterprise, because AI-native coding is coming in and we're seeing the code coming out. Everyone's programming now. It's great. Even I'm coding away. I forgot all the syntax, but I just have the agent handle everything, so it's all good. The number one question, and this is the final point I want both of you to weigh in on: we're hearing words like fleets of agents, an army of agents. And there's a reference here in the US to the Navy SEALs, the most elite fighting force. Their simple paradigm is performance and trust: performance on the field, and trust in what you're doing when you're not on the field. And it kind of applies to agents. What's going on when the agents aren't working? What are they paying attention to? Are they performant? So talk about this dynamic of performance and trust, because the North Star is agents at scale in production. How important are performance and trust when it comes to agentic fleets and armies of agents?
Waqas Ahmed
>> It allows me to emphasize our strength in the enterprise space again, where we bring the data, which is necessary to achieve reliable results, because not having the right data often leads to customers feeding too much data to the AI. And that not only degrades the quality of your AI outcomes, it also affects the performance and cost of your AI outcomes. So this context engineering, and the context as a service that we are looking to build out and provide to our customers, helps in that regard. Secondly, I want to call out that we believe the agentic enterprise is not a single product. It's not a single solution. It's a new layer in the enterprise ecosystem. And we need to partner with companies like Google to ensure that we can provide solutions that bring ROI, but that also perform at the right levels and can provide the audit tracking and visibility. Our partnership with Google certainly allows us to provide that end-to-end stack of capabilities in the enterprise environment.
John Furrier
>> Great. And when you have confidence, trust comes into play. It all goes together. Yemi, weigh in on this whole performance-trust equation.
Yemi Falokun
>> Yeah, absolutely. You mentioned it, right? We need incredible model performance, and the models are going to keep getting better, especially for agents, and absolute data trust, which OpenText brings to the system. But more importantly, what we're doing is integrating OpenText Aviator Studio with the Gemini Enterprise Agent Platform, providing some of those plumbing services that they need to implement the security we talked about, to provide the trust, and to prove that there's trust in what they're providing from an agentic point of view. But looking forward, like you said, the models are going to get better, more tooling will be available, and the protocols will mature. You talked about a Navy SEAL team of agents; they've got to talk to each other. That's where protocols like MCP and A2A, which just went production-ready last month, are going to be key. And my hope is that by providing the plumbing, the services that allow our customers to start deploying, like you mentioned at the beginning, production-ready agents at scale, this time next year we'll be showcasing customer success stories where they're seeing the productivity gains, they're able to prove that these things are trustworthy, and more importantly, they're delivering value-added services to their end customers. That's my hope.
John Furrier
>> Well, we love this conversation on theCUBE. Obviously, when developers and deep tech come together with GenAI and agentic AI, the C-suite notices, as top-line revenue opportunities and productivity all come together. All three of those areas are really exploding in value. Again, all part of an integrated co-design, co-innovation plan. Gentlemen, thank you so much for the great content. We really appreciate it here on theCUBE.
Waqas Ahmed
>> Thank you, John. Thank you for the opportunity.
John Furrier
>> I'm John Furrier with theCUBE. We are covering Google Cloud Next 2026. Thanks for watching.