At Google Cloud Next '26, Benjamin Kennady, cloud solutions architect at Striim, and Vinod Ramachandran, senior product manager at Google Cloud, join theCUBE Research hosts Alison Kosik and John Furrier to examine modern agent-ready data architectures. Kennady and Ramachandran discuss streaming ingestion into BigQuery, AlloyDB and Iceberg-formatted lakehouses, real-time replication from operational databases, and how open formats enable cross-cloud access. They frame technical patterns for feeding AI agents at scale.
The conversation explores streaming ingestion pipelines and lakehouse interoperability, open formats such as Apache Iceberg, agent-ready infrastructure, and the operational requirements to support agentic AI workflows. Topics include real-time replication strategies for operational databases, design patterns for sub-second data movement, and approaches for enabling analytics across multiple clouds and systems.
Key takeaways include concrete recommendations for architecture and execution. Ramachandran notes that adopting open formats such as Apache Iceberg avoids vendor lock-in and enables immediate access across analytic systems, and he emphasizes interoperability as a foundation for multi-cloud analytics. Kennady emphasizes partnering with real-time replication platforms such as Striim to achieve sub-second ingestion and enterprise resilience. Both recommend beginning proofs of concept (POCs) now, avoiding paralysis and prioritizing architectures that deliver agent-scale real-time decisioning.
Vinod Ramachandran, Google Cloud & Benjamin Kennady
Vinod Ramachandran
Senior Product Manager, Google Cloud
Benjamin Kennady
Cloud Solutions Architect, Striim
In this interview from Google Cloud Next 2026, Benjamin Kennady, cloud solutions architect at Striim, joins Vinod Ramachandran, senior product manager at Google Cloud, to talk with theCUBE's John Furrier and co-host Alison Kosik about how real-time data pipelines are becoming the essential foundation for agentic AI at enterprise scale. Ramachandran explains how modern architectures on Google Cloud funnel high-throughput streaming data directly into analytical systems like BigQuery and AlloyDB, enabling agents to convert insights into actions at scale.
Keep Exploring
- What does a modern real-time data architecture on Google Cloud look like at a high level?
- What does the architecture for the modern AI/agent layer look like, and what architectural changes are needed so cloud-native agents can be fed real-time data?
- What are the key recent changes in enterprise data and database technologies, particularly around real-time data replication and the increasing centrality of databases for analytics and agent-based decision-making?
- How do you work with Databricks and other partners: are you collaborators or competitors, and what does that mean for customers who want a heterogeneous, multivendor, cross-cloud (hybrid) AI lakehouse that relies on open formats like Apache Iceberg?
- Why is real-time access to data important, and how is the analytics market shifting toward real-time analytics?
Alison Kosik
>> Welcome back to Google Cloud Next '26. We are streaming live here in Las Vegas. I'm Alison Kosik alongside John Furrier, and we're about to kind of enter the conversation about how real-time data isn't just speeding things up, it's turning data pipelines into kind of decision engines.
John Furrier
>> Yeah. Data feeds AI and everyone knows that. The platform, data cloud, Google's been great, expanding. But the key thing is having the connections into the compute and agents is going to be super critical. There's a lot of great work going on there. There's some big announcements here at Google Next. So this next session's going to talk about the plumbing, getting that data into the AI as fast as possible.
Alison Kosik
>> All right. Let's bring in our guests. We've got Benjamin Kennady, cloud solutions architect with Striim. Welcome to theCUBE.
Benjamin Kennady
>> Thank you.
Alison Kosik
>> And Vinod Ramachandran.
Vinod Ramachandran
>> Yep.
Alison Kosik
>> You're the senior product manager with Google Cloud.
Vinod Ramachandran
>> Yes.
Alison Kosik
>> Welcome to theCUBE as well. So let me start with this question. At a high level, what does a modern real-time data architecture on Google Cloud actually look like?
Vinod Ramachandran
>> Yeah, sure. Think about it, right? You're getting streaming data in at a very rapid pace. So real-time data, like time series data, any of the data, it's coming at a very high throughput. It's immediately available through streaming ingestion systems in key analytical systems, such as BigQuery or AlloyDB and so on and so forth. And customers can immediately get insights on that in their agentic systems, turning insights into actions. And we are able to do it at agent scale today at Google.
Alison Kosik
>> Amazing.
John Furrier
>> So real time is a big focus. Data lakes, love data lakes. But now you have data lakes and other things.
Vinod Ramachandran
>> Correct.
John Furrier
>> What's the architecture look like for the modern AI layer? Because the agents need to have all the cloud-native goodness with Kubernetes, containers, but now this agentic layer's emerging, the control plane. They need to get fed the data. What are some of the architectural changes that are new and what do they do?
Vinod Ramachandran
>> Yeah, great. So here's what customers don't have to do: they have existing ingestion pipelines. They have disparate data sources. Think about Oracle, MySQL, SQL Server, and so on and so forth. These systems already can feed data into an object store, like Cloud Storage. But if they pick open formats such as Iceberg, they can immediately get access to this data in BigQuery or AlloyDB and so on and so forth. So what this really means is that they don't have to make big deltas or big changes. What they have to do is make sure that agents can immediately access them, and that's the fundamental pivot. And with partners such as Striim, they can actually do this today. And so Ben can talk to you about key ingestion sources that enable you here.
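The pattern Ramachandran describes, landing data once in an open format so any engine can read it, can be sketched in a few lines. This is an illustrative simulation, not Google Cloud or Apache Iceberg API code; the `OpenTable` class and its methods are hypothetical stand-ins for an Iceberg-style table.

```python
# Hypothetical sketch of "write once in an open format, read anywhere".
# A real pipeline would write Iceberg files to object storage; here a
# list of row dicts plus a schema stands in for the open-format table.

class OpenTable:
    """Stand-in for an Iceberg-style table: shared data, many readers."""
    def __init__(self, schema):
        self.schema = schema
        self.snapshots = []          # each append creates a new snapshot

    def append(self, rows):
        for row in rows:
            assert set(row) == set(self.schema), "row must match schema"
        self.snapshots.append(list(rows))

    def scan(self):
        # Any engine (an analytics warehouse, an operational database,
        # a Spark job...) reads the same snapshots, with no copies made.
        return [row for snap in self.snapshots for row in snap]

# One ingestion pipeline writes...
orders = OpenTable(schema=["order_id", "amount"])
orders.append([{"order_id": 1, "amount": 40.0},
               {"order_id": 2, "amount": 15.5}])

# ...and two independent "engines" read the same table.
analytics_view = orders.scan()
agent_view = orders.scan()
print(len(analytics_view), sum(r["amount"] for r in agent_view))  # 2 55.5
```

The point of the sketch is the single shared table: the "big delta" customers avoid is maintaining a separate copy of the data per engine.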
John Furrier
>> Ben, take us through some of the key changes.
Benjamin Kennady
>> Yeah. So Striim really enables you to do that real-time data replication at scale. We're designed to do that ingestion from your Oracle and your SQL Server and your operational databases, and then replicate that data in real time with sub-second or second latency into your analytic systems, so that those agents can then be used to actually make those real-time decisions.
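A minimal sketch of the change-data-capture replication Kennady describes: a source exposes an ordered change log, a replicator applies each change to the target and tracks the worst-case lag. All names are illustrative stand-ins, not Striim's actual API.

```python
# Hypothetical CDC replication loop: source change log -> analytic target.
import time

change_log = []                      # source DB's ordered change stream
target = {}                          # analytic target (key -> row)

def emit_change(op, key, row=None):
    """Record a change in the source's log with a capture timestamp."""
    change_log.append({"op": op, "key": key, "row": row, "ts": time.monotonic()})

def replicate(log, target):
    """Apply each change to the target; return worst-case lag in seconds."""
    max_lag = 0.0
    for change in log:
        if change["op"] == "upsert":
            target[change["key"]] = change["row"]
        elif change["op"] == "delete":
            target.pop(change["key"], None)
        max_lag = max(max_lag, time.monotonic() - change["ts"])
    log.clear()
    return max_lag

emit_change("upsert", 1, {"status": "shipped"})
emit_change("upsert", 2, {"status": "pending"})
emit_change("delete", 2)
lag = replicate(change_log, target)
print(target)                        # {1: {'status': 'shipped'}}
print(lag < 1.0)                     # sub-second lag in this toy run
```

A production replicator adds ordering guarantees, checkpointing, and recovery on failure, which is exactly the enterprise-resilience work Kennady returns to later in the conversation.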
John Furrier
>> You know, this is our 17th year doing theCUBE. We've had many deep-tech conversations or enterprise conversations. I'd say over the past couple of years, the word database comes up here and there. This year, databases are front and center because agents need to access all the data. So you got data lakes. I mean, database, we've covered databases for sure. Who doesn't love databases? But it hasn't been the central thing. Now you've mentioned Alloy, you got Spanner.
John Furrier
>> The databases are out there. You got the pipelines. How does the architecture change? Now that dream scenario of many databases is actually coming to fruition.
Benjamin Kennady
>> Yeah. You want to use the right database for the right use case here. And so that some of the challenges are that you want to replicate from all these different operational systems into all of those different database targets, so writing into Spanner or AlloyDB and your Iceberg tables all at the same time, all in real time. And Striim allows you to do that and allows you to do that replication with a simple, straightforward methodology and pipeline.
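The "write into Spanner, AlloyDB and Iceberg tables all at the same time" pattern Kennady describes is a fan-out of one change stream to several sinks. The sketch below simulates it with hypothetical in-memory sinks; it is not real client-library code for any of those systems.

```python
# Hypothetical fan-out: one change stream applied to multiple targets
# in the same pass, so every downstream system sees the same ordered stream.

class DictSink:
    """Keyed sink, like an operational or analytical database table."""
    def __init__(self):
        self.rows = {}
    def apply(self, change):
        self.rows[change["key"]] = change["row"]

class AppendSink:
    """Append-only sink, like writing change records to Iceberg files."""
    def __init__(self):
        self.log = []
    def apply(self, change):
        self.log.append(change)

def fan_out(changes, sinks):
    for change in changes:
        for sink in sinks:
            sink.apply(change)

spanner_like, alloydb_like, iceberg_like = DictSink(), DictSink(), AppendSink()
changes = [{"key": "a", "row": 1}, {"key": "b", "row": 2}, {"key": "a", "row": 3}]
fan_out(changes, [spanner_like, alloydb_like, iceberg_like])
print(spanner_like.rows)             # {'a': 3, 'b': 2}
print(len(iceberg_like.log))         # 3 (append-only keeps every change)
```

Note how the keyed sinks converge on the latest value per key while the append-only sink retains full history; picking "the right database for the right use case" is largely a choice between these two shapes.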
John Furrier
>> Data gravity comes up a lot with ... I mean, since the Hadoop days, not to date myself, but data does have gravity, but data pipelines are out there too, and some are confused by the fact that, "Okay, I have pipelines. I rely on them. They've been doing a good job, but now all this new stuff's coming out there." So there's, I won't say fear, uncertainty and doubt, but it's more confusion. What is the impact of the gen AI stack, full stack, to the pipelines? Is it a win? Is it a challenge? How should people be thinking about their data pipelines? Because generating pipelines on the fly, to me, is a feature.
Vinod Ramachandran
>> Of course it is. It's an absolute win.
John Furrier
>> It's a huge win. So clarify the impact, the data pipeline, what it means to the architect, the developers, and the infrastructure folks.
Vinod Ramachandran
>> Yeah. So think about the full stack, right? So you have the compute, but then you have the differentiated storage on, say, some system like Cloud Storage. Your automated pipeline can now just write straight into open formats like Iceberg, and it's immediately available in all these analytical systems. So it's an absolute win. So analysts or systems engineers building these pipelines can just make this tweak, use open formats and see it immediately accessible. So it's actually a feature, not a bug.
John Furrier
>> Yeah, yeah. So open formats is key?
Vinod Ramachandran
>> Yes.
John Furrier
>> All right. So there's no issue. So the answer is go open formats-
Vinod Ramachandran
>> Without compromise....
John Furrier
>> and magic happens.
Vinod Ramachandran
>> Yes.
John Furrier
>> All right. What's your take on that? You obviously would agree, right?
Benjamin Kennady
>> No, absolutely. And really riding into all these different open formats and being able to do that in real time unlocks all of your agentic workflows. So you can build agents on top of it and they can make the best decision and the right decision immediately and get that value.
John Furrier
>> All right. So I got to ask the lakehouse question to the PM on lakehouse. Congratulations, by the way.
Vinod Ramachandran
>> Thank you.
John Furrier
>> We've been covering Databricks and Snowflake. Databricks, obviously here is a partner. How's that impact the customer? People want heterogeneous, multivendor distributed computing, hybrid cloud with AI. What's the role of Databricks? Do you guys partner? Are you guys frenemies? How does that work?
Vinod Ramachandran
>> Look, we are partners. So we actually announced our partnership at this conference. So we have a very collaborative ecosystem. Our lakehouse is an AI-native cross-cloud lakehouse. So this cross-cloud lakehouse supports open formats such as Apache Iceberg, but we also support the Apache Iceberg REST Catalog, and we collaborate with partners such as Databricks. So we are able to immediately access an Iceberg table from Databricks cross-cloud, and we demonstrated it in the keynote and different conference talks, and this session as well. So we are partners and our goal is to unlock and enable customers using open formats without compromise across clouds.
John Furrier
>> Benjamin, what's your take on the do-it-yourself pipelining versus managed services? Because you're going to have a lot more interactions with the data lakes across multiple databases, open formats are popping.
Benjamin Kennady
>> Yeah. There's a lot of different solutions out there and you can always build that solution yourself, but the challenge that you always run into there is that it becomes very fragile and it becomes a lot to manage. And so whenever you need that solution that's going to work at your scale in really, really large, complex organizations, you don't want to add any more complexity, but you need that simplified solution that you know is going to work and always just going to function and has all those kind of enterprise self-recovery and those kind of features on top of it. And so that's whenever you'd want to work with a partner like Striim to help you solve and resolve that problem.
John Furrier
>> Talk about the customer proof points because ... Put it into action. Give us an example. How are people rolling this out? How do you see your product evolving? Start with some customer examples.
Vinod Ramachandran
>> So we have a story with UPS, so Ben, you want to cover that?
Benjamin Kennady
>> Oh, yeah, absolutely. So we've worked directly with UPS. During COVID, they ran into an issue where they had a lot more fraud and risk and package theft, and so they used Striim along with Google to help resolve and solve that problem at scale. Their current architecture and their current solutions were failing to keep up with that scale. So by integrating and using Striim to pull from SQL Server and structured and all this unstructured data, they were then able to replicate that data in real time into BigQuery along with GCS, and then build agentic models on top of that to reduce their fraud risk, drive revenue, and reduce the package theft impact.
John Furrier
>> Everyone's experienced package lifting. Five-finger discounts we used to say. What was the blocker, access to the data? Or what was their challenge? What specifically was the issue?
Benjamin Kennady
>> Yeah. So there were really two main challenges there. The first was scale, so being able to replicate that data in a simple fashion, as well as the real-time aspect. Package theft detection and fraud risk are happening in real time, and therefore you need that data in real time for your agentic workflows and to build these kinds of agentic models on top of it.
Vinod Ramachandran
>> And the other thing is the entire ingestion system itself. They had disparate data sources, structured data, unstructured data, multimodal data, all ingested into one system to come up with this insight. So I think that's the key too: ingesting all these disparate sources and turning insights into actions in near real time.
John Furrier
>> What was the outcome of that? Was it just they were taking what, images? Was it delivery? What was the business function?
Benjamin Kennady
>> Yeah. So it was using images and analytic real-time data, as well as data from emails, and integrating all that together into one platform in real time and they use Striim to do that and accomplish that.
John Furrier
>> Can I connect my Ring Camera to it and send it?
Alison Kosik
>> Yeah, I was just going to say. The pictures are spot-on, by the way.
John Furrier
>> I mean, we all have seen that, "Here's your picture." It's on ... Service delivery's all going to multimodal.
Benjamin Kennady
>> Yeah, and so-
Vinod Ramachandran
>> Go ahead.
Benjamin Kennady
>> And UPS use Striim along with Google to solve and resolve that problem and to reduce that package theft at scale across the whole country.
John Furrier
>> Awesome. How can other customers ... Obviously big enterprise have a lot of legacy. What's the roadmap there? What's the prescription to the enterprise?
Vinod Ramachandran
>> Yeah. So if you standardize your lakehouse across your different sources, wherever your disparate data is, you can just bring it into a lakehouse architecture and then build it ground-up from there. Immediately you can then access it from all these agents. And then the most important thing: customers were doing this batch analysis. Now you can do it in real time and turn the insights into immediate actions. Agents take actions, and that's where you prevent things like fraud, threat detection, and so on and so forth. There's a gamut of use cases across industries where they can use this.
John Furrier
>> Yeah. Now, you guys are feeding the agents. This is the requisite requirements to get that data-
Vinod Ramachandran
>> Correct, correct....
John Furrier
>> in open formats. Okay, it's there. Where does it go to the ... How does the agents interact with this? How should people think about this from an enablement standpoint?
Vinod Ramachandran
>> Yeah. So, for example, think about a retail use case. You're trying to do returns, and once the agents are able to access the data, where earlier on you would run SQL queries, things like that, to figure out what's happening, what's the return rate, and the marketing team would build a dashboard, now you can just run a few prompts. "Tell me about this data. Tell me why these returns are happening." It gives you an immediate histogram and it'll say, "Okay, all right, so this segment, this is happening." And it could be three reasons: seasonality, or, "This product, there's an issue with sizes."
So the chief marketing officer can immediately see this in a dashboard and say, "Okay, I'm going to change this for this region. I'm going to reallocate the products this way and take that action." And that's the core value. We are enabling customers to really get time to market much faster.
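The return-reason histogram Ramachandran describes, the analysis that used to need hand-written SQL and a marketing dashboard, reduces to a group-and-count. The sketch below is illustrative; the sample rows and field names are invented for the example.

```python
# Hypothetical return-reason analysis: count reasons within each segment.
from collections import Counter

returns = [
    {"region": "west", "reason": "size"},
    {"region": "west", "reason": "size"},
    {"region": "west", "reason": "seasonality"},
    {"region": "east", "reason": "damaged"},
]

def histogram(rows, segment, dimension):
    """Count `dimension` values within each `segment` value."""
    out = {}
    for row in rows:
        out.setdefault(row[segment], Counter())[row[dimension]] += 1
    return out

hist = histogram(returns, "region", "reason")
top_west = hist["west"].most_common(1)[0]
print(top_west)  # ('size', 2) -> sizing is the top return driver out west
```

In the prompt-driven flow he describes, an agent generates and runs this kind of aggregation from a natural-language question rather than a hand-built dashboard query.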
John Furrier
>> You're bringing intelligence into the business logic and the interactions at the point of query.
Vinod Ramachandran
>> Yeah. Moving from human scale to agent scale, that's the key.
John Furrier
>> All right.
Alison Kosik
>> From a C-suite perspective, what are the biggest risks in this?
Vinod Ramachandran
>> I think it's just a win. As long as they're on the right architecture with open formats without compromise, and with the right partner, such as Google, I think they're positioned very well.
John Furrier
>> First of all, the CFO's going to love it. If you get the returns down and you've got the fraud reduction, that's a metric. We're starting to see business engineering going on. It's almost like how we deal with coding, and the C-suite could just say, "Solve that problem."
Vinod Ramachandran
>> Yep. No.
John Furrier
>> "Talk to the AI. Oh, it's good enablement."
Okay, what's on the roadmap? Give us a taste of what's next because we're going to see a lot more things come to the table in the next six to 12 months. I mean, it's changing so fast. Velocity is a huge concern, not concern, but more of opportunity to manage. You lock in on a model or you lock into an approach, you don't want to have to pivot.
Vinod Ramachandran
>> That is true.
John Furrier
>> We want to actually build on a trajectory. How do you guys see that evolving?
Vinod Ramachandran
>> Yeah. So we have an ecosystem that has no lock-in. So you can use our first-party and open-format systems without compromise. We offer differentiated capabilities on our first-party products such as BigQuery, managed Spark, AlloyDB, and so on and so forth, but we do not have any lock-in. So we are bringing advanced differentiation from BigQuery. For example, we talked about ingestion and real-time streaming. So we solve these problems at scale, bringing Google's history of over 10 years onto these products. But if you're on these open systems, you have no lock-in. So you're getting all the best goodies of Google without having any lock-in.
Benjamin Kennady
>> Yeah, and really, your end state of data is real time. You want the most up-to-date data to make the right decisions always. And the only way to do that really at scale is to work with a partner like Striim to grab all these different data sources and replicate them into your open formats. And that way, you can have and make the correct insight always across all your data.
John Furrier
>> It's interesting how the real time is such a great feature and super important. Dave Vellante and I have a joke on theCUBE, and this came out when digital news started coming because we're digital-first. New York Times, Wall Street Journal, that's yesterday's news. And that's a lot of the dashboard world we live in, postmortem analysis. Yeah, great to send reports to people who want to just catch up, but for real time, you want real-time access. You want what's happening now, not what happened yesterday.
Vinod Ramachandran
>> Correct.
John Furrier
>> This is where the analytics market's shifting and they're data hounds too. They love data. I mean, you look at Procter & Gamble, CPG companies, we had a few of them on. Some of these companies are leaning in and have done a lot of data, regulated industry like healthcare. I mean, massive market. They've done a lot of data work.
Vinod Ramachandran
>> That is correct.
John Furrier
>> And so now they're up-leveled.
Vinod Ramachandran
>> Very true. And then basically they can now take actions very quickly and I know that really enables and creates value for users and customers across these segments.
John Furrier
>> We're going to have to add that to our Agents and Action series, which has been very popular. People want-
Vinod Ramachandran
>> Oh, very nice.
John Furrier
>> And that Google sponsor, thank you very much. People want to see the execution.
Vinod Ramachandran
>> That is correct.
John Furrier
>> Because there's no strategy risk at this point. Everyone's saying, "Take AI and infuse it in all aspects of the business." Engineering, deep tech, C-suite, all happening at the same time. Where's the execution? So that's the number one question we get here on theCUBE. What's the execution risk?
Vinod Ramachandran
>> I would say execution risk is like people should be in this right now. This is the space to be. Have your teams start building POCs and engagements. We at Google are here to serve you. So if you have an interesting use case, talk to us and then we can really land it for you.
John Furrier
>> The execution risk is don't get paralyzed.
Vinod Ramachandran
>> Yeah, take action.
John Furrier
>> Take action. Get in. That's the execution. Guys, thanks so much. Appreciate it.
Vinod Ramachandran
>> Yeah.
Benjamin Kennady
>> Yeah, thank you very much.
Alison Kosik
>> Great conversation. Thank you. And you've been watching theCUBE, the leader in live technology coverage. We'll be right back.