In this interview from Google Cloud Next 2026, Paul Lewis, chief technical officer of Pythian, joins theCUBE's John Furrier and co-host Alison Kosik to discuss the critical shift from AI experimentation to production-ready, AI-native operations across the enterprise. Lewis, a four-time theCUBE guest at Google Cloud Next, argues that last year's AI momentum stalled because enterprises confused building with operating — most pilots never reached production. He outlines the two core failure modes: selecting the wrong use case and lacking the operational discipline to sustain an AI system once deployed. To frame ROI more precisely, Lewis introduces the "five minutes versus two week rule": saving one person two weeks per quarter far outweighs distributing five-minute savings across a thousand employees.
The conversation also explores the structural gaps standing between enterprises and production AI. Lewis details Pythian's service model — a field CTO advisory practice, an end-user center of excellence built around Workspace and Gemini, an Agentic COE and a managed XOps layer — designed to close the distance between keynote demos and real deployment. He highlights a persistent and underappreciated barrier: data readiness. A significant share of enterprises, Lewis notes, need to fix siloed databases and broken data foundations before any AI investment makes sense. The discussion surfaces Google's cross-cloud lakehouse as a breakthrough for federated data access, while Lewis cautions that migrating away from legacy data pipelines requires recreating years of embedded business logic — a task agents can assist with but cannot fully automate. From C-suite education workshops to the principle that replaceability should be every AI project's primary non-functional requirement, Lewis provides a clear-eyed roadmap for enterprises navigating the gap between AI ambition and operational reality.
Paul Lewis, Pythian
>> Welcome back to Google Cloud Next '26. I'm Alison Kosik alongside John Furrier. And we have got a treat right now. Someone who has been to this conference not once, not twice, not three times, but how many times?
John Furrier
>> Four times on theCUBE. Every year, fellow traveler. He's seen the journey. He's like a historical analyst at this point for theCUBE. He's seen it all.
Alison Kosik
>> Yeah. Yeah. Let's bring in Paul Lewis, the chief technology officer at Pythian. Welcome back to theCUBE.
Paul Lewis
>> Thank you very much. I would argue this is my favorite thing I do at Google Next. It is an entertaining, fun, intriguing experience.
Alison Kosik
>> Well, that's great because I think we're going to see you again in the future at other events.
Paul Lewis
>> Yeah, I'll see you next year at the...
John Furrier
>> We have Atlassian coming up.
Paul Lewis
>> True.
John Furrier
>> We'll be there.
Paul Lewis
>> Couple weeks.
Alison Kosik
>> Yeah. Yep. We'll see you there. All right. Let's talk about then the fact that you've been to this conference consecutively for many, many years and you've seen the changes. Talk about that kind of perspective and just how mind-blowing it is to see how the changes have sort of catapulted ahead.
Paul Lewis
>> It is fair to say it has grown substantially. It took me an hour to get into the keynote today. That is a different level of crowd than we've ever seen before. And everybody's clamoring to go because all of the good content comes with the keynote. In fact, almost the entire week is based on the announcements you see at the keynote. But it's also fair to say over these five years, it's gone from Workspace versus Google Cloud, versus automation, to this is a Gemini agentic world. And everything, every category that Google provides has that agentic mention to it. In fact, almost everything is called Gemini Enterprise or Agentic something. And therefore, you know that the embedded features and functions of agents become their primary architectural purpose.
John Furrier
>> Yeah. And also you look at the announcements, I mean, the top announcements: Agentic Taskforce, Gemini Enterprise App, Gemini Enterprise Agent Platform, Agentic Defense, Agentic Data Cloud, AI infrastructure at the bottom. I mean, the control plane, I wrote a post on this, went viral on LinkedIn. It's not tools, it's going to an operating system. And it's really about the orchestration and having the stack, and they call it a full stack here at Google. This is key to the agents' success. Everything's got to work in concert to make agents go. We talked last year a bit about, oh, agents; the year before about data and data warehousing and the role of the DBAs, going back to 2023. The world is completely different. Technically and on a product basis, what is jumping out at you at Google? What are some of the jewels that they've built? Obviously Gemini is getting the top headlines.
Paul Lewis
>> Last year was all about build. Google and others, but it was all about build. Find use cases to build, either people productivity or process productivity use cases. Unfortunately, the vast majority of those never actually went to production, for two main reasons. Main reason, number one, you picked the wrong use case. You're either picking the nickels and dimes, the five and 10 minutes, or you're picking the ones that were so difficult, that required so many connectors that may or may not be ready to go. So you ended up doing a lot of internal education, a lot of pilots. The second reason, and the reason why we see the change at Google this year, is you didn't know what to do with it once it was in production. You didn't know how to operate it. It is a living being, not unlike a database, a VM, a firewall. It has care and feeding. And if you don't know how to care for and feed an LLM or an ML model or a data model, then you don't really know... It's not really in production. So the releases we saw at this keynote were, here's a catalog to help you. Here's an agent repository to help you. Here's a marketplace of agents, internal and external, to help you. That's the difference, is how do I deal with production? I think that's a pretty big difference between build and operate.
John Furrier
>> And I think the operation side of it, I mentioned operating system, it has all the buzzwords that smell like an operating system. Scheduler, orchestration, operate, runtimes. So you start to see that concept weave in, the persona we're seeing with the superpower, and I'd love to get your reaction on AI organizations that are thriving or looking for people. It's the triple threat of, I can build, I can operate, and I can invest. Because the investment piece now is critical, because you have to look at, not ROI, but the productivity, the revenue contribution. This is a new skill. What is your reaction to this kind of triple threat kind of character?
Paul Lewis
>> So two interesting things. Interesting thing number one: we did about 50 workshops with customers last year, and it wasn't entirely clear where the maturity was. It was 50 different answers, from "I've never heard of it" to "I've invested a billion dollars." There is no consistent pattern of maturity, which means everybody is starting in a different place. The other side of that is, how do I pick the right use case, and which one has the highest potential for me? It's very easy to pick a use case that sounds like it has a return, but very difficult to pick the one that actually produces a return. And you're right, spot on with outcome. It's not necessarily about savings. Sometimes it's about velocity. Can I do that one thing faster? And does faster give me some sort of strategic advantage? Can I win a deal because I'm doing some things faster? I like to think of it as the five minutes versus two week rule. I could save five minutes for a thousand people, and that's, give or take, two FTE weeks, two full-time-equivalent weeks. Or I could save one person two weeks every single quarter. The bigger value is the two-week one, the one-person one. So it's not about enabling everybody. It's about funding the use case that actually produces a marketable result, measured in days, not quarters.
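Lewis's "five minutes versus two week rule" is simple arithmetic; the sketch below makes it concrete, assuming a 40-hour work week. All numbers are illustrative, taken from his example.

```python
# Minimal sketch of the "five minutes versus two week rule".
# Assumes a 40-hour work week; figures are illustrative.

MINUTES_PER_FTE_WEEK = 40 * 60  # 2,400 minutes in one full-time week

def total_fte_weeks(minutes_saved_per_person: float, people: int) -> float:
    """Convert a per-person time saving into full-time-equivalent weeks."""
    return (minutes_saved_per_person * people) / MINUTES_PER_FTE_WEEK

# Option A: save five minutes each for a thousand people.
broad = total_fte_weeks(5, 1000)  # ~2.08 FTE weeks, diffused across everyone

# Option B: save one person two full weeks, every quarter.
deep = total_fte_weeks(2 * MINUTES_PER_FTE_WEEK, 1)  # 2.0 FTE weeks, one budget line

print(f"Broad saving: {broad:.2f} FTE weeks (spread thin, hard to reclaim)")
print(f"Deep saving:  {deep:.2f} FTE weeks per quarter (concentrated, fundable)")
```

The totals are nearly identical; Lewis's point is that the concentrated saving is measurable and fundable, while the diffuse one evaporates into slack time.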
John Furrier
>> You bring up the holy grail question, which is how do I quantify the value creation? Do I do some statistical calculation? We save minutes here to, okay, did it produce revenue? How much? What are ways that you're seeing quantification? I mean, we all know the cliche, and we say it on theCube, get wins early and show value and then get more money.
Paul Lewis
>> Well, not all wins are equivalent, right?
John Furrier
>> So talk about the quantifications. What does value look like?
Paul Lewis
>> So we used to go to a customer and say, "Tell me about all your problems and I'll discover AI opportunities." Sometimes they would give us a list of 400, and we'd find 350 of them were MIS, "I need a list," not really AI-centric. Or they would have ones that are really out-of-the-box machine learning, segmentation and classification. But we'd find the 20 that actually are the valuable ones. 15 of those they couldn't afford. Multimillion-dollar ventures, and they have a $100,000 AI budget. It just doesn't work. So we had to switch that completely and say, "This isn't going to help. While you have interesting ideas, you can't really tell where the productivity's coming from. So let me tell you the 15 patterns I mostly see. I'm going to describe a pattern, and I bet it applies 20 times." So I'll give you an example. I will take a document, I will scan it. I will use computer vision and find 10 fields. I will give context. Is this a customer? Is this a transaction? I'll insert it into a system of record, then I'll notify. As soon as I describe that pattern, they will say, "Yes, here, here, this department, this department, this thing." And those are the most valuable case studies. And then they can easily tell the impact. This saves two weeks for Susan. This one I can completely remove an administrative staff. This one I could stop 30 people from re-keying back into a system of record. And this one I can stop monitoring for emails; that email gets pushed. And then you start to see measurements. So sometimes it's velocity. Can I do the thing faster? Sometimes it's fewer people to do the work, or sometimes it's just a differentiator. I can call myself AI-enabled X, AI-enabled cookie provider, and that creates some sort of advantage for me.
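The pattern Lewis describes (scan, extract fields, add context, insert into a system of record, notify) can be sketched as a staged pipeline. Everything here is hypothetical: the stage functions are stubs standing in for real OCR/vision, classification, and CRM calls, and the field names are invented for illustration.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Document:
    raw_text: str
    fields: dict = field(default_factory=dict)
    context: str = ""

def extract_fields(doc: Document) -> Document:
    # Stub for computer vision / OCR pulling ~10 named fields.
    doc.fields = {"name": "Susan", "amount": "1200"}  # illustrative values
    return doc

def classify(doc: Document) -> Document:
    # Add context: is this a customer record or a transaction?
    doc.context = "transaction" if "amount" in doc.fields else "customer"
    return doc

def insert_and_notify(doc: Document) -> Document:
    # Stub: write to the system of record, then notify the owner.
    print(f"Inserted {doc.context} record: {doc.fields}")
    return doc

# The pattern is the ordered list of stages, not any one stage.
PIPELINE: list[Callable[[Document], Document]] = [
    extract_fields, classify, insert_and_notify,
]

def run(doc: Document) -> Document:
    for step in PIPELINE:
        doc = step(doc)
    return doc

result = run(Document(raw_text="...scanned invoice..."))
```

Describing the pipeline at this level of abstraction is what lets a customer say "yes, this department, this department" — the same five stages apply whether the document is an invoice, a claim, or an intake form.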
John Furrier
>> What's the best use case you like? Because we see people trying to do that, and it revolves around friction. It's almost like you've got to do an audit. Watch the kids on the rug playing with the blocks. Okay, there's inefficiencies there. That's the approach you're taking. What's the best approach for an average enterprise out there today? Because is there a sweet spot to land in? Have you found that pattern?
Paul Lewis
>> We find that the business almost entirely sides with business operation use cases. What can I do in the daily life of a procurement administrator to make that faster? What we have found is the biggest bang for their buck is actually design time. What can we do to make it easier and faster to build a dashboard? What can I do to make it easier and faster to build features in an application? What can I do to ensure that the 100,000 incidents I get every month become only one incident per month, and bring the MTTR from hours down to minutes? Those are the real winners.
John Furrier
>> So building foundational things.
Paul Lewis
>> Right. Inside IT or inside the operation of the business versus customer experience stuff.
Alison Kosik
>> What are some of the common pitfalls that cause projects to fail though?
Paul Lewis
>> Well, it requires budget. Those are the pitfalls, right? It is very rare that the IT, the CIO says, "I need another several million dollars and I can make this faster." They kind of have to prove that to be true. Sometimes it's simply quality. So if you're looking at incident management to say, "Of these 100,000 tickets, the reason why I have so many production problems is because I push to production so often. If I can fix that, I will get fewer tickets, fewer production problems, higher uptime, and therefore less impact to the business." That has a price tag to it. And that price tag gets me budget.
Alison Kosik
>> Yeah.
John Furrier
>> Talk about some of the business momentum. You said a lot of workshops. You've identified things you kind of switch in real time where the action is. What's going on in the business? How about some of the momentum and things that you're working on and optimizing for?
Paul Lewis
>> So five piece parts. I'm often asked, where did you start in order to sort of create your AI business? We said, "You can't start at the beginning, you can't start at the end." So we built a field CTO practice. Think of that as advisory: past CIOs, past CTOs, past CDOs, who have a like-for-like, peer-to-peer relationship in order to build a strategy. And by that I mean, how much money are you willing to spend? Who gets to make the decision? And of the 52 cards in your deck, where are you going to spread them out? The next thing we do is, because we manage a lot of productivity tools, that's like Workspace, because we enable, do change management, and train on Workspace, extend that to Gemini, extend that to Gemini Enterprise. That's part of what we call the end-user center of excellence for AI. But that's only the people productivity side. We also created the Agentic COE that's building those agentic workflows, that's creating the high-ROI projects. And then finally, once it goes in production, nobody really knows how to manage it, especially at our customers. So we manage it on their behalf. What we would generally call XOps. So managing models, managing data pipelines, machine learning pipelines, the agents themselves. Because as you know, I can put an agent in production that's 70% accurate. You kind of want it in the 90s. Well, that takes prompt change and model change and data source change. They have lifespans just like an application does, and that requires a team.
John Furrier
>> Well, I guess my question would be, are clients really ready for AI? Is their data ready? I mentioned the control plane's going to be the battleground. Google's looking good. They're even talking about in the keynote forward deployed engineering. Again, a buzzword, hot buzzword. Some people think it's contentious. I mean, to me, it's just put people where the domain experts are. What's your take on all that?
Paul Lewis
>> A not insignificant portion of the time, they're not ready for AI. They will have amazing use cases. We'll even discover and document the higher-ROI ones, and we'll say, "This one that's a million-dollar savings? Your data estate's not in order. You have databases all over the place, or they're locked into SaaS or-
Alison Kosik
>> So then how do they get ready for AI then?
Paul Lewis
>> Then we stop. We pause AI and say, "Let's fix this. Let's make all your databases current. Let's move the databases closer to where the AI algorithms are going to be. Don't migrate the data. Don't move the data, but move the actual databases. Or you don't have a data platform. You don't have a data warehouse or data lakes. You don't have anything bringing those things together, or you don't have the skill sets to support that." So we enable those functions, because as you know, 15 years ago, we couldn't do BI because data was messy and siloed and all over the place. I couldn't do ML because data was messy, siloed, and all over the place. And I can't do AI because data is messy and all over the place. I have to fix that. That's a data problem at source. I've got to fix the source.
John Furrier
>> Yeah. We've had great conversations in the past on theCUBE about data readiness. So I have to ask you about Google Cloud. How are they looking? I mean, we've been there, it's our fourth year together. Well, we've been here since the beginning, when it was called App Engine back in the day, for us old historians. But they're looking good. They've been banging on the product every year, significant improvements up and down the stack. The Gemini agent platform for the enterprise looks very robust. They've got a skills registry, tools registry, agent registry, unified governance layer, and then they've got a unified intelligence layer feeding into all different CLIs and things of that nature. Gemini, of course, Workspace, you mentioned that. Does that stack work for you? Are you happy with that direction?
Paul Lewis
>> The cross-cloud lakehouse is the only way to access data that exists somewhere other than Google. The cross-cloud lakehouse is what's going to look at all of my federated data and allow me to do natural language querying across all of that information set, versus asking a customer to find a way to get all that data into Google. That's the best part of the conversation.
John Furrier
>> Cross-cloud is multi-cloud, or what we used to call supercloud.
Paul Lewis
>> Supercloud. Yeah.
John Furrier
>> Cross-cloud is not as good as supercloud, in my opinion. Of course, we came up with that buzzword. It didn't stick, but that's the whole idea of crossing domains.
Paul Lewis
>> Right. Tell me more about my customers and discover across my clouds what that answer might look like versus forcing myself to build a data pipeline to get that data into Google in order to ask that same question.
John Furrier
>> Do you think the data pipelining work is significant? I mean, a lot of people spend a lot of time in pipelining and this is where agents could thrive.
Paul Lewis
>> Here's the gap, and I'm glad you mentioned it. The gap is, there are millions of data pipelines that were built. A not insignificant portion of them have transformation. All transformation is business logic. I made a decision. This five dropdown, this six dropdown is now five dropdown. This three address field, this five address field is three address field. Decisions were made. And therefore, if I move to a zero-copy environment, all of those decisions now have to live in the data warehouse, or I've lost the business decision. So if I'm moving away from data pipelines to, let's say, zero copy, I now have to recreate those business decisions, which means I have to look at and inspect every single one of those pipelines.
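The kind of embedded decision Lewis is describing might look like this in a hand-built pipeline; the mapping and field names are entirely illustrative. Nothing documents the decisions except the code itself, which is why a zero-copy or semantic-layer migration forces you to inspect every pipeline.

```python
# Illustrative "business logic buried in a pipeline": years ago someone
# decided a five-value source dropdown collapses into three target values,
# and that three address lines squeeze into one. Only this code knows.
STATUS_MAP = {
    "new": "open",
    "acknowledged": "open",
    "in_progress": "working",
    "resolved": "closed",
    "cancelled": "closed",
}

def transform_row(row: dict) -> dict:
    out = dict(row)
    out["status"] = STATUS_MAP[row["status"]]  # the hidden dropdown decision
    # Another embedded decision: concatenate address lines, dropping blanks.
    out["address"] = ", ".join(
        row[k] for k in ("addr1", "addr2", "addr3") if row.get(k)
    )
    for k in ("addr1", "addr2", "addr3"):
        out.pop(k, None)
    return out

row = {"status": "acknowledged", "addr1": "10 Main St", "addr2": "", "addr3": "Suite 4"}
print(transform_row(row))
```

Moving to zero copy means `STATUS_MAP` and the address concatenation must be rediscovered and recreated, for example as view or semantic-layer logic, or the business decisions they encode are silently lost.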
John Furrier
>> Yeah. It's almost data pipeline gravity.
Paul Lewis
>> Yeah.
John Furrier
>> I mean, because those things were hard coded statically.
Paul Lewis
>> Somebody made a decision at some point, and it may have been 20 years ago they made a decision that I have to now recreate in my-
John Furrier
>> All right. I will just wave my AI hands and say, "Hey, AI can handle that with agents, right, or no?"
Paul Lewis
>> If you're talking about conversion agents, I'm going to have to take data pipelines, inspect data pipelines, recreate data pipelines into Python or zero-copy equivalents, or create a new semantic layer in things like BigQuery to say, "A customer equals this, a transaction equals this," even though I have unmodeled data loaded into my platform.
John Furrier
>> Either way, humans got to be involved.
Paul Lewis
>> Yeah. I would say the biggest gap between sort of practical implementation, or production AI, versus what we saw today at the keynote, was enterprise friction. So we saw an amazing five-minute demo of what you can do in the data cloud environment, and they said, "Isn't this great, what I can do in five minutes?" What they may not have mentioned is it took four months to build and design and implement that. That was a great five-minute demo. It was not a five-minute use case. So that distinction between the two really needs to be discussed, to say, not only is an individual person not responsible for all those activities, there's actually departments, there's actually approvals, there are actually forms you've got to fill out. Enterprise friction is a real thing, and it just doesn't go away because of AI. It has to be enabled for AI.
John Furrier
>> Alison and I were talking at lunch just a few minutes ago about the C-suite and that a lot of this stuff doesn't translate up to the C-suite. They won't even know what a data pipeline is. They just want to know that we have data. So the question is, what is the C-suite message to them? What is the state of the progress? How would you report to the C-suite about the opportunity and momentum? Is it ready? How ready is it? What's the status bar? How would you communicate that to the C-suite?
Paul Lewis
>> I often have board level or C-suite workshops. We do two hours at a time because that's as much attention as you can get.
Alison Kosik
>> And how do those go?
Paul Lewis
>> The very first thing I start with is, "Here's a pre-warning. AI is the most academically challenging topic you will go through in your career. I'm going to say words and you're not going to understand those words, right?"
Alison Kosik
>> Yeah, I get it.
Paul Lewis
>> Yeah. My first hour is defining things, right? Because you've heard them and you might have an interpretation of them, but I'm going to tell you what they really are. So in other words, the difference between accuracy and probability. People see them as the same thing, but then I'll say, "Well, what would you rather have? An answer that's the same 100% of the time but correct only 1% of the time, or an answer that's the same 70% of the time and correct 70% of the time?" I'd still go with the second option, right? So probability and accuracy are different, and an executive needs to know that those are different things, and we have to walk through that statement. So the first half is, "Here's what words mean." And the second is, "Let's have some intellectual honesty. Where really is the maturity? Where is AI in the hype cycle? How far does it have to go? What is the true winning sort of AI technology?" Which of course is computer vision. Vision is the most mature, most advanced. So maybe start with use cases that use vision.
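Lewis's consistency-versus-correctness distinction can be made concrete with a toy simulation. The probabilities come from his example; everything else is illustrative.

```python
import random

random.seed(0)
TRIALS = 100_000

# Option A: the system gives the SAME answer 100% of the time,
# but that answer is correct for only 1% of queries.
option_a_correct = sum(random.random() < 0.01 for _ in range(TRIALS))

# Option B: the system's answer varies (same only 70% of the time),
# but it is correct for 70% of queries.
option_b_correct = sum(random.random() < 0.70 for _ in range(TRIALS))

print(f"Option A (perfectly consistent): ~{option_a_correct / TRIALS:.0%} correct")
print(f"Option B (70% consistent):       ~{option_b_correct / TRIALS:.0%} correct")
```

The perfectly repeatable system is still almost always wrong; repeatability measures probability of the same output, not accuracy of the output, which is the distinction executives tend to collapse.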
John Furrier
>> So one thing that came up yesterday was this idea that it changes so fast that even the people practicing it, never mind the end customer, are blown away. Just when you learn something, it's changed. So there's a double-edged sword to that. You can either ride with the velocity, like you're whitewater rafting at the highest level, or you get paralysis. And paralysis often hides in plain sight, not like, "Hey, we're stuck here." It's like there are certain things that happen. Projects are floating out there, destined to fail. So do you believe that the velocity is a challenge, and does it cause paralysis? And what does paralysis look like, if you believe that?
Paul Lewis
>> Depending on the size of the organization and their maturity in AI, it absolutely is a paralysis. So much so they might choose to wait. Ironically, a quarter from now will be significantly different from the quarter you're in. My answer, 100% of the time, is: whatever you implement, your only non-functional requirement that matters is replaceability. Ensure that anything you do, the tool can be replaced, the model can be replaced, the team can be replaced, the expertise can be replaced, because I guarantee you within weeks, not months, not quarters, not years, it will be different. So you just have to know that to be true.
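In code terms, Lewis's replaceability requirement amounts to a thin abstraction boundary between application logic and any vendor-specific piece. A minimal sketch, with entirely hypothetical vendor backends:

```python
from typing import Protocol

class ModelBackend(Protocol):
    """The only surface the rest of the system is allowed to depend on."""
    def complete(self, prompt: str) -> str: ...

# Hypothetical vendors; real ones would wrap an SDK call here.
class VendorAModel:
    def complete(self, prompt: str) -> str:
        return f"[vendor-a] answer to: {prompt}"

class VendorBModel:
    def complete(self, prompt: str) -> str:
        return f"[vendor-b] answer to: {prompt}"

def answer(backend: ModelBackend, prompt: str) -> str:
    # Application code never names a vendor, so swapping the backend is a
    # one-line change at the composition root, not a rewrite.
    return backend.complete(prompt)

print(answer(VendorAModel(), "summarize Q3"))
print(answer(VendorBModel(), "summarize Q3"))
```

The same boundary applies to models, data sources, and tools alike: if every dependency sits behind an interface like this, "it will be different in weeks" becomes a configuration change rather than an architectural crisis.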
John Furrier
>> So understand the interchangeability of things, switching costs.
Paul Lewis
>> Correct.
John Furrier
>> Build that right. That's an architectural challenge.
Paul Lewis
>> That's right. And know that you could make an architectural decision that costs you way more than you expect because you think control is important. You want to roll your own model and you want to retrain your own model. That's measured in millions of dollars. Do you really want to invest in that? Sure, you'll get lots of control and very, very private, but I doubt you have that kind of operational expense at your disposal.
John Furrier
>> Well, great to have you on. Again, as usual, did not disappoint.
Alison Kosik
>> Enjoy talking with you. Yeah.
Paul Lewis
>> Thank you.
John Furrier
>> Thanks for coming on.
Paul Lewis
>> Thank you.
Alison Kosik
>> Thanks for watching theCUBE. We will be back right after this.