Exploring AI's Next Frontier with Paul Lewis at Google Cloud Next 2025
Paul Lewis, Chief Technology Officer of Pythian, joins theCUBE Research team live at the Google Cloud Next 2025 event in Las Vegas. Known for deep expertise in cloud infrastructure and artificial intelligence (AI) innovations, Lewis shares crucial insights into the evolving landscape of AI and data management, discussing key themes emerging from the event.
>> Good morning, CUBE community, and welcome back to Google Cloud Next. We're here midway through day one of our three days of live coverage on theCUBE from the show floor in Las Vegas, Nevada. My name's Savannah Peterson here with Dave Vellante. Pumped to be talking about data, AI and some hot takes in this next one.
Dave Vellante
>> Yeah, another keynote analysis.
Savannah Peterson
>> I know, we get a bonus. I love this. I love that we get all the most brilliant minds with us. Speaking of CUBE alum, VIP, one of our favorite guests to have on the show every time at Google Cloud Next, and other shows for that matter, Paul Lewis, thank you for coming to hang out.
Paul Lewis
>> Thank you. This is absolutely my favorite thing to do.
Dave Vellante
>> We want to get some pithy analysis now.
Paul Lewis
>> Pithy analysis, all right, cool.
Savannah Peterson
>> You mentioned it is day one. You've been here since Sunday though, so you're abreast of all the announcements, you're involved in some of them. Give us your out-the-gate hot takes to get us started.
Paul Lewis
>> Out-the-gate hot takes: very interesting new philosophy on agents. So they talked about agents last year in terms of a developer agent, a customer agent, a data agent, a security agent, but now it's way more real. These agents exist in the tools that we've been using this entire time. They exist in Agentspace. They exist in BigQuery. They exist as Gemini, the app, and in Workspace. These are actual day-to-day productivity-enhancing tools right out of the box. That's just the end user side. Then there's the infrastructure side, with new TPUs and new access to the Google data infrastructure around the world; there are 2 million miles of fiber, that's just amazing.
Savannah Peterson
>> Just a couple miles there. I'm glad you brought that up, that we've matured beyond the use cases. Looking back at the footage from us hanging out last year, that's what we were talking about: those very initial use cases, some of those productivity prototypes that we're seeing. But what you just discussed means it's crossing a lot of the activities and business processes we do across organizations, not just productivity. Are there any use cases you can share with us that you found particularly exciting?
Paul Lewis
>> It feels very AI-first, and I'd put it into two buckets. Bucket number one, people productivity. Bucket number two, process productivity. People productivity is the easy one. In Workspace, Gemini, or even Agentspace: how can I draft an email? How can I do one or two things in a search, look for data that I know exists but I don't know where, or get a full debriefing on a topic? Tell me more about this customer. Tell me more about this product. Tell me more about this transaction, and pull it all together for me. That's the people productivity. So I could waste hours searching, or I could just ask a question and get the results. The process productivity is all the agents that are embedded in the systems. Now in BigQuery, I can use natural language to generate code, to use data science, to discover interesting insights I wouldn't otherwise be able to discover, or I could just use these platforms as my application development environment. Those are very different productivity regimes.
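The "natural language to generate code" pattern Lewis mentions can be sketched in miniature. This is a hedged illustration only: a couple of hand-written templates stand in for the model (Gemini in BigQuery does this with an LLM), and the table and question strings are invented for the example.

```python
# Toy sketch of natural-language-to-SQL. A real system would call an LLM
# (e.g. Gemini in BigQuery); here hand-written templates stand in for the
# model so the flow is visible. Table and column names are invented.

SQL_TEMPLATES = {
    "top customers": ("SELECT customer_id, SUM(amount) AS total "
                      "FROM orders GROUP BY customer_id "
                      "ORDER BY total DESC LIMIT {n}"),
    "order count": "SELECT COUNT(*) AS n_orders FROM orders",
}

def nl_to_sql(question: str, n: int = 10) -> str:
    """Map a natural-language question to a SQL statement."""
    q = question.lower()
    for key, template in SQL_TEMPLATES.items():
        if key in q:
            return template.format(n=n)
    raise ValueError(f"no template matches {question!r}")

print(nl_to_sql("Who are our top customers?", n=5))
```

The template dictionary just keeps the example deterministic; in the real feature the model generates the SQL directly.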
Dave Vellante
>> Okay, so let's break down some of the implications for the C-level. If I'm the chief, the head technical person, the architect, or whatever you want to call that individual, I'm looking at an environment that's been SaaS-ified. So I've got business logic and metadata and process and rules that are all buried inside these apps. I've got my data, which is largely owned, if I can use that term, by a data analytics pipeline team that has pulled stuff out of all those systems and stuffed it into an analytic warehouse, BigQuery or Snowflake or lakehouses or whatever. And I've got infrastructure that's all over the place. I've got some stuff in cloud. I've got my hybrid. Okay, now you've got agents everywhere, AI infused everywhere. What's my action if I'm the head technical person?
Paul Lewis
>> Here's the really interesting, odd fact from an enterprise IT perspective. In 2010, my problem with doing analytics was that my data was messy and siloed all over the place. In 2015, I wanted to do machine learning; my data was messy and siloed all over the place. In 2025, I want to do AI; my data is messy and siloed all over the place. So from an enterprise CTO perspective, that's what I have to manage first. What are those prerequisites? Exactly as you said, some of that data is stuck in my SaaS. Some of it's stuck in my core systems and I can't get it out. Some of it I need API access to. It's all over the place and I need to figure that out. So my first step is: let's build a data warehouse. Let's make sure I have all my repositories in order. Let's make sure I have the governance in place to manage that, because it's easy for it to fall back into disorder a year from now. The second part of that is integrations. If I get my data house in order, how do I make sure it's easy to access all of those sources? Do I have APIs? Do I have connectors? Can I import that data into my system? That becomes my second concern, all while still implementing AI at the same time. There are things I can do that aren't data-centric but are still productivity-centric.
Dave Vellante
>> So I've got my CTO, my chief data officer, my chief digital officer. I've got my chief AI officer. Okay, what do I do here? CTO, you're responsible for the technical architecture. Chief data officer, maybe I say, "Okay, you've got to make sure we've got compliance right."
Paul Lewis
>> Compliance and insights.
Dave Vellante
>> Yeah, but now I start to bleed into some other swim lanes, but okay. Chief digital officer, it's like, okay, you got to drive revenue, and so all you guys have got to work together.
Paul Lewis
>> Right.
Dave Vellante
>> What about the chief information officer? What's her or his role here? What's their action?
Paul Lewis
>> The CIO needs to make sure that the skill sets are there. In fact, one of the bigger gaps that exists, beyond the potential bad news of being in the trough of disillusionment, is this: I've de-risked my data, I've de-risked my process, but really my risk is not having the expertise, not having the people with the skill set to do these interesting new things.
Dave Vellante
>> Because they just put everything in the cloud.
Paul Lewis
>> Well, it's not just in the cloud, but they're application developers of the 30-year application they built. They don't necessarily appreciate the difference between data engineering and machine-learning engineering. They don't really know the ins and outs of an LLM algorithm versus an algorithm they produce for a website, a UI. So those distinct skill sets require really what we generally refer to as an ecosystem of partners. You need partners who understand technology, partners who understand information, and partners who understand skillset, actually the things you need to know to make this successful.
Dave Vellante
>> The skillset, there's alignment with the business. If I'm going to re-architect my business, the CIO obviously has to be involved in that. And there's change management, which falls on the CIO.
Paul Lewis
>> Spot on, spot on. In fact, in our very first AI workshop with customers, we use what's referred to as an AI readiness toolkit, which is really just five points. How much money are you willing to spend on your strategy? Is this a hundred dollars or a million dollars? What should the policy be to prevent you from using tools you shouldn't use because of strategic risk? And then, of my hundred dollars, do I spend it more on educating my team, because if you don't know how to prompt, you'll never know how to implement AI? Or do I spend my money on building out AI features in the tools I built, the software I built? Or do I put all my money into embedded functions, say putting all of my hundred dollars into using Gemini for Workspace to create employee productivity, because that's where the biggest bang for my buck is? You've got to make that decision. And then the readiness part is: okay, but are you ready? Is your data house in order? Is your skill set in order? And of course, do you have the tools ready to do that?
Dave Vellante
>> Just listening to you talk, another thing I would build into the architecture of my business is what happens when we screw it up. Because we're going to make mistakes. It's guaranteed you're going to make mistakes.
Savannah Peterson
>> Absolutely.
Dave Vellante
>> We're going to have to recover from that. We're going to have the mindset that we know we're going to... And maybe it's different than fail fast.
Savannah Peterson
>> You don't have to fail fast and break things.
Dave Vellante
>> I think it's different from that, right?
Savannah Peterson
>> I totally agree. I think it's being able to extract the learning from said failure and say, "Okay, was this a failure of the infrastructure or the toolkit that we were using? Was this a failure of the human side? We don't know how to implement this yet or we don't have the right data environment for this to be successful. Or was it strategically maybe not where we should be sprinkling a little AI on it or transforming our business," and I can imagine there's a lot of conversations there. One of my questions just to zero in a little bit on the skills gap side of it, how are you upskilling teams or helping them prepare for this momentous shift without alienating folks who are using more of these, we've been shying away from legacy systems lately, heritage systems?
Paul Lewis
>> We use traditional.
Savannah Peterson
>> Yes, traditional. There we go. But yes, how are you handling that from a cultural perspective because that's also one of the X factors in that tech stack to a degree?
Paul Lewis
>> Excellent question. Two answers, I guess. Answer number one: most of the tool sets we use every day will have AI embedded, therefore people will naturally start to use it. You're in Workspace, you create an email, draft a new email; you're in Docs, you create strategy. So that'll just be a natural part of their habit, and that will create expertise over time, at least on the prompting side. But you were spot on in terms of the R&D side. It's not really about failing fast in a high-risk adventure like AI. It's really about determining what the best use cases are. Very frequently we'll do a workshop and we'll have 400 interesting ideas, which is great. 350 of them are analytics; 350 of them aren't AI at all. It's "give me a list, dot, dot, dot." Of the remaining 50, 30 are out-of-the-box machine learning: segmentation, classification, draw the best line. And then 20 of them are actually AI, things I should build and implement. 15 of those are not helpful for you. They're grandiose. They're multi-million dollar. They're such high risk that the only thing I can do is a POC, which actually doesn't get any institutional value. So we actually hone in on the five to say, "These are the five that I can implement in weeks, that I can put in production, where I can tell right now, before I even start, what the ROI is." The easiest ones out of the box are things like computer vision. It's the most advanced AI. Scan document, find fields, put in system. And if I used to re-key that, well, we have a customer that used to re-key bills of lading; it had drivers sitting at a distribution center for two hours, and now it's five minutes.
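The "scan document, find fields, put in system" step can be sketched as plain field extraction once OCR has produced text. Everything below is an illustrative assumption: the bill-of-lading text, the field names, and the regexes are made up for the example, not a real carrier schema.

```python
import re

# Sketch of post-OCR field extraction for a document like a bill of
# lading. Assume a vision/OCR step has already produced OCR_TEXT; the
# patterns below are illustrative, not a real schema.

OCR_TEXT = """
BILL OF LADING
Carrier: Acme Freight
Shipment No: BL-48213
Weight: 1200 kg
"""

FIELD_PATTERNS = {
    "carrier": r"Carrier:\s*(.+)",
    "shipment_no": r"Shipment No:\s*(\S+)",
    "weight_kg": r"Weight:\s*(\d+)\s*kg",
}

def extract_fields(text: str) -> dict:
    """Pull structured fields out of OCR text, skipping any not found."""
    record = {}
    for name, pattern in FIELD_PATTERNS.items():
        m = re.search(pattern, text)
        if m:
            record[name] = m.group(1).strip()
    return record

record = extract_fields(OCR_TEXT)
print(record)
```

The resulting record is what would be written to the downstream system instead of being re-keyed by hand.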
Savannah Peterson
>> Whoa.
Paul Lewis
>> So imagine hundreds of drivers now saving almost two hours of wait time. The ROI was measured in, what, 30 minutes? It's shocking.
Dave Vellante
>> Before you go on to the next one-
Savannah Peterson
>> I just want to sit there for a second. Sorry. I'm going to-
Dave Vellante
>> I want to sit here too.
Savannah Peterson
>> Okay.
Dave Vellante
>> Go ahead.
Savannah Peterson
>> We can hang out. So I just want to call that out because one of the things we discuss a lot on the show is the time to realizing ROI. What you just said was 30 minutes. So I think there's a lot of myth that these programs take even 90 days or months and months and months to show that. That shows you traction right there to double down on that investment and carry it forward.
Paul Lewis
>> That's right. It's choosing the right one. I could have chosen the multi-million dollar, multi-year initiative and never really known whether I needed it or not. Or you can choose the one that's obvious, in production, where I can tell that this person doesn't have to do this work going forward.
Dave Vellante
>> So, to stay on this topic: what if I've invested in RPA, like UiPath? Their whole thing was computer vision, and I've made some innovations so that I can do some of those things in that use case you talked about. Do I throw that out and replace it with AI, or do I have agents actually invoke the software robots?
Paul Lewis
>> It's an evolution. The likelihood you'll be having an agentic AI implementation is very, very high, and you would generally implement that with something like Dialogflow, where you've already predetermined the workflow. For things that are always repetitive, always the same, you'd keep that. But for things I need to make a decision on, where a human needs to be involved in the workflow, you used to have to stop it so that somebody could look at a queue and say yes or no; that's where the agentic comes in. Because you're going to have an LLM that says, "Well, based on all of the history, I can tell it's 95% yes, so we're going to make it yes and perform that action."
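The routing Lewis describes, act when the model is confident enough, otherwise stop for a human, can be sketched as follows. The scoring function is a stand-in for a real LLM call, and the 95% threshold comes from his example; none of the names are from a real product.

```python
# Sketch of confidence-threshold routing in an agentic workflow: above
# the threshold the agent acts on its own; below it, the item goes to a
# human review queue (the old stop-and-wait step). The scorer is a
# stand-in for a real model call over the case history.

def score_from_history(history: list[bool]) -> float:
    """Stand-in for an LLM: fraction of past cases that were approved."""
    return sum(history) / len(history) if history else 0.0

def route(history: list[bool], threshold: float = 0.95) -> str:
    confidence = score_from_history(history)
    if confidence >= threshold:
        return "auto-approve"       # agent performs the action itself
    return "human-review-queue"     # fall back to a human decision

print(route([True] * 19 + [False]))   # 95% yes in history
print(route([True, False, True]))
```

The human decisions accumulated in the queue become new history, which is how the agent can improve over time where a fixed software robot cannot.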
Dave Vellante
>> And an agent will learn from the reasoning traces of the human from that exception. A software robot won't.
Paul Lewis
>> And it will retrain itself based on the grounded data of the core enterprise. It has access to the CRM and the ERP and the MES, so it can learn your physical actions over time.
Dave Vellante
>> Yeah, okay, and hopefully it has access to those things. We just saw Celonis sue SAP because they can't get access to that stuff. But anyway, we'll see where that goes.
Savannah Peterson
>> I'm curious about that one.
Dave Vellante
>> So computer vision was the first and best use case for a starting point. What's next?
Paul Lewis
>> Code creation, code conversion: out-of-the-box simple. Because LLMs are mathematics, and code, shockingly, is mathematics at the end of the day. So if you have, as we did with one of our e-commerce customers, Wayfair, a decades-old monolithic application where growth requires creating services, let's have a multinational deployment, let's create microservices, and that's 47 years' worth of work. So how can we use code conversion AI to say, "Let's look at the database schema, let's look at the SQL, let's look at the application language, and evaluate, rewrite, implement: new service, new schema"? 47 years becomes two years. That's out-of-the-box, obvious ROI.
Savannah Peterson
>> I think most people would agree with you there.
Dave Vellante
>> Others that you'd highlight?
Savannah Peterson
>> You've got a compelling argument.
Paul Lewis
>> Enterprise search is an easy AI out-of-the-box. So you heard at the keynote Agentspace, right?
Savannah Peterson
>> Yeah.
Paul Lewis
>> So imagine a world where you're a grocery store chain and you've bought a bunch of other grocery store chains. Now you have 10,000 stores, and therefore you have multiple systems with that knowledge. You have a store list in lots of different places. What you need to be able to say is, "Hey, who's the manager of store 123?" without having to discover manually where that store list might actually be in order to get that answer. So a grocery store chain like that would say, "Let me use enterprise search to do one of two things. I know there's an answer somewhere; explore through my integrations where that answer might be and surface it." Or, "Tell me more about the productivity of this one store, double-click on that. Go to my ERP, go to my purchasing system, go to my finance system, go to my HR system, and give me conclusive evidence on what this store's behavior and health look like." Those are the two big values, and that's why we see enterprise search as often the leading edge of these AI use cases.
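The two behaviors described here, surfacing an answer from whichever system holds it and pulling a cross-system profile of one entity, can both be sketched as a fan-out over connected sources. The source names and store records below are made up for the example.

```python
# Sketch of enterprise search as a fan-out over connected sources: one
# query key, every source contributes what it knows. Source names and
# store data are invented for illustration.

SOURCES = {
    "hr_system": {"store 123": {"manager": "J. Rivera"}},
    "erp": {"store 123": {"monthly_revenue": 84000}},
    "finance": {},   # connected, but knows nothing about this entity
}

def enterprise_search(query_key: str) -> dict:
    """Merge everything every connected source knows about one entity."""
    merged = {}
    for source, records in SOURCES.items():
        if query_key in records:
            merged.update(records[query_key])
    return merged

print(enterprise_search("store 123"))
```

"Who's the manager" falls out of the first source that knows; the store-health question is the same call with every contributing source merged.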
Dave Vellante
>> Okay, that's three. I think there were five. I'm waiting for Salesforce automation and contact center, but maybe not, maybe they didn't make the shortlist.
Paul Lewis
>> We don't do a lot of contact center innovation, but certainly we'll call it multimodal interface that we saw at the keynote becomes incredibly important.
Dave Vellante
>> Okay.
Paul Lewis
>> So instead of having a bunch of agents that take over a chat, as is the normal practice, you can have a multimodal agent that says, "You're going to use voice to interact with me and I will understand you. You will turn on the camera, I will see your product to determine whether I have like products that match it, and I'm going to automatically fulfill for you. I'm going to perform the action. You're already an existing customer, I will charge the existing account, and I will ship to the existing address that's already there." It turns a relatively frustrating phone call or chat experience into a five-minute buying experience, because the faster you can get to the submit, the better for the company.
Dave Vellante
>> Okay, so we got computer vision, code creation, enterprise search, multimodal. What's the fifth one?
Paul Lewis
>> A good one is chat with your data. So GigaOm, which I talked about just before we started. GigaOm, an analyst firm, as you'd imagine of most analyst firms, creates thousands of analyst reports, but their key document is called the Radar Report. Unfortunately, it's a very long document, hundreds of pages, detailing products and features and vendors. It's not necessarily easy to digest. So we helped them build a chat system to say, "Let me engage with my problem set and the reports as I see them. Let me determine, in a smaller form than my RFP, what products and services might be valuable to me."
But of course, as an analyst firm, they can't play favorites. So we had to put the constraints in place too. I can't ask, "What should I buy?" It'll only come back with, "Here's the criteria you should use to determine what to buy," because of course, they're all customers. So that new engagement is important, especially since Radar Reports change every year, and therefore I need to know the trend over time when I purchase something, not just the current state. The top of the Radar Report might be vendor X, but they're new at the top. I want to know that the number two is actually 10 years in and has been growing steadily over time to become number two. That's probably my purchase as a CIO.
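The constrained-chat pattern, answer from the corpus but refuse to pick a vendor, can be sketched as a guardrail in front of retrieval. The trigger phrases, criteria list, and response strings are illustrative assumptions, not GigaOm's actual system.

```python
# Sketch of a guardrailed chat-with-your-data system: recommendation
# requests are intercepted and answered with decision criteria instead;
# everything else would go through retrieval over the report corpus.
# Trigger phrases and criteria are invented for the example.

CRITERIA = ["deployment model", "pricing", "trajectory across Radars"]

def answer(question: str) -> str:
    q = question.lower()
    # Guardrail: the analyst firm can't play favorites.
    if "should i buy" in q or "which vendor" in q:
        return ("I can't recommend a vendor. Criteria to weigh: "
                + ", ".join(CRITERIA))
    # Otherwise a real system would retrieve report passages and answer.
    return "Retrieved relevant Radar Report passages for: " + question

print(answer("What should I buy for data pipelines?"))
```

The guardrail runs before retrieval, so the constraint holds no matter what the corpus contains.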
Dave Vellante
>> Okay, so that's cool. So it's a RAG-based chatbot into the corpus of your own data. It's like theCUBE AI. Go to thecubeai.com.
Savannah Peterson
>> I was going to say we know all about those.
Paul Lewis
>> And we can train that entire model with all of the Radars that have ever been produced across all of their analysts and it's ready to go.
Dave Vellante
>> Very cool.
Savannah Peterson
>> That makes a lot of sense. I mean, you're essentially making legacy, historical institutional knowledge the most accessible it can be to create the best outcome, whatever that might be, whether that's customer-facing or internal.
Paul Lewis
>> Exactly. And every time you get an answer, that prompts new services. Now I know what to buy; let me talk to an analyst. Let me go directly to the source that I know can answer this question.
Savannah Peterson
>> All right, Paul, two final questions for you, one real quick. I know you've got some announcements, you're breaking some news that you get to talk about a little bit later today. Can you give us a preview?
Paul Lewis
>> The big announcement is our Agentspace QuickStart. This is getting enterprise search up, for both people productivity and process productivity, in four weeks. We get it implemented, connected to four of your everyday services, your Google Drive, your Gmail, even your AlloyDB and BigQuery, third parties too. Let's get it up. Let's get it running. Let's get all your employees highly productive, with access to both: finding something they know exists but don't know where it is, or just getting topic-based conclusions. That's number one. And number two, which we announced last week, which is great, is the new GigaOm and Pythian partnership. We came together to build the GigaOm AI maturity model. I described the Pythian readiness model; we'll call that step one. Step two is a maturity model that says, "Here are the six levels of maturity and the 10 dimensions of what's important, ethics, governance, tool sets, and let's help you assess where you are now and where you want to be." And of course, you can't be a five on everything, because you can't afford to be a five on everything, so we help determine where on the squiggly line you need to be as part of the AI workshop series.
Dave Vellante
>> Oh, cool. Based on how-
Paul Lewis
>> The fundamental go-to-market....
Dave Vellante
>> important that attribute is to you and you weight it.
Savannah Peterson
>> Yeah, yeah, yeah. It lets you weight those and see where you fit within the model, which is obviously mission-critical. Not surprising that so many people rely on you all to do that. Congratulations-
Paul Lewis
>> Thank you....
Savannah Peterson
>> on the big announcement.
Dave Vellante
>> Very pithy analysis, thank you.
Savannah Peterson
>> You were dying to say that, weren't you? Yeah, yeah, it's great. It's good. I'm glad we'll be here all week, folks-
Dave Vellante
>> Too much?...
Savannah Peterson
>> in case... No, no, it's great. It's great. We'll keep the audience on their toes here, to keep them on their toes. Last question, since this has now become a tradition for you and I in particular at Google Cloud Next, no offense to Dave. We'll bring him along next time as well. What do you hope to be able to say when we sit down next year that you can't yet say today?
Paul Lewis
>> I want to be able to say that everybody's using Agentspace, because it's an obvious way to push both the change management and the learning required for things like prompting. End users need to learn how to ask good questions, and that takes time and energy and practice, and enterprise search is the obvious practice there. So if we could get everybody to move from using Google to find a list to using Google to describe the actual result, that's the win for everybody. And as you know, everyone from the chairman of the board to the administrative staff is demanding it. It's not an option. You have to implement it.
Savannah Peterson
>> You're absolutely right, Paul. I know we both agree with you. Thank you for the time, for the hot takes, for the announcements.
Paul Lewis
>> I appreciate it.
Savannah Peterson
>> And frankly, just the fun, as usual. We genuinely do. And thank you, Dave. Always a joy. I hope you keep bringing those dad jokes this entire week long. We'll keep everyone in their seats at home. And thank all of you for tuning in wherever you might be on this beautiful rock. We're here in sunny Las Vegas, Nevada. At Google Cloud Next, my name's Savannah Peterson. You're watching theCUBE, the leading source for enterprise tech news.