In this interview from Google Cloud Next 2026, Sarbjeet Johal, founder and chief executive officer of Stackpane, joins theCUBE's John Furrier to discuss Google's full-stack AI momentum and what enterprises must prioritize as the industry shifts from AI experimentation to agentic execution. Johal ties Google's sharply improved market standing — stock up 111% over the past year — to its vertical integration advantage: by building its own TPUs, the company avoids steep GPU margins paid to third-party suppliers, giving it structurally better AI economics than most cloud peers. He details two new TPU generations announced at the show, one for training with 2.7x price-to-performance gains and one for inference delivering a 5x latency improvement, underscoring Google's push to own the full hardware stack from silicon to Gemini.
Additionally, Johal shares a three-part framework for matching AI to the right system type — systems of record, engagement and innovation — arguing that generative AI fits most naturally where language, not deterministic outcomes, drives value. The discussion unpacks the rising cost reality of production AI, including hidden token expenditures triggered every time a model is swapped and retested at scale. Furrier highlights open table formats like Iceberg and the data lakehouse as the single highest-leverage point for unifying data feeds and letting agents operate at full speed. Both analysts identify agent governance as the defining wave ahead — mirroring how DevSecOps unlocked enterprise cloud adoption — and flag change management, not technology, as the primary barrier slowing organizations down. From mapping the nascent primitives of an agentic AI stack to developing the "triple threat" skillset of build, operate and invest, the conversation charts a clear-eyed path through the execution risks of moving too slowly — or too fast — in the current AI cycle.
Clips from this session:
Google Cloud Next: AI, Partners and the Democratization of Engineering
Change management and organizational adoption seen as primary AI ROI barrier
Non-deterministic AI versus deterministic legacy IT systems: software evolution, not death
Start with easy-to-adopt internal use cases to drive volume and familiarity

Agentic AI for Retail: AI-Native Delivery, Agentic Harnesses, Human Review Guardrails, and Scalable Dashboards for Staffing & Inventory
Jim Anderson & Gina Fratarcangeli, Google Cloud & Matt Hobbs, PwC
In this interview from Google Cloud Next 2026, Matt Hobbs, principal engineering and AI leader at PwC, joins Gina Fratarcangeli, managing director at Google Cloud, and Jim Anderson, vice president of North American ecosystem at Google Cloud, to talk with theCUBE's John Furrier about why organizational adoption — not technology readiness — has become the defining barrier to enterprise AI at scale. Hobbs frames the core tension sharply: innovation is outpacing adoption, and the paralysis it creates isn't obvious — it looks like endless pilot programs and tool debates...
Jim Anderson
Vice President, NA Partner Ecosystem & Channels, Google Cloud
Matt Hobbs
US and Global Head of PwC's Cloud, Engineering, Data, and AI, PwC
Gina Fratarcangeli
Managing Director, Midwest Sales Leader, Google Cloud
Keep Exploring
How is the rise of AI agents and forward-deployed engineering changing business culture and operations, and how should organizations combine domain expertise and change management to deliver better customer outcomes while reducing risk?
Why aren't enterprises realizing returns from AI investments, and how should they approach adopting and scaling AI?
How is the rise of non-deterministic AI changing software (and the role of SaaS), and what should organizations prioritize to achieve adoption and ROI when selecting use cases?
How should organizations prioritize and roll out AI use cases to maximize adoption and impact?
What challenges do companies face when scaling AI solutions, and how did Google and PwC use AI to give managers of a large retailer actionable, at-scale insights and dashboards?
John Furrier
>> Welcome back to theCUBE. I'm John Furrier, host. We are here at Google Cloud Next, part of the partner summit. We've got three great guests breaking down all the action in the ecosystem. Matt Hobbs, principal engineering and AI leader at PwC.
Matt Hobbs
>> Absolutely.
John Furrier
>> Welcome to theCUBE.
Matt Hobbs
>> Thanks for having me.
John Furrier
>> Yeah, a special conversation. And Gina Fratarcangeli is here.
Gina Fratarcangeli
>> Good morning. Yes.
John Furrier
>> Managing director, Google Cloud. Thanks for coming on. Good to see you again. Jim Anderson, Vice President of North American Ecosystem. Great to see you too.
Jim Anderson
>> It's always great to see you, John.
John Furrier
>> Yeah. I mean, the conversation at Google Next coming in, the preamble was really heavy on agents. Forward deployed engineering's been a hot trend. That's now forward deploying everybody because with things like Claude and Gemini, all the coding is coming out. So we're seeing kind of an engineering culture, systems culture coming into business. We had a great session with people who do change management. Jim, this has been a big theme of almost the democratization, but that's kind of been overplayed. It's really kind of like needle moving moment for the industry because now you have engineering mindset in business culture everywhere.
Jim Anderson
>> Yes. We all become solution engineers with regards to it. So you think about the challenge of how you leverage the innovation that's changing fast. You combine that with the domain expertise of partners like PwC, deep industry expertise that they can bring to it, along with change management to really focus in on customer outcomes that we couldn't arrive to before. And that's really the dynamic that we're faced with and how we reduce the risk of that journey is really critical moving forward for our customers.
John Furrier
>> Matt, you mentioned PwC. We've been tracking, you guys have been doing a lot of great work. The world is kind of fragmented into three kind of intersecting worlds. Developers, we've seen that AI native. It's a frothy environment. You can't look anywhere here. Claude this, Claude that. I mean, just everywhere you're seeing a lot of activity. It's almost like the summer of love for developers these days. And then everything else is like deep tech. So deep tech is now in the mainstream in the IT world. And then the C-suite are trying to figure out how to manage token budgets. And so you have kind of like everything coming together and people are re-engineering their businesses. What's your view on this? Do you agree with what I just said, and what would be your reaction?
Matt Hobbs
>> Yeah, I agree on it. I think I'd kind of frame a couple pieces in there. One is I think you're seeing a shift now where it's away from, is the technology ready and is the potential there. And a lot of that kind of direct views, if you mentioned things like OpenBot or some of the other advancements that have come, that wasn't necessarily the right long-term scaling solution for an enterprise, but it showed a moment as to what was possible. And so you think of every boardroom, every CEO, they're now saying, "Well, I know it's possible. It's not a technology issue that's a blocker. It's an adoption issue." And why when I'm spending money and redirecting resources to go and drive an advancement, am I not seeing the return? And largely where that lands to is the definition of the actual problem you're trying to solve isn't necessarily clear. And so that gets to where do you actually think you're going to get the biggest return? You can analyze it. I wouldn't over analyze it. You can also use AI to help analyze it. And then you go and focus on, I'm going to go reimagine this process, machine first, human second. And that's going to land with a nonlinear approach to execution, which is a very different mindset.
John Furrier
>> Yeah. And also the word deterministic and non-deterministic. I think I haven't really heard those words mentioned hundreds of times in a month since college, but you start to see that we think non-determinist as humans and AI thinks the same way, but everything we built in IT has been deterministic. So you're starting to see people throwing out phrases like SaaS is dead, software's dead, but really it's just changing. I mean, bad software always dies. That's kind of a general rule of my philosophy. But what's your guys' take on this? Because I think this is an important cultural frame because software's actually outpacing its levels before. It's just more software everywhere because of the coding. It's not that software's going away. It's that it's changing.
Gina Fratarcangeli
>> Yeah. I think we're thinking about it in terms of tools, right? There's so many tools being created and available in the space, and it's infinite and it's changing day by day. I think Matt made one interesting comment about the influx of change at the organizational level. And we're really seeing this change is so crazy fast, but as they look at the change and the potential change in the use cases, what's really shaped up in the last couple of months for me, just in talking to the hundreds of customers that I do, is there's a million options for use cases, and maybe starting with the highest ROI use case, but highest difficulty to deploy, is not the best place. Right now, I'm feeling a lot of energy and momentum around we need adoption. And you mentioned that at the beginning, how do we get the organization to get comfortable with the level of change that's coming? And we're seeing a little bit of a shift right now in, did a whole bunch of pilots, got a whole bunch of pilots started. Some of those are taking off, many are not. As you're aware, McKinsey and lots of folks put out data about which pilots go into production. But we're seeing almost an inflection point of, is it maybe better to start with some of those easy-to-adopt use cases that can get volumes of people starting to use the tools, thinking about building their own agents? So while we want the big design, massive impact transformation use cases to be developing and be thought through, organizational adoption, I believe, is the critical barrier right now to getting the most impact out of AI possible. And therefore maybe starting with some of those internal back office use cases is a quicker way to get more people exposed, more people using the tools, and then those people will start creating their own agents for their groups. I don't know if you're seeing that too, Matt.
Matt Hobbs
>> So I'm seeing it. I think it depends on which stage someone's at. And so the way that we think through it is, most of these start with executive dialogues, and it ends up being, you have to do both. A lot of what it ends up being is centered around a paralysis. The way that I phrase it in these conversations is: innovation is crushing adoption. You could freeze all innovation. You could stop coding everything in Google Cloud right now, and we'd spend the next 50 years rolling the true potential through every industry. But it's not stopping. It's not slowing down. It's accelerating. So that creates this paralysis. So you have the paralysis in combination with, I need my people to get hands on and hopefully they can come up with something. But also, well, we've done that in some cases. We've had debates and continue to have debates around tools rather than focusing on the fact that we need to build the capabilities, and broad adoption drives certain value. Absolutely. From an executive perspective, they're not seeing the financial returns. And that's where it ends up being: we expect to do both at the same time. Drive broad adoption, get people familiar, get them ready for the potential and for the change that's coming, and let's also force the future by picking the areas that we know we need to actually drive forward.
John Furrier
>> I like how you frame that, Matt, because the innovation almost is a challenge in the sense that it's going too fast. Just when I learned this, that came out. I've lived that, been there. And then overall adoption, the change management is ... It's so hard that when you fix one change management problem, everything else shifts. Then what are people doing? And what are you saying to them? What is the best practice for that? Because this is a cultural thing. I mean, change management's been around for years. Now you have it on mass acceleration mode. It's like Mach 100. It's huge.
Matt Hobbs
>> So as you mentioned, I'm deep in engineering. I'm less on the change management side, but I can actually point out what I see on the change management side, which is, you see this point where there's a resistance of, it's not any better than what I do, it's not anything different. And then it takes some time where you actually kind of have to force the use of a particular tool. You have to force it. You get past a 10-, a 20-hour boundary, then you start to see the potential. Then you start to see the transition to, I'm not trying to go research how to use AI, I'm actually engaging with AI to teach me new skills. And that starts to open up the idea of, well, it's actually there to guide me through everything that I do. So if I'm stuck, I just say, I'm stuck, what should I do next? And I think that boundary in the early stage is still a boundary that a lot of folks haven't gotten over. I think we all live in a world where we're actively using these things every day, but it's still not the majority of the population.
John Furrier
>> So how do they get through that? Is that just a ballpark figure, 20, 30 hours? Is that a benchmark? Does that seem to be the state of the art in terms of kind of the learning curve? Because I've heard breakthrough. Once you break through the learning curve, it's mass acceleration.
Matt Hobbs
>> So I don't have the science on this, but what I've seen ballpark is initially, if I go back a year, it was about a 10-hour boundary, but that was also before things like deep research, right? So now you look at that and what I'm even seeing now is that's a continuous thing. So for our folks that are deepest, they're not working less because they're getting more productivity. They're working more because they feel like they have a thirst for knowledge and they're continuing to learn new things. They're continuing to push the boundary. As soon as they complete something, they realize, well, things have now advanced, I could probably do that even better next time. And so that continuous piece of it is the mode that you see our most advanced folks in. Those are the ones that I truly say that are running in an AI native way and how they service our clients.
Jim Anderson
>> Yeah. It's a process of continual learning, right? A couple of the concepts there. I say sometimes people make the mistake of saying AI is the product, and I like to say actually the transformation is the product, not AI. And I had a great experience when I was talking to a customer who said, "Hey, Jim, love this stuff, but I'm actually too busy to learn how to make myself more efficient." So you think about that. We're all too busy to learn something new because it's a process of change management, those types of things. So trying to figure out how we help people on that journey is becoming extremely critical because, as both of you mentioned, it is about the adoption. You can't get the ROI if you don't have the adoption.
Gina Fratarcangeli
>> Yeah. And we get asked often, I think now more than ever, partners like PwC are so critical because it's going to be ongoing change management. It's going to be ongoing learning, and the volume of new is coming so fast that unless you have a consulting partner who's in the weeds daily, it's, I would say, almost impossible for companies to learn all they need to do and all the things that are coming out that are going to impact their business. So we're seeing just tremendous value from the likes of PwC really supporting that ongoing change management process. And it's the change management and business transformation that they've been doing for other waves: the move to BPO, ITO, cloud. Now this is just the next wave, and those partners are experts at it and can really help customers accelerate that whole process and have kind of a safety net as they're going through it.
John Furrier
>> Matt, I'm expecting this year to be the year of enterprise AI, because the agents are becoming better, the AI is better than it was a year ago, and there's more clarity around some of these things. What's it take to run AI native engineering at scale? And what do you think about the idea of forward deployed engineering? Because that was a concept where the market demanded that the engineers go to the front lines and get close to the business problems. What's it take to get to scale, and what's your opinion of the forward deployed engineer?
Matt Hobbs
>> So I'll maybe start with the last one because I think this is a vibrant one for me. So the original concept of forward deployed engineering was you take the engineers who are building the actual product or the platform and you deploy them so that they get firsthand experience with customers and also help adoption. I think now it's a term where you could point to anything and say the entirety of what PwC does in an engineering capacity is now classified under the term forward deployed engineering. There's an ideological debate on the use of the term, but I think the point being, you take folks who have been-there, done-that experience on the particular tools that are needed, platforms that are needed, AI topics that are needed, and you deploy them at the center of the problem to help people learn to execute differently. That piece, I think, there's going to be continued kind of market buzzwords and conversation around it. I think the point is you have to have that engineering capacity embedded with the functional expertise, with the business expertise, in order to actually deliver the outcome now. The side benefit is everyone who's there now sees the potential, and everyone who's there who wasn't necessarily technical before can pick up a higher level of what they can actually execute and run themselves. The other piece-
John Furrier
>> Well, I just want to chime in there because I think I have an opinion on that. I'll share that with you because, depending on who you talk to, that word can be taken out of context. But the issue was, with cloud and now AI, the domain expertise is in the front lines in the area where the business is. So it's accounting, FP&A or whatever, that's the logic. The horizontal scale is still in the cloud and AI. So I think, to me, the forward deployed engineer was a short-term measure of, hey, put the engineering where the domain experts are so they can tell you what to code. But now they can code.
Matt Hobbs
>> Some of these are kind of ... Well, there's a piece on the coding we should actually hit, but just because it's the new conversation around forward deployed engineering and putting those folks there: this was a concept of fusion teams that you had before. You'd put business and engineering together to drive an output for a product. So it's at a different scale because, again, of the pace of innovation and the potential and ability to actually drive those things forward.
John Furrier
>> And so the second part of my question was, how do you get an AI native engineering capability at scale? In the past, we heard production workloads at scale. That was a north star. The new north star is AI-enabled applications at scale, which now includes, by definition, agentic and other things.
Matt Hobbs
>> You can push to whatever boundary of potential when you look out and say real-time applications built by AI, dashboards built by AI, every other portion there. I think there are very few companies who are even at that stage right now. There's a lot of startups trying it and trying to see if they can disrupt. But a lot of it is, to your earlier point, I have 50 years of technical debt. How do I actually bring that forward and resolve my technical debt, put it in a cloud native capacity, access the data, and deliver the business outcome that I want? And so if I go back a little more than a year ago, we started this, but again, the tooling keeps advancing. And so in there it was, okay, if we're going to go and build in an AI native way, I challenged our teams, you have to come back and show me the difference. We started with, let's repurpose old projects. Let's take the same exact scope definition so we had a side-by-side comparison as to whether or not we could deliver the same quality. We still did the same code reviews and everything else that you need to feel comfortable, under our brand, with what we deliver for a client. And it proved out very, very quickly, within a matter of weeks.
John Furrier
>> It's like an A/B test.
Matt Hobbs
>> Scaling it is where it became the problem. Scaling it is where you have to have: what's the consistent delivery methodology now in an AI native way? What's the consistent choice around tools and tool options based on how you deliver? What's the agentic harness? Where are you going to mandate that you still have human reviews, with been-there, done-that experience, to know what good looks like? That's the issue with anyone can code: anyone can code, but they can also accelerate damage if they don't know what good looks like or aren't using the right agentic harnesses.
Gina Fratarcangeli
>> For example, one of the things Google and PwC did together was a great use case with a large retailer. I think AI and the agentic landscape is allowing firms to solve kind of what were unsolvable problems. Case in point, they have managers who cover huge areas, a huge volume of stores. There's no way possible that those managers could control and aggregate the volume of data and use that data to drive next best action and actionable insights from it, because of the scale at which they were operating. And as you know, their margins are so tight. So we collectively used AI to create just a tremendous amount of reporting information, insight creation and dashboards for those store managers. So they would wake up, be able to see their dashboard, and the AI agents were working in the background to generate the information. Here are areas you need to be concerned about, here's overstock, here's employee staffing issues, creating a tool to help them be more effective. So an age-old problem, really difficult to solve pre-AI, creating AI agents and delivering a solution.
John Furrier
>> I love the AI scale and I love the examples. Let's tie in the Google Cloud partnership, because AI makes everyone a superpower. So I'm sure PwC has gotten some superpowers; you mentioned some great use cases and walked through some of those examples. Where does that fit in with Google Cloud, and can you share how you guys are working together? I heard Pathways earlier before we came on camera. I know you guys got Pathways and all kinds of technical stuff happening. Share some of the integration and conversations as a partner. What do you guys work on together?
Jim Anderson
>> Yeah. I think in general, what we look at is how do we come together? I talk about this concept, sort of partnership math: one plus one actually equals one, right? How do we bring both of our expertise together to really address customers' complex issues, right? We bring technology, we're innovators in technology; they bring deep industry expertise, right, change management expertise. And that's the system that we bring together to customers to really address things in ways neither of us could do by ourselves. And that's the true value of the partnership moving forward.
John Furrier
>> Let me weigh in on the speed game, because orchestration will be a big topic this week here at Google Next. Gemini does a good job there. Where are you tapping the jewels in Google Cloud to make your life better?
Matt Hobbs
>> I mean, so you heard one of the examples around kind of what we did. It's also kind of the raw models, the flexibility of what we can deliver. We have a lot of clients who have chosen Google Cloud as their destination. And by and large, we support our clients on the journey to the destination that they're on. So a lot of that ends up being either delivering AI capabilities or using Google Cloud and Gemini in the way that we deliver every offering for our clients. And so it's the way that we think about us becoming AI native, as well as the center that we use to deliver our clients' outcomes. We're going to be kind of pushing it even further. I think we've now hit a stage of maturity in our partnership where we need a lot more concentrated bets specifically, partly because the pace of innovation from Google on these topics is coming so fast. So that's some things that we'll be announcing here later this week around kind of what those steps look like.
John Furrier
>> You mentioned-
Gina Fratarcangeli
>> I think what Matt also wanted to say is that he loves the end-to-end stack that Google provides.
Matt Hobbs
>> Absolutely. Sorry, that was implied. So-
John Furrier
>> Got to love those TPUs too, by the way. Don't leave them out.
Matt Hobbs
>> End to end stack, absolutely.
Gina Fratarcangeli
>> Very, very secure and integrates with whatever existing technology.
Matt Hobbs
>> Absolutely.
John Furrier
>> Yeah, so how-
Matt Hobbs
>> Sorry, that was implied.
John Furrier
>> Of course, it was included. End-to-end. That's coming tomorrow. On the speed game, because this is a huge issue you brought up earlier, the speed is so fast that it's hard to figure out. What have you learned on the speed game? How do you keep up? How do you manage that challenge and turn that into an opportunity?
Matt Hobbs
>> Yeah. So on the technical release side, we have teams that are dedicated to Google that focus on every single release, every single private preview that's coming out, how it impacts what we're delivering now, but also whether we have current projects going on that we could drive faster, like we're going to move and adopt new feature sets, et cetera. From the broader landscape that you hit, I think it's paralyzing. I think it's paralyzing for everyone, with the movement of every day changing things. But the idea that you're going to be continually accruing technical debt in the way that you build the capabilities is a certainty. The time horizon on how you resolve that technical debt, though, needs to be a lot shorter. That's a lot of the issue that we have with our clients, with two trillion dollars of technical debt wrapped up and spending money on things that they don't want to spend money on. They can now actually address the technical debt a lot more easily by leveraging the full end-to-end stack that's there from GCP.
John Furrier
>> So AI can actually help with technical debt. But on the speed side, you're saying the best strategy is to keep moving.
Matt Hobbs
>> The best strategy is, one, for some, just get started. Stop debating the tool, focus on the capability build. Gemini is a great tool set to start with, but you've got to build the capability. It's going to continue to advance and evolve, so the entry point of what you use today is going to be different. The version you use today is the worst version you're ever going to use. There will be better versions tomorrow, then the next day, and the next day.
John Furrier
>> You stole my lightning round question. I was going to save that one. My final question for you before we go to the lightning round is, what does paralysis look like in an organization? I mean, no one's sitting there literally paralyzed with nothing moving. Paralysis is a problem; I've heard that before. Describe what it looks like. They still have meetings, stuff is getting done. What does it look like?
Matt Hobbs
>> Constant debates as to whether or not you release or scale AI tooling to your employee base. Can we trust them? Can we trust it? What guardrails are we going to put around it? Well, maybe we should start with a hundred people. That's paralysis to me. When the decision is, can we wait? There's this build-versus-buy conversation going on in some executive boardrooms that astonishes me a little bit, which is, well, should we just wait until someone else's AI is our business and then we can just license it? But you won't survive. But that's a conversation that happens.
John Furrier
>> I don't want to be on that board.
Matt Hobbs
>> Yeah, exactly. Those are symptoms to me of paralysis.
John Furrier
>> No, but this is a real issue. This has come up a lot. But people disguise it. They still have meetings and they say, "Oh, we're going to roll it out."
Jim Anderson
>> We even have, you can talk about pilot purgatory, right? I mean, that's another form of it: you know what? Let's just run a pilot. But you never get to deploying it, getting the adoption, and seeing the ROI associated with it and going from there.
John Furrier
>> Yeah. I think that survey by MIT was skewed on the agent thing, because I see a lot of that: "We have tons of sandboxes out there. Nothing is working. There's no ROI." So I think there's a forced measurement around that.
Jim Anderson
>> Yeah. I mean, I think today, actually, you are seeing ROI associated with the AI technologies. We've talked about it sort of before. And I think that's going to continue to accelerate moving forward.
John Furrier
>> At NRF early in the year, companies I had talked to a year ago that were playing with agents now had them in production. Their top-line revenue numbers are blowing away all estimates because it's revenue now. It's not just cost takeout.
Jim Anderson
>> Yes. Yes.
John Furrier
>> Okay. Lightning round. Question, Jim, we'll start with you. If you can give one piece of advice to someone who wants to scale their AI, what would it be? Same question for Gina and Matt, so you get prepared. Matt's got his answer already. Make up another one.
Jim Anderson
>> Yeah. What I would say is, listen, focus on the industry expertise, right? Be niche, double down on that expertise, and leverage it along with the organizational change necessary to make sure you can drive adoption of what you're doing.
John Furrier
>> All right. Gina, you're up.
Gina Fratarcangeli
>> I think set up an advisory board that involves finance, technology, leadership, business, sales, and prioritize the use cases effectively. Have everyone weigh in on what those use cases are going to look like and the impact they're going to drive to the business so that, one, you've got buy-in. But two, you're going through it in a methodical way and you decide whether ROI is the most important thing or starting to get adoption going. So methodical approach and really focus on the change management.
John Furrier
>> Matt.
Matt Hobbs
>> Well, I shared one. I think you got to better define the intent at the CEO level so that you know what journey you're going on.
John Furrier
>> All right. Fun conversation. Very insightful. Thanks for sharing. Love the commentary. Again, the AI world is upon us, and as the change is upon us, think about the engineering and the systems it will impact, all the applications. The AI-native applications are coming fast. This is theCUBE. I'm John Furrier, your host. Thanks for watching.