AI & Analytics: Shaping the Future With Alteryx CEO & Tom Davenport, Americas
February 19, 2025 | 6:00 PM - 7:00 PM UTC

Join theCUBE on Feb. 19 as Andy MacMillan, the new CEO of Alteryx, unveils his vision for how the company will shape the future of AI and data analytics, helping businesses boost efficiency, foster innovation and achieve competitive growth. Meanwhile, Distinguished Professor Tom Davenport delves into the critical distinctions between Generative AI and Analytics AI, offering valuable insights into their transformative impact on various industries. Don’t miss this compelling exploration of AI and the evolving landscape of analytics.

Andy MacMillan
CEO Alteryx
Tom Davenport
Distinguished Professor Babson College
Dave Vellante
Co-Founder & Co-CEO SiliconANGLE Media, Inc.

>> Hello. Welcome back to theCUBE here in Palo Alto. I'm John Furrier, host of theCUBE in our Palo Alto Studios. Andy MacMillan is here. He's the CEO of Alteryx, kind of making his public debut on theCUBE. He was on with Jon Fortt on CNBC earlier. New to the company. Thanks for coming on theCUBE. Appreciate you coming in.

>> I really appreciate it. Thanks. It's fun to be here.

>> So Alteryx has been around. We've been covering it for a long time. Obviously the big data movement, kind of an OG in data. Obviously the market's changed. You're now at the helm. You just recently took over as CEO of the company. The market's in a good position, kind of spinning to your doorstep these days. You guys are in a really good pole position. First of all, how long have you been at Alteryx? When did you start? What's the plan? What attracted you to Alteryx?

>> Sure. Yeah. I started in early December, so very new in the role, as they say, but it's been a fast ramp up. I've known the company for a while, know the space. I used to run the data.com business at Salesforce a while ago, so definitely not new to the data game. What got me excited about the company is, I think you're exactly right. The tailwind right now is all around AI. Everybody talks about AI. I think the next few years is about getting prepared for AI and how people are really going to take advantage of this. Whether it's an agentic workforce or a large language model, the thing everybody's talking about is how do you get your data ready. I think we have the best product in the world for doing that, so it was an exciting opportunity to jump into.

>> It's interesting. As we've been saying on theCUBE, history is going to be written in this moment. We're at a critical juncture, the nexus of many things coming to cloud scale at the next level, where you start to see platforms become big. There's still tooling out there for certain use cases. But with machine learning and generative AI specifically now peaking at the beginning of, I'd call it, inning one of the game, you're either going to be on the wrong side of history on this or the right side. And both on the company side, but also your customers, right? So the ones who have data are doing well; some who have been doing data differently may or may not be doing well. So there's kind of a platform refactoring. New architectures are emerging. A new kind of data layer is emerging while an existing market like analytics has been around. So the business analyst, the democratization of data, has been thriving with lakehouses and data lakes. But now you've got to go horizontal with scale. You've got specialism with machine learning and generative AI in the data specifically. So it's causing customers to really scratch their heads and say, "Okay, what is the scalable architecture for this?" What is your view on that?

>> I think you're right, and I think the thing that is shifting a lot is the entire organizing principle for how people put their data together. It used to be very siloed by department, kind of line-of-business-centric, and very siloed by the business applications you pick. So if you go into any large company and ask, "How's your data organized?", they'll first tell you about the applications they have, and then those applications dictate the way their data's structured. And I think the big shift is that the promise of AI is to be more effective and efficient across those layers, across those teams. Nobody wants to talk to an AI agent that sounds like talking to a live agent that keeps pointing you to a new department. And so we're going to change how we solve problems. And I think one of the things we're going to have to do is completely change the way we think about how we manage and store and organize all of this data, and that's going to be a big lift. And so how do we help people get empowered to do that? A lot of what we talk about is who are we empowering? It's not just going to be your data science team. It's going to have to be business analysts, people in operational roles, people that understand how your business operates. They're going to have to go in on a mandate to say, "I'm going to reorganize this stuff to have it make sense."

>> Andy, you're in the valley here. You've obviously got a great company with Alteryx. There are two types of companies we're seeing out there, and I want to get your perspective on this. There's the "We need to pivot," which means stop what you're doing and turn and change directions. And then there are folks who just change the trajectory of what they're on. It's not really a pivot; it's more of an, "Okay, maybe a zig or a zag, but we're in a market where you have to make these moves." Alteryx, you're new at the helm there. Obviously a lot of investment behind the company. You're in a much different place as a company than you were a few years ago or even last year. What's different about Alteryx? Take us through the scope of the trajectory, the plan. Obviously you're going into this kind of perfect storm of innovation, where value extraction is going to be there, but there's a new way of doing things. If you had to share your thoughts on Alteryx's position and where you're going, where's your north star?

>> Yeah. I think inside the company this is one of those moments where every leader should rethink essentially everything they're doing. There is no function where I have not had a chance to sit down and open up a ChatGPT or a Gemini or an Agentforce and think about how this would change the way that team runs, how it solves problems, how it scales. And so, one, I do think it's a pivot moment. Some pivots are small departmental ones and some are company-wide, but everybody should be looking at that. And so that's inside the company. I think outside the company, my feeling is there are companies that are going to build the AI future. There are also a lot of companies that are going to be sort of the picks and shovels to help everybody get there. And so I don't know if the world is going to move to an agentic workforce or a large language model workforce or some combination of those. I'm certain that to do either of those, people are going to have to get a lot more of their data prepared. And so we talk about AI in three layers at our company right now. There's the way you interface with our product; everybody's creating an AI interface to their product, so we are too. There's how you get output from the product; it's fun to get AI reports and things like that, which is all great. But the third one, which I think is unique to us, is how the product is actually used to help companies get ready for AI. And so we're going to transform the company internally with it, but we're also going to go to every one of our customers, every one of our prospects, and say, "Here's how we're going to help you get prepared." Whether you're heading down an agentic model or a large language world or whatever you're doing, you're going to have to have a different view of your data.

>> It's interesting, this old expression, I'll never forget it: "Don't pull the nets when the fish are running." The fish want data. You guys have been doing it at Alteryx, but they're also doing things differently. You mentioned some of the transformational things. Business transformation is the hottest topic. And it's interesting, we're going to get into the business analyst role, which I think is super important and which you guys talk about. With the old dashboard data, you had a great democracy, "Hey, things are happening," and then you make a recommendation. Now there's more action involved. You can actually take the data, analyze it and effect change, whether it's automation or workflow, which will reduce a lot of that heavy lifting. So people are looking at specific use cases that are low-hanging fruit, now agentic. We've been reporting it's coming, but the AI infrastructure is really where the action is. What's happening with the data? Where's it stored? What's your view on that? Because I think the fish are still running. You guys have the net out there with the data analytics. The data analysts are still doing their job, but now, at the point of value, they can actually implement directly. They don't have to escalate it or make a recommendation. Take us through that nuance.

>> I think one of the things we've done since the founding of the company was this idea that no matter how much you instrument something, for example, with a dashboard, you're still going to get asked questions and have to iterate and work, right? We've all been in the meeting where somebody says, "I'm looking at my dashboard. Why are sales down in the Southwest?" And everybody turns to some analyst who now has to go figure this out. And what that person does is they tend to go pull a bunch of data together. They used to do that in spreadsheets on their desktop, and that was pretty inefficient. Now they do it in Alteryx. And that might be pulling some data out of the cloud data warehouse. It might be pulling in third-party data. Like, maybe the industry's down in that region. You pull all that together, and then that person has a choice to make. Sometimes it's just an answer: "Oh yeah. Well, sales are down for every company in that region. Secular trend. Let's move on." Or it might be, "I need to automate this. I'm going to have to build a report on this every single week." And so they can go do that. I think what we're going to see now is a bunch of new requests coming in. We're rolling out AI. We're doing these things. We're all going to turn to that same analyst and say, "Go figure out how the AI thing is going to know how to do this for a very long time." That analyst is going to have to figure out how to pull this data together, systematize it, turn it into a workflow and build that out. And so I think that's a pretty natural evolution. I think one of the core theses we hold is that you're going to be turning to somebody who knows your business to do that, not just somebody who's super technical. You're going to have to marry those skills together. But you're going to start asking people in your company to solve these new kinds of problems, and they're going to need a set of tooling and capabilities to go solve them.
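The analyst loop Andy describes, pulling internal numbers, joining third-party benchmark data, and deciding whether a regional dip is secular or company-specific, might be sketched like this. Every table name, figure, and the `diagnose` helper are invented for illustration; this is not Alteryx code, just a plain-Python stand-in for the workflow.

```python
# Internal revenue by (region, quarter) -- would come from the warehouse.
our_sales = {("Southwest", "Q1"): 120.0, ("Southwest", "Q2"): 90.0,
             ("Northeast", "Q1"): 100.0, ("Northeast", "Q2"): 105.0}

# Third-party industry index for the same keys -- the external data pull.
industry = {("Southwest", "Q1"): 100.0, ("Southwest", "Q2"): 75.0,
            ("Northeast", "Q1"): 100.0, ("Northeast", "Q2"): 102.0}

def diagnose(region: str) -> str:
    """Compare our quarter-over-quarter change to the industry's."""
    ours = our_sales[(region, "Q2")] / our_sales[(region, "Q1")] - 1
    bench = industry[(region, "Q2")] / industry[(region, "Q1")] - 1
    if ours < 0 and bench < 0:
        return "secular trend: the whole industry is down in this region"
    return "company-specific: look at our own execution"

print(diagnose("Southwest"))  # industry is down too -> secular trend
```

The interesting step is the last one: once this answer is needed every week, the same logic gets systematized into a repeatable workflow rather than a one-off spreadsheet.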

>> Yeah. What's interesting and fun is that the low-code, no-code revolution's coming into this area. We're at the point of analysis; you don't have to code. And so the coders will go lower down the stack and work on harder workflows. So at the end of the day, workflows and data become the intellectual property for companies. So the first step that we've been reporting on is they go, "Okay, what do we have?" That's the first step. And then they go, "Okay, how do we use that data?" Then you start to see that behavior change. What's your perspective on this? Because I think enterprise analytics, the business that you're in, is now the beginning of that change.

>> Yeah. I think people are going to find they not only need a catalog of what they have and how they're going to use it, they're going to have to start cataloging what they need, because again, you're going to want external data. You don't want this agent or whatever it is to only understand your little universe. It should have a purview of the wider world. And then you're going to have to understand how it's being used. And I think the other thing is we're going to have to have traceability. One of the things I think AI is so good at is creative tasks. You can give it all this marketing collateral, tell it to come up with something really fun and interesting, and you go, "Wow, that's really fun and interesting." When I ask an analytical question about my business, how are sales in the Southwest region, what I'm not looking for is something fun and interesting that the AI sort of made up on the fly. What I want is the actual answer. And so we're going to have to complement the creativity of AI with the predictability and the precision that people want out of their analytic systems. And so that is not only preparing the team to put that together, but it's the traceability. People are going to go, "Where did that number come from?" And we can't just go, "I don't know. The robots made it up." We have to say, "No. Actually, here's the traceability." And we might say, "Actually, that's not how we want it to do that. We want it to change." We're going to swivel right back to that same person again and go, "Can you make it do that thing differently so we get this actual number?" And so I think that's important, but it's not just the data you have, it's also the data that you're going to need. And I think people are going to have to really think about how they bring all that together in a way that is trusted and makes sense.
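The traceability point, every derived number should be able to answer "where did that come from?", can be sketched minimally: values carry the names of the sources that contributed to them. The source names and the tiny `Traced` wrapper are hypothetical, just one way to make the idea concrete.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Traced:
    value: float
    sources: tuple  # e.g. ("crm.opportunities",) -- invented source names

    def __add__(self, other: "Traced") -> "Traced":
        # The result remembers every input that contributed to it.
        return Traced(self.value + other.value, self.sources + other.sources)

crm = Traced(40.0, ("crm.opportunities",))
billing = Traced(60.0, ("billing.invoices",))
total = crm + billing

print(total.value)    # 100.0
print(total.sources)  # ('crm.opportunities', 'billing.invoices')
```

When somebody asks where the 100.0 came from, the answer is attached to the number itself rather than "the robots made it up."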

>> Yeah, Andy, I think one point I can share that kind of validates that is that no one wants to use a generic foundation model to drive their business. They'll use it to complement data; that's your point.

>> Right.

>> The other thing that we've gotten a lot on theCUBE over the past year is the following kind of directive we hear in commentary: enterprises say at the top level, CEO or board, "Let's use AI to transform our business." And that then gets translated to, "What workflows do we attack to be more efficient or drive revenue?" That's the high level. And then they go, "Great. Solve that problem. Take that hill." And then it gets down into the platform: "Okay, so how do we do that?" That's kind of where we're at right now. How would you talk about that? Because you guys have an interesting narrative around the canvas to solve AI problems, and this is what I see the need for right now. What is that canvas? Because that's the point we're seeing: "Okay. The directive has been made. We see a clear line of sight to how AI could help with the muck, the toil, the undifferentiated heavy lifting," whatever you want to call it, "but now I've got to put it into action. What do I do next?" What is this canvas? Explain.

>> Yeah. I'll tell you my direct experience doing it. At my last company, one of the things that we did was we rolled out a large language model wall to wall, to every employee. We went on site with the vendor that provided that large language model. We did a big hackathon, we came up with all these interesting ideas. It was really fun. We came up with 60 custom GPTs that we built and rolled out around the company, and they were really helpful. What we found was we could do that in areas that were creative. Where it all fell down, and where we couldn't drive as much value, was anywhere that we needed to use our actual business systems or the data under those systems. And what we found was we had two choices. We could go in the front door of those APIs. So imagine I'm writing a GPT that I want to pull together everything I know about a customer before I go meet with them. Well, you can think of how many systems know something about the customer: my billing system, my CRM system, my marketing system, my loyalty system, my custom product database. You start pulling all that together and you go, "Well, all I have to do is make 12 API calls to pull this data together," and you're back to, "I'm basically building a dashboard. This is not that interesting." Or I have to go to the underlying data and use a tool like Alteryx to pull all this together and say, "Here's everything that I know about a customer at the data layer, and I'm going to have the GPT iterate on that." And so what we discovered was going in the front door was a hard coding problem. We didn't have a tool like Alteryx rolled out yet where we could get to all the data assets. And I identified that as the biggest need. I said, "I'm going to continue to transform the company with a large language model. I need a tool, a canvas, where a business person can pull data together to solve a problem with AI." And so when I got the opportunity to come to Alteryx, it was like, "Well, this is the exact problem I had before. 
I've seen this problem. I've had this need." And that need is exactly as you described. It's a canvas or a workspace where somebody who understands the business, and understands the actual data, can say, "I need this bit of data and this bit of data." They can say, "Oh, I know that my CRM system has seven ways we calculate this one thing for different reasons. I need this one for this problem." And I think that's the big unlock. I think that's where people will start using AI to solve business problems: when they can bring not only the creativity but the actual data of the business to bear on the problem.
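The canvas pattern here, joining extracts from each system on a shared customer key instead of making 12 live API calls, can be illustrated in plain pandas. The system names, fields, and values are all hypothetical; this is a stand-in for the data-layer join, not Alteryx itself.

```python
from functools import reduce
import pandas as pd

# Extracts from three hypothetical systems, keyed by customer_id.
crm     = pd.DataFrame({"customer_id": [1, 2], "owner":   ["J. Smith", "A. Lee"]})
billing = pd.DataFrame({"customer_id": [1, 2], "balance": [1250.0, 0.0]})
loyalty = pd.DataFrame({"customer_id": [1, 2], "tier":    ["gold", "silver"]})

# One outer join per source keeps customers that appear in any system.
profile = reduce(
    lambda left, right: left.merge(right, on="customer_id", how="outer"),
    [crm, billing, loyalty],
)

# The consolidated row becomes the prompt context for the model,
# replacing N front-door API calls with one data-layer record.
context = profile[profile.customer_id == 1].iloc[0].to_dict()
print(context)
```

The choice of a data-layer join over per-system API calls is exactly the trade-off described above: no business logic runs, and the model iterates on one consolidated record.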

>> I think that's a great point. I was talking to a founder of a company that sold to another company. And the reason why he sold the company, or was attracted to selling it, was that he was trying to solve such a hard problem. His job was so hard that when this company solved his problem so fast, he was like, "I need to just join forces." Your point about this brute force attack, "I can program the APIs," again, is a lot of grunt work, a lot of pounding out code, but what do you get? Is it repeatable? Is it a black box? So there's some technical, "Hey, we can do it internally." So there's a lot of that conversation going on around, "Hey, I can brute force it this way, or build an abstraction, have an Alteryx canvas, that doesn't have to change the underlying conditions." I mean, that's what you're kind of getting at, right?

>> Yeah. And I think it also goes to the cost basis of AI. And there's a big debate going on right now, and I'm not here to take sides on it: is the value of these business applications going to make AI great, or does AI compress the value of these business applications and become that logic layer? I think we've seen some pretty big-name CEOs debating this in the press. I don't know that answer, but I was pretty sure, for the use cases I was familiar with, that paying to use all those APIs when I didn't want any of that business logic didn't make sense. I just wanted the data, and it was my data. It's my own data. I'm not looking for training data; this was my company's data. It made sense to go right to the data. I'm sure there are other use cases where using that logic might've made sense. That logic might exist for a reason. But I think we're going to see more and more organizations saying, "I have all this data. It is my data even though it's in these business systems. I want access to that data so I can use it to make my AI system smarter." And that might even mean buying an AI agent from one of those big vendors, but it might need to know about these other systems. "I want my AI agent to be smart."

>> It's situational. I mean, you could argue that the logic makes sense if you need the logic, if that's the data you want. But those systems weren't built during the gen AI era, so there's a reason why they have the logic. Now, the trend I'd like to get your reaction on, and see if this fits into the canvas, is that the big winners I'm seeing right now in the market are the ones that look at abstractions as an opportunity to kind of pull the silos out without breaking them down. In other words, you have two choices: break down the silos and re-architect everything, which is a heavy lift, or just use the data in there, and then for the best-of-breed use cases, the reason they exist, you still use the logic. You know what I'm saying?

>> That's right.

>> It's the abstraction of the complexity.

>> Yeah. And I think, look, you still have a workforce that uses those business applications, and they're not going anywhere. So I am not saying these applications go away. I'm simply saying you might have use cases where, if I'm writing something that spans multiples of these, going in the front door, having all that logic run, paying for that compute, that API call, whatever that is, might not make the most sense. For some use cases, you might want to go direct to your data. And in a lot of use cases, you might even have to tell that system, going in the front door, "Which piece of the data am I even trying to get to?" And so again, I think there's almost a role that emerges, maybe a few roles, of people whose job it is to sort of sit in the middle of these orchestrations and think about, "How does this all come together, and what tools do I need to do this?" And then iterate. The other thing I think is not going to happen is people rolling out their AI agent, pushing the go button and going, "Boy, we're done." I think they're going to iterate. They're going to learn how they want this to work. And when we learn things in companies, we turn to the people running those things and say, "Hey, can you make this work a little differently?" And so I think that's a big part of this too.

>> You guys have a lot of announcements on the product side. Clearly product-led growth is on your horizon, so I see that as a big thing. What is the value, if someone asks you, "Hey Andy, what is the key value of Alteryx right now?" Is it that canvas? Is it the abstraction? What's the main value proposition that you're offering customers? Because you have a lot of experience coming in with analytics and dashboarding and the analyst side. They will be driving the change. The ones doing the business transformation are the business people.

>> Yeah. I think what Alteryx really does at its core is help business analysts become sort of superheroes and super users of data, data knowledge workers. You are looking for a place. And again, it used to be your desktop and Excel. And now the data sets are so large, and you're trying to move so fast, that that's not really tenable anymore. What you need, if you are this kind of person, think rev ops, marketing ops, we all have these really incredible people that do operations for a living, is a workspace, a canvas, where you can say, "I need to grab this bit of data and this bit of data and some external data. And I know how to pull it together and do something smart with it and come up with a conclusion." That's really what we help our customers do.

>> Over the past 20 years, I've been chronicling all these transformations. I have to date myself, but I will. Go back 20 years: it was IT transformation, the consumerization of IT, which was about making the IT department easier, okay? Then it became digital transformation, which is company-wide. I think the AI one that's coming out of this, and this is generally validated by the experts we interview, is business transformation. Now, if you believe that to be true, which I think people generally do, "Yeah, transforming the business, that's absolutely key," when you get into AI, you're seeing people throw AI at the enterprise. And AI in the enterprise is one of the hottest categories right now because only 1% of the spend has even been realized yet. So it's a huge TAM. But you can't just throw AI at the enterprise, because there are a lot of knobs and buttons in the enterprise; you've got all kinds of domain-specific configurations. You've got architectures. So search was hard to crack the code on in the enterprise; you're seeing companies with RAG dominate that area. Analytics, again, has hard problems: "Okay, easy to say dashboard, but you've got to go get the databases." So it's so nuanced in the enterprise. Explain how hard it is and why AI is going to be such a transformation. Because your canvas essentially makes that problem go away. I mean, that's the way I see it. What's your take?

>> That's exactly right. It's a hard challenge, because most of us have our first experiences with AI in the consumer space, and there's not a whole permissioning problem set in the consumer space, right? It's not like some people get to see some parts of Wikipedia and not the rest, right? But that's how businesses run. And I'll give you a simple example. When I've rolled this out in the past, one of the conversations we had was, "We have an incredible amount of information in our messaging platform, in our Slack system, for people to communicate. We should connect it to that." Well, that sounds really good, until somebody says, "Well, let's ask the GPT what people think of this leader we have in the organization," and everybody's private chats are part of that knowledge base. But that's not what we do. That's not the expectation for that system, right? We expect to have some privacy. The same is true if you're in a public company. You wouldn't want everybody in the company to be able to put together a predictive model of the company's next earnings statement just because they have access to all this data. So the reality is enterprises have a governance model, an expectation of that governance model, and security systems, things that are going to have to be inherent in these solutions. And so, one, these tools are going to have to have an ability to replicate those systems in how they work. And then the second, as we were talking about earlier, is we built a whole set of systems that run the business that have a set of inherent knowledge in them that now has to transpose itself into this new AI universe. And that's either going to be having the AI use each one of these systems, which might be the answer, or you're going to have to take some of the logic around all that data and make it clear to these AI systems. 
And so to your point, I think the idea that that is clearly a problem, and when organizations have a problem, they turn to their partners and their vendors and they say, "Who can help me put together a way to solve this problem?" Our goal is to show up for our customers and say, "This is really a preparedness problem." And we've literally called our space data prep for the last 20 years. You now have a preparedness problem. You need to prepare your data for something. We've been doing that a long time, if not through the lens of AI, through the lens of how an enterprise does it.
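The permissioning point, the private-Slack-channels problem, comes down to filtering content by the requesting user's entitlements *before* it ever reaches a model. A minimal sketch, with invented groups and documents, might look like:

```python
# Hypothetical document store where each item carries an access-control set.
docs = [
    {"text": "Q3 pipeline summary",  "acl": {"sales", "exec"}},
    {"text": "Draft earnings model", "acl": {"finance"}},
    {"text": "Public product FAQ",   "acl": {"everyone"}},
]

def visible_docs(user_groups: set) -> list:
    """Return only the documents this user is entitled to see."""
    groups = set(user_groups) | {"everyone"}
    return [d["text"] for d in docs if d["acl"] & groups]

# A salesperson never sees the draft earnings model, so a chat assistant
# grounded only on visible_docs() can't surface it either.
print(visible_docs({"sales"}))
print(visible_docs(set()))
```

The design point is that governance lives in the retrieval layer, not in the model: the assistant can only restate what the filter let through.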

>> It's interesting. Again, this is such a great conversation, because one, that backend preparation was needed for many generations. We've seen that in the data dashboard business. But the point you mentioned about Slack and the user experience: users don't care. They just want hyper-personalization. Now, to make all that work behind the scenes, the privacy, "I don't want this data to be exposed to salespeople, to finance," all kinds of knobs and buttons, this is where the value comes in. This is a hard problem. Scope the opportunity for the enterprise, because I think this is the nirvana. Everyone wants to get there. "Hey, Siri. Hey, CUBE. Hey, company chatbot. Get me my thing. Do my..." So they want the interface, they want the personalization. They just don't want to know...

>> Yeah.

>> I think the whole architecture.

>> Yeah, they don't want the complexity, right? I think AI is in some ways, if we do it right, the great simplifier. And that's not only for users but for customers too, right? We've all had to deal with companies where you say, "Boy, why is it so hard for me to just get an answer from this company I'm dealing with?" What I want is the simplicity: let me ask the system and let it tell me what's going on. The same is true inside the company, right? We're all trying to get something done. I joke that companies are always trying to find this mix of enough process that it doesn't feel like the Legend of Zelda every time you're trying to do something, but not so much process that it feels like going to the DMV just to get an order form through. The nice part about AI, I think, is you can take a lot of that complexity of process and maybe make it so simple that somebody just goes, "I just asked this thing and it knows how to navigate the internal workings of the system." Somebody has to help put that simplicity in place over all that complexity. And again, I think there's sort of a picks-and-shovels strategy in a few places. I talk about our product being one. I think enterprise workflow is another. And I think collaboration is another. If you really think about these horizontal systems that companies have rolled out, I think there's a new set of demands coming to these vendors to say, "Okay. In an AI universe, make my data make sense, make my workflows make sense, make my collaboration make sense." But I completely agree with your point: I don't want a new set of complexity when I do that. I want a simplification.

>> It's interesting you mention the future of work. I think that's one big thing that's going to change. Automation is coming in here, clearly, with personalization. And the role of the analyst becomes a transformative figure; you call them superheroes. I think that's going to be a big trend. The thing you mentioned about your experience with LLMs and the hackathons is a good one, because I think you're looking at it and realizing, "Hey, there are a lot of different models." So you have a lot of model integration. And generative AI is generative; it's not pre-programmed. So you have to do data prep almost at runtime. Not to get complicated in the weeds here, but the concept of a system being ready at any time for any query is hard.

>> Yeah. That's right. These systems, when you build these custom things, you can give them specific instructions. I went through this when I built my first GPT. It was a tool to help me prepare for reaching out to a customer. I took a bunch of use cases that we had and examples of value we provided customers. I loaded this thing in, but I didn't train it correctly, and it just started making stuff up. It was really fascinating. It would write these emails about these incredible savings we had built for customers, and I was like, "Wow, that's almost unbelievable." And the answer was, it was unbelievable. And when I dug in, I was like, "Well, here's why." But importantly, I was then able to go in and retrain the system to say, "Hey, when I prompt for this kind of stuff, I want you to go specifically to this set of data, and I want you to only use this data." And so again, to me, that's kind of an analyst job. You could imagine at scale going in there and saying, "Hey, when our customers ask a question about billing, I want you to only go to this little simple table I've created, only get this billing information and provide it to a customer. Don't make up an answer to their billing question." And then we'll iterate. So I think you're right. Can we make it not only simple, but predictable and trusted?
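The "only answer billing questions from this table" instruction can be made concrete as a retrieval rule: the assistant reads from one curated table and declines when the record is missing, instead of letting a model invent a plausible answer. Account IDs and fields below are made up for illustration.

```python
# A small, curated billing table -- the only data the assistant may use.
billing_table = {
    "acct-001": {"balance": 125.50, "due": "2025-03-01"},
}

def answer_billing(account_id: str) -> str:
    record = billing_table.get(account_id)
    if record is None:
        # Refuse rather than generate: predictability over creativity.
        return "I don't have billing data for that account."
    return f"Balance ${record['balance']:.2f}, due {record['due']}."

print(answer_billing("acct-001"))  # Balance $125.50, due 2025-03-01.
print(answer_billing("acct-999"))  # the refusal, not a made-up balance
```

The refusal branch is the whole point: a billing answer is either traceable to the table or it isn't given at all.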

>> Trust is critical. The data, and the traceability you mentioned, come into trust.

>> That's right.

>> It's funny, when ChatGPT came out, it thought I was the founder of the All-In Podcast. I'm like, "Close enough." I mean, I've worked with some of those guys on other opportunities, so it got it wrong, but it sounded great, right?

>> Yeah.

>> Like, "Okay, wow. Okay." But that's your point. There are a lot of false positives, and enterprises won't tolerate that. I mean, the bar is high.

>> Right.
Right. Right. No, I think that's right. I think we're going to start to see examples over the next couple of, I want to say years, but it might be months, of people who get it right. We're going to see some, I think, amazing experiences where we'll start and go, "Wow. Look at all this company is doing. What an incredible customer experience." And I think we're also going to start to see the opposite, where we say, "Look at what this company did. They rolled this thing out and they didn't have that traceability," and the unpredictable happened. How companies react to that, I think, will be really interesting. I think that will then come full circle to, again, a whole set of processes and capabilities that companies are going to say they need to have if they're going to be successful.

>> Well,
we're going to do an agentic AI summit here, a two-day event. Definitely want to have you guys in there for sure because this is top of mind. It's pretty clear. Even Andy Jassy at AWS re:Invent mentioned it. Google Cloud folks have mentioned it as well, that agentic's coming, it's just not today. The AI infrastructure is the key. That's just the speeds and feeds. You're seeing all the semiconductor companies doing well. And the data layer that you're in is in build-out mode now. So you're in a good spot. As CEO, new at the helm holding the wheel of the ship, what's your investment strategy? What are you focused on? Where are your growth areas? It's obviously product-led growth. You've got the Copilot. You've got a deal with Google Cloud; you're in their marketplace. What's your focus? What's your investment thesis this year? What are you investing in?

>> Yeah,
the biggest area we're investing in is, one, connectivity to cloud data warehouses, which continues to be a big driver of our business as people want to get more value out of the investments they've made in that infrastructure. And then the second one is exactly in line with our discussion here, which is, our product as it exists today is the product that solves this problem. So this is not a future state we have to get to. People use our product to do data prep. This is the new data prep. I think what happens when you go after new use cases, though, is you discover new features people want, new things that they need. So I expect us to have a very fast-moving roadmap: as people use our data prep product to get their data prepared for AI, we'll be working closely with our customers to make sure that all the things we learn end up showing up in the product.
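
The data prep step MacMillan keeps returning to can be sketched minimally. This is a toy illustration with made-up customer fields; a real pipeline (in Alteryx or anywhere else) does far more:

```python
# Toy data prep sketch: trim whitespace, normalize case, drop incomplete
# rows, and deduplicate on email before data is handed to an AI system.
# The field names and records are invented for illustration.

def prep_records(raw):
    seen, clean = set(), []
    for rec in raw:
        name = (rec.get("name") or "").strip().title()
        email = (rec.get("email") or "").strip().lower()
        if not name or not email:      # drop incomplete rows
            continue
        if email in seen:              # dedupe on normalized email
            continue
        seen.add(email)
        clean.append({"name": name, "email": email})
    return clean

raw = [
    {"name": " ada lovelace ", "email": "ADA@example.com"},
    {"name": "Ada Lovelace", "email": "ada@example.com"},  # duplicate
    {"name": "", "email": "noname@example.com"},           # incomplete
]
print(prep_records(raw))  # one clean, deduplicated record survives
```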

>> Yeah,
I mean generative is just kind of prepping data at runtime. And then resilience. Final question on resilience. This comes up a lot. The bar is high on resilience. I mean, some say, "Generative AI, just another app." So you have experience in that. What's your thoughts on resilience? Customers have a high bar on integrating their apps. How do you guys look at resilience?

>> I
mean, I think as we think about having, whether it's an agentic workforce or a sort of large language model-based infrastructure, those things can't go down. They need to be, again, predictable and there in steady state. And so a lot of what we're working on is how to make sure that the system's always ready for people to go, that we're fast. Again, people are going to want to bring in very large volumes of data quite quickly, and we need to be adaptable. I think part of being on the forefront of something new and exciting is acknowledging that we don't know all the ways people are going to use this. We don't know the way it's going to be. Again, I said earlier, positive and even negative. How do we make sure that our customers and our users are ready to move quickly and adapt as we learn? And I think that's going to be a really important thing that happens.

>> It
must be fun for you to go into customers. Certainly you've been hacking LLMs and doing your stuff on your own too prior to joining Alteryx. Must be fun going to see customers and seeing their problems. What's the coolest thing you've seen? Or when I say coolest thing, like coolest challenge/opportunity that you've seen in the customer base you've been poking around in?

>> Yeah,
I think it's exactly right. It's fascinating how much right now in these meetings is people spinning their laptops around saying like, "Look at this thing that I built" and then seeing something else that somebody else built. It is really interesting how people are thinking about their business. I talked to a customer in Europe recently and they were talking about how while they're a manufacturing company, a lot of their cost basis is the design time. They're a bespoke large scale manufacturer. And so they spend a lot of time getting the designs right. And they said one of the things that they believe AI can help them do is learn from all their previous designs, do a lot of pattern matching really quickly and come up with, think of almost like an 80/20 of a starting point of a design. And that would compress every aspect of their business down to faster turnarounds, faster time to money, better customer experience. Really fascinating. And I just think, again, every business is thinking about how do I completely change. That would change everything from their margin structure, their delivery timelines, their staffing levels and what they're hiring. Just fascinating transformational concept. And they were showing me like, "We can already do a lot of this today." It was really cool.

>> There's
no bubble when there's value extraction and value creation with technology. Andy, final question. For the folks watching that are either Alteryx customers and prospects, what's the pitch? What's the narrative that you want to share? Why Alteryx? When should they call you in? Is there certain signals and things that in their environment that jump out off the page to them and say, "Hey, we got to call Andy"?

>> Yeah,
I think that's a great... Thank you for the question. There's a couple of things that I think we see. One is, again, these big investments in cloud data warehouses. We tend to follow up very quickly. People say, "We put all our data somewhere. How do we give people access to it in a way that can solve problems?" So that would be one. The second is, as your company starts to go down this AI path, and I think everybody will quickly realize you have a data challenge to sort out, we're happy to come partner in that area. And then the last one is to our existing customers. One of the things that's so exciting right now is we have this massive community of users. We have 600,000 users in our community. We have these ACE experts, people who are ultra-certified experts. And what I think is exciting for them is, I think the future of AI means that their skill set is going to be even more valuable as companies try to get their data prepared for AI. And so a lot of what we're doing is partnering not only with our customers, but even our users, to help them get ready for this and really feel like this is a tailwind for their career.

>> Congratulations
too on the new job. We've been covering the events. Definitely your community has got a lot of engagement. They're tight. It's a trust network. It's very cool. Thanks for coming on theCUBE. Appreciate it.

>> Thanks
for having me. It's really fun.

>> Andy's
here, at the helm at Alteryx. Again, perfect position. If you're in the enterprise, you want to connect and get that data out, get that value in this new generative world where things are being generated, you've got to have that data ready and prepared. You need a canvas. Alteryx has a great solution. Check it out. I'm John Furrier here at theCUBE in Palo Alto. Thanks for watching.

>> As
a Formula 1 CEO, I lead hundreds of people in dozens of departments. But I'm not the only one making decisions. Getting the right information from our data is critical to the entire team. That's why we rely on a platform like Alteryx. It gives us the insights we need to work smarter and think faster, on and off the track. From logistics and operations to our financial systems, Alteryx makes it easy for everyone to understand analytics. We all know we need to see around the corners. One of the smartest moves you can make as a CEO is empowering all your employees. So when people ask me how do I make my decisions, I tell them I don't. They do.

>> Hi,
everybody. Welcome back to AI and Analytics: Shaping The Future, made possible by Alteryx. We're here with Tom Davenport who needs no introduction. Tom, last time I saw you was this summer at the MIT CDO conference. Thanks for making some time for us.

>> My
pleasure. Thanks for having me on.

>> Yeah,
you bet. We talked a little bit last time, and I know you've written about this, about what I sometimes call legacy AI. But that's still booming. So how do you see gen AI complementing, or even competing with, traditional AI and analytics? It's certainly competing for the mind share. But how do you see it complementing it going forward in terms of driving business value?

>> Yeah,
it's a big issue. And part of why I talk and write about this is I want people to call it something other than legacy AI or traditional AI or non-generative AI. Analytical AI is the term that I like best for that more traditional machine learning where we're trying to predict numbers, not text or images. And I think it's at least as important as generative AI, despite all the publicity about generative AI. And in many cases, I think it's a bit more likely to make money for organizations, because you use that type of AI in areas like pricing and personalizing marketing content. Figuring out who to target a particular ad to is an analytical AI activity, as is fraud elimination in financial services, and figuring out who to give a credit card to and who not to. That's all analytical AI. So it's been around for a while, but it certainly has legs. And I think in many cases we'll see combinations of analytical and generative AI in many use cases.

>> When
I think about analytics over the years, it obviously starts with the data. We shoved everything into a data warehouse. We built cubes that were fairly complicated, and you needed some really smart people to do that. And the elapsed time to get to answers was quite significant. The science projects of Hadoop and the promises, the failed promises, frankly. And then the cloud definitely compressed that time to value, but still, those analytic systems have generally been historical systems of truth. And with AI, it feels like we're just getting closer and closer to real time, with connectors to backend systems, the business logic, and a better understanding of the metadata. All those sort of technical barriers seem to be at least smoothing out or breaking away with AI. Do you have any perspective on that, and specifically the real-time nature of analytics and the industry's ability to compress the time to value?

>> Sure.
Well, we've seen of course a lot of technological change in the generative AI space, but there's been a lot in the analytical AI space as well. We've had automated machine learning now for about a decade or so, where people can create models with the help of, in a way, a form of AI to do AI, and it will basically do all the steps in a machine learning analysis except deciding what to do in the first place and figuring out what data set to use. And it'll try a hundred different algorithm types in a couple of minutes or so. So that really speeds things up. And I think even more importantly, maybe, makes this whole domain accessible to people who probably had a statistics course in college but haven't done a lot with it since. So with some of these tools, you don't really need to know very much about the ins and outs of statistics, or you don't have to write Python code to do the analysis. You just say, "Here's the data set I want and here's the variable that I want to predict," and it's off to the races. So it's not only much faster, but makes it possible for a whole different group of people to do sophisticated analytical work.
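
The loop Davenport describes — try many candidates, score each, keep the best — can be sketched with deliberately trivial models. Real AutoML tools try hundreds of algorithm types with tuned hyperparameters; the two candidates and the toy data here are invented:

```python
# Toy sketch of automated model selection: fit each candidate on a
# training set, score it on a held-out validation set, keep the winner.

train_x = [1, 2, 3, 4, 5]
train_y = [2.1, 4.0, 6.2, 7.9, 10.1]        # roughly y = 2x
valid_x, valid_y = [6, 7], [12.2, 13.8]

def fit_mean(xs, ys):
    m = sum(ys) / len(ys)
    return lambda x: m                       # baseline: predict the mean

def fit_linear(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return lambda x: my + slope * (x - mx)

def mse(model, xs, ys):
    return sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

candidates = {"mean_baseline": fit_mean, "linear": fit_linear}
scores = {name: mse(fit(train_x, train_y), valid_x, valid_y)
          for name, fit in candidates.items()}
best = min(scores, key=scores.get)
print(best)  # the linear fit wins on this linear-looking data
```

The point is the shape of the loop, not the models: an AutoML system runs exactly this pattern over a much larger candidate space.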

>> Tom,
I was at this dinner last night, and it was a dinner for investors and startups. A lot of young people there, as you can imagine. Smart, young, eager kids in their 20s. And those of us like you and me, who dye our hair gray, were lamenting that we wished we were 25 again. And one of the folks said to me, "Well, you had the dotcom." And of course you and I both remember the dotcom well. And back then there was Negroponte's bits-versus-atoms idea of industries, and it feels like AI is maybe more ubiquitous. But what industries do you think are going to experience the biggest transformations and the biggest opportunities? Is it sort of across the board? Are there any patterns that you're seeing with this analytical AI that you're talking about?

>> Well,
it's always been, as you've suggested, based on the data that's available. And the industries that have the most data tend to be the ones who can make the most use of analytical AI. So financial services has been that way for a long time and was a pioneer in many domains: deciding whether to extend credit, whether a transaction is fraudulent, that kind of thing. And of course, if you're talking about hedge funds or investment-related AI predicting which investments, stocks or bonds or whatever, are going to do well, that's always been an analytical AI activity. So financial services probably more than anybody, but retail has been very big in this regard. Lots of point-of-sale data that they can use to try to predict things and make sure we don't run out of stock on the shelves. Telecom has been quite big in this area. Everybody in that industry wants to predict and reduce churn. And it's very easy to create a churn model to identify the factors that are most likely to lead to churn. So I think the industries that have the most data tend to be the most successful. And it's not only an industry split; big companies tend to have accumulated more data. Small to medium organizations are typically at a bit of a disadvantage in that regard because they just haven't had the scale or the time to accumulate a lot of data to analyze. And that's a difficult problem to solve. We don't really tend to have synthetic data on the analytical AI side. You have to actually accumulate it through your business transactions.
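
A churn model of the kind Davenport mentions can be sketched as a tiny logistic regression. The data and the single "support tickets" feature are invented for illustration; real churn models use many features and a proper training library:

```python
import math

# Toy churn model: fit a one-feature logistic regression by plain
# gradient descent, then score new customers. All numbers are made up.

tickets = [0, 1, 1, 2, 4, 5, 6, 7]   # support tickets filed per customer
churned = [0, 0, 0, 0, 1, 1, 1, 1]   # 1 = that customer churned

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

w, b = 0.0, 0.0
for _ in range(5000):                 # gradient descent on log loss
    gw = gb = 0.0
    for x, y in zip(tickets, churned):
        err = sigmoid(w * x + b) - y
        gw += err * x
        gb += err
    w -= 0.1 * gw / len(tickets)
    b -= 0.1 * gb / len(tickets)

def churn_risk(n_tickets):
    """Estimated probability this customer churns."""
    return sigmoid(w * n_tickets + b)

print(round(churn_risk(0), 2), round(churn_risk(6), 2))
```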

>> You
mentioned retail and I think about supply chain. I'm far from a supply chain expert. You may have done some work in that area. But it seems to me that there's a lot of waste in the supply chain and a lot of opportunity to tighten that up and drop money to the bottom line or just have better consumer experiences. How do you see analytics and AI playing in the supply chain and helping to address some of those challenges?

>> Well,
I think the trick with supply chain has always been matching supply and demand. And demand planning is, I think, still in many cases, while it's becoming more analytical, certainly still a bit of an art. You have to pull together lots of different types of data, everything from what you've sold at this time of year in the past to what the weather has been like and what the economy is like, and so on and so forth. That's always been a domain where we have to combine a lot of data and then we have to analyze it through various means. And once you have an idea of what the demand is, it makes it a whole lot easier to create the supply that will satisfy that demand and reduce waste, as you suggest.

>> Yeah.
One of my guests told me that it takes 45 days to even address a supply chain imbalance or a disruption. So hopefully with anticipatory analytics we can compress that. Let me ask you to put your practitioner hat on. I know you consult with a lot of organizations, enterprises that are trying to put data at their core and get competitive advantage. What are you hearing from them? What are the challenges that they face when trying to incorporate AI, analytical AI, generative AI. Pull it all together, how are they funding it and how are they operationalizing it and driving value and getting ROI, which many complain about not seeing the ROI?

>> Yeah.
Well, the big issue, whether it's generative or analytical AI, has always been how do we get to production deployments. It's easy to do a proof of concept or a pilot or a little experiment, but putting something into production means you have to train the people who are going to be using the system. You have to integrate it with your existing technology architecture. You have to change the business process into which it fits. And it's getting, I think, better with analytical AI. There were a lot of studies early on suggesting that in that domain, 87% of machine learning models never got implemented, something along those lines. I don't know how good the data collection was, but there was certainly a problem. And in generative AI, we see the same issue: a lot of organizations are experimenting, but they haven't made a full commitment to embedding it into their customer service process or whatever content generation process they want to use it for. So I think the key is to say from the beginning, "Okay. Assuming it's successful in its tests and experiments, we need to put this into production," and to plan for that from the very beginning. Budget for it. It costs a good bit more to put something into production deployment than it does to experiment with it, sometimes a hundred times as much depending on the complexity of the system. But we need to think about that in advance.

>> I
saw a survey recently, I hope I'm right. I think it was an IBM survey. I'm pretty sure it was. But they were asking folks what the value proposition was or their expectation for AI ROI. And the number one response by far was revenue generation. Second was productivity, which surprised me. I would've thought by far it would be the reverse that really doing more with less, maybe either reducing headcount or keeping headcount flat and getting better productivity would've been the number one response. Now, I guess if you're raising revenue, that gives you better productivity. But what are you seeing in terms of it? Are you seeing companies really trying to drive revenue? And it would seem with analytical AI, given the retail example, that actually would make sense. What are you seeing?

>> Yeah,
I think if that's your objective, then analytical AI is probably going to get you there more easily than generative AI, because as I was suggesting, you can target the right customers, you can figure out what's the best price to charge, all those sorts of things. I think generative AI has been more oriented to productivity kinds of improvements, but most organizations haven't seriously measured the productivity gains. In order to do that, you'd probably have to do some kind of controlled experiment where some of the people in your organization don't have generative AI and some do. And you compare the speed with which they create content and the quality of that content. Never easy to measure quality. So most organizations haven't really done much in the way of serious productivity analyses of generative AI, so maybe that's the reason it came in second. Revenue gains are easier to measure, and I do think in many cases they'll involve analytical AI.
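
The controlled experiment Davenport describes might be scored like this: compare task-completion times for a group with the tool against a control group, via a Welch t-statistic. The task times here are invented:

```python
import statistics as st

# Sketch of scoring a controlled productivity experiment: one group
# works with an AI assistant, one without, and we compare mean task
# times. The data is made up for illustration.

with_ai = [31, 28, 35, 30, 27, 33]   # minutes per task, tool group
control = [44, 40, 47, 42, 45, 41]   # minutes per task, control group

def welch_t(a, b):
    """Welch's t-statistic for two samples with unequal variances."""
    va, vb = st.variance(a), st.variance(b)
    return (st.mean(a) - st.mean(b)) / ((va / len(a) + vb / len(b)) ** 0.5)

t = welch_t(with_ai, control)
print(round(t, 2))  # strongly negative: the tool group is faster
```

A real study would also compare output quality, which, as Davenport notes, is the hard part to measure.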

>> And
probably easier to justify to the CFO. Are there any specific use cases that have caught your attention? People talking about deep fakes, obviously security and privacy. Are there any that stand out that are noteworthy that analytical AI can support or advance?

>> Sure.
There are lots and lots of them. We've been doing analytical AI for so much longer than generative AI, so the use cases have kind of piled up. I'm very interested in healthcare and the role of AI in healthcare. All of the advances that we've seen in healthcare AI, many of them have just been in research laboratories, not yet at the clinical bedside. But predicting who's going to get diabetes, analyzing what treatment strategies are most likely to pay off, figuring out whether that x-ray or CAT scan or whatever image you have shows a cancerous lesion, those are all analytical AI use cases. If you're looking at an industry like financial services, anything that you do to predict who's a good customer and who isn't, which investments are likely to pay off. Almost every industry I can think of. If you're talking about industrial businesses, as you suggested, it's supply and demand in many cases, and predictive maintenance of your equipment is an analytical AI application. So there are literally thousands of possible use cases that you could support with analytical AI.

>> I'm
excited about your healthcare point. I just got my perpetual 14% increase in our organization's healthcare costs. I wish I could raise prices 14% every year, but I don't think my customers would go for that. I'd be in big trouble. So maybe analytical AI can help solve that problem. I have to ask you, it's the buzzword of the day: agentic. We started writing about it a while ago and we were very excited about it. Now there's a lot of agent washing. But how do you see agentic evolving? I think people are very optimistic, but there's a lot of work that has to be done, particularly around the data estate. Do you have any thoughts on that?

>> Sure.
I mean, I am very excited about it as well. And really, before I was an analytics and AI guy, I was a process guy. I do think that agentic AI is going to have a major impact on how we carry out most of the business processes in our organizations. You'll have workflows being orchestrated at the top level and calling on various agents throughout that process to do this task or that task. I think in many cases people identify agents with generative AI, and that will certainly be important if you have to analyze a document or translate an email or something like that. It's really good at those types of activities. But I think many agents will also be analytical AI in nature, to make a decision of some type or another. Again, if you're trying to do something with marketing and you want to decide who's the best customer to send this ad to, you'll need analytical AI for that. And then generative AI might actually customize the ad to that individual customer's attributes. So I think it's going to be a very exciting period. I'm hoping that there's still an important role for humans. I was talking to a vendor yesterday who has a kind of built-in escalation capability to get to a human. Clearly, we wouldn't be interested in agentic AI if we didn't think that it could do something in the realm of productivity improvement, but we humans will need to play a role as well. And I think checking on the agents, making sure that the transaction we want to happen has happened, looking for any hallucinations from the generative AI agents, because I think those are going to still be an issue, will be things that humans can still do.
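
The orchestration pattern Davenport outlines — an analytical agent decides, a generative agent drafts, a human-escalation check sits on top — might be sketched like this. Every function and field here is a hypothetical stand-in, not a real product API:

```python
# Hypothetical agent orchestration: an analytical "agent" scores
# customers, a generative "agent" (stubbed with a template) drafts the
# ad, and a simple guardrail routes suspect output to a human.

customers = [
    {"name": "Dana", "recent_purchases": 7, "email_opens": 0.9},
    {"name": "Lee",  "recent_purchases": 1, "email_opens": 0.2},
]

def analytical_agent(c):
    """Toy propensity score: who is likeliest to respond to the ad."""
    return 0.6 * c["email_opens"] + 0.05 * c["recent_purchases"]

def generative_agent(c):
    """Stand-in for an LLM call: customize the ad to the customer."""
    return (f"Hi {c['name']}, based on your {c['recent_purchases']} "
            f"recent orders, here's an offer for you.")

def needs_human(draft):
    """Guardrail: escalate anything that looks like a wild claim."""
    return "unbelievable" in draft.lower()

best = max(customers, key=analytical_agent)   # analytical step: decide
draft = generative_agent(best)                # generative step: draft
if not needs_human(draft):                    # human-in-the-loop check
    print(draft)
```

The division of labor matches the passage: the decision is analytical, the wording is generative, and the human check wraps both.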

>> I
think your point about process and workflows is right on. I think that's not going to be the hardest part to put into the system, but it's going to be critical once you get your data in order. You're pretty good at naming things; in fact, that's one of your superpowers. I think "systems of engagement" was yours. So with "analytical AI," we can now retire "legacy AI" and "non-gen AI." So thank you for that. But I want to ask you, as those worlds come together, the gen AI and the analytical AI, are there ethical considerations or privacy considerations that organizations should be more aware of as gen AI systems and analytical AI systems come together?

>> Well,
there've always been privacy issues with analytical AI and predictive models where we're using AI to make important decisions. We had issues about redlining in mortgages where whole neighborhoods were eliminated from bank mortgages for their housing because not everybody in their neighborhood, but a few households were having a tough time paying back their mortgages. The good thing about analytical AI is that you can get much more granular about how you treat your customers and your suppliers and so on. It just gets you down to a very detailed level in your predictions. Some people are uncomfortable about all the data that's used for that. I've done a lot of work recently in the personalization space. And people want their ads and their offers to be personalized, but they don't want to give up the data in order to do it. So we're a little schizophrenic about that particular issue as consumers. The issues with generative AI privacy tend to be different, more intellectual property related issues in many cases. And we don't want the prompts that we put into a model to be used for training. But I think the privacy issues historically have been a little more challenging with analytical AI than with generative.

>> Hey,
last question. What are you working on these days? What's exciting you? What's up for 2025 for Tom Davenport?

>> Well,
I am doing some work on agentics like everybody else, I guess. I've just finished the manuscript of a new book that's about... I think it's going to be called The New Science of Customer Relationships: Delivering On the One-to-One Promise. And you were talking about you and I being in this industry for a while. You may remember the whole idea of one-to-one marketing and how we were all supposed to get things that were highly personalized to our needs and desires. And I don't know about you, but I don't get those very often. So in this book we talk about what really derailed it, and what AI, both generative and analytical, can do to really realize that promise that we've been expecting for such a long time.

>> Yeah,
we make a lot of promises in this industry because we're so excited. And we drive a lot of value, but we got to keep moving forward. Tom Davenport, thanks so much. Always great to have you on theCUBE.

>> My
pleasure. Thanks for having me.

>> You
bet. And thank you for watching AI and Analytics: Shaping the Future, made possible by Alteryx. I'm Dave Vellante. Keep it right there. We'll see you next time.

>> I
am the manager of business intelligence within the Analytics Center of Excellence.

>> I
am the manager of the global operations business intelligence team at MillerKnoll. We all want to do the fun step. We want to do the consulting part, the analytics part, but there are so many steps that go into preparing the data and getting there.

>> It's
not always fun. It's not always pretty, but the fact that I can use Alteryx and do that in literally 45 seconds is unbelievable. And then I can get to doing what I love doing every single day.

>> I
really didn't land in what I loved and what I knew I wanted to be until the last three years. And this sounds so cheesy, but it's because of Alteryx. I did not have anything in database administration, data visualization, data science. I have no degrees in that, and yet here I am managing a global team because I upskilled with Alteryx. Anybody can start it. Anybody can learn it. Anybody can upskill themselves. You don't have to have that background. I have a bachelor's in journalism.

>> There
are so many barriers, whether it's geographical or whether it's financial or socioeconomic that just play a factor into women not being able to get into these higher roles in data and analytics. To have a program like Alteryx that you can upskill yourself so quickly and so easily with a no-code interface, and then you have this amazing and strong community, you can literally train yourself. Women don't just need a seat at the table. They need a voice at the table. And they need to make sure that people are listening. And I think that Alteryx really is a tool to enable women to upskill themselves, to educate themselves, and to get to that next level.