In this episode of the “AI and the Autonomous Supply Chain” interview series, theCUBE Research’s Dave Vellante sits down with Chris Burchett, SVP of generative AI at Blue Yonder, for a conversation on how AI is rewriting the rules of global logistics. Burchett shares his personal journey, from pioneering days at i2 Technologies to leading Blue Yonder’s AI transformation, laying out a vision where supply chains aren’t just optimized, but self-improving and resilient.
The conversation tracks the evolution of AI in supply chain management, from early deterministic systems to today's cutting-edge generative tools. Blue Yonder now delivers more than 20 billion predictions a day, thanks to powerful machine learning models that handle everything from real-time logistics to adaptive demand forecasting, Burchett explains. His insights spotlight the shift from rule-based systems to ones that learn, anticipate and act autonomously.
Burchett also breaks down the “SADA loop,” a key framework for agent functionality, and reveals how knowledge graphs play a vital role in harmonizing complex supply chain data. The discussion unpacks how reinforcement learning and personalization allow AI not just to support but guide decision-making. As AI matures, Burchett sees it becoming a strategic engine that transforms supply chains into dynamic, value-generating ecosystems.
Chris Burchett, Blue Yonder
>> Welcome back to AI and the Autonomous Supply Chain, where we're envisioning the future of business planning made possible by Blue Yonder. My name is Dave Vellante. I'm here with my co-host George Gilbert. And we're pleased to have Chris Burchett in, he's the Senior Vice President of Gen AI at Blue Yonder. We're going to go deep into the AI world and product. Chris, thanks for your time.>> Well, it's great to be here with you, Dave.>> How'd you get here? What's your background? Tell the audience about your path to getting to this point where they put gen AI in your title.>> Yeah, it's been a great journey I must say. I started my career many years ago back in the '90s, and early on I was the 13th developer at i2 Technologies. And during that stint at i2, I spent seven years there, we grew from 4 million to a billion, and that was an amazing experience. And I had a chance to lead some machine learning teams then. And really, I've done AI throughout my career. I left and started a company in the security space, but I rejoined Blue Yonder six years ago to lead the platform and our native SaaS product teams, and those product teams are also doing AI. And so I've had a great opportunity to really see the progression of technology and see how AI has been enabled increasingly with more and more power. And that's a super exciting place to be because customers, they're always asking how they can respond faster, and AI really brings them the precision and the ability to automate the supply chain. And so it's been a great journey and I'm super excited about it.>> Yeah, I'll say you were there in the early days. I think i2's IPO was probably, George, you were probably covering it as a securities analyst in like '96, and it was like one of the original rocket ships.>> Right.>> So you've seen it. And then that's when the e-commerce boom was just happening and was in its infancy, and supply chains were changing. 
But now you fast-forward 30 years, 25, 30 years, what are you seeing as the state of supply chain today? What's broken? What's working? What's your mission to improve, and how are you doing that?>> Yeah, it's very interesting. The supply chain is facing more disruption today almost than any other time. It's not just demand, it's supply, it's everything. It's tariffs, it's across the board. And the beautiful thing for Blue Yonder is we're not a recent entry into the AI space. We've been doing this at scale for many decades. Every day, we compute over 20 billion AI predictions for our customers. And our customers use AI to solve those really complex problems that I mentioned. They need to manage the inventory levels at the right point in each location in their supply chain, and they need to ensure on-time delivery of their orders through a complex network. And they have multi-enterprise collaborations to respond to these new disruptions that are happening, and they need predictive insights across those suppliers and trading partners. And so what we're seeing is that our customers are really eager for AI in order to be able to respond to this new world that they live in. Last year, we did a survey and over 80% of the global organizations that we surveyed have piloted or implemented some form of generative AI in their supply chains. And that means that... I mean, that's the majority of those responding. And of those, 91% have found it to be very effective in helping them deal with the disruption and in their decision-making process. So I really see that in the current stage, I think with all of that's happening in the supply chain, AI is going to separate winners from losers in a way that really hasn't happened in the past. If you don't have AI as a part of your strategy, you really need to.>> So we've got autonomous vehicles. Larry's got his autonomous database, we have autonomous factories. So I wonder if you could help us understand how you see AI fitting in. 
And my question relates to, you guys were always doing AI before the AI awakening, the AI shot heard around the world, and gen AI. Help us understand the Venn diagram between, if you will, the classical machine learning of the deterministic side of AI and the probabilistic side. How do they interplay? Where do you apply each?>> Yeah, it's a great question. The supply chain world started out with optimization technologies. And so in the '90s and even today, we started doing linear programs and mixed integer programs to model constraints on your supply capacity primarily. And from that, you could run a solver and you could say, "Here's my demand orders and here's my supply that's potential, and now let me match those and give an optimized result." And this is something Blue Yonder is best in class at doing. We have a rich history doing this. And I would call that the more deterministic optimization, hardcore mathematical optimization. And then in addition to that, we have a rich set of heuristics that you can add to that. Now, that's how we handle some classes of problems. But then machine learning came along and said, "No, no, no, we can take a more statistical approach and we can use probabilities to figure out an even more accurate way to forecast demand, and to predict what is going to happen and what is going to be needed." And so for the last dozen years or more, we've been really pioneering and perfecting the ability to operate large scale predictive machine learning for the world's retailers and supply chains. So much so, I mentioned the 20 billion AI predictions that we do every day. In order to do that, you have to be able to run, every week, you have to be able to update the supply chain model so that you take into account the latest of the seasonality effects that's in the data. So there's a lot of rich capability in predicting what goes on there through machine learning. 
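The demand-to-supply matching Burchett describes can be illustrated with a toy matcher. A real system would use the LP/MIP solvers he mentions; the greedy, cheapest-source-first sketch below (locations, plants, capacities and costs are all invented) only shows the shape of the problem.

```python
# Toy supply/demand matching in the spirit of the optimization described.
# A real planner would use an LP/MIP solver; this greedy heuristic just
# allocates from the cheapest source first. All data here is hypothetical.
demand = {"DALLAS": 80, "AUSTIN": 50}   # units needed per location
supply = [                               # (source, capacity, unit cost)
    ("PLANT-A", 100, 2.0),
    ("PLANT-B", 60, 3.5),
]

def match(demand, supply):
    plan, remaining = [], dict(demand)
    # Cheapest source first, filling each location's remaining demand.
    for source, capacity, cost in sorted(supply, key=lambda s: s[2]):
        for loc in remaining:
            qty = min(capacity, remaining[loc])
            if qty > 0:
                plan.append((source, loc, qty))
                capacity -= qty
                remaining[loc] -= qty
    return plan

plan = match(demand, supply)
print(plan)  # PLANT-A covers Dallas fully, Austin partially; PLANT-B covers the rest
```

Unlike a true optimizer, a greedy pass like this can miss the globally cheapest allocation; it is only meant to make the "match demand orders to potential supply" idea concrete.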
Now that we have generative AI, what this brings is a new class of capabilities. And in generative AI, what we have is, first of all, very natural interface with the computer. For years really, we've been learning how to speak to computers in their language. Now they're talking in our language, which makes it so much more accessible for end users to be able to just ask questions of their data and interact with it. The other thing that these generative AI models bring is reasoning. So not only can they interact very naturally with you and let you drill into the data, but they can also help do chain of thought reasoning where they go through step by step and even write code, and then run that code and see the answers, and reason through problems. And so this is giving us the ability to connect the dots in ways that we couldn't before. In the past, we had these very brittle rule-based systems where if you could say if, then, else a number of times you could maybe get to the end of a sequence. But with the ability to reason through problems and actually call new APIs when new situations arise, the promise here is that it's going to be so much more robust. And that's what we're seeing in some of the early uses of generative AI.>> And George, you've done a lot of work and a lot of thinking in this area where you've moved from sort of static hard-coded microservices, for example, to a new world where processes are essentially building blocks. So why don't you pick it up from here?>> Yeah, let me... Chris, it's funny, I think you've mentioned before that you were at i2. And I actually spoke at one of the company meetings once, I think it was in 1999 or 2000, kind of at the peak. I was a big fan of what they were doing. And when you talked about the solver and optimization, it was like the Eli Goldratt and the goal and finding the optimal path through an operation. But what you've talked about now is handling much greater uncertainty and with a much greater scope. 
What I'd like to know is how you model uncertain demand and uncertain supply, and you've got essentially a model that's not rigid, it's got probabilities in it, and then generative AI in the form of agents can respond to different conditions. Because in traditional software, like you said, you had to code all the rules, you had to code every eventuality. So help us put all the pieces together in that. How would an agent sitting on top of something that models uncertainty better and that doesn't have hardcoded rules, how does that help human supervisors, or a team of agents help human supervisors, handle what would be overwhelming uncertainty otherwise?>> Right, right. This is really important. And the way we think about it is that agents need to be able to do four things well. They need to be able to see, analyze, decide, and act. And so we call this the SADA loop. The agent starts by sensing... Like you're saying, there's uncertainty everywhere. So as new events come in, the agent needs to be able to respond quickly. And so as new events happen, things become more clear. Now, that one thing is now solid because you got the event, you got that signal from that carrier. And now how do you respond? Well, you need to be able to analyze and pull in all the relevant facts that are necessary and analyze and understand root cause for what might be happening related to this event that happened. And then you need to be able to consider your options and decide which recommendations you're going to make. And this is all based on some of the rich logic that I talked about, either planning logics or heuristics, or maybe probabilistic machine learning. And all of this goes into deciding which recommendation to make. And then you bring that to the human. And with human in the loop, you're going to get approval to make an action. 
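The see-analyze-decide-act loop Burchett outlines can be sketched in a few lines. This is an illustrative outline, not Blue Yonder's implementation; the event shape, method names, and the "auto-approved" mechanism for actions the human has delegated are all assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class SadaAgent:
    """Minimal sketch of a see/analyze/decide/act (SADA) loop with a
    human in the loop. Everything here is illustrative, not a real system."""
    auto_approved: set = field(default_factory=set)  # actions the human said "always do"

    def sense(self, event):
        # A new signal arrives, e.g. a delay notification from a carrier.
        return {"type": event["type"], "shipment": event["shipment"]}

    def analyze(self, signal):
        # Pull in relevant facts and estimate a root cause for the event.
        return {"root_cause": "carrier_delay", "shipment": signal["shipment"]}

    def decide(self, analysis):
        # Weigh options (planning logic, heuristics, ML) and pick a recommendation.
        return {"action": "reroute", "shipment": analysis["shipment"]}

    def act(self, rec, human_approves):
        # Human in the loop: execute only with approval, or if pre-approved.
        if rec["action"] in self.auto_approved or human_approves(rec):
            return f"executed {rec['action']} for {rec['shipment']}"
        return "held for review"

agent = SadaAgent(auto_approved={"reroute"})
event = {"type": "delay", "shipment": "SHP-001"}
result = agent.act(agent.decide(agent.analyze(agent.sense(event))),
                   human_approves=lambda r: False)
print(result)  # the reroute was pre-approved, so it executes without asking
```

As trust grows, more action types move into the `auto_approved` set, which matches the progression toward automation described next.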
And increasingly, we believe that humans, as they get comfortable, the human teammate will start to trust the agent a little more and say, "You know what? Always answer that way." And you get more and more automation that way.>> Can you talk about how, when the agents propose a course of action and then the human accepts that, how might that get internalized? Does that update the agent's programming or might the agent materialize that underneath to make it hard-coded repeatably so that there's precision performance and explainability? When does it do one, when does it do the other?>> Yeah, you're getting to a really important point, which is the agents, they have to have a common language that they're talking. They have to have a common frame of reference. So the key to this all is a unified data model and the ability to update that data, and also the ability to represent relationships, key relationships. And so the agent needs to be able to refer to this unified data model and then understand the relationships that may be represented in a much richer way than just through normal tables. And as the agent is processing, it needs to also have a long-term memory capability. So it needs to remember what it's done and the context that it's operating in so that when it comes back to a similar situation, it can see the results of that and actually learn from that.>> So that memory could be either something persisted in storage or it could be fine-tuned into the agent, or it could be maybe materialized into this common data model, like extending the data model.>> Right, right and I think really what we'll see is all three will happen. Some of it will be in the data model itself. Some of it will be kind of externalized data, maybe through a RAG architecture or through something else through vector databases for similarity searches and things like that. 
But that long-term memory is really core to the agent being able to, like I said, learn and do better and better, and understand what the user wants.>> Let me ask a question about this because everyone's talking about agents. It's like the shiny object. My favorite analogy is in the movie Up, where the house is flying away and the dogs are chasing after it, trying to shoot it down, and then the guy in the house yells, "Squirrel," and now the dogs scatter. The agent is the squirrel. So many companies are talking about using agents across systems using API connectors, and these are not harmonized. And then they're talking about a data estate that's just sitting in raw relational tables. What do you lose in terms of the scope and fidelity of your ability to see, and analyze, and decide, and act when you're working off just a bunch of disparate APIs and a raw data estate?>> Right, right. Yeah, that's a great question. And this is one of the challenges. We saw Google came out with their A2A, agent-to-agent protocol, and it really builds on the Model Context Protocol and is kind of complementary to what Anthropic came out with. And the promise of these is that they'll let agents be able to discover tools and then discover each other and connect to each other. But one thing that's not solved is how do those agents understand what they're talking about? To your point exactly, is that if you don't have a common semantic knowledge, where you understand the relationships between different concepts and between different data... And for this reason, we're actually adding a knowledge graph to our architecture so that we can connect these different concepts and so we can bring them all into a harmonized understanding. So agents now know how these things interrelate and they can know deeply. 
And it's very extensible as well because you have a very efficient way, with a knowledge graph, to represent not only conceptual relationships but actually data relationships themselves, and that's very extensible and scalable. So I think this is going to be foundational for agents to be able to understand one another in the future.>> Yeah, so->> Stay on that for just one second if you could, because we ask this question a lot. You mentioned, I think it was agent space, they call it, and others have their own terminology. And when we ask like, "Okay, how is that data harmonized?" What we get is a range of answers. Either AI magically does it, gen AI figures it out. Or well, that's somebody else's problem. The third parties or humans are going to have to figure it out, they have to harmonize data today. And none of those are either really, frankly, believable or practical. So you're saying, Chris, that you guys have developed IP to actually do that harmonization, and the key ingredient is the knowledge graph. Is that correct?>> Yeah, that's right. That's right. The knowledge graph becomes a foundational element. So it starts with this end-to-end canonical model where you represent a shipment as a shipment across all the transportation warehousing, planning. So that's the one level of semantic alignment, if you will. But really what we want to do is now represent deeper relationships. We want to represent the fact that this shipment is for this customer and it has these items. And these items are stored in the warehouse at these locations, and they have this weight associated with them. And so you think about all of the concepts that are in the supply chain, that we, as humans, just naturally our brains, our minds just naturally have that capacity, but the agent doesn't. And so you really need something like this knowledge graph to represent those relationships and for it to be very dynamic and updated all the time. So we believe that this is a key element. 
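The relationship modeling Burchett describes, a shipment linked to its customer, its items, and each item's storage location and weight, can be made concrete with a toy triple store. The entities, relation names, and one-hop traversal below are invented for illustration, not Blue Yonder's schema.

```python
# Toy knowledge graph as (subject, relation, object) triples, showing how
# shipment/customer/item/location relationships can be queried uniformly.
# All names are hypothetical examples.
triples = [
    ("SHP-001", "for_customer",  "ACME"),
    ("SHP-001", "contains_item", "ITEM-42"),
    ("ITEM-42", "stored_at",     "WH-DALLAS/B3"),
    ("ITEM-42", "weight_kg",     "12.5"),
]

def related(entity, relation=None):
    """All objects linked from `entity`, optionally filtered by relation."""
    return [o for s, r, o in triples
            if s == entity and (relation is None or r == relation)]

def trace(shipment):
    """Walk shipment -> items -> storage locations, one hop at a time."""
    return {item: related(item, "stored_at")
            for item in related(shipment, "contains_item")}

print(related("SHP-001", "for_customer"))  # which customer the shipment is for
print(trace("SHP-001"))                    # where each item in it is stored
```

Adding a new concept is just appending triples, which is the extensibility point being made: no schema migration is needed for an agent to traverse a new kind of relationship.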
We also believe that there is a curation process of data that does come from humans. So some of it is discoverable and can be managed through the system, but there's also knowledge, like in the consulting project that did the deployment, there is a wealth of knowledge about the business model and about the problems the customer is trying to solve. And all of these things can come together into a unified data model for the agent to use. So I think it's a very holistic picture that we're going for.>> Let me ask on that, because you said something interesting. You're talking about harmonizing all the concepts, and you also talked about understanding the relationships like this shipment for this customer. For decades, really since the rise of databases and then the applications that were built on top of them starting 50 years ago, we've managed and counted things, kind of people and resources. Now, you were talking about understanding the shipment for this customer. That's an activity. Tell us, elaborate on how now we're getting to the point where we can manage business processes and how we can have a static organization managed for efficiency, but a more agile organization that can sense and respond to its environment and operationalize new plans. In other words, what happens when the business processes are assets, not just the data about people and things?>> Yeah, I think that's a great observation. The business processes in our software tend to get represented pretty directly. So one of the things about supply chain software is you tend to have the business process that you're following as a first-class part of the software. And so what that allows is it allows the agents then to understand where they are in the process and what their role is right now. So if I'm going through a planning process, I might be early stage and I need to gather what options there are. And then I might need to be collaborating with my supply partners or my other partners. 
And then eventually I'm going to get to a point where I'm going to drive some consensus across my organization. And so at each phase of the business process, you really want the agents to be understanding their role to some extent and then working according to where they are in the process.>> So just to be clear... Dave, let me just ask this one last follow-up. The distinction, the technical underpinning, because this is your bailiwick, you're the guy building this, we used to, in applications, the assets and the people were managed as data and the processes were managed as code. Which meant you couldn't really change the code, it was embalmed in symbolic code. And now if I'm understanding it right, the processes are assets, I should say the processes are sometimes data just like people and assets, and that's what gives you that ability to orchestrate them and change them and optimize them. Is that fair?>> Yeah, I think that is fair. Sometimes we call it metadata. It's not the actual business data, but it's metadata that configures how the system operates. And so there's a rich ability to model that in the supply chain solutions that we provide.>> Okay.>> George, you and I have talked a lot about how the enterprise software stack is changing, and there were two high value pieces of real estate that we're touching upon today. One was that harmonization layer, the semantic layer, and the other is the ability to orchestrate these agents and multiple agents. And of course you just addressed George's question around processes as a new dimension, which many of the discussions that we have around agentic, they don't even touch that. So that's encouraging. And I'm interested in how the scope of the agents and their ability to ingest and be guided by top-down goals. George, this is something else you've published. And other business constraints, grow market share, but maintain margins at X. How do the agents, as you envision, deal with that? 
And is that something that you can actually execute today?>> Yeah, that is the ideal state. And what these agents are good at is you give them a goal and, through reinforcement learning, we can actually train them to seek the right answer to that goal. And so that's a key area for us. I would say it's a little bit more on the research side. Some of the things that are easier to do and get high quality answers from agents today, they tend to be things where you can have them write code and you can verify the code and respond immediately. But one of the places where we see great potential is, can we get the agents to understand in a more native way? Can the model understand supply chain context in a more native way? So you don't have to give it so much data, you don't have to ask it to write code quite as directly because the model itself understands the supply chain concepts. And that's really where we're going. And I think as we get those kinds of models, where we'll arrive at is now we can really say, "Oh, I want to have a different planogram for each store now, and I want to be able to maximize the profit while maintaining the balance with my category vendors. And I want to factor in the desirability of which item has to be at eye level and so on." All of these myriad constraints, the model and the agent will be able to... You'll give it that high level goal and it will just solve for those planograms for you, for example.>> And in terms of the journey and thinking about this journey of intelligent business planning, intelligent supply chain, it seems like there's a very tight strong human-agent interaction that will take place over some amount of time to really better understand those day-to-day operations within supply chain management. And as George was alluding to, the agents will learn, infer from the reasoning traces of humans. And then over time, humans maybe trust the agents more as they learn. And then that gets us to that sort of end state. 
I know it's never an end state, but a steady state as to what you're envisioning. Is that a fair description of the journey?>> Yeah, I think that's really well said, Dave. I think really what we see is that agents become your coworker in some sense. And the point of it is that humans should be elevated out of the mechanics of the software. Like I said before, so long we've come to the machine and tried to map our reasoning onto the machine, and we build complex software systems to do that. But really what should happen is I shouldn't have to be tied to a bunch of dashboards and charts and graphs. I should be able to get a notification wherever I am, on my phone, or through Teams or Slack, or whatever it is that I use. I'm out doing my job and I'm not tied to some computer interface, and the agents are there kind of working with me. And I trust them enough that they check in with me when they need approval and they notify me of things that I need to know. But in general, now the human is elevated to do more strategic things and to do things that we do better, like collaborating with other teams and thinking more strategically, and working with customers, and those kinds of things. So I think that's where the journey is going to go to. And it does follow that normal progression like you're saying, where first you start with some analysis that you start to trust. Then you start seeing decisions and taking actions. And eventually, you're able to step away from all of the mechanics and let the machine run a little more.>> Can we talk about governance a little bit? I mean, it's a hot topic. How do you approach governing agentic? What role do you play versus third parties? There's tons of governance platforms out there. There's open source, catalogs that are coming up in popularity. How do you approach governance?>> Yeah. A core part of our software development practice that we've put in place is the responsible AI practices. 
We're actually partnering with Microsoft and building on some of their practices. What happens with the responsible AI is that you build it into the fabric of your development and test and release process, and you have what are called evals where you're evaluating the model response continuously. And you build a set of regression tests like you would any software. And then you build an observability system where you're constantly monitoring that those evals and the model performance is exactly what you would want it to be. And so this is a key part of any kind of, I think, agentic or generative AI offering. You have to have a very mature governance. And that's the sort of operational and development governance. But from that, there's actually external audits and third-party assurance. If you know about the cybersecurity world, there's a pretty rich ecosystem now of third parties who will come in and audit you, and they will penetration test your defenses and give you reports, and help you continually move your posture of security forward. I imagine the same will grow up around the generative AI space where there will be companies that help provide third-party audits and governance. And of course, it's not only correctness. In the case of generative AI, it's also worrying about concerns for bias and for fairness, and a variety of topics.>> Let me follow up on that. You said something really interesting about the whole observability system and then the evaluations. That sounds like it would become part of almost anyone's job who's going to interact with agents, where part of how they transfer some of their expertise, the automatable part of their job, is to look at the agent's performance, evaluate it in the observability part, but they also are responsible for writing evals to say, "This is what a good job looks like." And that becomes part of everyone's job. It's almost like instead of managing a subordinate, that's how you manage your agent.>> Right. Yeah. 
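The eval practice described here, regression tests over model answers that grow as observability surfaces new questions, can be sketched as follows. The `fake_model` function stands in for a real LLM, and the questions and checks are invented for illustration.

```python
# Sketch of an "eval" regression harness: each eval pairs a question with a
# check on the model's answer. In practice, new questions discovered through
# observability are appended to the suite over time. All data is hypothetical.
def fake_model(question):
    # Stand-in for a real LLM call; returns canned answers.
    answers = {
        "late shipments today?": "3 shipments are running late.",
        "safety stock for ITEM-42?": "Recommended safety stock: 40 units.",
    }
    return answers.get(question, "I don't know.")

evals = [
    {"question": "late shipments today?",      "check": lambda a: "late" in a},
    {"question": "safety stock for ITEM-42?",  "check": lambda a: "40" in a},
]

def run_evals(model, evals):
    """Run every eval against the model; return pass count and per-question results."""
    results = {e["question"]: e["check"](model(e["question"])) for e in evals}
    return sum(results.values()), len(results), results

passed, total, results = run_evals(fake_model, evals)
print(f"{passed}/{total} evals passed")
```

Wiring a harness like this into CI and dashboards is the "build it into the fabric of your development and test and release process" point: a regression in model behavior shows up as a failing eval, the same way a code regression shows up as a failing unit test.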
I think that's very well said, George. That's exactly how it is. And so in order to bring a new feature that has generative AI in it, you need to think through all the different types of questions you expect to be asked. And through observability, you need to continue to add new questions into your evaluations so that you have a richer and richer set of tests essentially as you want to update and further refine that feature.>> Okay.>> So let's end with a little roadmap discussion, where you're headed with your R&D. Things are changing so fast. Every day, there's a new benchmark out. There's big language models, small language models, you've got thinking models. How do you keep up personally? How do you make sure Blue Yonder stays ahead of these trends and you've got your customers' back?>> Right. Yeah. Honestly, I probably spend at least 50% of my time on, when I'm not in meetings or in actual work, I'm spending it staying in touch with what's going on. I watch model providers, open source. There's a set of podcasts that I listen to regularly on the way to work and from work, I'm listening to these things trying to catch up with what happened today in the world of AI. And actually, this is kind of a little personal project of mine, building my own agent to monitor for these sorts of things and notify me of summaries. So that's one way that I kind of use it on the side. Because I always want to be developing in whatever technology I'm leading, at least as a hobby so that I understand it deeply. And I think there's nothing like that. So there's also standards that are emerging all the time. I mentioned, A2A and MCP. And startups. So there's just a number of different sources, but it is an ever moving target. And partly what I feel about for our customers is what I'm doing is I'm building out a platform that is built on the best in class and gives them access to any model, and also gives them access to their data in a way that's consistent. 
And so if I can provide them with a standard set of tools and building blocks, and keep those up to date, then really what I'm doing is helping them be future-proof. If that platform is immediately able to apply things to their supply chain solutions, then I'm doing my job of keeping them protected from all the change and able to use this technology as it evolves. And so that's really my whole mission.>> So let me make sure I understand it. You actually have a curated set of tooling, an opinionated stack if you will. But did I hear you correctly that you let your customers sort of bring their own LLMs? Is that right? Or do you somehow restrict that or advise them?>> Not bring their own LLMs, but we do provide access to a wide variety of LLMs. And we want to always be measuring different LLMs. We measure them against the certified supply chain manager test. So we actually benchmark models against that certified industry test, the CPSM, I think it's called. And so we measure, and then we make those models available to our customers. And then, yeah, you're right, David, we actually allow customers to build. We have low-code tooling that we're making available through the platform that will allow customers to build their own agents as well as use ours.>> So you're obviously loving all this CapEx and all the innovation. Whether it's Llama, Elon's getting involved with Grok. The DeepSeek innovations, they've got to be music to your ears. And of course you still have OpenAI driving hard. What's your take, Chris, given your deep knowledge here? There are two sides to the equation. A lot of the VCs say this is going to be a commoditized industry, and yet others say, "Well, if the innovation continues, there may be some differentiation." I guess it doesn't matter to you, but when you think about things like scaling laws, what's your take? And I'm throwing a lot at you here: are we hitting diminishing returns? How long can that continue? 
What does it mean for smaller language models? You see Google, obviously a foundation model provider, you see Amazon getting into the mix. Obviously Microsoft has its relationship with OpenAI. It's amazing that you can keep up with all of this. But condense that into your takeaways on what's happening in that fast-moving world.>> Sure. Well, you're right. We got to a point where scaling laws looked like, "Hey, there's not enough data to keep going at this point." And so regarding the scaling laws, I think the new frontier is going to be physical data. It's going to be three-dimensional data of things... And this is where you look at companies like Tesla; they're always creating new data as cars drive around. So physical data is probably where the scaling-law data gets unlocked a little bit, I think. And then you get into so-called physical models, models that can model the physical world and actual constraints. So I think that's where that's going. But from a knowledge base, I think yes, somewhat we have reached the limit. But now what you see is the model vendors really changing and innovating in new areas. So context windows are getting so much bigger. GPT-4.1, right? Million-token context window. Gemini 2.5 was released a few weeks ago. Great model with a million-token context window. So they're pushing each other in other dimensions as the scaling laws kind of steady out a little bit. And I think what you're also going to see is more and more personalized models. You saw OpenAI just added a new memory feature, where now it's going to hold on to more of my context and personalize to me more directly. And I think we're going to see that as well. So that's kind of the foundation model space. I do think there is a place for small models too, and these are going to be very purpose-built for specific needs. If you look at the success of Cursor, what did those guys do? 
They really manage the GPU at a low level, and the buffers and the cache; they're maintaining those in a very intelligent way that fits their need. And I think there's going to be a class of models that arise around these kinds of purpose-built use cases. And so I think the technology has a long way to go. I think it's just the start. To be honest with you, I feel sometimes we're still building flashlight apps from the mobile days. Everybody had a flashlight app, right?>> That's great.>> But if you think about where technology went with mobile, the next generation was really game-changing with social media. Social media is something that couldn't have existed without mobile. But now it's taking advantage of mobile in a way that it wasn't before. And then if you think about it, the third step is really Uber, where not only are you taking advantage of mobile, but you're actually doing it in a way that completely changes a whole other industry that's unrelated. So I think about that progression from mobile, and I think about where things are going with gen AI. And I really think that Blue Yonder has a chance to be the second step, at least right now. I don't know if we can see the third step from here. But the second step to me is when you take the physical world that we get signals from, and you bring those signals into the digital world where the LLMs operate and where we all live with LLMs all the time now, and you allow them to make decisions that actually change the physical world, and you execute those changes in the physical world. So I think that's stage two for this technology.>> George, it's such a pleasure hearing Chris talk about some of the visionary concepts that you and I have laid out: this digital representation of an enterprise, the digital and physical worlds coming together. And Chris, we really appreciate you sharing your knowledge and your insight. Thank you very much for your time.>> Yeah, thank you. 
It's been a pleasure.>> All right, and thank you George as well. We'll be right back with more on the future of business planning and AI's impact on intelligent supply chains. This is Dave Vellante. Keep it right there.