In this Mixture of Experts segment, Saket Saurabh, co-founder and chief executive officer of Nexla, joins theCUBE’s John Furrier to discuss the critical shift from traditional data engineering to "context engineering" in the era of generative AI. Saurabh explains that while AI models are powerful, their true value in the enterprise is unlocked only when fed the right context – a complex mix of raw data, semantic meaning and knowledge buried in documents. He outlines how the role of data is flipping from curating specific datasets for analytics dashboards to connecting virtually every system to allow AI to make informed decisions and execute actions.
The conversation delves into the architectural changes required to support "AI factories," moving beyond static IT stacks to dynamic, low-latency data products. Saurabh details the necessity of a robust abstraction layer that handles connectivity, semantics and delivery, likening it to a TCP/IP stack for enterprise intelligence. He also highlights how Nexla is utilizing AI to automate the creation of connectors for brittle legacy systems, ensuring that agentic flows and large language models can reliably access and act upon data across the organization. The discussion concludes with insights on the evolution of ecosystem protocols like MCP and the need for rigorous testing to ensure AI reliability in chaotic enterprise environments.
Saket Saurabh, Nexla
>> Hello, I'm John Furrier with theCUBE, host of theCUBE here in our New York Stock Exchange studio on the East Coast. Of course, we've got our Palo Alto studio connecting Wall Street and SiliconANGLE, bringing the two communities together, part of the NYSE and CUBE relationship, Wired programming. Of course, we love data and entrepreneurship, and in this world where generative AI is changing the game, there's going to be a new way, and the old way will be on the wrong side of history. All the action is at the infrastructure and data layers, as agentic AI, which is very much hyped up in a legit way, will soon be powered by these data layers. And the entrepreneurs are doing it.
Saket Saurabh is here, Co-Founder and CEO of Nexla. He's here. He has been in the data space for a very long time, serial entrepreneur. Saket, great to have you on from Silicon Valley in New York here. We'll get to why you're in New York, but thanks for coming on.
Saket Saurabh
>> Thank you for having me.>> We were talking before we came on camera about the waves and the big data wave. You go back 16 years when theCUBE started, that was the awakening of the big data movement and all the same kind of conversations about, "Oh yeah, ATMs are going to replace bank tellers and big data is going to replace this."
The same conversation's happening in AI: jobs are going away. Nothing really happened. Actually, more jobs were created, but now more than ever, the data piece has been recognized and validated. Hence the gen AI consumer wave: the competitive advantage of the future is the value of the data. Okay, great, check. But the real challenge going on, that you and other entrepreneurs are involved in, is that it changes the tech stack. That changes the way people think about how to architect their systems. NVIDIA stock is at an all-time high. You're a former NVIDIA innovator. You've been in data, so you have the confluence of large-scale infrastructure and software ramping up. You have cloud computing scale moving into distributed computing, and you have this new data layer developing. Some call it the semantic layer. Some call it the data warehouse in the cloud, but it's different. I want to get into that with you, because this is an area where a lot of the smartest people are working really hard, because if you get it right, the spoils and the riches and the value really flourish. Explain what's going on. Did I get it right? Did I miss anything? Do you see it the same way? And what is the current state of data infrastructure and data engineering? Give us your thoughts and your vision.
Saket Saurabh
>> Yeah, no, you get this exactly right. I mean, the way I think about this is that we are in a world where, as a company or as a solution, you're doing one of two things: you are either in the model or you're in the context. Context is all the information we give to the model so that it can do something useful for us. The context, when we think about this, is really the information. What information can we tell the model? What's the knowledge? What's the data? What is happening in different systems, and how can we ask the model to do something for us? Bringing together that context turns out to be a lot more complicated. And part of the reason is that enterprises have so many systems built over the years, and stitching it all together is where the secret sauce really lies. Because models are pretty smart and sharp, but they also start to look very similar to each other, right?>> Yeah, and there's an arms race too. It seems like an F1 race. Gemini's ahead one day, OpenAI's the next, so it's hard to keep track of what's going on. Now, I was having a conversation with another expert and he said, "Well, that's a feature, not a bug." You can abstract away the application layer and developer side and let that behavior happen. And then you abstract out the infrastructure side and deal with that context flywheel, whatever model. Then you've got model routing; all these new things pop up.
Saket Saurabh
>> Yeah.>> Do you see it the same way? How should someone think about, because you could spend all your time figuring out one model and then go, "Wow, wait, it's obsolete," or then they then leapfrog. There's a leapfrog game on the model side, which is confusing people. How do you clear that air up there?
Saket Saurabh
>> Yeah, I would certainly say that that leapfrogging will continue to happen, but we're also seeing that the capabilities of models are stabilizing. There are two things that will happen from that model perspective. One is, yes, you don't want to build your entire solution stuck to one particular model; you want to be able to take advantage. For example, if it's a coding application, well, it turns out maybe Claude is doing it better than OpenAI, or however this evolves. So you want to be able to do that. The other part that is going to happen is you will take specific problems you're solving and try to fine-tune some of these models. There's a lot of value in that, not just in general purpose. And then I think there are some model innovations that will come that will help us solve enterprise problems better. Now, the way I think about this is that we're all ultimately about the application. What problem are we really solving? And we've been through three waves of innovation. The first wave was the data center, then the cloud warehouse, and now we are in the model era. It is all about how that model really serves our purpose. And I'm happy to dig more into that.>> Dave Vellante always talks about this, because when I first met him 16 years ago, we started working on theCUBE together. I wrote a blog post in 2007 called Data is the New Development Kit. Now, that dates me to 2007, because remember back in the old days you had developer kits?
Saket Saurabh
>> Yeah.>> But the thesis was data is the value for software engineering. Go full circle: now data is feeding into the engineering side for developers. Now, some application developers aren't data scientists; they're not wrangling data. They just want to code, or use coding assistants now. The data engineering layer came out of cloud, and the conversations over the past, say, eight years, this idea that there's an SRE for data coming, that's been validated. So you're playing in this data engineering layer, and you use the term context engineering.
Saket Saurabh
>> Context engineering.>> I want you to explain this, because I think this is probably one of the most robust and lucrative areas to play in, because this is an architectural, system-design-thinking, engineering-innovation idea. It's not like, "Hey, I just built a data engineering layer"; it's not that easy. Explain this layer. We've seen layers like TCP/IP change the world. We've got Kubernetes doing great. I think that this data layer will be the most important area, which is why everyone's fighting for it. Databricks says, "Give me the data lake data," so this is a contested idea. What is it about, and can anyone own it, or is it open source? What are your thoughts on this? Because companies have to get this right. This is a table-stakes design.
Saket Saurabh
>> Yeah. Look, companies absolutely have to get this right, because that's how you unlock the real value of AI. We won't be able to get there without doing this, right? And the TCP/IP analogy is also great, so let me double-click on why this is happening in the first place. The primary use of data before used to be analytics. I want to make a business decision, I need a dashboard, I'm asking this question; somebody goes and pulls that together and they build that data into a dashboard. And then, as an exec, you're making the decision. Now, that has flipped because of AI. In AI, what you can do is tell the AI, "Here's all my data. Here's the meaning of the data. Here's my question. Now you can figure out what to apply and what is relevant and all of that stuff." The model changes from let's curate only some data and bring it to a dashboard, to let's make all possible data and information available to the AI and let the AI make some decisions. That changes the equation: data engineering is not about some very specific data being brought together into a warehouse and visualized; it's about connecting to everything. If you can connect to everything and you can tell the AI, "Here's my question. Here's my problem," by the way, that's one part. The second layer beneath that is the semantic layer, which is, okay, what does it even mean? What does this inventory mean? What does this price mean? And all that stuff. So you give the meaning, that's the semantic layer, and then you have the connection layer, which is go fetch that data, or go take that action in this inventory system or in the sales tool. These three layers have to come together to make it happen, and that's where the data engineering and the context engineering come into play. And so context becomes a lot about the data, the knowledge, which by the way is in documents. AI models are great at that->> A connection comment is relevant. You've got to connect to it.
Saket Saurabh
>> You got to connect to it.>> It's working.
Saket Saurabh
>> In TCP/IP, if you remember, you have a physical layer, the connection, the ethernet, the Wi-Fi and all that stuff. And then you have the packet, and then you have the network and all of that. In this world, it is the connector that actually goes into the system and can read and write to it. You cannot operate without that. That's your gateway. Then, what did you read? What does that even mean? Well, there's a data model, there's a semantic layer. And then, well, how do we bring it to the right place? That's the delivery of the data to the model, or to another system, or to an application, and it's bidirectional, because you'll take data in, and you'll use that data, and you'll push actions out of that. I would say that all stitches into context engineering. And the big change that has also happened: in addition to the application that holds data and the database that holds the data, and the events that are coming in, there are documents, documents that also capture knowledge. Stitching all of that together and then saying, "Well, I have memory. I remember what you asked me last time, so let me use that." All of that together makes that context.>> That's a great description. I love that. It's going to be a great highlight on my highlight reel for the week. I want to go one step further, because you brought up a good point. I love the connection and then the meaning. Those are two things, and they work together. If you look at the current market right now, enterprise, and even throw the hyperscalers in there and the neoclouds: there's an IT mindset, the old-school mindset. Stack some servers, load Linux on them, get a database, have a query go into it. I'm oversimplifying, but that's the static world, and it got connected. Now you're in a generative world; it's like runtime.
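The connector, semantic, and delivery layers Saurabh describes can be sketched in a few lines. This is purely an illustration of the layering idea; all class and method names here are hypothetical, not Nexla's actual API.

```python
# Illustrative sketch only -- hypothetical names, not a real product API.

class Connector:
    """Connection layer: reads from (and could write to) a source system."""
    def __init__(self, records):
        self.records = records  # stand-in for a live system

    def fetch(self):
        return list(self.records)


class SemanticLayer:
    """Semantic layer: attaches meaning (field descriptions) to raw data."""
    def __init__(self, schema):
        self.schema = schema  # field name -> human-readable meaning

    def annotate(self, record):
        return {
            field: {"value": value, "meaning": self.schema.get(field, "unknown")}
            for field, value in record.items()
        }


class DataProduct:
    """Delivery layer: packages annotated data for an AI consumer."""
    def __init__(self, connector, semantics):
        self.connector = connector
        self.semantics = semantics

    def context_for_model(self):
        return [self.semantics.annotate(r) for r in self.connector.fetch()]


inventory = Connector([{"sku": "A-1", "qty": 4}])
semantics = SemanticLayer({
    "sku": "stock-keeping unit identifier",
    "qty": "units currently on hand",
})
product = DataProduct(inventory, semantics)
context = product.context_for_model()
print(context[0]["qty"])  # value 4, plus its meaning
```

The point of the abstraction is that the AI consumer only ever talks to the data product; the per-system connection details stay behind the common layer.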
Saket Saurabh
>> Oh, yeah.>> A whole other stack is changing. I remember 16 years ago, during the big data, early days of Hadoop, someone from Bank of America came on. They were working on fraud detection. They were doing machine learning. They were hitting that up early. They used the oil refinery analogy: data's like an oil refinery; you've got to refine the crude to get the value out. Today, Jensen Huang talks about AI factories. So eventually, that's where the market's going. There are going to be large factories and small factories processing data. You're seeing the enterprises adopt this. In fact, NVIDIA loves the data factories, but last year at GTC, you'll appreciate this being a former innovator at NVIDIA, Jensen Huang said, "KV cache is the operating system for AI factories."
No one got that. I'm like, "Oh my God," and I was the only analyst drilling into that, because that's networking. That's not Linux. And his meaning was: they're connecting GPUs, and you've got NVLink and a bunch of other stuff. But the point is that those connections make the hardware go faster. That is going to be fueled by data. I think this connects into the modern infrastructure.
Saket Saurabh
>> Correct.>> The question for you is what is that? First of all, do you agree that this whole dynamic's happening, AI factories, and then two, what data architecture and what software runs on it? If KV cache is connecting the chips on the NVIDIA side and there's other chips out there that are more open source, what's the data software look like? Can you share your vision on this? Because this is where that next layer sits on top of or close to the chips in these new AI factories.
Saket Saurabh
>> Having worked for Jensen, he's a very forward thinker, a relentless innovator, and I would say that he's exactly right in how companies are basically becoming AI factories. The role of the data here is very much similar to, say, networking, where you think about all of these different systems and applications where data exists or information exists. It has to be stitched together. Now, it's like connectivity. The layer that is emerging in many companies is the concept of a data product, which is something that encapsulates the understanding of data. What you want is that your AI doesn't have to go to a hundred different systems and figure out how to connect to them. That never scales. You have a layer that knows how to connect, get the meaning, and package up these nicely created data products, and these are all in our->> Latency is what we're talking about here, really low latency. It has to be fast.
Saket Saurabh
>> Very low latency. It has to be, because you have some user on the other end waiting for an answer or an action to happen. The concept of data products, which a lot of the industry still thinks of as a store of data, is actually, in our mind, a virtual entity that gives a common layer. A common layer that understands the data, builds that up, stitches it together in real time, and feeds it to the model. I think that's the direction some of the industry is going in, where you combine the data with the meaning of the data, and the main thing to remember here is that data consumption has to become easier. Data consumption from systems has been hard. It has taken a lot of effort for people to stitch it together, and that doesn't scale.>> I want to get your thoughts on enterprise AI, because I think you're hitting this. I'm glad we went there, because this goes to the next level, which is, okay, I want to create value with my AI factories and whatever software stack I have, and now I have applications coming on top of it. They're going to drive either revenue value or cost reduction. Perfect storm for all companies. There's no doubt about it. Okay, these enterprises have brittle systems. Now, the good thing is abstraction is a good thing. The abstraction layers are always opportunities, and AI will certainly help when you have tons of GPUs and XPUs helping there. I'll say we're looking good. The question is, what does an enterprise need to do? What are you seeing? What are some of the conversations? Because this is probably the number one work area. When I talk to large enterprises, or just general enterprises, it's like, I've got to get my data right. Now, they do the RAG, easy. Search is easy. That's the first "oh wow, it works" moment: search. But that's not the real answer. The real answer is, how do I get the best data at the right time, as fast as possible, for any query? It's generative. It's not static.
They have the workflows, they got the data, now they've got to answer the question. That could be different every single time. What are your conversations like with the enterprise? Because is there progress there? Are we still stalled? How would you peg the momentum? I mean, the excitement's high.
Saket Saurabh
>> Yeah, see, excitement is high. When I talk to large enterprises, I tell them that there's a lot of conversation about the model and what models can do, and I'm like, "Let's shift that conversation. Let's shift that conversation to what are you going to bring to that model? You as a company have some valuable data. That's your differentiation, but your data by itself doesn't give the full context." So you've got to bring a lot of pieces together to make that happen, and stitching those pieces is not easy. Abstraction is a very key construct. Even at NVIDIA, I don't know if you know, but NVIDIA has always had more software engineers than hardware engineers. Abstraction is what allowed them to create these chips and continuously innovate. Abstraction in the data world will be about all this, what we call variety. There's a lot of variety of systems and structures. Abstraction gives that common layer, and my conversation with a lot of the leaders there is, how do we make sure that the understanding of data is being built? So that's the semantic layer. That is largely based on probing and understanding and doing that automatically. AI plays a big role. And then, how do we have that live connectivity to bring that information out? I would say that there's still a long way to go. The systems are indeed brittle. The way to solve that is through automation. One thing I would say, for example, with what we do, generating connectors to all these systems that are there within the enterprise, whether they bought a system or built it in-house: AI is actually generating those connectors. We are using AI to then take that understanding, that metadata, and build it into something that AI can understand. Like, hey, I'm going to throw data at you, and this is what it means. And then applying governance: what can be used for the given user who's accessing it? Those layers still have to be built in a scalable way.>> Let's talk about what you guys are working on.
I think this is why I love AI right now, because it's not so much just a consumer thing. What you just pointed out is how you're going to take the thousands of man-years of coding and intelligence and identify automation, using AI to make the abstraction better. Now, the hardware will give you IT speeds and feeds. You're getting at more of a software innovation. Is that right? Are you guys applying that AI to go faster, to make those efficiencies? Because at any given time, the agents can spin up to do the work that you might otherwise have to go write a PRD for: "Oh, we got a new use case." That would be a software project. Is that happening?
Saket Saurabh
>> That is happening. This software layer that we have created also takes advantage of hardware. We actually use NVIDIA technology like NIMs, for example, because we can run these small models that can do certain operations really effectively. We're looking at, say, parsing a document with images and visuals; we can use GPU acceleration for that. Video content, something that you are building a lot of, is now accessible by AI for search and summarization. Again, when you are connecting to data, I think about, hey, there must be useful information in a video that we can take, and in a document, and in a database, or in a CRM or whatever, so stitching that together. We do, by the way, still bring that hardware and software world together so that we can take advantage of all the hardware acceleration and the data that's coming from other->> I wish we had more time, because I'd love to expand. I think the NIMs and what NVIDIA is doing, what you just mentioned, changes what an ecosystem looks like. The old-school ecosystem was, "Hey, get a partner, do some joint selling."
Now, this technology is embedded in partnerships. MCP, for example, was one of the best organic innovations this year that I saw, still evolving in a great way, but that shows what's going to usher in the agents. That came out of the community. That means if we're going to work together as partners in an ecosystem, there's now a technical layer. It's not just build an API, which is still great, but now you've got context. You've got delegation, trust with data crossing boundaries. What's your vision on that? What's your quick take on MCP and how the relationships of the old ecosystem compare with how NVIDIA is building out an ecosystem?
Saket Saurabh
>> Oh, yeah.>> It's a classic ecosystem that we cover, but they're doing it differently. And that's an example. You don't need to comment on NVIDIA, but just in general, cloud providers are doing the same thing, because enterprises just want to get the best data.
Saket Saurabh
>> There are two challenges that as an industry we need to solve. MCP is a great start, but I think we'll see some iterations of that. It has its own limitations as well. But that's great. That's a good start. The two problems that we'll end up having to solve: one is that as we do these agentic flows and you do one action followed by another, followed by another, you can compound and multiply the errors. You may end up with outcomes that are not very reliable because you used multiple chains. How to manage that better comes down to giving the right context and the right prompt. If you use ChatGPT, you know that the better the prompt you put in, the better the outcome you get. All of those pieces become extremely important so that we can do this series of actions and make it possible to get them done right, with reliability. It could be managing a purchase order or paying an invoice. There are a lot of enterprise actions that happen that can be automated. The second part, I believe, is that there has to be model-level innovation as well. Just think of the biggest AI application today: self-driving cars, the ones going around, at least in the Bay Area. It takes a human being maybe two weeks to learn how to drive a car. It took us 10, 15 years to train AI how to drive a car.>> Get their permit, they get their permit at 15.
Saket Saurabh
>> The reason is that the AI has to work in a human world, in these streets and this chaos. The same thing in the enterprise. Remember, a big enterprise is a lot of chaos. AI has to work in that chaos. If it takes us that much time to train an AI to drive a car, I'll tell you, running a purchase order in a big company is extremely complicated. You hire a graduate, and it takes them three to six months to learn how to do that right and not make mistakes. So there will be an evolution of the techniques and methodologies; ultimately, the knowledge of how a job is to be done and how it is to be verified has to be brought to the AI, right? We'll see evolution on the model side and we'll see evolution on the data side.>> Awesome. Again, people love the commentary; they're calling in. I want to ask you about your company. Give a quick overview of what you guys are working on, some of the momentum, the status. Put a plug in.
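Saurabh's point about errors multiplying across chained agent actions is easy to quantify: if each step in an agentic flow succeeds independently with probability p, an n-step chain succeeds with probability p^n. A quick back-of-the-envelope sketch:

```python
# If each step in an agentic chain succeeds independently with
# probability p, a chain of n steps succeeds with probability p**n.
def chain_reliability(p: float, n: int) -> float:
    return p ** n

# Even a 95%-reliable step degrades fast over a 10-step flow:
print(round(chain_reliability(0.95, 10), 3))  # 0.599
```

This is why per-step context quality matters so much: lifting each step from 95% to 99% reliability takes a 10-step chain from roughly 60% to roughly 90% overall.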
Saket Saurabh
>> Yeah. Nexla started with the vision that everybody will become a user of data and data will be used in different places. We created the connectivity layer so that we can connect to different systems, understand the data, bring it together in a common way, and feed it to different users. We didn't think of generative AI as the potential user, but it has become the biggest user. Some of the problems we have solved: brands or companies that we know, like a DoorDash, get data from all the different merchants and have it in a common structure, so that you and I can say what is available nearby and get it delivered. And so we as a company have been solving, and here preparation meets opportunity, exactly the problem that matters for the AI world, which is connect to all of it and understand it. We were the first company to be considered a cool vendor for using the understanding of data to stitch it together. And today, at the scale you're thinking about in terms of stitching information, there's no way to do it other than to use AI to make data AI-ready, and we are bringing that AI into making data ready.>> All right, Saket, thank you for coming in today. Appreciate the commentary. Thanks for participating in the Mixture of Experts series, appreciate it.
Saket Saurabh
>> Thank you so much. Pleasure.>> I'm John Furrier with theCUBE. We are here at the NYSE, thanks for watching.