Clips from this interview:
- Impact of recent Scale AI news on AI and data strategy innovation
- Transforming AI Training: Snorkel AI's Journey from Data Bottlenecks to Advanced Language Models and Agentic Systems
- The critical role of expert knowledge in scaling AI applications and evaluations
- Future outlook involves leveraging expertise to enhance AI across varying sectors and use cases
- Transforming Financial Services and Healthcare: Overcoming Generative AI Challenges and Ensuring Data Quality for Successful Applications
- Discussion of the need for tailored platforms and strategic partnerships in AI deployment
Henry Ehrenberg, co-founder of Snorkel Inc., joins theCUBE’s Dave Vellante and John Furrier during theCUBE + NYSE Wired: Robotics & AI Infrastructure Leaders 2025 event to discuss the strategic role of data in building scalable AI. The conversation revisits Snorkel’s Stanford roots and the journey from academic innovation to an enterprise-grade platform.
Ehrenberg shares how Snorkel leverages expert-driven workflows to align data strategies with business impact. The discussion explores how generative AI is being operationalized across industries.
>> Welcome back everyone to theCUBE here in Palo Alto. I'm John Furrier with my cohost Dave Vellante. We're here for the robotics AI infrastructure series, three days of wall-to-wall coverage of the NYSE. A great program of experts coming in. We've got the great guests here talking AI, Henry Ehrenberg, co-founder of Snorkel AI, hot AI startup. We've got great big clients in the middle of all the action. Henry, great to have you on. It's been a busy week for you and the company.
Henry Ehrenberg
>> Yeah, absolutely. Thank you so much for having me on. No, absolutely. Obviously the Scale AI news over the last week has created a very high energy space around AI and AI data strategy as a whole. I think if nothing else, it just really underscores this next wave of AI innovation is really going to be driven by focused attention on AI data strategy and we're here to meet the moment.>> One of the big conversations, not to go on a tangent, but I will because it's a cultural thing, is a generational shift, one. Two, the data philosophies of companies and who they work with is almost like picking a sports team or a college or university to go to the culture of the team and the expertise. We're seeing this come up a lot in the conversations. It's not just do you do something? Who's behind it? How trusted are you? This is a huge, we're hearing it both in terms of PhDs deciding where they want to go work, so cultures of the companies are mapping to the cultures of the philosophies of data. You've got privacy, you've got intellectual property. I mean data is wrapped up now in all these other issues that were waved away in the old days, but now it's important. What's your reaction to that?
Henry Ehrenberg
>> Yeah, no, I think it's a great point. I mean, you compare maybe Apple on one side of the privacy spectrum and other companies that are a little bit more forward when it comes to their usage of data, and I think that is, like you said, a reflection of engineering company culture as a whole in many respects. There is a huge AI talent war happening right now, and I think a lot of folks are picking where they go not just based on compensation or current technology, but where they think things are heading culturally.
Dave Vellante
>> What was the founding premise of your company, and how has it evolved?
Henry Ehrenberg
>> Yeah, absolutely. We started as a team close to 10 years ago now. We started as a research project in the Stanford AI lab. This was several waves of AI innovation ago. Rewind back before agents, back before large language models to when deep learning was this new hot thing. That sounds ancient right now, but at the time it was a huge step forward. You had Google open sourcing TensorFlow, you had Facebook following suit with PyTorch, and all of a sudden a lot of ML practitioners have these tools, have the GPUs now as well through the cloud to put deep learning into practice more so than they ever have been able to before. The big bottleneck that we saw though when we actually tried to take these cool new models and the hardware and train them for real world use cases was the data. I can't make use of them if I can't get my hands on a large, high-quality labeled data set to actually train and then evaluate these models. We set out to solve that bottleneck first from the research and then open source perspective. How can we scale expert knowledge when you're trying to build a data set that really requires experts to weigh in and annotate the data, steer, guide and evaluate the models? And that's something we thought was going to be a quick project, but here we are close to 10 years later at this point, started the company around five or six years ago now and have scaled from there. Our focus on that key bottleneck of AI data hasn't changed one bit since we've started the company. Of course, the technology and the way that it's applied has shifted as these waves of innovation in AI have come.
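The "scale expert knowledge" idea Ehrenberg describes started as programmatic labeling. A toy sketch of the approach, in the spirit of Snorkel-style labeling functions (the heuristics, names, and the simple majority-vote aggregation below are illustrative, not Snorkel's actual API, which fits a statistical model over the votes):

```python
# Each labeling function encodes one piece of expert knowledge and
# votes on a label, or abstains; the votes are then aggregated.
# Here aggregation is a simple majority over non-abstaining votes.

SPAM, HAM, ABSTAIN = 1, 0, -1

def lf_contains_link(text: str) -> int:
    """Expert heuristic: messages with links are often spam."""
    return SPAM if "http://" in text or "https://" in text else ABSTAIN

def lf_shouting(text: str) -> int:
    """Expert heuristic: several ALL-CAPS words suggest spam."""
    caps = [w for w in text.split() if w.isalpha() and w.isupper()]
    return SPAM if len(caps) >= 2 else ABSTAIN

def lf_greeting(text: str) -> int:
    """Expert heuristic: casual greetings are usually legitimate."""
    return HAM if text.lower().startswith(("hi", "hello")) else ABSTAIN

LABELING_FUNCTIONS = [lf_contains_link, lf_shouting, lf_greeting]

def majority_label(text: str) -> int:
    """Aggregate the non-abstaining votes by simple majority."""
    votes = [lf(text) for lf in LABELING_FUNCTIONS]
    votes = [v for v in votes if v != ABSTAIN]
    if not votes:
        return ABSTAIN
    return max(set(votes), key=votes.count)

print(majority_label("CLICK NOW at https://example.com"))  # 1 (spam)
print(majority_label("hi there, lunch tomorrow?"))         # 0 (ham)
```

The payoff is leverage: one expert heuristic labels an entire corpus, so the bottleneck shifts from per-example annotation to capturing and combining expert rules.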
Dave Vellante
>> So can you be more specific about, so back then you were using probably just well-known math techniques, SVM or whatever it was, and how has the technology evolved to enable you to?
Henry Ehrenberg
>> Yeah, no, absolutely. I mentioned scaling expert knowledge to specialize AI at scale. That is really our mission. The key technical challenge there is how you scale expertise, right? How do I take one bit of expertise out of the head of an expert and use that again to steer and evaluate AI models at scale? The tools available to us to do that have shifted rapidly just as the types of models that people are trying to specialize have shifted as well. So now, we have a lot of techniques at our disposal from leveraging experts mixed with LLMs to scale those pieces of knowledge in addition to building out agentic systems to take inputs from multiple experts and use that to evaluate data set quality, use that to evaluate the quality of actions that agents are taking and go much more broadly from there. So exactly to your point, years ago, it was maybe much simpler techniques focused on SQL queries and domain heuristics, and now we're able to scale that much more through AI.>> One of the things that comes up, and I'm glad that the Scale AI thing happened with Meta and the big investment, I thought it was an acquisition first and kind of is, but it highlights and shines the mainstream light on what you guys are doing because there's a lot of people trying to onboard into production gen AI and they come down to the data. Again, there's a lot of things going on with the data these days, whether it's performance on certain types of chipsets or clusters, and it shines the light on that. So what does it tell us? And by the way, Scale AI helps you guys because now they're aligned with Meta. You guys have a greenfield opportunity to take more territory as a business, so congratulations. What does it tell us in the mainstream world of what's going on here? Why is this such a big piece of the chessboard? Why are these moves being made in your opinion?
Henry Ehrenberg
>> Yeah, no, again, I mean exactly to your point, I think the main thing that it does is underscore the importance of AI data strategy for these next waves of innovation. Exactly to your point, it's been a very high energy week for us, right? Tens of millions of dollars in new pipeline engagement spinning up across the board with major LLM providers that are looking to move to more neutral and strategic data vendors for the long term. I think one of the key things is that it is really reflective of the shifting tides in the type of data that you need to, again, push the frontier of AI systems. This last wave of innovation, you think about the advent of LLMs, instruction tuning, things like that, largely driven by massive piles of internet data that more or less everyone has access to, and then large scale of cheap annotations, which some of the legacy labeling vendors were able to provide. If I think about what I need to push this new wave of models forward, what do I need to do to really challenge Gemini or challenge o3 or Claude Sonnet 4, any of these really advanced models? It's just not going to cut it to show it the same internet data or give it really simple low quality labels at very large scale. You really need to focus on getting expertise out of the head of experts to help guide knowledge, guide reasoning, and agentic systems' usage of tools across the board.>> So on the extracting, synthesizing this, and connecting the dots, I hear you saying is that, okay, the models want to be neutral, the enterprises want to distill off these models and then integrate their set. Is it an integration issue, or is it, what's the needle moving moment here? Because I think it's more than the labeling, which by the way, it's important, but I think it points to the big trend of operationalizing gen AI into production. Is that what's happening? What's the needle-moving takeaway here?
Henry Ehrenberg
>> Yeah, absolutely. I mean, I think the role of data to help push those frontier models forward isn't going away. All the major labs are still going to want to push their models forward to improve knowledge, reasoning, and agentic capabilities, and again, that does come from extracting expert knowledge, but to your point, when we then think of how enterprises are looking to apply, tune and evaluate those models, it can look quite different. We offer products for both. We have our data as a service, which is really geared towards major LLM providers who are looking to push those frontier capabilities, and then our enterprise platform is a really great fit for those data science and machine learning teams that are looking to apply, tune, distill and evaluate those models.>> So do you have a dual business model, the OpenAIs of the world, the models themselves and enterprises, you have both going on?
Henry Ehrenberg
>> Yep, that's exactly right. We've been working on our enterprise platform for years at this point and just seeing the incredible market opportunity given our really unique technology to scale expert knowledge spun up our expert data service.>> You're going to be building vertical models, small models, you're like a model broker. The way-
Henry Ehrenberg
>> To some extent. We like to think of ourselves as->> Okay, Dave wants to jump in here.
Henry Ehrenberg
>> Yeah, absolutely.
Dave Vellante
>> We wrote a piece six, seven months ago, why Jamie Dimon is Sam Altman's biggest competitor. The premise was that Jamie Dimon is never going to leak all his proprietary data up into the internet so that LLMs can train, that it was that proprietary data that was inside the firewall, if you will, that was going to be the real competitive differentiator.>> An exabyte, by the way, over an exabyte of data.
Dave Vellante
>> At the time, I think it was Alex Wong who said it was 800 petabytes. It was really two, three exabytes or something like that is the real->> It's a massive amount of data.
Dave Vellante
>> Yeah, huge. Okay, I would imagine you're seeing a lot of interest in financial services and healthcare and probably government, and so I'd love you to talk to that and what are they doing, and what's the mainstream going to do? Are they going to buy that as managed services? Are they going to buy it through SaaS companies? Are they going to do their own model development?
Henry Ehrenberg
>> Yeah, it's a great question. We've been working, to your point, with large enterprises for years as they have gone through this AI adoption curve, starting with simpler deep learning models and now, hey, what is our strategy around large language models? You mentioned distillation before, that's a huge element of it. One of the really big focuses today is on model evaluation as well. If I am a large financial services company, I can't just vibe check my way and put a large language model or agentic system into production. I really need to make sure that I can trust it to follow, of course, regulations for my industry, but also my business workflows with really high accuracy. That's one of the most important applications of our data and our platform: model evaluation, and building not just generic evaluations, accuracy across the board, but really fine-grained and custom evaluations specific to my own business, the regulations that apply to me, and what I know good looks like as an expert within my business, and that comes from data that comes from experts within my organization.
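The fine-grained evaluation Ehrenberg contrasts with a single accuracy number can be sketched as a small rubric-based harness. Everything below is illustrative, not Snorkel's product: the criteria names, the toy checker functions, and the sample outputs are stand-ins for business-specific rules an expert would author:

```python
# Instead of one aggregate score, each model output is graded against
# named, business-specific criteria, yielding a per-criterion report.

def check_no_pii(output: str) -> bool:
    # Toy stand-in for a real PII detector.
    return "ssn" not in output.lower()

def check_cites_policy(output: str) -> bool:
    # Toy stand-in for "response must reference the governing policy".
    return "per policy" in output.lower()

CRITERIA = {
    "no_pii": check_no_pii,
    "cites_policy": check_cites_policy,
}

def evaluate(outputs):
    """Return the pass rate for each criterion over a batch of outputs."""
    report = {}
    for name, check in CRITERIA.items():
        passed = sum(1 for o in outputs if check(o))
        report[name] = passed / len(outputs)
    return report

outputs = [
    "Per policy 4.2, refunds take 5 days.",
    "Your SSN 123-45-6789 is on file.",
]
print(evaluate(outputs))  # {'no_pii': 0.5, 'cites_policy': 0.5}
```

The per-criterion breakdown is what makes the result actionable: a regulated business can see which specific rule a model violates, not just that some aggregate score dipped.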
Dave Vellante
>> How do you reach those enterprises? What's your route to market on that side of the business?
Henry Ehrenberg
>> Yeah, this is something that does not come easily. I think it's the technical reputation that we've built over the years, being really dead focused on the types of real world problems that a lot of these enterprises face in AI adoption. Again, how do we best use our experts' time? How do we focus on quality and how do we think about end to end value delivery? I think this is a place where a lot of other AI products get it wrong when trying to go to the enterprise. It's not enough just to throw software over the wall. We see enterprises less and less happy just bringing in software. They want to bring in value, and AI is still a very, very early market. So going to market as a really strategic partner to your customers, working with them to think about what it's going to take, not just the models, not just individual pieces of software, but to deliver AI value end to end.>> You have pull coming into you and then you engage with a professional service kind of motion to work backwards and then address their solution and then go from there.
Henry Ehrenberg
>> I think you see Palantir becoming a bit of a darling again, whereas a few years ago that blend of platform and services probably wasn't looked at as favorably. Again, in a really early stage market like AI, you need to think holistically about how to engage with your customers.>> Yeah, Henry, I love what you guys are doing because I think you guys are taking the nice playbook for setting up for the agentic run that's coming because you mentioned some of the things. Data labeling is just, you got to get that done through runtime. You got to know what's going out there. You got to know what experts, what kind of content. And then the evaluation piece came up a lot last week at Databricks event because Jonathan Frankle and I had a great conversation around how evaluation is the precursor to anything you do because there's math involved there too. It's not just human reinforcement. You can simulate human reinforcement there. A lot of the upfront work to prep the agents is on the evaluation. Sounds like you agree with that, and that's where the focus is today. That sets up the next level, which is releasing the agents. You got to have a pipeline of data, so you got to check the labeling box, get through that, progress to the next level, evaluation, and then release the agents.
Henry Ehrenberg
>> Sounds like you're saying the data plays a very important role around this process.>> So we always say that. You agree that's the good progression?
Henry Ehrenberg
>> Oh, yeah.>> What happens now that the agents are about to be released? I can evaluate. I put scope on the agents. Are you essentially setting up the HR department for these things? It's like, how'd you do on your report card today? You did good. Did you complete the task? This kind of evaluation is like a job review.
Henry Ehrenberg
>> Yeah, exactly, and I think that's a really good analogy. I mean, I would think about it before I go and ship my first version of something to production, I have to be really confident in the evaluation that I'm building. Again, is it custom enough? Is it fine-grained enough to my specific settings, but it's not enough just to do that one time and let it go. The world changes, how people interact with agents changes, and so this is something that you need to do continually, that you need to update continually, need to provide new data for evaluation scenarios and keep humans in the loop to grade the outputs.>> Is that reinforced learning there, or what's that step? What's that iterative process?
Henry Ehrenberg
>> Yeah, so reinforcement learning is obviously a really great technique, and one that has seen multiple waves of adoption, people forgetting about it, and then all of a sudden it comes back to help tune those models over time as, again, your environment shifts and what you expect of an agent in production shifts. Again, I mentioned the shifting tides and what it's going to take to push these models forward. One of the big shifts that happens in an agentic world is that the complexity of the state space blows up, right?>> Explain that.
Henry Ehrenberg
>> Sure, absolutely. I think a really good analogy, and so let me go back and say, whereas before an LLM might just be outputting text, all of a sudden the set of actions that an agent can take is much greater than just outputting text. I might interact with an API over here, talk to an MCP server over there, I might interact with another agent over there, use tools, reason, all these different pieces, and so the kind of path that an agent can follow to accomplish something is far greater, far more complex. There's many other places on the map, and so really focusing in to get experts to guide and evaluate those paths is super, super important. It's hard, and I think a really apt analogy is autonomous vehicles, for example. That is one where the state space of what a vehicle could do, what could be in the environment, is very, very complex. We've had really, really good self-driving for about 10 years, and I just saw the other day that I can take Waymo out of San Francisco, so it's taken quite a while for these really complex use cases to see that full production deployment. Engineering agents is hard. It takes really great focus on evaluation and tuning, but we can get there with the right->> Our last panel were mostly hardware. We're talking about robotics and hardware and some of the low-level stuff, and one of the guests said the models, the one-shot versus multistep and all that stuff's going on. If the math question is what's two plus two, you don't want to wake up everybody.
Henry Ehrenberg
>> Yeah.>> There's a GPU energy cycle, so the data contextually can be managed. How do you guys play in that kind of equation if you're going to be on the model side? Is there differentiation on service levels or model interactions? I don't know what to call it, but you can see efficiency starting to come down where I don't need to talk to the whole thing, I just want this piece. Is that in line with your platform? How does that come out? That's going to affect power and energy, affect reasoning paths.
Henry Ehrenberg
>> No, exactly. I think we've been talking about small language models, distillation of language models for quite a while now. I think we had a couple of really great Fortune 500 use cases covered in the press when that was first being talked about, about a year ago, and I think that's actually coming back now. I think I just recently saw a Gartner report that in a couple of years, really the big adoption spike that they're expecting is around distilled small language models that are fine-tuned and evaluated for specific tasks.
Dave Vellante
>> Took Gartner a while to get there.
Henry Ehrenberg
>> Yeah, no, exactly.>> Two years. We had it last year actually.
Henry Ehrenberg
>> Yeah, exactly. No, but to your point, when you think about efficiencies, when you think about being able to evaluate with high confidence and really specialize to specific use cases, I think we're going to see a continual rise.>> So people don't have to necessarily build their own models. They can just distill off the main ones and then they're going to bring their data to the table, which in a way is a model. They're going to have some sort of data, whether it's unstructured data, multimodal data, language or computer vision. How are you seeing that integrate in? Do you see that combination fusing together?
Henry Ehrenberg
>> Yeah, absolutely. And again, that's what a lot of people are using our enterprise platform for: to bring their own data, their own knowledge, plus connections to base LLMs that they might be using, to, again, distill and fine-tune into those smaller models that they can ship to production with high confidence.
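The distillation being discussed, training a small "student" model to imitate a large "teacher", is commonly framed as matching the teacher's temperature-softened output distribution. A minimal sketch under that standard framing (the temperature, example logits, and function names are illustrative; a real pipeline would minimize this loss with gradient descent over a training set):

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities at the given temperature."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence from the student's softened distribution to the teacher's.

    Zero when the student matches the teacher exactly; positive otherwise.
    """
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [4.0, 1.0, 0.2]   # large model's logits for one input
student = [3.5, 1.2, 0.1]   # smaller model's logits for the same input
print(distillation_loss(teacher, student))  # a small positive number
```

The higher temperature softens the teacher's distribution so the student also learns the relative ranking of the wrong answers, which is much of what makes the distilled small model useful for a specific task.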
Dave Vellante
>> A couple of interesting vectors here and some tailwinds for you. Obviously, the Scale AI Facebook thing. You mentioned Palantir before, and you were referring to I think software and services coming together, which has actually been historically pretty rare, that a predominantly services company can pivot and scale the software. We've seen some attempts to do it and it's never really worked out that well; it seems like it's working out for Palantir. One of the things they appear to be doing is doing some hard engineering work and developing a software platform that's harmonizing data, something similar to what you guys are doing, but different, and then maybe controlling the agents within their domain. How do you think about, does the analogy for you guys carry through, forgetting about the meme stock aspect of it, but does it carry through in terms of other software layers that you're developing that are skating to the puck, if I can use that analogy?
Henry Ehrenberg
>> Yeah, no, and I think people do need to think, especially as we head into this kind of agentic world, think about different layers up and up the stack, all the way from hardware data layers, the models themselves, and then increasing levels of interaction between models and all the way up to the-
Dave Vellante
>> Governance and-
Henry Ehrenberg
>> Yeah, exactly right....
Dave Vellante
>> so many other things, right? Security.
Henry Ehrenberg
>> Exactly right. And there is incredible innovation happening out in the space across many of those layers. It's not our goal to replace any of them where there is really great innovation, but where we can provide differentiated value, especially when it comes to leveraging your own data, your own expertise, and increasingly injecting internal expertise through our services as well. That's where we really want to focus. We want to make sure though we're a clearinghouse, we integrate really well across the stack for those other assets.
Dave Vellante
>> And that gives you better economies at scale, is that right?
Henry Ehrenberg
>> Exactly right. Because many of the large enterprises that you're referring to, they've already placed bets maybe with certain clouds or certain model providers, certain kind of application stacks. We want to make sure that we can power all of those, and again, you can inject your own data and expertise to tune and evaluate.
Dave Vellante
>> Do those enterprises that you're working with, those on-prem organizations that aren't necessarily putting data in the cloud, they have data in the cloud, they do plenty in the cloud, but they're also building their own stacks on-prem. What are you seeing in terms of the expertise levels outside of say, financial services?
Henry Ehrenberg
>> In terms of technical expertise?
Dave Vellante
>> Yes, the ability to have the AI talent to actually do what needs to get done.
Henry Ehrenberg
>> No, it's a great point. On-prem is a bit of a dirty word in the business. We say within your own cloud tenant, which is an increasing way that we see economies of scale. From our side, the point very much remains the same. The talent wars across AI are not just within the major labs. Obviously we see many of our enterprise customers recruiting really great talent. I think where they've had the most success is by building teams focused on delivery of value for certain lines of businesses rather than maybe just trying things on the experimental side. So as long as they stay focused there, I think they've been seeing really great success.>> What is your role now as co-founder? What are you working on? You're overseeing the platform, the technology roadmap, what's your day like, what's your focus?
Henry Ehrenberg
>> Again, it's been a really high energy week, like the last one. Things change across the board, but in general I'm primarily focused on technical strategy, especially for our new product bets. Other days I might be doing party planning, whatever it takes to really go.>> Board schmoozing?
Henry Ehrenberg
>> I mean, everything has value in that sense, but technical strategy, especially recently, has been focused on our expert data as a service, our new product offering, thinking about how we can use our years and years of fundamental and applied research combined with a really highly curated set of experts across thousands of domains from physical sciences, law, math, lifestyle stuff, and really think about how we can use our technology and the agentic systems that we've built to scale their expertise and deliver this really->> I mean, being a co-founder, founder-led companies always transition well when they are at the helm. AI, you said bets, there's a lot of new things happening. You have to really keep your eye on the prize, that's the north star, but evaluate what's coming at you. You got to play what's in front of you. What are some of those bets you're looking at in areas? Obviously life sciences and healthcare are great use cases. We're seeing growth there previously, bad IT environments, supercomputing availability for life sciences. Healthcare is healthcare. These are ripe markets. What's your focus in terms of the north star vision?
Henry Ehrenberg
>> In a space that moves as quickly as AI, you do have to keep your eye on the horizon. Technology shifts every single day. Being aware of those and making sure that you're able to deliver that new innovation to customers when it can drive value for them is extremely important. But I think the other thing that's been really, really important for us is to have a really durable mission. One that we know is going to weather the tides of all of these shifts in the AI space. And again, coming back to the importance of data and expertise when it comes to AI strategy as a whole, that has really been the through line for us. That lets us connect different trends to customer value to take our research and apply it given the new technologies that are coming up.
Dave Vellante
>> And not getting distracted by all the noise out there, but can you give an example maybe of an architectural decision or a technical decision that you made that was grounded in your technical principles?
Henry Ehrenberg
>> Yeah, no, I think with our new evaluation product, we can deliver really high quality evaluations to customers for their LLMs and agentic systems in a couple of different ways. One, providing expert data to them for scenarios to evaluate against, but then again, letting them customize and build really specialized, fine-grained evaluations. There's always a temptation to just automate the heck out of everything with all the new innovations in AI, hey, I just press a button and the agent takes care of evaluating everything. And of course, there's a lot of innovation that we have internally where we've shown that to be true, but at the end of the day, we really want to allow customers, especially large enterprises that have really specific workflows, goals, and regulations that apply to them, to have that fine-grained control from a human expert level when it comes to the evaluations that they're creating and running. So giving people the ability to customize and specialize, not just have one auto-magic evaluate button, something like that has been really key to our adoption in the enterprise.
Dave Vellante
>> Staying with that, you had to make a trade-off because you could have made it simpler, but the trade-off was you aren't going to get as good of accuracy. That probably took some thought or maybe not because your principle was fine grain control.
Henry Ehrenberg
>> Yeah, no, absolutely. Especially when it comes to understanding the needs of enterprises, you do have to make those types of trade-offs. And again, we have a lot of innovation things that we are able to deliver to customers with that kind of level of simplicity and automation. But in many, many other cases, you do need to offer those more specialized interfaces to let people have that level of control that they need to meet their business-
Dave Vellante
>> Can you have your cake and eat it too, and not gain weight, as Victoria would say, and give people those knobs to turn, but at the same time provide some kind of abstraction? Is that the-
Henry Ehrenberg
>> Yeah, definitely. And it comes to knowing your users, so being able to surface those fine-grain controls to the types of users who aren't afraid of them and know how to operate them while still offering those simplified experiences to the broader base of users that you have.>> Well, Henry, it's great to have you on. And again, being in the high speed, high velocity market you're in, all the action happened around you. It's a really fun time, congratulations. I guess I'll end with a question around customer outcomes. What are they seeing? Give an example of what's in it for them, the benefit they get out of doing the work, because a lot of people want to see stuff fast, time to comfort, time to value. What are some of the outcomes that they get when they deploy properly? What happens?
Henry Ehrenberg
>> Yeah, no, we've seen, again, across both our expert data as a service and our enterprise platform, really incredible results. For customers who are LLM providers themselves, the big ones that you've heard of, and then large enterprises that are thinking about how to apply those to their own very unique business goals and settings. For example, we've delivered really fantastic, very, very hard evaluation data sets for LLM knowledge and reasoning across thousands of subdomains, so really putting these LLMs to the test as you evaluate how you're driving innovation, and oftentimes evaluations like that are the ones that actually make the difference in terms of improving models for our enterprise customers. Just recently, we were talking about small language models. A Fortune 500 telco company was able to deliver, I think, over eight figures in value from distilling a model for agentic reasoning for the billing domain. We also had one recently, a combination of both our expert data service and our enterprise platform, where they were able to fine-tune a customer service agentic system and I think improve NPS by eight points, which is, that's huge.>> They're deploying the models into the apps or agents themselves, and then they're applying it to their business. They're like agent builders, extracting the value from the heavy lift that they would've had a hard time doing. You're streamlining that process.
Henry Ehrenberg
>> Yep, exactly. Exactly right. A lot of these problems, without the injection of those expert data sets that we can provide, without the platform that we provide to customize evaluations, fine-tune models, and build AI data sets at scale using your own in-house expertise, feel intractable. Our goal is to really make that scaling of expert knowledge to specialize AI something that's possible.>> While I got you here, give a quick rundown on some of the numbers on how you guys have performed. How much have you raised so far? How many employees, and what are you guys looking to hire and do? Put a plug in and share some numbers.
Henry Ehrenberg
>> Yeah, absolutely. Again, like I mentioned, we started as a research project at the Stanford AI Lab close to 10 years ago now, and the entire co-founding team was working together then. We spun out as a company in 2019. Since then, we've raised over $250 million from some really, really fantastic investors. Addition just led our most recent round in addition to co-leading our Series C. BlackRock, Lightspeed, Greylock. Again, really fantastic partners through that entire journey, and we've scaled the company now to over 200 folks full time, and so it's been really, really amazing to see the business grow that way.>> Congratulations.
Henry Ehrenberg
>> Appreciate it.>> And continue to kick some butt out there and congratulations on all the success. Thanks for coming on.
Henry Ehrenberg
>> Appreciate it. Thanks so much for having me.
Dave Vellante
>> You bet.>> All right. I'm John Furrier with Dave Vellante for theCUBE, special series on robotics and AI leaders. They're coming in, sharing their opinions, also talking about the market trends, and setting the table for this next generational wave of value creation and extraction. Thanks for watching.