Ciphering Intelligence: The Future of AI Encryption
Nicolas Dupont
Founder & CEO, Cyborg
Eiman Ebrahimi
CEO, Protopia AI
Anuj Jaiswal
Chief Product Officer, Fortanix
Nicolas Dupont, chief executive officer at Cyborg Inc., Eiman Ebrahimi, chief executive officer at Protopia AI, and Anuj Jaiswal, chief product officer at Fortanix Inc., join theCUBE’s John Furrier during theCUBE + NYSE Wired: Robotics & AI Infrastructure Leaders 2025 event to explore the high-stakes world of security for AI and robotics. The conversation tackles confidential computing, data integrity and the infrastructure essentials for safe enterprise AI.
Ebrahimi and Jaiswal break down the challenges of protecting sensitive data across the AI lifecycle.
>> Welcome back everyone to theCUBE's and the NYSE Special week here in Palo Alto for our media week around robotics and all the AI leaders coming in of course. Big face-to-face event, 5:00 to 8:00 on Wednesday. I'm John Furrier, host of theCUBE, a great lineup here. We'll be talking about robotics and AI around ciphering out the intelligence, protecting it, securing it. We got a great lineup here. Nico Dupont, CEO, Cyborg. Eiman Ebrahimi, CEO of Protopia. Good to see you again.
Nicolas Dupont
>> Good to see you as well.>> And Anuj Jaiswal, Chief Product Officer for Fortanix. Gentlemen, thanks for coming on theCUBE, great to see you again. Now we're in this kind of the second wave of the first wave of innovation where we're seeing robotics is a big theme here, mainly because software and physical AI is super hot. AI enterprise is another big topic area where the data is unlocking as we speak and people are starting to think about the systems. So I want to get into the confidential computing side as well as how to handle the data layer because the data feeds the beast, so to speak in AI and that's one of the big things we're seeing. So before we start, talk about what you guys do and set the table. Nico, we'll start with you.
Nicolas Dupont
>> Sure. Thanks for having us, John. I'm Nico Dupont. I run a company called Cyborg. We're working on helping companies in regulated sectors adopt AI by solving for the gap in confidential AI, which is end-to-end encrypted vector and graph databases.>> And the problem you solve?
Nicolas Dupont
>> The problem we're solving is how do you ensure that the entire inference life cycle is end-to-end encrypted so you have verifiable cryptography so that you're able to maintain confidentiality of the data, regulatory compliance and security standards.
Anuj Jaiswal
>> Thanks John for having us. I'm Anuj Jaiswal, I'm Chief Product Officer at Fortanix. As a company, Fortanix is nine years old, and we pioneered a technology called confidential computing. So we build products leveraging confidential computing. Our core product has been enterprise key management, making sure data is encrypted and secured while it is at rest, in motion, as well as in memory. That's very important. In terms of AI, what we have done is build an end-to-end AI platform that ensures the data is secured all the way from ingestion, through going into the vector database, to the inference on the data. So we leverage all kinds of confidential computing, from the CPUs as well as the GPUs. And what we have built is an AI platform where agentic workflows and gen AI applications can be built and all the data can be secured, especially when it comes to sensitive data and intellectual property for any enterprise.
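The enterprise key management Jaiswal describes typically rests on envelope encryption: each object is encrypted with its own data-encryption key (DEK), and the DEK is in turn wrapped by a key-encryption key (KEK) held by the key manager. A minimal, dependency-free sketch of that pattern follows; the SHA-256 counter keystream stands in for a real cipher such as AES-GCM, and every name here is illustrative, not Fortanix's API.

```python
# Toy sketch of envelope encryption. NOT real cryptography -- the
# keystream is a SHA-256 counter construction used only to keep the
# example dependency-free; a real system would use AES-GCM via an HSM
# or key-management service.
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """XOR data with a SHA-256 counter keystream (illustration only)."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

def envelope_encrypt(kek: bytes, plaintext: bytes):
    dek = secrets.token_bytes(32)             # fresh per-object data key
    ciphertext = keystream_xor(dek, plaintext)
    wrapped_dek = keystream_xor(kek, dek)     # only the KEK holder can unwrap
    return wrapped_dek, ciphertext

def envelope_decrypt(kek: bytes, wrapped_dek: bytes, ciphertext: bytes) -> bytes:
    dek = keystream_xor(kek, wrapped_dek)     # unwrap, then decrypt
    return keystream_xor(dek, ciphertext)

kek = secrets.token_bytes(32)
wrapped, ct = envelope_encrypt(kek, b"customer record: sensitive")
assert envelope_decrypt(kek, wrapped, ct) == b"customer record: sensitive"
```

The design point is that the key manager only ever handles small wrapped keys, never the bulk data, so keys can be rotated or revoked centrally.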
Eiman Ebrahimi
>> Hey John, I'm Eiman. I'm the CEO and co-founder at Protopia AI. The problem that we're solving is really focused on how to unlock enterprise data with target compute that ends up fitting, essentially, the investment envelope for these AI use cases. I think what we are seeing in the market is that going from POCs to production is often tied to whether or not the enterprises can use the most relevant data for their use cases. And in order to be able to do that, they need a means by which that data can be protected through that life cycle. Now, like Nico was saying, there's an entire life cycle to that inference pathway, and we're focused on what happens past something like semantic search. How do you protect the data when it is going to be sent to a server, potentially where the language model is going to run, and what is the exposure of that data that can happen there, minimizing risks around the data layer.>> Yeah, I'm super excited about this wave because you saw the hyperscalers. They're all throwing CapEx money at everything. Data center demand is rocking. We're seeing that play out. A lot of innovation just in the past two years. The enterprise, I won't say it's been a slow roll, but it's been highly enthusiastic without a lot of confidence. And the folks we talked to have said the same thing. It's like, "Hey, production is a very narrow window because the security bar is so high, because there's a lot of things going on in the enterprise."
So the first question is how much data is locked up right now? There's different stats out there around one percent of the data is enterprise ready, they use less than 50% of all their data. Do you guys have any general sentiment around where they are in the enterprise around the data? Because the data is the key. Securing it will be the discussion here, but what's your observation? Any thoughts, any stats, gut feeling? We know it's low, but how low is it?
Anuj Jaiswal
>> Maybe you want to ...>> Jump in, first one in.
Nicolas Dupont
>> On the first part of what you said, I don't have stats on how much of the enterprise data is used, but we're definitely seeing a lot of pilots dying on the vine, AI POCs that never make it to production. I think the latest statistic, I forget who it was from, was 46% never make it. And privacy, confidentiality and really just compliance is the key driving factor there, because you can build a POC super easily and you can show ROI, but then when it comes to convincing the relevant stakeholders in terms of risk, in terms of security, in terms of compliance, that's a completely different story. And that's what companies like the three of ours are trying to help with.>> Any other observation data? You mentioned you think it's less than 10% before we came on camera. What's your thoughts?
Eiman Ebrahimi
>> Yeah, I think there's another kind of axis to the question of how much data is actually being used, and that axis is where are those workloads going to be run? Historically, whenever we've talked about sensitive information and things like compliance that Nico's pointing out, the enterprise's approach has been we're going to bring the compute to where the data is. And that approach, especially last year, became pretty popular with open models becoming very capable. And the idea of we're going to build a lot of things in-house at the enterprise level was something that people invested a lot of effort into. But the fact is that the ecosystem evolves very rapidly. You suddenly then had reasoning models show up, agentic workflows show up, a lot of really good applications that were built around proprietary models show up. And so this idea of bringing the compute and the models and the people that are going to manage them right where the data is didn't end up scaling very well. And so the cost argument is the other axis to how much of the data is being used, because the data is not going to be used when the cost becomes prohibitive with respect to all the infrastructure that needs to happen for the actual data to be allowed to be used on that compute. And so that's where I think a big portion of the question is: how much of the data will be used if we unlock the right infrastructure for that data to be able to run through the models there? That's a determining factor in what that number is. When I think about that 10% number that we hear about, it's related to how much of the data is being used with managed infrastructure as opposed to just dedicated environments that the data owner will be responsible for. So I think that axis becomes super important, the axis of cost.
Anuj Jaiswal
>> Yeah, if I may add, I personally have interviewed several CIOs and CTOs and CISOs, and it was very similar to seeing a lot of scientific experiments happening within the organization. And that adoption is increasing, but when it comes to the sensitive data or intellectual property where there's a lot of ROI, when the AI needs to be implemented, whether it's agentic workflows or gen AI applications, there's no bridge which takes scientific experiments to production. And that's the problem, which is basically around data security, privacy, compliance, regulations. And that has been a big challenge for the industries to say, "Okay, how do I get an AI platform where I can bring all this sensitive data from multiple sources and make sure my data is protected all the way from the beginning to the end?" And I'm sure, for CISOs, security has been an afterthought for the last 18 months, but now it's becoming really prime. They want to start with security first and then think about, "Okay, how do I take it to production? How do I show the ROI?">> Even gen AI, I've heard CISOs say it's just another app to us, app review. We're going to go through the normal process, our normal bars of resilience have to be maintained. But as Eiman points out, the agentic workflows start to kick in. You've got now more complexity, but the value's there.
Anuj Jaiswal
>> Value's there.>> So the question is, is the infrastructure not yet in the enablement position? Is it cultural in the enterprise or is it the startups' fault? Because if the startups are dying on the vine at 46%, roughly, give or take, but a big number, are they even set up to run POCs? So many of the old school POCs, "Well, hey, come on in, here's the sandbox, play around," and then the staff of the enterprise would evaluate that. Kind of like going through the airport, you've got to go through security, now you've got TSA preclear, so you have all kinds of mechanisms in the enterprise, and yet their legacy IT hasn't transformed, and that's just in the past 18 months. I say 12 months with the agents, but even gen AI a year ago was like that. So the question is, whose fault is it? Is it finger pointing both ways? Is there accountability? Is it just cultural shift change? How do you guys view this? Because every enterprise wants to get stuff into production. They want to get the hot startup, but do they even have the time to do it? So what's going on with this? Because that's a big number. We've got to get that number down.
Anuj Jaiswal
>> Yeah. In my perspective, I think it's not about finding whose fault it is. I think it's the evolution. That's the way we look at it. When people first start, you see a jukebox, you start playing some random songs, and then you start looking for, "Okay, what is my playlist and how do I leverage this playlist while I'm enjoying or entertaining myself?" So what we are seeing right now is there's a huge transition happening, because enterprises, especially in the regulated industries, whether it's BFSI or health care, have realized that there's huge value, there's huge potential, and now security is becoming prime. So one is the enablers, which is the CIO and CTO organizations, and then there's the guardians, who are the security organization and CISO organization. Now they both are working together to bring out a product or a platform which they can bring to the entire organization at scale. So I think it's a phase of evolution. It hasn't happened in the past and now it's happening.>> If you bring the axis that Eiman was just talking about and add the next layer, which is startups need to rally around a beachhead to get into the enterprise. So I see startups more of, well, they'll navigate quickly to the value point. How do I get a beachhead in the enterprise? I think on the IT enterprise side, I think they have a responsibility to organize. Are you guys seeing any patterns where enterprises are doing it right? Is there automation kicking in? Is it more of an architecture? Is it a systems problem?
Eiman Ebrahimi
>> Yeah, I think there's definitely, I fully agree by the way that there's no finger pointing that would even make sense in this environment. I think the ecosystem needs to solve a lot of these fundamental problems together in order for all of the ecosystem to actually flourish. But from the point of view of how we see some of our ... Think of them as AI-native application providers that are providing solutions to the enterprise. How they deal with this issue of data exposure is that one very common pattern is that startups that have a killer use case for the enterprise that they're solving for will often, early on in the first year or two, lean heavily into, "We will deliver it to the enterprise however the enterprise wants to consume it."
If that means dedicated compute environments on-prem, we'll deliver it there. If it means dedicated compute environments in the cloud, we'll deliver it that way. The issue with that is that the costs associated with that, the COGS of the AI-native application provider, will soon become prohibitive when they do that. So when they transition to the next phase, when they're now at scale and they're trying to grow very fast, you'll see a lot of them start discontinuing those very heavy dedicated environment solutions. But then the data privacy aspect of the story gets in the way, because the enterprise has now already been offered a way that worked that was dedicated, but then the cost versus privacy tradeoff again kicks in.
Nicolas Dupont
>> I agree with everything you guys said and you guys are very nice for not pointing fingers. I'll do the same.>> Well, I have to blame somebody.
Nicolas Dupont
>> The jury needs. No, yeah. But I think there's also an ecosystem question here, because it's incumbent upon companies like us to be able to bridge the gap. The engineers that are building these AI solutions, whether internally for internal use cases or at startups that are building them for the enterprise, are operating at an infrastructure layer where they're oftentimes not really worrying about where it's running. They're not really concerned about the inference life cycle, the foundation model. And then we are talking about solutions that fundamentally require wholesale rethinking of where it's running, how it's running, how you secure it, how you attest it. So you've got a couple of layers of abstraction between those two. And I think it's about building out, and letting the ecosystem or helping the ecosystem catch up, to be able to provide verticalized solutions that can bridge that gap, so that the value, which is at the application layer, the AI, and the enabler, that is the infrastructure and everything upstream of that, is in a package that can speak both to those that are building it and creating that value, those AI solutions architects and whatnot, as well as those that need to be convinced on the technical side, who are the CISOs, the CIOs, and those that need to be convinced on the business side, who are the risk officers and the auditors, et cetera.>> Yeah, you have a game of ratchet where people are leveling up on the business side, technical and the customer and then the ecosystem.
Nicolas Dupont
>> And it's pretty unique here. You don't have to worry about you're building an email system, where are you running your mail server? That only matters to a couple of stakeholders. Here it's key business imperative and you have to have so many stakeholders.
Eiman Ebrahimi
>> Yeah, I think the thing that you're pointing out Nico there of just the example of email. A lot of applications in the past have been very limited in what the data is that you are feeding to the application. And so the question of where it's running has been easier to get over the bar for a lot of organizations. The entire set of workflows that are imagined for gen AI is quite different. Now you're talking about plugging into all data sources everywhere in the enterprise. And so that mode of operation is new. And so thinking about how data gets exposed is also going to require rethinking. Let's just take the primary example of developer productivity tools. Plugging into code bases of the organization. This is something that has completely transformed what the meaning of sensitive data is in terms of the data showing up somewhere else. Prior, we used to just think about PII, PHI, these sort of sensitive information categories where you could point to something in a data record and say, "This is what's sensitive." With code that's not even possible anymore. And so there's a whole rethinking of what does sensitive actually mean and how do you use it with infrastructure that is available to you.>> I guess the next question is great point by the way, is what's the optimization for the customer? Where are you guys seeing the drivers for your business? What are some of the conversations? Is it a new persona? You mentioned these new things are happening. Is it a certain governance conversation? I mean governance used to be like side conversation with separate conference rooms for that staff that would handle it. Now it's everyone's concern. Security's security. So where's the intersection? I'm trying to get to where you guys are spending a lot of your time. What's driving your business? What are the conversations?
Nicolas Dupont
>> I would just say, if I knew exactly the right approach for this, we'd be a unicorn by now. So we're still figuring that out.>> You're talking to customers. They're attentive to the wave, they're betting the farm, so to speak, on this.
Anuj Jaiswal
>> Yeah, I can provide a perspective directly from the security side, and this is a quote which came from a CISO themselves: "Trust is not implicit." Because everything the engineers are building, these are small different parts of the entire AI pipeline, and when the security comes in, it needs to be plugged in at each and every level. It's not just DLP, it's not just making sure that the models are running confidentially. It's not just privacy, it's not just compliance, it's everything all together. Just because of the way gen AI consumes data, it's just so massive. Anybody can bring anything into the AI pipeline and try to reap the benefits of it because they see the value. But at the same time, the bad guys are only looking for some small opportunity to get access to that sensitive data or intellectual property. And that's massive. So today most of our conversations revolve around how do we enable both the enablers, which is the CIOs and the CTOs, and the security organization to work together so that we provide an AI platform that is end-to-end providing that security, leveraging confidential computing. Most people get surprised when we tell them that all your data connectors are running in a vanilla VM, and that means all that data is going in clear text, as well as all your models, all your agents. And confidential computing provides that privacy by design, that kind of hardened security. So enabling this whole hyper-secure environment is where most of our conversations revolve around.
Nicolas Dupont
>> And it's hard to do it right.
Anuj Jaiswal
>> It's really ...
Nicolas Dupont
>> It's easy to turn on the confidential VM switch on your Azure dashboard, but actually making sure it's secure, verifiable, transparent ...
Anuj Jaiswal
>> And making sure that attestation is working and if attestation stops, the whole system freezes. And that's where we see a lot of our conversations revolving around.
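The fail-closed behavior Jaiswal describes, where everything freezes the moment attestation stops working, can be sketched in a few lines. A real deployment verifies a hardware-signed attestation quote from the CPU or GPU, not a bare hash as below; the measurement values and function names here are illustrative only.

```python
# Sketch of a fail-closed attestation gate: the key manager releases a
# secret only while the workload's measured identity matches a known-good
# value. Any mismatch raises, so nothing downstream gets the key.
# Real attestation verifies a hardware-signed quote; the bare SHA-256
# "measurement" here is a stand-in for that.
import hashlib
import hmac

# Known-good measurement of the approved workload (illustrative value).
EXPECTED_MEASUREMENT = hashlib.sha256(b"approved-model-image-v1").hexdigest()

def release_key(reported_measurement: str, key_material: bytes) -> bytes:
    """Hand out key material only if the reported measurement checks out."""
    # compare_digest avoids timing side channels in the comparison.
    if not hmac.compare_digest(reported_measurement, EXPECTED_MEASUREMENT):
        raise PermissionError("attestation failed: key release frozen")
    return key_material

good = hashlib.sha256(b"approved-model-image-v1").hexdigest()
bad = hashlib.sha256(b"tampered-model-image").hexdigest()

assert release_key(good, b"k") == b"k"   # attested workload gets the key
try:
    release_key(bad, b"k")               # tampered workload is refused
except PermissionError:
    pass                                 # fail closed, as intended
```

The point of the fail-closed shape is that an attestation outage denies access rather than silently falling back to plaintext operation.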
Eiman Ebrahimi
>> And I think there's also one additional thing. Again, as the evolution of workflows continues, one of the new things has become this plain text exposure of data, not just being about data in compute, but on the server side as well, because what we're seeing with larger and larger contexts and reasoning models is that the language model implementations are now such that you find persistence of sensitive data in clear text on the server side as well. Meaning the notion that it was just ephemeral in memory is no longer the case either. You have all sorts of logging, both deliberate and inadvertent, that happens.>> I agree.
Eiman Ebrahimi
>> Yes. There are also spills from memory to disk that create persistence, all of which are implementation dependent. The enterprise data owner knows nothing about this and has no way of controlling it. And so what ends up happening, back to that 10%, is that the enterprise data owner often will decide what data they will use with a particular endpoint just based on what the exposure is. If it's going to be plain text of any kind, I just won't use certain types of data. And that fundamental problem is, I think, something that all three companies here are heavily focused on: reducing that exposure across the board so that you are now able to unlock data sources that maybe you wouldn't use before.>> And you guys do encryption. You've got a lot of data action. I guess my question is this, that comes up a lot: What does it mean in this era to take something to production in the enterprise? What's changed in the past 18 months? We got better chips, better software. Now you got geographies everywhere. What's the new thing that we have to pay attention to? What's the key production passcode, secret backdoor? Is there a way to get into production? Are people doing it differently and is it different? Are there different rules for production or is it still the same game?
Eiman Ebrahimi
>> I think it's just the enterprise working across the various pillars of the business. We've talked about the security side here. I think one part of the business that hasn't been talked about much is the AI center of excellence. We see a lot of collaboration happening among these various pillars of the enterprise: those that will benefit from the use cases actually going to production being aware of what the security side, the CIO, the CISO will require of them. That collaboration is really an unlock that we see on the enterprise side towards going to production. Where that interaction is high and they are creating good communication pathways between those pillars of the business is where we see a lot more of the use cases actually going to production. And where that is still in its early phases, you see a lot of one-off efforts. The CISO side is heavily focused on how do we stop bad things from happening, which is absolutely the first order of business, but then how do you actually realize the value from the use cases is something that the business side is heavily interested in achieving. And so it becomes a matter of how well those different parts of the business are working together.
Anuj Jaiswal
>> Yeah, I think one of the things we are learning is, in the last 18 months or about two years, there's a lot of investment which has gone in to see what's the ROI, whether there's a real ROI in implementing AI or leveraging data with AI. And that is helping right now. For us, what we are seeing is that it is slow, but it is incremental in moving towards production. The other part we are seeing is most of these agents are trying to consume the sensitive data, and up until now all these models were trained on synthetic data, where the variance is not as expected as in the real data. But now, leveraging the technologies where you can train your model in a confidential computing environment, bring in the actual sensitive data to train your model, now the variance is high. You're actually getting much higher value or model accuracy, which is coming into play. But again, the threat comes in. So you have to make sure that the model is safe and secure. There's privacy by design built in, ML Ops built in. So we are seeing good positive momentum happening at this point, where when we have a conversation, there are over a hundred use cases being identified. Now it's about, "Okay, how do we take on one use case after another?">> It's POC after POC hitting the enterprise. Nico, how do we get the numbers down? Because a lot of startups who have ARR, there's one R missing. It's not recurring until they get in there. So this is a concern for a lot of the capital markets from the VC side as well. It's a little hyped up right now and bubbly, but the demand of the enterprise is coming. We see the value. It's kind of like, just got to get through these mile markers.
Nicolas Dupont
>> Yeah, no, I agree a hundred percent. I think, to Eiman's point earlier, a lot of startups are building where the enterprise wants it. And if it's on-prem, usually it's a lot more of an esoteric application case that you have to do a lot of custom work for that is not repeatable. And so the R, the recurring R, is not built in. And I think that solutions that are able to abstract away the infrastructure where it's running, and be able to provide cryptographic guarantees of security and compliance, whether it's running on the cloud, whether it's running on-prem, whether on the edge, are the applications that are going to be able to reduce the uniqueness of each deployment and allow it to become more recurring. Because I think in a lot of the business cases that are being solved, the data is different, but the pipeline is largely the same. You see, you want to plug into your ERP and be able to do predictions for next quarter and whatnot. Using an LLM in the same industry, it's going to be largely the same.>> So it's evolution, get those things standardized.
Nicolas Dupont
>> Exactly.>> Understand it a little bit more.
Eiman Ebrahimi
>> Yeah, and I think you see that there are startups that are thinking about how to provide these sorts of guarantees around the data for their enterprise customer much earlier on in the sales cycle, meaning even when they go in to do a POC, being able to tell the story of, I am delivering the value of the application I've built for your enterprise, and you can try it out with your enterprise data without risking exposing it when you are actually running this application. Doing that very early on in the sales cycle, we've seen, both reduces the length of their sales cycles and can increase repeatability, because now, where maybe some enterprise would've run the POC with synthetic data, or data that didn't really matter to them if it got exposed, you've already crossed that next bridge of how do you start using your actual enterprise data. And so what we see among our partners that are selling to the enterprises, when they've thought about these next steps instead of just being focused on how do I get the first deal, especially in beachhead accounts, it makes for a much better story. And I think that ...>> That's the new land and expand. Getting the data a couple moves down on the board, so to speak.
Eiman Ebrahimi
>> Yep.>> Well guys, great conversation. I guess we'll just wrap up with the final question of share a perspective from your companies and your personal experiences going on right now. A breakthrough that you've experienced that you could share in your business. It could be a customer scenario, it could be an engineering feat. Nico, we'll start with you. A breakthrough you're proud of and you're super psyched for.
Nicolas Dupont
>> That's a good question. There's a lot in a startup, you're always figuring things out.>> You can name a few if you want. It's not pick your favorite child kind of thing.
Nicolas Dupont
>> I'd say one that I'm actually presenting at the Confidential Computing Summit this afternoon. It's kind of hard, and I think you guys can relate: it's hard to demo security, especially cryptography. How do you package it in something that in 10 minutes somebody will understand, feel good about and be able to visualize? And that's something we've spent a lot of time figuring out. And not to say too much about it, but we figured out a way to essentially visualize what an attack looks like, so that it's accessible to the layman and even useful for those security officers that are always thinking a little bit abstractly.
Eiman Ebrahimi
>> Yeah, that's excellent. Awesome.
Nicolas Dupont
>> So that's something we're pretty happy about.>> Thanks. Breakthrough.
Anuj Jaiswal
>> I'm actually very excited about a new product we launched. The gap we saw in the industry was there was no end-to-end platform available that ensures data security all the way from data ingestion to inference, leveraging confidential computing as a technology. And I'm seeing that a lot of enterprises are actually starting to use it and trying to go to production. So that's a bridge from scientific experiments to production, and I'm definitely very excited about that.>> And there's demand too. They're hungry for it.
Anuj Jaiswal
>> Absolutely.>> Eiman, what's the breakthrough for you? You got a lot going on.
Eiman Ebrahimi
>> Yeah, I think there are two things that come to mind in the recent past for us, both tied to ecosystem partnerships and to reducing the friction for potential customers getting access to the solution we're delivering. One is that very recently we upstreamed a capability into the open source project vLLM so that language models, or foundation models, can be deployed with the ability to consume the output of our product, which is these randomized re-representations of embeddings that the model can consume readily. That was a big unlock for us because it helps a lot of our inference platform partners stand up endpoints that can consume this now-protected data with no barrier at all. They can just pick up that particular version of vLLM and do that. That came through a lot of work we did with the maintainers of the open source project, and it unlocked a lot of things for us. The other has been working with partners on the compute side who know the value that this protection of data brings to all inference platforms. They've been very helpful in basically delivering the data privacy layers that our product generates, so that, again, enterprise customers interested in using this, especially in the open model space, have readily available software they can just deploy. Both of those things boil down to ecosystem partnerships.>> Well, you guys are great. And since you made me think of another question, I'll throw it out there, because this comes up a lot on our podcast. The terms data protection and data security have meant different things, and data protection on the backup and recovery side has even been recast around ransomware. We've seen that. In the gen AI era, are the words different? Is it nuanced? I mean, data protection? You're basically protecting data. That's a security paradigm.
That's not a backup and recovery thing. Data security used to be about defending against threats. But data's data; you're securing data. Are these words obsolete? Are they nuanced? What's your feeling on the semantics? Because I've had this debate: no, that's data security; no, we're data protection. What does that even mean? Is there a recasting of the words and the categories? Because if so, you're going to see market categories open up around this, since you guys are a bit horizontal here. Thoughts on the categorization? Is it just semantics? Anuj, you're smiling. You know what I'm talking about.
Anuj Jaiswal
>> I've been in the security industry for, what, two decades. My observation has been that security is so vast, and there are so many different verticals, so many different areas where you want to ensure your data is actually secured. The way I see security is: how do you reduce your blast radius? By reducing the blast radius, you're basically confining it, whether that's a meter, a centimeter, or a fraction of a millimeter. Most of these tools are provided to enterprises to help reduce that blast radius, so that even if bad actors get access, it becomes even more challenging for them to get inside that blast radius. What we focus on is reducing that blast radius to as minimal as possible. And it becomes even more difficult to go into an application and look for the data when you create a trusted execution environment that is tamper-proof, leak-proof, breach-proof, because at the end of the day, the data is the crown jewel, not the application.>> Encrypted right into the models themselves for consumption.
Anuj Jaiswal
>> Right.
Nicolas Dupont
>> No, I agree with everything you're saying. It's a huge space. We were at RSA together two months ago. You go there, and the number of acronyms, even for somebody in security, is pretty overwhelming. And it's only going to grow. I think those terms, protection and security, are cut from the same cloth, but right now we're going through a generational shift where we're increasing the potential attack surface by an order of magnitude. It's no longer that your data resides here, in this silo, in this data lake. Now your data's everywhere. And so, to that point, we need technologies that can homogenize and provide security at a granular data level, wherever that data is.>> Yeah. Eiman's pointing out that developers love these models because they're enabling value, right? So that's security. So model security again.
Eiman Ebrahimi
>> Yeah, I think security for enablement is a concept that's becoming more and more relevant. Meaning you're enabling business outcomes, you're enabling better ROI, both on the return side and on the lower-investment side, better total cost of ownership of infrastructure, et cetera, by way of better security. And I think that's a paradigm that seems new compared to how a lot of security was thought of before, which was focused on just preventing bad things from happening. Preventing bad things from happening now has a different side to it: what can we enable by way of doing that?>> Yeah, and AI can help there too. Gentlemen, thank you so much for the time. We went over a little with a bonus question there. Thanks for coming on. Appreciate it.
Eiman Ebrahimi
>> Thanks for having me.
Anuj Jaiswal
>> Thanks for having us.>> I'm John Furrier of theCUBE. We are here for the robotics and AI leaders. Three days of coverage, unpacking all the latest from the mixture of experts here on theCUBE and NYSE Wired. Thanks for watching.