We just sent you a verification email. Please verify your account to gain access to
RSA Conference 2024. If you don’t think you received an email check your
spam folder.
In order to sign in, enter the email address you used to registered for the event. Once completed, you will receive an email with a verification link. Open this link to automatically sign into the site.
Register For RSA Conference 2024
Please fill out the information below. You will recieve an email with a verification link confirming your registration. Click the link to automatically sign into the site.
You’re almost there!
We just sent you a verification email. Please click the verification button in the email. Once your email address is verified, you will have full access to all event content for RSA Conference 2024.
I want my badge and interests to be visible to all attendees.
Checking this box will display your presense on the attendees list, view your profile and allow other attendees to contact you via 1-1 chat. Read the Privacy Policy. At any time, you can choose to disable this preference.
Select your Interests!
add
Upload your photo
Uploading..
OR
Connect via Twitter
Connect via Linkedin
EDIT PASSWORD
Share
Forgot Password
Almost there!
We just sent you a verification email. Please verify your account to gain access to
RSA Conference 2024. If you don’t think you received an email check your
spam folder.
In order to sign in, enter the email address you used to registered for the event. Once completed, you will receive an email with a verification link. Open this link to automatically sign into the site.
Sign in to gain access to RSA Conference 2024
Please sign in with LinkedIn to continue to RSA Conference 2024. Signing in with LinkedIn ensures a professional environment.
>> Welcome back to theCUBE's coverage RSA, my Co-host, Shelley Kramer. Great to see Great to be with you. We're talking 000 people. I mean, people are starting to roll in. Four days, wall-to-wall coverage. theCUBE's going to be here at Moscone West. Jack Berkowitz is here, CUBE alum. He's the chief data officer now at Securiti, spelled with a T-I. Jack, welcome back to theCUBE. Good to >> see you, Dave. Good to see >> Yeah. Last time we talked was in 2022, we were at Snowflake Summit. Data and security are coming together, right? That's sort of the real theme here and the premise of security, the company. Tell us why you went from being practitioner, now you're in the vendor side, still a practitioner, but why'd you make that move?>> Yup. Really two things. Well, as a practitioner for five and a half years as an enterprise CDO, seeing that was there, at my last company at ADP, we were on top of one of the biggest data sets in the world. That's why we were working with Snowflake, working with a lot of other companies as well. And guess what? We had nation-state issues that we had to deal with. We had normal data leaks that we had to deal with. We had coordination issues that we had to deal with. Actually it was those seams, those coordination between me, the chief security officer, privacy officer, operations, all that coming together was a problem for us. And so I turned out to be a customer of and it helped us solve not just the coordination and the finding of information, but that coordination problem. Once you solve that coordination problem, you can lock things down. So, at the next phase of my career, I was like, Well, where do I want to spend my time?And I wanted to spend my time working on data problems, working with other CDOs and chief security officers. That was what was interesting to me.>> You got a cool demo on your website, actually. Checking out the website's, securiti.ai. It's Securiti, T-I, with i.ai, and you've got the public cloud, which has become the first line of defense. You've got data clouds, you a little Snowflake logo, you got private clouds, you got SaaS clouds, you got the public clouds. Of course, the big three. So, your focus, you're saying, is the data flow between all those estates and protecting the seams in between. Is that correct?>> That's it. A hundred percent. So, any big company right now has a hybrid situation. Nobody is a hundred percent in Microsoft, nobody's a hundred percent in Google, nobody's a hundred percent in AWS. You have your own data centers. You have all of this information flowing around, and when you want to take a look at it and you want to say, Hey, wait a second, where is my customer information? Where's my PII information?Or better yet, How is this data being duplicated? How many copies of this data do we have?Try to find that today. And so we're doing at is building what we call a data command graph that shows all of that information in context. And it's not just where the data is, but what's the sensitive data encompassed inside of that? What are the regulations? How do the regulations come into play? How is your security posture into play with that? And now what we're talking about, which we just announced last week, how do LLMs start to play into all of that as well? So, itreally trying to get your hands on that complexity that's >> data command graph. Data command graph is a visual knowledge graph. Is a hundred percent. So, we have a set of graph representations that allow you to represent not just the data and the flow, but also business processes. Like I said, the regulations or the policies, different countries around the world, different localities inside the US, all that together and you can visualize it. And so you can say, Hey, let me see where is this information? Let me see the people reflected in that information in people graph. Let me see who has access to information through a data access graph.All of that put together in one context.>> Do you consider this like a governance solution? Is it a posture management solution?actually all of that, right?>> That's what >> Yes. Yes. It's this idea that if you thought about those things as separate pieces, you're going to have disconnects. This way, you have one continuous view. I happen to be doing posture management. I happen to be doing data governance. I happen to be doing AI governance. What's the difference to me? Right? I'm a chief data officer, I need to see it all. And so I don't want to go to different tools. I just want to see one continuous view.>> So I have a question because some of our... We did some research in partnership with ESG, our research partner recently, and I think we went into that research kind of >> ETR.>> I'm sorry, ETR. >> too. >> Yes, ETR, our research partner. Yes. Good catch there. I think one of the things we went in expecting to see is that we would see more of a consolidation of vendors and tools in the market. And that's actually not what's happening. What's happening is that people are adding to their security vendors and they're using solutions. They're looking for best-in-class solutions and moving away from the concept of just one platform is the best solution. So, that's some of the mindset that's in the market right now. How do you combat that? Because what you're saying, we just had this conversation. Yes, yes, yes.You solve for all these things. How do you go into a sales pitch with a prospective customer? Do you get a foot in the door and show them some key capabilities and then hope to sell them more as you go What's your strategy there? How does that >> Well, there's two sides to it, right? Obviously we would love to have a good footprint inside of a customer for sure. But there are some best-of-breed capabilities. Everything we've built, including all of our system internally, is all based on APIs. And so we can integrate and actually push either up or down information to those other tools. I'll give you good example. We'll work with Lacework on certain >> aspects Soon to be Wiz, or maybe not.>> Or maybe not, right.>> We work with some of the data catalog vendors. So, if somebody wants to have a data catalog, we can sense the data in real time we can update the data catalogs in real time. We can push down security to Snowflake or to Databricks. And so I think it's an understanding as to where the edges are of all these best of breed things, but you still need to bring those together. Asking either a security group or a data group to integrate a whole bunch of the tools->> That's tricky.... How much integration do they want to do versus actually doing their job about security?>> So, you're cloud agnostic.>> Yeah.>> You're data platform agnostic.>> That's right. And it'll actually run inside of on-prem databases as well. we can scan inside of on-prem data centers. Sorry. Yeah.>> So no, I get it. Historically, that would just be a lot of, and it still is, I'm sure, just a lot of roll up your sleeves, do the integrations, just understanding each of those individual platforms. People say, Oh, I'm so sick of talking about LLMs.I'm not, I love to talk LLMs. Things are moving so quickly. You mentioned, you just made an announcement. What is that news? Tell us we already have connectors to over 400 systems. We're already detecting all of this data inside of these systems to understand if it's PII data or data that you don't want to move across. One of the biggest barriers to using the LLMs right now is companies are nervous. What information am I actually putting across that? Even regulatory, is that a processor, a sub-processor? All of it gets into play. What we've announced are a series of firewalls both for query as well as retrieval that protect that information in context. So, we are actively understanding the data as it's flowing from those source systems all the way across to the LLMs and then back out. As well as, let's say you're using a vector database to store things. What are you actually protecting once you bring it out? So, LLMs that sit in that entire pipeline process is what we've announced.>> So, you're ensuring, you're kind of putting a wrapper around the data to make sure that that data doesn't leak, that data doesn't to LLM vendor or any other vendor that's not supposed to see it.>> That's right. That's right. I mean, we just see, you see these examples just last week or the week before in Austria, the first lawsuit about somebody able to extract private information directly out of one of the LLM vendors. By just constructing a proper query, they were able to get that information out, and lawsuit's going to entail about that. So, we're really about allowing corporations, allowing smaller companies to be able to use these LLMs with a degree of confidence.>> Help me understand this, because sometimes I get confused because every vendor, you know this, says, We could do it all.And then you get in there and you're like, Yeah, but... I think about AWS Bedrock, right? And the announcements that Swami just put out the other day. Much of it was similar to what we're talking about, even though they've sort of been dealing with LLM leakage for a while. So, is it the case that you pick up... It's just an example, I'm sure there's Azure examples or Google but you pick up where that cloud vendor leaves off? Or is it the case where the shared responsibility model requires you to do more for what the cloud vendor is promising. In other words, or the cloud vendor is overpromising and you have to come in and help->> Yeah, >> Or is it case where, okay, the cloud vendor has that covered, but there's all this other stuff in your estate that's not covered and you picked that up >> Yeah, exactly. That's really the part, Dave. Swami and I know each other well, right? I presented on stage with them before and I was an early user of Bedrock, number two or three actually. And so yeah, exactly that. There's aspects of data governance, there's aspects of data security. There's aspects of really just operations inside of a company that Bedrock isn't going to do today.>> On-prem databases, as an example.And also the entire workflow around a company. What is that approved data for legal to say, Those are the documents to use,as opposed to the documents that somebody just happened to drop on an S-3 because you requested them via email. So, that data governance portion of it is something that will pick up and we will help take because>> Amazon has no visibility on that. That's not their purview. That's your responsibility.>> Yeah.>> So, talk to me about LLM sort of arms race that's going on right now. Llama3 just came out, I think, was it last week? I can't keep track of it. Yeah, I think it was last week or two weeks ago. It looks very capable. Snowflake announces Arctic to compete with DBRX, and that's going to be interesting to see if those guys are actually going to fund the LLMs, but they're very specific to their world. Somebody told me the other day, there's 800 large language models now in HuggingFace, which 400.>> It's tripled since I got that->> So, there you go. How do you keep track of all this stuff? What are you seeing in the market? What are your concerns? What are you loving?>> Well, so it's interesting, right? I had been working, my teams had been working with LLMs, the earlier stage, BERT and RoBERTa and those things for years. Then 18 months ago, I'm sitting, meeting with VC saying, Tell me what's coming,and the next day, the OpenAI announcement came out. I knew, I had about 12 hours warning. It's the arms race and it's wide open, and I think it's confusing for people. I think it's confusing because what does it really mean If I have a context window now, going from 200,to a million, what does it actually mean to somebody? What does it actually mean to tune an LLM because I happen to be a hospital system and I need medical codes? So, I think a lot of it's about confusion right now in the market. What we see is a lot of people wanting to maybe not lock in one way or another. And that's why flexible, keeping your data flexibly aligned with those things is important. Your ability to switch LLMs if you need to, your ability to keep your data secure for yourself, your ability to configure your operations dynamically is important. I think it'll be interesting. I don't think there's enough market for all of these companies to come out on the other side. At a certain point, you're going to see a pull-back in the spend because there's other things that IT, other things that security to spend money on. And what you'll see right now a lot of money shifted. I talked to a lot of friends. I mean, we have friends in the business. Wait a second, I had a bad quarter, but this other guy had a great quarter.It'll balance out, I think.>> How are you using LLMs? How do you decide which LLMs to apply where?>> That's a great question. One of the things that I believe in strongly is actually measuring the outputs, and we're going to start to see these capabilities emerge in summarization. So, a big use of LLMs right now is in call centers, summarizing a call on behalf of people. Well, there's actually a summarization metric called ROUGE metric. Instead of giving me these strange precision about passing an SAT score, what's your ROUGE metric for a test set? That would be super interesting to see. So, what we talk to our clients about is making sure that you're measuring constantly as you move forward and define, it's a classic thing for machine learning. Doesn't change for LLMs. What's the definition of good, right? We used to optimize models to, there's a good point. Well, there's a good point on these things too, right? Why do I think that? I just saw JP Morgan is going to build an index GPT.>> Yeah, I saw that.>> Maybe that was yesterday or last week. How do I know that's good versus the Bloomberg model? I don't. So, some way of measuring is important.>> Okay. How about open source? Are you using open source LLMs? And are you concerned about some of the... You guys are smaller, so maybe they're designed for ByteDance, using it or Google, but are you concerned about some of the terms being restrictive in open source?>> Well, I think the terms being restrictive is an issue. Supply chain attacks is even a bigger issue. And so one of the things that we do inside of security, we're not doing the actual scanning of the models, but we actually will expose those model scans as model cards for people using our system. so our system's flexible. You can pick OpenAI, you can pick HuggingFace, you can pick any of the different LLMs to use, but while you're doing that in your firewall, we'll also expose some of those metrics so that you can take a look at, Hey, are there vulnerabilities here inside of this code?It'll be interesting. The Manchurian Candidate, that's a real thing in LLM world, right? It's a real thing in machine learning, and we need to stay vigilant on you mean by that.>> Well, all LLMs are is just compressed data. That's really what it is. just compressed data. And just like you could have a SQL injection attack from 15 years ago to extract data out, imagine you were to have a prompt attack that could extract information out that was sitting there. Or better yet, or worse yet, imagine you have a prompt attack to instruct the LLM to do something inappropriate, right? And that was seeded in a data set that was trained a long time ago, or even freshly done. If you don't have control of your data, that is a real possibility. And so poisoning attacks is one of the OWASP top 10 security threats. And what we're doing in making sure that you can operate really successfully by encountering and taking care of those problems.>> It's like a ticking time bomb that nobody knows is there from legacy. It's kind of an off the wall question, but it's relevant to a chief data officer. It's not really directly related to security, but it is in the sense that could simplify things. When you think about the typical data pipeline today, you've got many, many, oftentimes dozens of hyper specialized individuals doing data engineering, data cleansing, analytics, you know the data pipeline very well. Do you see, and it's been built up over the last 10 plus years, and some of the most sophisticated data pipelines in the world have these really highly specialized individuals doing something and they're very dependent upon each other. How do you see AI broadly, not just LLMs and gen AI, but just AI, the AI awakening, how do you see it affecting that really complex data pipeline that kind of grew out of the Hadoop world? Is that whole thing going to get blown away?>> I think there's going to be automation across that pipeline for sure. I mean, one of the capabilities we have in security is we can infer a data lineage. We can actually watch data elements move not just within a system, but across system. So, we have this interesting demo where you can see a CSV going to a Snowflake staging table, going to a Snowflake star, then off to Tableau, and then up, all automated, a hundred percent automated. Well, I had to pay people hundreds of hours to build that by hand, and it was always out of date. If I don't need to do that, if I can just have the system there, then I have a live operating system. Again, to hit the marketing term, I'm in command of my data. It's like the notion of building a blueprint that's continuously updating. And I think that's the promise of AI in those data pipelines.>> It really compresses that end-to-end lifecycle. Jack, it's great having you back.>> Good luck with the rest of the show.>> Thank you.>> All right, keep it right there. Dave Vellante for Shelly Kramer and David Lythicum. right back. We're at Moscone West, the live coverage of RSA right back. You're watching theCUBE.
>> Welcome back to theCUBE's coverage RSA, my Co-host, Shelley Kramer. Great to see Great to be with you. We're talking 000 people. I mean, people are starting to roll in. Four days, wall-to-wall coverage. theCUBE's going to be here at Moscone West. Jack Berkowitz is here, CUBE alum. He's the chief data officer now at Securiti, spelled with a T-I. Jack, welcome back to theCUBE. Good to >> see you, Dave. Good to see >> Yeah. Last time we talked was in 2022, we were at Snowflake Summit. Data and security are coming together, right? That's sort of the real theme here and the premise of security, the company. Tell us why you went from being practitioner, now you're in the vendor side, still a practitioner, but why'd you make that move?>> Yup. Really two things. Well, as a practitioner for five and a half years as an enterprise CDO, seeing that was there, at my last company at ADP, we were on top of one of the biggest data sets in the world. That's why we were working with Snowflake, working with a lot of other companies as well. And guess what? We had nation-state issues that we had to deal with. We had normal data leaks that we had to deal with. We had coordination issues that we had to deal with. Actually it was those seams, those coordination between me, the chief security officer, privacy officer, operations, all that coming together was a problem for us. And so I turned out to be a customer of and it helped us solve not just the coordination and the finding of information, but that coordination problem. Once you solve that coordination problem, you can lock things down. So, at the next phase of my career, I was like, Well, where do I want to spend my time?And I wanted to spend my time working on data problems, working with other CDOs and chief security officers. That was what was interesting to me.>> You got a cool demo on your website, actually. Checking out the website's, securiti.ai. It's Securiti, T-I, with i.ai, and you've got the public cloud, which has become the first line of defense. You've got data clouds, you a little Snowflake logo, you got private clouds, you got SaaS clouds, you got the public clouds. Of course, the big three. So, your focus, you're saying, is the data flow between all those estates and protecting the seams in between. Is that correct?>> That's it. A hundred percent. So, any big company right now has a hybrid situation. Nobody is a hundred percent in Microsoft, nobody's a hundred percent in Google, nobody's a hundred percent in AWS. You have your own data centers. You have all of this information flowing around, and when you want to take a look at it and you want to say, Hey, wait a second, where is my customer information? Where's my PII information?Or better yet, How is this data being duplicated? How many copies of this data do we have?Try to find that today. And so we're doing at is building what we call a data command graph that shows all of that information in context. And it's not just where the data is, but what's the sensitive data encompassed inside of that? What are the regulations? How do the regulations come into play? How is your security posture into play with that? And now what we're talking about, which we just announced last week, how do LLMs start to play into all of that as well? So, itreally trying to get your hands on that complexity that's >> data command graph. Data command graph is a visual knowledge graph. Is a hundred percent. So, we have a set of graph representations that allow you to represent not just the data and the flow, but also business processes. Like I said, the regulations or the policies, different countries around the world, different localities inside the US, all that together and you can visualize it. And so you can say, Hey, let me see where is this information? Let me see the people reflected in that information in people graph. Let me see who has access to information through a data access graph.All of that put together in one context.>> Do you consider this like a governance solution? Is it a posture management solution?actually all of that, right?>> That's what >> Yes. Yes. It's this idea that if you thought about those things as separate pieces, you're going to have disconnects. This way, you have one continuous view. I happen to be doing posture management. I happen to be doing data governance. I happen to be doing AI governance. What's the difference to me? Right? I'm a chief data officer, I need to see it all. And so I don't want to go to different tools. I just want to see one continuous view.>> So I have a question because some of our... We did some research in partnership with ESG, our research partner recently, and I think we went into that research kind of >> ETR.>> I'm sorry, ETR. >> too. >> Yes, ETR, our research partner. Yes. Good catch there. I think one of the things we went in expecting to see is that we would see more of a consolidation of vendors and tools in the market. And that's actually not what's happening. What's happening is that people are adding to their security vendors and they're using solutions. They're looking for best-in-class solutions and moving away from the concept of just one platform is the best solution. So, that's some of the mindset that's in the market right now. How do you combat that? Because what you're saying, we just had this conversation. Yes, yes, yes.You solve for all these things. How do you go into a sales pitch with a prospective customer? Do you get a foot in the door and show them some key capabilities and then hope to sell them more as you go What's your strategy there? How does that >> Well, there's two sides to it, right? Obviously we would love to have a good footprint inside of a customer for sure. But there are some best-of-breed capabilities. Everything we've built, including all of our system internally, is all based on APIs. And so we can integrate and actually push either up or down information to those other tools. I'll give you good example. We'll work with Lacework on certain >> aspects Soon to be Wiz, or maybe not.>> Or maybe not, right.>> We work with some of the data catalog vendors. So, if somebody wants to have a data catalog, we can sense the data in real time we can update the data catalogs in real time. We can push down security to Snowflake or to Databricks. And so I think it's an understanding as to where the edges are of all these best of breed things, but you still need to bring those together. Asking either a security group or a data group to integrate a whole bunch of the tools->> That's tricky.... How much integration do they want to do versus actually doing their job about security?>> So, you're cloud agnostic.>> Yeah.>> You're data platform agnostic.>> That's right. And it'll actually run inside of on-prem databases as well. we can scan inside of on-prem data centers. Sorry. Yeah.>> So no, I get it. Historically, that would just be a lot of, and it still is, I'm sure, just a lot of roll up your sleeves, do the integrations, just understanding each of those individual platforms. People say, Oh, I'm so sick of talking about LLMs.I'm not, I love to talk LLMs. Things are moving so quickly. You mentioned, you just made an announcement. What is that news? Tell us we already have connectors to over 400 systems. We're already detecting all of this data inside of these systems to understand if it's PII data or data that you don't want to move across. One of the biggest barriers to using the LLMs right now is companies are nervous. What information am I actually putting across that? Even regulatory, is that a processor, a sub-processor? All of it gets into play. What we've announced are a series of firewalls both for query as well as retrieval that protect that information in context. So, we are actively understanding the data as it's flowing from those source systems all the way across to the LLMs and then back out. As well as, let's say you're using a vector database to store things. What are you actually protecting once you bring it out? So, LLMs that sit in that entire pipeline process is what we've announced.>> So, you're ensuring, you're kind of putting a wrapper around the data to make sure that that data doesn't leak, that data doesn't to LLM vendor or any other vendor that's not supposed to see it.>> That's right. That's right. I mean, we just see, you see these examples just last week or the week before in Austria, the first lawsuit about somebody able to extract private information directly out of one of the LLM vendors. By just constructing a proper query, they were able to get that information out, and lawsuit's going to entail about that. So, we're really about allowing corporations, allowing smaller companies to be able to use these LLMs with a degree of confidence.>> Help me understand this, because sometimes I get confused because every vendor, you know this, says, We could do it all.And then you get in there and you're like, Yeah, but... I think about AWS Bedrock, right? And the announcements that Swami just put out the other day. Much of it was similar to what we're talking about, even though they've sort of been dealing with LLM leakage for a while. So, is it the case that you pick up... It's just an example, I'm sure there's Azure examples or Google but you pick up where that cloud vendor leaves off? Or is it the case where the shared responsibility model requires you to do more for what the cloud vendor is promising. In other words, or the cloud vendor is overpromising and you have to come in and help->> Yeah, >> Or is it case where, okay, the cloud vendor has that covered, but there's all this other stuff in your estate that's not covered and you picked that up >> Yeah, exactly. That's really the part, Dave. Swami and I know each other well, right? I presented on stage with them before and I was an early user of Bedrock, number two or three actually. And so yeah, exactly that. There's aspects of data governance, there's aspects of data security. There's aspects of really just operations inside of a company that Bedrock isn't going to do today.>> On-prem databases, as an example.And also the entire workflow around a company. What is that approved data for legal to say, Those are the documents to use,as opposed to the documents that somebody just happened to drop on an S-3 because you requested them via email. So, that data governance portion of it is something that will pick up and we will help take because>> Amazon has no visibility on that. That's not their purview. That's your responsibility.>> Yeah.>> So, talk to me about LLM sort of arms race that's going on right now. Llama3 just came out, I think, was it last week? I can't keep track of it. Yeah, I think it was last week or two weeks ago. It looks very capable. Snowflake announces Arctic to compete with DBRX, and that's going to be interesting to see if those guys are actually going to fund the LLMs, but they're very specific to their world. Somebody told me the other day, there's 800 large language models now in HuggingFace, which 400.>> It's tripled since I got that->> So, there you go. How do you keep track of all this stuff? What are you seeing in the market? What are your concerns? What are you loving?>> Well, so it's interesting, right? I had been working, my teams had been working with LLMs, the earlier stage, BERT and RoBERTa and those things for years. Then 18 months ago, I'm sitting, meeting with VC saying, Tell me what's coming,and the next day, the OpenAI announcement came out. I knew, I had about 12 hours warning. It's the arms race and it's wide open, and I think it's confusing for people. I think it's confusing because what does it really mean If I have a context window now, going from 200,to a million, what does it actually mean to somebody? What does it actually mean to tune an LLM because I happen to be a hospital system and I need medical codes? So, I think a lot of it's about confusion right now in the market. What we see is a lot of people wanting to maybe not lock in one way or another. And that's why flexible, keeping your data flexibly aligned with those things is important. Your ability to switch LLMs if you need to, your ability to keep your data secure for yourself, your ability to configure your operations dynamically is important. I think it'll be interesting. I don't think there's enough market for all of these companies to come out on the other side. At a certain point, you're going to see a pull-back in the spend because there's other things that IT, other things that security to spend money on. And what you'll see right now a lot of money shifted. I talked to a lot of friends. I mean, we have friends in the business. Wait a second, I had a bad quarter, but this other guy had a great quarter.It'll balance out, I think.>> How are you using LLMs? How do you decide which LLMs to apply where?>> That's a great question. One of the things that I believe in strongly is actually measuring the outputs, and we're going to start to see these capabilities emerge in summarization. So, a big use of LLMs right now is in call centers, summarizing a call on behalf of people. Well, there's actually a summarization metric called ROUGE metric. Instead of giving me these strange precision about passing an SAT score, what's your ROUGE metric for a test set? That would be super interesting to see. So, what we talk to our clients about is making sure that you're measuring constantly as you move forward and define, it's a classic thing for machine learning. Doesn't change for LLMs. What's the definition of good, right? We used to optimize models to, there's a good point. Well, there's a good point on these things too, right? Why do I think that? I just saw JP Morgan is going to build an index GPT.>> Yeah, I saw that.>> Maybe that was yesterday or last week. How do I know that's good versus the Bloomberg model? I don't. So, some way of measuring is important.>> Okay. How about open source? Are you using open source LLMs? And are you concerned about some of the... You guys are smaller, so maybe they're designed for ByteDance, using it or Google, but are you concerned about some of the terms being restrictive in open source?>> Well, I think the terms being restrictive is an issue. Supply chain attacks is even a bigger issue. And so one of the things that we do inside of security, we're not doing the actual scanning of the models, but we actually will expose those model scans as model cards for people using our system. so our system's flexible. You can pick OpenAI, you can pick HuggingFace, you can pick any of the different LLMs to use, but while you're doing that in your firewall, we'll also expose some of those metrics so that you can take a look at, Hey, are there vulnerabilities here inside of this code?It'll be interesting. The Manchurian Candidate, that's a real thing in LLM world, right? It's a real thing in machine learning, and we need to stay vigilant on you mean by that.>> Well, all LLMs are is just compressed data. That's really what it is. just compressed data. And just like you could have a SQL injection attack from 15 years ago to extract data out, imagine you were to have a prompt attack that could extract information out that was sitting there. Or better yet, or worse yet, imagine you have a prompt attack to instruct the LLM to do something inappropriate, right? And that was seeded in a data set that was trained a long time ago, or even freshly done. If you don't have control of your data, that is a real possibility. And so poisoning attacks is one of the OWASP top 10 security threats. And what we're doing in making sure that you can operate really successfully by encountering and taking care of those problems.>> It's like a ticking time bomb that nobody knows is there from legacy. It's kind of an off the wall question, but it's relevant to a chief data officer. It's not really directly related to security, but it is in the sense that could simplify things. When you think about the typical data pipeline today, you've got many, many, oftentimes dozens of hyper specialized individuals doing data engineering, data cleansing, analytics, you know the data pipeline very well. Do you see, and it's been built up over the last 10 plus years, and some of the most sophisticated data pipelines in the world have these really highly specialized individuals doing something and they're very dependent upon each other. How do you see AI broadly, not just LLMs and gen AI, but just AI, the AI awakening, how do you see it affecting that really complex data pipeline that kind of grew out of the Hadoop world? Is that whole thing going to get blown away?>> I think there's going to be automation across that pipeline for sure. I mean, one of the capabilities we have in security is we can infer a data lineage. We can actually watch data elements move not just within a system, but across system. So, we have this interesting demo where you can see a CSV going to a Snowflake staging table, going to a Snowflake star, then off to Tableau, and then up, all automated, a hundred percent automated. Well, I had to pay people hundreds of hours to build that by hand, and it was always out of date. If I don't need to do that, if I can just have the system there, then I have a live operating system. Again, to hit the marketing term, I'm in command of my data. It's like the notion of building a blueprint that's continuously updating. And I think that's the promise of AI in those data pipelines.>> It really compresses that end-to-end lifecycle. Jack, it's great having you back.>> Good luck with the rest of the show.>> Thank you.>> All right, keep it right there. Dave Vellante for Shelly Kramer and David Lythicum. right back. We're at Moscone West, the live coverage of RSA right back. You're watching theCUBE.