Omar Khawaja, chief information security officer of Databricks Inc., joins theCUBE’s John Furrier at the Databricks Data + AI Summit 2025. Their discussion centers on AI security, executive strategy and the tension between rapid adoption and responsible oversight.
Khawaja draws on insights from over 1,800 executive conversations, outlining the divide between acceleration-minded innovators and risk-aware leaders. He discusses the role of the Databricks AI security framework in helping enterprises strike a balance between progress and protection.
>> Welcome back everyone to theCUBE's live coverage here in San Francisco for Databricks' Data + AI Summit. I'm John Furrier of theCUBE. Security is always a top concern, data security, hottest sector in the industry. Omar's here. Field CISO, VP of security at Databricks. We're going to break it down, the myths, the legends, and also the dogma around security and GenAI. Omar, thanks for coming on theCUBE. Appreciate it.
Omar Khawaja
>> My pleasure.>> How many customers have you talked to in the past year or so? I mean, you're out talking the top enterprises. What's the ballpark number?
Omar Khawaja
>> I've probably talked to over 1,800 executives at our customers and prospects in the last year alone.>> So the keynotes today, the products, again, in typical Databricks form, on top of last year they've added to the platform; the evolution is getting better and stronger. Some great surprises. I love the OLTP. That was a nice native. We're going to dig into that for sure.
Omar Khawaja
>> Yeah.>> The apps and the vibe coding, turning into AI engineering, Jonathan Frankle was on theCUBE talking about that. But also I just love Agent Bricks because I think that highlights the business value and the work of simplifying. Okay, great. The other part of the keynote that was great, besides the Anthropic CEO who basically gave a great master class on some of the context around coding and whatnot, was Jamie Dimon.
Omar Khawaja
>> Yes.>> Okay. Jamie Dimon was on theCUBE, JPMorgan Chase. They're well known. They have about a $12 billion IT budget or technology budget a year.
Omar Khawaja
>> Wow.>> They do trillions of dollars in transactions. He's well versed, but he's a leader. He's saying, "Lean into AI or you're screwed." That's my word. He didn't say screwed, but he actually was pretty much saying that. That was what happened. But most groups don't have that. So everyone sees agents as an immediate unlock of value. What are customers thinking right now? How do you, one, talk to them? What's their orientation? Where are they on the attitudinal scale? Are they looking at the solution? Are they acquiescing? Are they indifferent? Are they more enthusiastic? How would you peg the general enterprise market?
Omar Khawaja
>> Yeah. I'd say when I speak to executives in particular, there are those that are gung ho about AI and they want AI to solve their problems, like the Jamie Dimons of the world. And then there are those that are like, "Are you sure? Do we really want to do that?" "Hold on, slow down," or "No." And so that first group I refer to as the gas people and the second group are the brakes people. And on the one hand, it can be very instinctive to say, we need more gas people and fewer brakes people. But what I say to the gas people when I meet with them, because they're like, "Finally you're telling it how it is." And I'm like, "Hold on, let me tell you the whole story." And I say to them, "Imagine if I gave you the keys to a Lamborghini and a ticket to go to Germany and drive it on the Autobahn, but I told you this car was brand new. There's only one modification we made in the car. We removed the brakes. How fast would you want to go in that car?"
And so you need the "brake people", whether it's security, whether it's compliance, legal, privacy, ethics, all of these people we need. Because if we don't have some of those controls and those governors, we're likely going to end up in a ditch and the positive outcomes are not going to eclipse the negative adverse unintended outcomes. And so my view is, and what I see the most successful organizations doing is they figured out how to get the brakes and the gas to work together. And more often than not, in every organization you have both of these people, but very, very seldom is there alignment between them. It's almost as if they're both in two separate vehicles. And when the gas people are trying to go fast, the brake people are engaging the brakes. That's not how it works. When you have that alignment, when you know that this is a low risk use case, I'm going to go fast, please don't engage the brakes, that makes sense. But if I'm about to do something high risk, there's rain coming up, there's a curve, there's pedestrians, I absolutely want the brakes to be engaged.>> Got to slow down. Again, that's situational awareness. You know when to put the pedal to the metal. If it's straight and narrow, go hard, go on, go fast. And this is where I think we were talking before we came on camera around some people just like, "No, we have to have a high bar and worst case scenario." But there are use cases that get you into AI that are straight and narrow, you can go fast. And there are curves and there are guardrails. We hear that a lot. So okay, there's enough elements in the market. Okay. So you believe that we're in a good spot now to have guardrails, brakes, and gas. But the key is situational awareness.
Omar Khawaja
>> The key is exactly that. And what I share with folks is, what you really have to identify is what do those scenarios and situations look like and how do you characterize them where you feel like you're going to be safe? And I've got three kids, and when they were young, I was hyper focused on making sure that they were safe, if they were at the pool, if they're crossing the street, if we're in the grocery store. But the one place I didn't worry about them is when we went to Chuck E. Cheese. Because at Chuck E. Cheese, you have to try really, really hard to hurt yourself. And so how do you give people those sandboxes to play in and say, "Go do AI, you're not going to be able to hurt yourself. There's no production data, there's no connectivity to production systems. There's no ability to exfiltrate data. Go do whatever you want, learn to your heart's content."
But there's going to be other cases that we define, maybe it's based on the data, maybe it's based on the connectivity, maybe it's based on the users to say, "For these, you're going to need more controls." But if you don't have those patterns defined by risk profile, and you don't have controls identified, and the most important thing is SLOs. So for the lowest risk use cases, you need to advertise as loudly as possible what the SLO is. And same with the high risk use cases. So the business says, "I want to figure out how we can deliver this use case and make it a low or a medium," because that gets activated and approved in a month or two versus this other one can take six to eight months.>> It's a classic case of knowing the risk versus no risk. Balancing the innovation and the posture together. You don't want to foreclose the innovation to maintain a posture. So it's a balance.
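The tiering Khawaja describes, classify each use case by its risk signals, then attach controls and loudly advertise an approval SLO per tier, can be sketched in a few lines. Everything below (the three signals, the tier names, the SLO numbers) is an illustrative reading of the conversation, not an actual Databricks process:

```python
# Illustrative sketch of risk-tiered AI use-case intake. Tier names,
# signal attributes, and SLO numbers are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    uses_production_data: bool
    connects_to_production: bool
    external_users: bool

def risk_tier(uc: UseCase) -> str:
    """Classify by the signals mentioned above: data, connectivity, users."""
    signals = sum([uc.uses_production_data,
                   uc.connects_to_production,
                   uc.external_users])
    # 0 signals -> low, 1 -> medium, 2 or 3 -> high
    return ["low", "medium", "high", "high"][signals]

# Advertised approval SLOs per tier, in days (illustrative:
# "a month or two" for low/medium versus "six to eight months" for high).
APPROVAL_SLO_DAYS = {"low": 30, "medium": 60, "high": 240}

sandbox = UseCase("internal prototyping", False, False, False)
prod_agent = UseCase("customer-facing agent", True, True, True)

print(risk_tier(sandbox), APPROVAL_SLO_DAYS[risk_tier(sandbox)])      # low 30
print(risk_tier(prod_agent), APPROVAL_SLO_DAYS[risk_tier(prod_agent)])  # high 240
```

The point of the sketch is the one Khawaja makes: once the tiers and their SLOs are published, the business can deliberately redesign a use case to land in a faster tier.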
Omar Khawaja
>> Yeah.>> All right. So let's get into the Databricks application, I mean the AI security framework.
Omar Khawaja
>> Yeah.>> This seems to be the template or blueprint. How would you describe what that is? What is it? What is the-
Omar Khawaja
>> Framework....>> The Databricks AI security framework?
Omar Khawaja
>> Framework or recipe.>> The framework.
Omar Khawaja
>> Yep.>> Is it a way to navigate through or is it just more of a scorecard?
Omar Khawaja
>> Hang on.>> Take it.
Omar Khawaja
>> That's exactly it. So the knee-jerk and the instinct for most governance functions, maybe with the exception of legal is, "Here's all of the controls, here's all of the guardrails, and I need to implement them all of the time." Versus saying, "I'm going to do the hard work to identify the use case and determine which risks I care about and which guardrails I need to therefore implement to mitigate the risks." What the Databricks AI security framework does, is it starts by defining what AI is. That seems to be an area that is challenging in and of itself, because if you don't know what the system is, how are you going to be confident in protecting and managing risk around it? So we define AI as being made up of four subsystems, and then being made up of 12 components. Across those 12 components, we then identify 62 risks. And then we identify 64 controls mapped to those risks. So now if I know what my AI system looks like, I can start to say, "You know what? For this use case, I really care about the training data. For this use case, I really care about how we're evaluating the model. For this use case, I really care about the catalog. For this use case, I really care about the inference request or the inference response." But now we've got a mechanism and an architecture for saying, "Which of these do I care about?" And then for each of those, I can then double-click and say, "These are the risks and these are the controls." I no longer have to boil the ocean when I need to do AI.>> Omar, this is what Jonathan Frankle was basically saying about the LLM judges, in different context, but it's the same issue of evaluation. You got to do the work.
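The shape Khawaja lays out, components that carry risks, and risks that map to controls, works as a scoping lookup: you only implement the guardrails for the components your use case touches. The entries below are invented placeholders (the real framework enumerates 12 components, 62 risks, and 64 controls); only the scoping idea and component names like training data and inference come from the interview:

```python
# Toy sketch of the component -> risks -> controls scoping described above.
# All dictionary entries are illustrative placeholders, not actual DASF items.
RISKS_BY_COMPONENT = {
    "training_data": ["data_poisoning", "sensitive_data_leakage"],
    "model_evaluation": ["unvalidated_model_quality"],
    "catalog": ["ungoverned_model_assets"],
    "inference_request": ["prompt_injection"],
}

CONTROLS_BY_RISK = {
    "data_poisoning": ["validate_data_lineage"],
    "sensitive_data_leakage": ["classify_and_mask_pii"],
    "unvalidated_model_quality": ["pre_deployment_evals"],
    "ungoverned_model_assets": ["central_model_registry"],
    "prompt_injection": ["input_filtering", "output_moderation"],
}

def controls_for(components: list[str]) -> set[str]:
    """Scope controls to only the components this use case touches,
    instead of implementing every guardrail all of the time."""
    controls: set[str] = set()
    for comp in components:
        for risk in RISKS_BY_COMPONENT.get(comp, []):
            controls.update(CONTROLS_BY_RISK[risk])
    return controls

# A use case that only touches training data and inference:
print(sorted(controls_for(["training_data", "inference_request"])))
# -> ['classify_and_mask_pii', 'input_filtering', 'output_moderation', 'validate_data_lineage']
```

This is the "no longer boiling the ocean" move: the mapping does the hard work once, and each use case inherits only the risks and controls it actually needs.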
Omar Khawaja
>> Yes. There's no shortcut.>> If you want the benefits, you got to... I won't say grind it out. It's just good work. You've got governance, you've got compliance, you got regulations. These are not checkboxes. These are mandatory control points on how AI can scale reliably.
Omar Khawaja
>> Yes. Yes.>> Am I getting that right?
Omar Khawaja
>> Yeah. You're spot on. And the thing that it requires, which is really, really tough for some technology and security teams to do, is it requires engagement with the business. It can't be, "Here's a list of controls, go implement it. When it's ready, then it's ready." It has to be, "We've got to dig in, we've got to roll up our sleeves, we've got to understand what the business is trying to do," and then commensurately say, "These are the risks, and this is what we need to address." I've had multiple CDOs talk to me over the last couple of days about how the CISO basically, anytime something comes up says, "If we don't do this, it's an existential risk and I'm going to get fired." That's not helpful.>> There's an old cliche, it's kind of pejorative, no-op. That's defining a person who's a no-op or not really functional. But then there have been departments that have been no-op. Security has been like, "No, whoa, whoa. Brakes."
Omar Khawaja
>> Yeah.>> The no.
Omar Khawaja
>> Yeah.>> Always saying no. No, then verify. Not trust and verify. So now Jamie Dimon brings up the point where his organization's pretty hardcore, but he's saying, "Identify the situations where you can have controls, do the work and get it done." And then say no at the right time. So versus a hard no, which is like a default, a no-op. No one wants to be a no-op in this world.
Omar Khawaja
>> Oftentimes what we do is we come up with the treatment plan without diagnosing the condition. So if I make everyone get chemotherapy, because the worst thing that you can have is cancer, that doesn't really help you. That actually only hurts you. And the best security programs, this is what they do. They understand what the diagnosis is, and then they treat it accordingly. I don't need to jump to chemo for all. I don't need an MRI for everyone. Sometimes it's as simple as get more sleep or go to physical therapy.>> I sometimes don't get it. I mean, to me it's very obvious. I know you got a hard job because you got to come into the security teams. I mean, you got a fun job, but it's still challenging. Because you got to get everyone into the threshold. You've got production workloads. But right now we're seeing really good adoption in the area of coding and sales and marketing, because those data sets aren't mission-critical. But as you get into the mission-critical workload, you start to see you can get your sea legs, if you will, your feet wet in those areas. And then the data analytics teams have plenty of dashboards, they have plenty of work there. So those three departments seem to be where the success is. How do you see companies taking that leverage and getting into production? Because that's the number one question we see right now is, not enough production workloads. What is the playbook? What best practices can you share around doing that?
Omar Khawaja
>> I'd say every single reason I hear that organizations are reticent to move to production is about risk. Risk of poor quality, risk of exploding costs, risk of insecurity, risk of privacy, risk of brand damage, risk of hallucination, risk of something. What we did is, based on literally hundreds of conversations, we boiled it down to something specific and we said, "What are the reasons that organizations aren't moving forward?" And we boil it down to actually seven different things. So one is they don't have a mental model for what AI is. Too many are copy pasting the deterministic systems that they know and understand, and assuming AI is the same. It's not true. AI is not like a deterministic system that we've been accustomed to for 30 years. I wish it was, it would make my life easier. I could sound like more of an expert. And I tried that initially, that AI is just another application. You give it an input, it gives an output. We've been doing this for 30 years. That's not AI. AI is different. But the next thing is, you have to then be able to determine what are the roles and responsibilities, who's going to do what. And if you assume it's just like the application world, it's not going to work because where do you put the data scientists? Where do you put the data engineers? Where do you put the data team? Where do you put the ML engineers? So you've got to have an operating model that then aligns to it. Then you've got to have a comprehensive list of risks. If you don't know what risks you're worried about, the easiest thing to say is, "No, just don't do it. Stay on bed rest because I don't know what's going to happen." That's coming from a source of ignorance. We want people to be making decisions from a source of enlightenment. So, "Here are the risks, here are the architectural deployment models and the patterns and the risks associated with these threats.">> You're so right on. Omar, you're so right on. 
First of all, the other comment I would just add to that, and I'm agreeing with you 100%, is that companies, just from the data from theCUBE and our CUBE Research, have got to get the muscle. If you don't have the muscles, like an athlete, if you don't work it, you become more than ignorant, you become basically not in shape. You're not even ready. Your data readiness may be there. But if you're not working the AI...
Omar Khawaja
>> Yeah. And in many ways some of these lower risk internal facing use cases that you mentioned, like marketing and others, those are the place to build muscle. You can't wait for game day to build muscle. So by the time you get to the higher risk use cases, you've already built some of that muscle, which needs to be done enterprise-wide, not just in a single department.>> Omar, thanks for coming on theCUBE. Appreciate it. Love what you do. What's your goal second half of the year? Obviously now the cat's out of the bag, the products are out there. Sounds like you're going to have a very busy second half. What's your goals?
Omar Khawaja
>> It will be. It's going to be a lot of discussions with... My goal is how do I get the executive team aligned, versus them firing at different rates? They should be like a crew team. They should be perfectly synchronized. Every CDO, every board director, every CIO, CTO, CISO I meet with, that's what they're longing for and that's what I hope to help along with my peers at Databricks.>> A good crew team has good, diverse talent. They row in the same direction. They have good harmony.
Omar Khawaja
>> Yes. Yes, that's key.>> AI in the boat. Get in the boat.
Omar Khawaja
>> Get in the boat.>> Get in the boat.
Omar Khawaja
>> Get in the boat.>> Thanks so much.
Omar Khawaja
>> Hey.>> Appreciate it.
Omar Khawaja
>> What a pleasure. Thank you so much.>> Pleasure. Okay. We are live here, theCUBE coverage. I'm John Furrier. We'll be back after this short break.