In this interview from Rubrik’s “Resilience for Everything: Cloud, Identity, AI” series, theCUBE’s Savannah Peterson sits down with Rubrik CEO Bipul Sinha to unpack what “resilience” really means in an era of agentic AI. Sinha explains how Rubrik is approaching AI transformation with guardrails – including visibility into which agents are running, what they’re doing and how to recover quickly when things go wrong – so organizations can move faster without losing control.
The conversation also explores the new risk profile of AI-powered operations: agents that can assume identities, execute business processes and amplify impact at unprecedented speed. Sinha shares why governance, accuracy and cost control become the make-or-break factors when enterprises move from pilots to production, plus what Rubrik is seeing in the threat landscape as AI accelerates attack volume and sophistication.
Zero Hour Horizon Retail: When the Cloud Falls
In this session from the “Resilience for Everything: Cloud, Identity, AI” interview series, Matt Castriotta, field chief technology officer for cloud at Rubrik, walks through an immersive, cloud-focused tabletop exercise grounded in real-world attacker techniques and procedures. Framed around a fictional retail company (“Horizon”), the scenario traces how modern ransomware campaigns exploit identity, misconfiguration and cloud-native services to move at machine speed – turning operational incidents into existential business crises.
Castriotta unpacks...
Keep Exploring
What is the purpose and focus of Rubrik's immersive tabletop exercise?
What is the context and objective of the tabletop exercise involving the fictional company Horizon Retail?
What signs of trouble are being observed in the team's data monitoring?
What was the sequence of events that led to the compromise of the identity provider?
What are the differences between operational recovery and cyber recovery?
What is the current situation regarding the company's financial status and the response to the attack?
What are the risks associated with developers rapidly instantiating resources in the cloud, particularly concerning the handling of production data?
What are the legal and financial implications of paying a ransom in a data breach situation?
What is true resilience in the context of data protection and business continuity?
What strategies can enhance visibility and engagement during data protection presentations or events?
>> Well, thank you everybody for joining. As Megan mentioned, this is Rubrik's immersive tabletop exercise specific to cloud. My name is Matt Castriotta. I am Rubrik's field CTO for cloud. As was mentioned earlier, this is based on true cloud TTPs. TTPs meaning tactics, techniques, and procedures. The actual procedures that attackers use to infiltrate cloud environments is what you're going to see here. This is a fictitious company and this is a fictitious scenario, but these are real tactics that attackers use to infiltrate cloud environments, gained from the experience our ransomware recovery team has in helping customers recover from ransomware attacks. So we've seen a lot of these scenarios and we've decided to put them into this tabletop exercise that you're going to be experiencing here today. As I mentioned, it is a fictitious company and it is a fictitious scenario. The company is Horizon Retail. They're a retail organization that is cloud forward with a multi-country physical store footprint, which is important here in the context of the exercise, with mission-critical deployments across AWS. So this will be very AWS centric. For our customers that are multi-cloud, which is a lot of our customers, we will typically ask them to just convert the service names that they hear here into services that they use in the other hyperscalers. A few things that we ask for you to do, the first is to observe. Observe the gaps in Horizon's strategy and how they ultimately led to their no-win situation. The second we'd ask you to do, Megan already brought this up, is to participate. We're going to ask you to do some read-throughs. Those are going to be for scenes two, three, and four. Lend your voice to the characters, have some fun with it. We have customers and obviously participants that will adopt accents and will really lean into the whole high school drama class part of this thing. So if you're so inclined, bring some of that energy to the table. And then the last thing we would ask you to do, and this is very applicable to our customer base, is that our customer base reflects on what they've heard and brings those learnings back into their organization so that they can improve their own cybersecurity posture. As I mentioned, this is going to run over five scenes. Scene one is going to be a video. Unfortunately, I was not able to get the AV going here, so it's going to be played on this laptop. In post, we will send them the presentation and we'll get it all figured out there. So scene one is going to be a video. Scenes two, three, and four are going to be live read-throughs. You all have a role to play. You have cards in front of you. We would ask that when the role comes up in the script that you start your reading. I will play the role of narrator. So I will act as narrator in this situation. We're going to do key takeaways after every scene. These are learnings that we learned from every scene. I'm also going to pause after every scene for questions. So if you have any questions or you want to know a little bit more about why we structured a scene a certain way, we're happy to answer that for you. And then we're going to wrap with a little bit of a call to action. This would be a call to action to our customer base on how Rubrik can help in this particular scenario. With that, let's dive into scene one.>> The modern enterprise is a marvel of efficiency. Every transaction, every shipment, every customer interaction is a seamless flow of data orchestrated in the cloud.
The business is the application, the database, and the cloud services that run them. But what happens when the cloud that powers it all falls? We are inside the IT operations center, a room that is usually quiet at this hour. The team is starting a shift that is about to become anything but routine. The first sign of trouble appears not as a single critical alert.>> Hey, Alex.>> But as a chorus of discordant data points.>> Can you look at this?>> Yeah.>> The write latency on the primary EBS volumes is through the roof, but read I/O is completely flat. That's weird for a Sunday morning.>> Well, yeah, you're right. It's not a DDoS attack. External traffic is normal. I mean, we haven't even pushed new code since last Friday. It's like a single process just keeps writing over and over. What about the S3 buckets?>> Checking now. Oh, wow. I'm seeing thousands of API calls per minute against the critical buckets. PutObject, PutObject, DeleteObjectVersion.>> Let me see. Let me see. The object names, they're all being rewritten. There's a new suffix. Locked. This is not a runaway script. This is hostile. This is an active attack.>> You're calling early. What's going on?>> We are under active attack. I'm seeing what looks like mass encryption across our core S3 buckets. I'm also getting high I/O alerts from the primary RDS instance now too. I mean, customer facing websites, they're throwing 503 errors. This looks coordinated and it is moving incredibly fast.>> Define coordinated. What are you seeing in CloudTrail? Is it one IAM role being used? Multiple? Is activity coming from a known IP range or something external?>> CloudTrail is flooded. It looks like thousands of unauthorized SSE-C encryption calls originating from within our own VPC. I mean, they aren't trying to pull data out. They're encrypting everything in place. GuardDuty is lighting up with alerts for unusual activity from a single EC2 instance. A QA server I've never even seen trigger an alert before. This is bad. They're inside.>> Start a log dump to a secure isolated account now before they can purge them. Use the out-of-band management network, get the CIO on the line, and initiate a major incident management call for the entire crisis team. I'm on my way in.>> By 6:30 AM, the first members of the crisis team are online.>> Give me the situation report. And blast radius is not an answer. I need application names, business units impacted, and a preliminary assessment of potential data loss.>> It's a sophisticated ransomware attack, and it appears to be centered on our US East 1 region. We've confirmed active encryption on the EC2 instances hosting the web front end, the S3 buckets that hold all our process data for BI and analytics, as well as the primary RDS database that manages inventory and logistics. We've also found a ransom note in the root directory of a dozen servers.>> Let me be clear about what that means. If that RDS instance is compromised, the stores are operating completely blind. The handheld scanners in our warehouses that associates use for picking orders are effectively bricks. The system that optimizes truck routes for daily restocking is offline. The point of sale systems can't even validate gift card balances or process returns correctly. From in-store staffing to deliveries and inventory intake, we have to assume every single digital service that touches a physical store is either down or about to go down.>> Yeah, this isn't just a website outage.
This is a direct decapitating strike against our core retail operations. I mean, the East Coast stores are scheduled to open in less than 90 minutes. I'm activating the full incident response plan. Get the entire crisis team assembled. I need the CISO and the general counsel on a call in the next 15 minutes. We need to decide if we even open the stores.>> Just to reiterate, we can't just reboot the servers. This isn't a hardware failure or simple outage. This is a hostile takeover of our environment. Every move we make has to be deliberate or we could accelerate the damage.
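The mass SSE-C encryption the team spots in CloudTrail here is the pattern Castriotta later ties to the January campaign against S3. One commonly recommended guardrail is a bucket policy that refuses uploads encrypted with customer-provided keys. A minimal sketch in Python with boto3, assuming a hypothetical bucket name and that no legitimate workload relies on SSE-C:

```python
import json
import boto3

# Hypothetical bucket name, for illustration only.
BUCKET = "horizon-retail-prod-data"

# Deny any PutObject request that supplies SSE-C headers. The Null condition
# evaluates to false when the header IS present, so the Deny applies only to
# uploads that use customer-provided encryption keys.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenySSECUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
            "Condition": {
                "Null": {
                    "s3:x-amz-server-side-encryption-customer-algorithm": "false"
                }
            },
        }
    ],
}

s3 = boto3.client("s3")
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```

With a policy like this in place, an attacker holding stolen credentials cannot re-encrypt objects in place with keys only they control, which is the exact move the scene describes.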
Matt Castriotta
>> So some key takeaways from this scene. I think the first is just that chorus of discordant data points, right? You had EBS latency that was high. You had S3 buckets seeing high API calls for reads, for overwrites, for writes into S3. You had the RDS database that was impacted. You had CloudTrail lighting up with a bunch of messages. You had GuardDuty that was lighting up. All of that requires sophistication and coordination, a lot of which our customers don't have. They don't have the expertise to be able to take that chorus of discordant data points and piece them together into a comprehensive solution. I would argue that Horizon Retail did a really good job of that here, of understanding what the scope was. And not only that, understanding how they got in. They mentioned that unpatched QA server was how the attackers actually found their way into the environment. And we're going to learn a little bit more about how they actually got in and what they compromised in the next scene. At the end of the day, cloud attacks are business attacks. We know our customers' environments run in the cloud. We know that business is reliant on the uptime of those services themselves, and it is AWS's responsibility to ensure that the services themselves are performant and available. It is not their responsibility to ensure that their customers' data is protected. That is our customers' responsibility. And when that goes south, the business impact can be substantial. We also know that those encryptions were happening in place using server-side encryption with customer keys, which, by the way, AWS just announced that they're deprecating for S3 because of this. This is Codefinger. This was the attack we saw earlier in the year in January. They are now deprecating the use of SSE-C within S3 altogether. And when that happens, when they start the encryption, that can happen to hundreds of thousands of objects over a period of minutes. How do you recover from that? If you needed to recover from versions of S3 objects, how would you do that at mass scale? And that's really the situation that Horizon is faced with here. A machine-speed attack moving at 10x that has essentially brought their business to a halt. With that, let's hop into scene two. I'll start with the role of narrator, and then like I said, we'll go around the room. Please take on your roles and feel free to ham it up if you so desire. The time is 8:00 AM Eastern. The first stores on the East Coast are attempting to open their doors amidst chaos. The full crisis team, including the CISO, general counsel, and the line of business leader, is now assembled on the emergency MIM call. The atmosphere is thick with tension as the team tries to understand the full scope of the attack.>> Our forensic analysis of the logs has given us a clearer picture. The initial point of entry was social engineering. They called the IT help desk and got credentials for a standard employee account.>> A standard account? How did they escalate from there? Our internal segmentation should have contained that.>> It should have. But from that account, they scanned our networks and found a single unpatched QA server. They exploited that to get access to a misconfigured IAM role attached to the instance. A piece of technical debt we never cleaned up.>> And that misconfigured role, what did it allow them to access?>> Enough to compromise one of our cloud admin accounts.
And with that, they took over our identity provider. It was a chain reaction of human error followed by technical oversight. Once they controlled the identity provider, they could move anywhere. They had the keys to the kingdom.>> So with control of our SSO, they could get to the retail ETL role, the RDS database?>> They did. That role gave them complete control over the RDS databases. They also deployed scripts using SSE-C server-side encryption with their own keys across our most critical S3 buckets. It's a nightmare scenario because it makes recovery from versions or replicas impossible. The forensics also show they were in our environment for weeks, exfiltrating data low and slow the whole time. They didn't just lock our data, they took a copy first.>> I just got off the phone with our regional manager for the Northeast. Customers are abandoning full shopping carts at the registers because the manual checkout process is taking over 20 minutes per person. The media has picked up on it. We have local news vans outside two of our flagship stores. Our store managers are asking for permission to close. The damage to our brand is happening in real time on live television. Based on our hourly revenue rates, we are losing hundreds of thousands of dollars an hour. This is an absolute catastrophe.
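The chain described here, a phished help desk credential, an unpatched QA server, and an overly permissive instance role, is what the later takeaways call the proliferation of overly permissive non-human identities. A rough audit sketch with boto3, assuming credentials with IAM read access; the AdministratorAccess and wildcard checks are illustrative, not an exhaustive permission analysis:

```python
import boto3

iam = boto3.client("iam")

def is_ec2_assumable(role):
    # True if ec2.amazonaws.com appears in the role's trust policy.
    return "ec2.amazonaws.com" in str(role["AssumeRolePolicyDocument"])

def allows_star_action(doc):
    # Flag inline policies with an Allow statement whose Action is "*".
    stmts = doc.get("Statement", [])
    if isinstance(stmts, dict):
        stmts = [stmts]
    for s in stmts:
        actions = s.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        if s.get("Effect") == "Allow" and "*" in actions:
            return True
    return False

for page in iam.get_paginator("list_roles").paginate():
    for role in page["Roles"]:
        if not is_ec2_assumable(role):
            continue
        name = role["RoleName"]
        findings = []
        attached = iam.list_attached_role_policies(RoleName=name)["AttachedPolicies"]
        if any(p["PolicyName"] == "AdministratorAccess" for p in attached):
            findings.append("AdministratorAccess attached")
        for pol_name in iam.list_role_policies(RoleName=name)["PolicyNames"]:
            doc = iam.get_role_policy(RoleName=name, PolicyName=pol_name)["PolicyDocument"]
            if allows_star_action(doc):
                findings.append(f"inline policy {pol_name} allows *")
        if findings:
            print(f"[!] {name}: {', '.join(findings)}")
```

A sweep like this only surfaces the most obvious offenders; the point is that the "misconfigured IAM role attached to the instance" in the scene is findable before an attacker finds it.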
Matt Castriotta
>> The time is now 11:00 AM Eastern. The situation continues to deteriorate.>> We have an update. Our external security consultant is now engaged. Based on the attack tactics and techniques as well as the specific ransom note, our external consultants believe with high confidence that this is the work of Scattered Spider.>> And I can confirm that after consulting with the FBI, we have established contact with this group through the channels they provided. The demand is $4 million payable in Bitcoin. The deadline is three days from now.>> We are not paying. This is what our disaster recovery plan is for. This is why we have backups. Head of cloud operations, report on our recovery posture. The EC2 instances are recoverable from AWS Backup. What about the S3 data?>> That's a core problem. While we do have backups for a portion of our S3 data, for a critical share, we have been relying on versioning and cross-region replication. As you can imagine, versioning is not a backup, and it specifically does not protect against this attack vector. Since the attackers used SSE-C to re-encrypt every version of every object with their own key, the objects are there, but they are indecipherable to us. The replicas in the other regions are just copies of the same useless encrypted objects.>> This is unacceptable. Who signed off on a policy that left our new stores' data so vulnerable?>> It was a decision made during last year's capital allocation review to reduce cloud storage costs across the board. The risk was documented and formally accepted with the understanding that versioning would mitigate accidental deletion. Nobody anticipated an attack of this sophistication.>> How much data are we talking about? How bad is the S3 data loss?>> It's a severe loss. All operational data, POS, sensor logs, and clean BI data sets for our entire fleet of stores opened in the last fiscal year. A full year of our company's growth has been effectively wiped out. It's almost like these stores had never been in operation.
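The head of cloud operations' point, that versioning and cross-region replication are not backups because SSE-C re-encrypts every version and replication just copies the damage, is what the wrap-up later calls making immutability table stakes. A minimal sketch of one such control, S3 Object Lock in compliance mode on a dedicated backup bucket, assuming a hypothetical bucket name and a us-east-1 client (other regions also need a CreateBucketConfiguration):

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket name for the backup copies.
BACKUP_BUCKET = "horizon-retail-backup-immutable"

# Object Lock must be enabled at bucket creation; enabling it also turns on
# versioning for the bucket.
s3.create_bucket(
    Bucket=BACKUP_BUCKET,
    ObjectLockEnabledForBucket=True,
)

# Default retention in COMPLIANCE mode: for the retention window, object
# versions cannot be overwritten or deleted by any principal, including
# an attacker holding admin credentials.
s3.put_object_lock_configuration(
    Bucket=BACKUP_BUCKET,
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
    },
)
```

The retention period here is illustrative; the design point is that copies written to a locked bucket survive for their full retention even if the primary identity boundary is compromised.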
Matt Castriotta
>> The main call ends. The operations analyst leans over to the head of cloud operations.>> My team filed seven tickets over the last year about inconsistent backup validations and coverage gaps in our cloud environment. They were all deprioritized in favor of new feature deployment.>> Those tickets? Document them and keep them ready.
Matt Castriotta
>> Jeff playing a dual role. Yeah. How many times have we heard that happen? New features being deprioritized for ... I mean, quality of life being deprioritized for new features, right? So at the end of the day, I really think that this is an unrealistic scenario on how quickly Horizon has gotten to root cause. This normally can take weeks to understand how they got in, to understand what misconfiguration caused them to get in. Because this runs in an hour, we have to condense this, and therefore they got it in scene two. Now, in reality, they're not going to get it in scene two, right? Normally, this will take an organization weeks to come to root cause. Usually, getting forensic teams in, incident response teams in, it's a process. But ultimately, how did they get in? They got in through social engineering, right? They phished credentials from someone that gave them access into the cloud environment. From there, they got to an unpatched QA server that had a vulnerability and a misconfiguration that ultimately allowed them to access an IAM role that was overly permissive. And we know that overly permissive roles in customers' environments are becoming more and more endemic, especially with AI and the access that AI needs to data. There is this proliferation of overly permissive non-human identities in customers' environments. And this is just another instance of that being exploited. Once they had access to that IAM role, they had access to the RDS databases, they had admin access into S3 data, EBS data, and boom, it was game over. Again, it all comes back to identity. Essentially, once the identity is compromised, everything is compromised. The other thing that we learned in this scene is that operational recovery and cyber recovery are two completely different things. Backup and recovery in the past was always built on the premise of an operational recovery. I have a problem, it's a known problem. It happened at a certain time. I'm going to rewind the environment back to that time and we'll be good to go. Well, that's not how it works in a cyber recovery scenario. And we're going to learn in the next scene how that can really go sideways in a cyber recovery scenario. So in other words, using versioning or other operational recovery techniques to rewind to a known point in time, when the data and the identity are still in that trusted zone, is not cyber recovery and is not applicable in a cyber recovery incident. And the other thing, and this happens at our customers all the time, today's breach was yesterday's budget cut. They made this conscious decision to deprioritize quality of life and securing their environment in favor of new feature development. It's a trade-off that organizations have to make every day. And if management is not in the loop on those decisions, sometimes those decisions can be made in a vacuum and can lead to a catastrophe like we saw here with Horizon. The time is 9:00 PM on Sunday evening. The teams in the IT operations center have been working for 15 hours straight. The initial shock of the attack has worn off, replaced by a grim, bone-deep exhaustion. The mood in the room is heavy with the weight of finding a path to recovery, as every attempt so far has ended in failure.
Matt Castriotta
>> I've mounted the 10th EBS snapshot from our backup vaults. I'm running a deep forensic scan on it now, but it takes nearly an hour to scan each terabyte. At this rate, it'll take us days just to validate the backups for a single critical application. We're looking for a clean needle in a haystack of infect ... Or we're looking for a clean needle in a mountain of infected haystacks.>> Look, I know it's slow, but we have to be certain. They were in our environment for more than a month. They knew our backup retention policies better than we did. They waited until they knew their malware was replicated across all of our recent restore points before they launched the main attack. We're not just fighting the encryption, we're fighting their 30 days of reconnaissance.>> This is a completely manual hit-or-miss process. Our backup tools have served us well for operational recoveries, but I'm only realizing now that they are greatly lacking in dealing with such sophisticated attacks. They were never designed to withstand a direct malicious assault by an attacker with administrative credentials, and they can't tell us which backups are safe. Wait, I think I have something. An EC2 snapshot from two weeks ago, just before we believe the main malware was planted. The initial scans, they look clean. The rootkit signatures aren't there.>> Look, it's a risk. It could contain a time-delayed payload that our scanners are missing, but it's the best lead we've had all day.>> It's the only shot we have. Let's do it. Kick off the restoration process in a new, completely isolated recovery VPC. What's the ETA?>> If we push it and everything goes right, we can have a test environment of our core application up by midnight.
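The mount-and-scan loop described here, restoring candidate snapshots into an isolated recovery environment and examining them one at a time, can be scripted. A minimal sketch with boto3, assuming hypothetical snapshot and forensic-instance IDs and that the scanner instance already lives in the isolated recovery VPC:

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical identifiers, for illustration only.
SNAPSHOT_ID = "snap-0123456789abcdef0"      # candidate restore point
FORENSIC_INSTANCE = "i-0123456789abcdef0"   # scanner in the isolated recovery VPC
AZ = "us-east-1a"                           # must match the scanner's availability zone

# 1. Create a volume from the suspect snapshot.
vol = ec2.create_volume(
    SnapshotId=SNAPSHOT_ID,
    AvailabilityZone=AZ,
    TagSpecifications=[{
        "ResourceType": "volume",
        "Tags": [{"Key": "purpose", "Value": "forensic-scan"}],
    }],
)
vol_id = vol["VolumeId"]

# 2. Wait until the volume is available, then attach it as a secondary
#    device so it is mounted for inspection and never booted from.
ec2.get_waiter("volume_available").wait(VolumeIds=[vol_id])
ec2.attach_volume(VolumeId=vol_id, InstanceId=FORENSIC_INSTANCE, Device="/dev/sdf")

print(f"{vol_id} attached to {FORENSIC_INSTANCE}; mount read-only and scan.")
```

Iterating this over a list of snapshot IDs is exactly the slow, serial validation the engineer is complaining about, which is why the later takeaways stress knowing which restore points are clean before the incident.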
Matt Castriotta
>> The time is now 1:00 AM on Monday morning. For the first time in nearly 24 hours, there is a flicker of hope in the room. On a monitor, the core application is initializing in a clean environment.
Matt Castriotta
>> We're online. The app server is up and the database is connected. We're running the first test transaction now. It worked. For about five seconds, it worked. We were cheering. Wait. Wait, no. The dashboard just died. CPU utilization is pegged at 100% and the file system ... I can see it happening. The files are being re-encrypted right now.>> How is that possible? We scanned the snapshot. It was clean.>> The malware must have been dormant. It was waiting for a specific event, a connection to the database, a specific API call. It was just a time bomb and we just triggered it. Shut it all down. Terminate the instances now.
Matt Castriotta
>> The time is 2:00 AM. The brief moment of hope has been utterly extinguished. On the video conference, the faces of the CFO and the general counsel appear. They look grim. The CIO has to deliver the bad news.
Matt Castriotta
>> I'll get straight to the point. The financial consequences are severe. Our financial modeling for Sunday showed a loss of over a million dollars. Our stock is projected to open at least 8% down, which represents close to half a billion dollars in market cap. Are you now telling us that our only viable backup has just failed and we are essentially back to square one? Time is of the essence here.>> It was a significant setback. The attack is more complex than we anticipated. The team is now working to analyze much older restore points.>> I appreciate that we're all understandably tired and that your team is working incredibly hard. While I value your technical explanation, my primary concern is that the company seems to be in financial free fall, and that's worsening every hour. To put it bluntly, what is the plan here?>> While you formulate one, I must inform you that we have received another email from the attackers. They have seen our failed recovery attempt. They're raising the ransom to $6 million. They feel they have the upper hand and legally they might. The story is now being picked up by national news outlets. We're losing control of the narrative.
Matt Castriotta
>> All right. So a couple things Horizon did right here. First is having an environment, an IRE or isolated recovery environment, to instantiate what ended up being an infected system. The best practice for cyber resiliency for all of our customers is that they not only have their backups in a separate identity domain, so that their backups can't get compromised, and that they're made immutable, that should be table stakes, but also that they have a recovery environment in which they can instantiate potentially infected resources. In this particular instance, this backup ended up being infected and they were worried about that cascading into other parts of their environment. So they actually did that right. The problem is that they had to go through that process of instantiation and scanning, and that process takes time. They assumed they had a good copy because they went back to a time before the attacker detonated a payload. That does not necessarily mean that you have a good copy. The payload can exist in the environment for weeks. It's called dwell time. Attackers usually, during that reconnaissance process, will land malware and will detonate it using a command and control server. And usually it's some sort of API call or something that triggers that. We'll get into what the primary objective of the attackers really is in the next scene. But one of their primary objectives is to compromise your backups, because if they compromise your backups, they have a better chance of getting paid. And that's ultimately what this is all about. The backups were poisoned further back than Horizon Retail had thought, and your "clean" backup could end up being a ticking time bomb. Again, as I mentioned, that malware can be dormant and can live in backups from way before the actual detonation occurred. As a matter of fact, it probably did. And they didn't have the forensic information to be able to figure out how far back the malware went. And ultimately this leads to uncertainty when you go to recover. You're kind of just picking an arbitrary point in time to recover from, and hoping that you don't essentially end up back at square one. The other thing that Horizon didn't do really well here is evict the threat adversary from their environment before they attempted a recovery. They attempted a recovery, and what happened? The attackers saw the recovery attempt and upped the ransom amount. As a matter of fact, we actually will not even ... When we are engaged with customers that have been impacted by a cyber attack, we will not even trust their email communication as being valid. Assuming that that's been compromised, we'll communicate with them over their private email or some sort of private channel. So always assume that everything is at risk, that everything has been impacted, including their backups here. And that was an assumption that Horizon didn't make very well. All right, let's move on to scene four. It is now Tuesday afternoon, more than 48 hours since the attack began. The company's stock has plummeted by more than 10%. The recovery efforts have been a brutal, demoralizing slog. The team now knows the attackers were inside their systems for at least 36 days. The cyber incident response manager has called a critical meeting with the IT and legal teams to address the scariest question of all. What exactly did the attackers take?
Matt Castriotta
>> We need to get to ground truth on our data exposure. The integrity of our S3 inventory is now in question, especially considering all the lost data. The legal team needs an accurate inventory of the compromised S3 objects. Specifically, what type of data was in those buckets? We must know if we're dealing with a data breach.>> I've been running a deep scan of our S3 environment, trying to reconcile our inventory with reality. I'm running a regex search for PII patterns across buckets, manifests, and object names. The problem is many of those buckets don't have data classification tags, so I'm flying blind.>> What are you seeing?>> I'm seeing file names like Q3RewardsMembersFullDump.csv and LoyaltyTestDataProdSample.json. The file names themselves are screaming PII. And worse, our S3 inventory is a mess. It shows 920 S3 buckets in our production environment. My deep scan has found 965. There are 45 buckets that are completely untracked, unmonitored, and unbacked up.
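The analyst's regex sweep over bucket and object names, and the 920-versus-965 inventory gap, can be approximated with a short script. A rough sketch, assuming a hypothetical tracked-bucket inventory and illustrative name patterns; matching names is a crude heuristic and no substitute for real data classification:

```python
import re
import boto3

s3 = boto3.client("s3")

# Hypothetical CMDB/inventory of buckets the team believes it owns.
TRACKED_BUCKETS = {"horizon-retail-pos-data", "horizon-retail-bi-clean"}

# Crude patterns for names that merely *suggest* sensitive content.
PII_HINTS = re.compile(r"(pii|loyalty|rewards|members|customer|ssn|dob)", re.I)

# Reconcile the tracked inventory against what actually exists.
actual = [b["Name"] for b in s3.list_buckets()["Buckets"]]
untracked = [b for b in actual if b not in TRACKED_BUCKETS]
print(f"{len(actual)} buckets found, {len(untracked)} not in inventory: {untracked}")

# Flag object keys whose names hint at personal data. This can be slow and
# costly across a large estate; it is only a first-pass triage.
for bucket in actual:
    for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket):
        for obj in page.get("Contents", []):
            if PII_HINTS.search(obj["Key"]):
                print(f"[?] {bucket}/{obj['Key']} -- name suggests sensitive data")
```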
Matt Castriotta
>> The group reconvenes late Tuesday night with the general counsel and CIO also on the call. The mood is grave.
Matt Castriotta
>> The untracked buckets, they appear to have been spun up for the customer loyalty program that was in development last quarter.>> The customer loyalty program, you managed that project. What was the data schema for those development buckets?>> It was a development environment. It shouldn't have had production data in it, but to test new predictive models, the development team may have used it. They may have pulled a sample from the production customer database.>> I can confirm they did. I've decrypted a single file from the proof-of-life sample the attackers sent us. It's a CSV file from one of those untracked buckets. It contains over one million records with full names, email addresses, mailing addresses, and birth dates. This is PII. This is now officially a data breach.>> I need to pause everyone for a moment. This discovery fundamentally changes our legal position. The moment PII was confirmed, we moved from a business continuity crisis to a major regulatory and legal event. We likely have 72 hours to notify the California Attorney General under CCPA. We have other notification deadlines in every other state and country in which we operate. We need to retain outside counsel specializing in breach notification tonight. We need to establish a budget for credit monitoring for every customer in that database. The cost of this incident just grew by an order of magnitude, and that is before the class action lawsuits begin. Does everyone understand the gravity of the situation?>> Yes, we understand.
Matt Castriotta
>> So as I mentioned, I kind of alluded to what the primary objective of the attackers is. This is the primary objective of the attackers: to find and exfiltrate sensitive data that can be monetized regardless of whether you decide to pay the ransom or not. Most times they'll say, "Well, pay us the ransom and we won't release it." We're taking a bad guy's word for it. And maybe they'll be good on their word, maybe they won't. The other thing that we saw here, and I think every one of you has run into this in the past, is shadow data. We know in the cloud it's very easy to instantiate resources. That's one of the benefits of being in the cloud: rapid development, rapid instantiation. What that means is that when you give autonomy to developers to create things, they're going to create things. And they're going to possibly end up moving production data to those things without properly masking or obfuscating that data before they operate against it. And that leaves your company at risk. Ultimately, you can't protect what you can't see. In this case, they had 45 untracked S3 buckets outside their inventory, some of which contained private information. And as I mentioned, production data in dev is a disaster waiting to happen. It happens all the time. And yes, you do need to operate against properly structured data. Developers do need that. And obviously, the data in production is properly structured. The challenge there is that you have to go through a rigorous process of obfuscation or masking before you operate against that data. A lot of our customers don't do that. They assume that the security is good. They just copy the data from production into development. All that does is increase your risk surface area. That's it. And that's the name of the game: to shrink the surface area for risk by eliminating sensitive information and limiting who has access to it. And then the last thing here, this data breach changes everything. Changes everything for any organization that's been through this. A data breach means regulatory headaches from regulatory organizations. In the case of financial services, that could be DORA. In the case of healthcare, that could be HIPAA. In the case of retail, that could be PCI if we're talking about credit card data. There are real teeth behind those regulations. It can cost organizations millions of dollars. Not to mention the long-tail costs of a data breach: brand damage, customer loyalty at risk. And then everything that you have to do in terms of reporting a data breach. In this country, we don't have a centralized data privacy law on the books. It's up to each state to develop a data privacy law. And it's up to each state's attorney general to enforce it. And it's up to you to report it to each state's attorney general. And that process can be rigorous and expensive and can take a really long time. As a matter of fact, we have an instance of a healthcare organization that was impacted. They had commented that the stamps that they had to use to actually mail the letters that said their patients' data had been exposed cost on the range ... That was a line item on the costs for their ransomware attack, something like $2 million. So these are the hidden costs that have to be incurred as a part of any sort of attack.
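The masking step Castriotta says many customers skip before copying production data into dev can be as simple as tokenizing direct identifiers on the way out. A minimal sketch, assuming hypothetical file and column names; in real use a salted or keyed hash and a reviewed field list would be preferable:

```python
import csv
import hashlib

# Hypothetical file names; the idea is to tokenize direct identifiers before
# a production sample ever lands in a dev bucket.
SOURCE = "prod_customer_sample.csv"
MASKED = "dev_customer_sample_masked.csv"
DIRECT_IDENTIFIERS = {"full_name", "email", "mailing_address", "birth_date"}

def mask(value: str) -> str:
    # One-way hash preserves joinability for testing without exposing the
    # raw value (an unsalted hash is still guessable; a keyed hash is safer).
    return hashlib.sha256(value.encode()).hexdigest()[:16]

with open(SOURCE, newline="") as src, open(MASKED, "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        for col in DIRECT_IDENTIFIERS & set(row):
            row[col] = mask(row[col])
        writer.writerow(row)
```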
So again, sensitive data is the primary objective: proactively ensuring that you know what type of data you have and who has access, but also reactively being able to understand what was impacted so that you can report that to regulatory agencies or authorities. All right. We'll hop to scene five, which is just a video. Thanks, everyone, for your participation.>> It has been three days, three days of failed recoveries, mounting losses, and public humiliation. The company's leadership is now assembled for a final, agonizing decision.
Matt Castriotta
>> So a full clean rebuild of our environment from the ground up will take at least three weeks. Given the sophistication of the malware in our backups, that is the most realistic estimate.>> Direct revenue loss currently stands at $3.4 million. Our projections show that we'll continue to lose nearly a million dollars for each additional day of significant outage. We're hemorrhaging money by the day.>> I must advise in the strongest possible terms not to pay these criminals. It's rewarding their actions and funding their next attack on another company. We get a decryption key that might not even work for data that we're going to have to scrub and validate for weeks. This doesn't get us any of our data back, and it absolutely doesn't remove the attackers from our environment. It paints a giant target on our back for every ransomware group on the planet. We have to rebuild. It's the only way to be sure we're clean.>> Rebuilding is a fantasy of control that we can no longer afford. Your clean rebuild will take a month, and in a month there won't be any company left to secure. We will have lost a massive share of our revenue. Our brand reputation will be in tatters and our stock will be worthless. Look, I get it. Principles are important, but they don't keep the lights on.>> From a legal standpoint, our situation has fundamentally changed. We're no longer just managing an outage. We're mitigating liability. The single biggest quantifiable risk we face is the public leak of that PII data. Our CISO is correct. Paying the ransom doesn't guarantee that the attackers won't leak it. However, refusing to pay it makes it a certainty. They will leak it to maximize our pain and make an example of us. The reality is paying the ransom is our only leverage, however small, to prevent that outcome. More importantly, we must be able to demonstrate to the courts and regulators that we took every possible step, however distasteful, to protect our customers' data after the breach was discovered.>> The general counsel's right. From a purely financial perspective, paying the $6 million, as painful as it is, is the only move we've got. It's an ugly but necessary transaction to ensure that we stop the financial hemorrhaging. It's the only business decision on the table.>> They were right. The security team, the operations analysts, they told us our backup strategy for the cloud was flawed. They told us we needed better visibility into our data. We deprioritized the projects and the funding to meet the deadlines for new features. We made this choice months ago. Not in this room, not today. Now, we're just paying the bill. So I agree with the CFO and the general counsel. We have to pay.>> Then it's decided. Authorize the payment. And the first non-negotiable line item in next year's budget will be a complete overhaul of our data security and cyber resilience strategy to be led by the CIO's office. This will not happen again.>> For the record, I'm formally objecting to the payment, but I have advised on the legal rationale for doing so.
Matt Castriotta
>> So would you all have paid? Did they really have a choice?>> Yeah. I mean, the CEO is right.
Matt Castriotta
>> If it's going to take three weeks to rebuild the environment, there may not be a company left to do it.>> Yeah.
Matt Castriotta
>> Well, they really didn't have a choice. A few things I really liked out of this scene. The first is the CIO's comment about how we made this decision months ago and we're just paying the bill. We talked about that dichotomy between new feature development and quality of life and that just being a really difficult thing for organizations to tackle. This is a perfect example of that. The other thing I really liked was the CISO's comment about how this paints a giant target on their back. It does. I mean, frankly, you make the payment, of course, they're going to look to infiltrate the environment again. 80% of organizations that get hit with a ransomware attack end up getting hit again within six months. And that's primarily because they don't do a good job of plugging the hole that allowed the attackers to get in in the first place, which goes back to this being unrealistic: Horizon figured that out in scene two. That doesn't happen normally. It takes weeks, it can take months, and sometimes you never really get to the root cause of how they got in. Yeah. I mean, and at the end of the day, the CISO is right. The attackers will end up selling that tribal knowledge to other ransomware groups on how they got in. They will use that to infiltrate other organizations in the retail space because they know that retail has a very common application stack that they operate off of. So they will use those vulnerabilities to attack other retailers. We're seeing that with Scattered Spider. Scattered Spider went right through the industry sectors of insurance, airlines, retail. What are they on now? I mean, they're literally tearing through industries, and they're tearing through industries because they understand the attack vectors that are specific to that industry. So in actuality, the CISO is 100% right. You pay, there's a giant target on your back. But then it's back to the CEO's statement. Would there be a company left if they had to rebuild? Probably wouldn't be. So they really had no choice ultimately in this scenario. They probably had to pay. And paying ultimately is a business decision, right? It's a decision that management has to make. And really, ideally, you have humans in the loop, just to use an AI term here, since this is an AI talk, but you have humans in the loop on the decision-making process around deprioritizing that kind of work in favor of new feature development, right? That should have gotten input from someone, maybe not at the CEO's level, but someone that's much higher up in the organization to really make that decision, so that the decision is not made in isolation. And ultimately, that choice was made months ago. They were just paying the bill. We believe that resilience is ultimately the power to say no, right? To say no to paying the ransom. That's true resilience. And what is true resilience? It's the power to keep your data safe and your business running. That assumes you have proactive investment and executive alignment, and as we mentioned, those are two things that Horizon didn't really have locked down prior to stepping into this. The ability to ensure that you have backups that can't be tampered with, that's table stakes. That's immutable and credential-isolated backups. Immutable, meaning once it's written, wherever it's written to, it can't be compromised, using things like Object Lock for S3. That's going to help ensure that those backups live for the length of their retention period.
Credential-isolated, meaning that they're separated from the primary identity boundary, and that they're either in a different org or a completely different identity boundary altogether. Giving you the ability to do guaranteed clean recovery. That's really the point here. It's not just about guaranteed recoverability, it's about ensuring that you're recovering clean. Pre-calculating clean recovery points, so that when you do decide to recover, all you have to do is step in, click that button, and you've got a clean recovery point to recover from. You're not taking those backups and scanning them with GuardDuty. You're not taking those backups and instantiating them in an IRE and iterating on that process over and over again. That's time-consuming and time is money. Ensuring, and this is the most important part here, that you have a robust data security posture, meaning you understand the type of data you have and who has access to it. You understand how that data is propagated through the environment. You understand if that data lives in assets that are untracked, and you understand the misconfigurations in your environments. There is a CNAPP aspect to this too, right? There's a Wiz aspect to this as well. You need to be able to understand if there's impact in the environment, both from an infrastructure configuration as well as from a data perspective. And then having complete sensitive data visibility and management across the estate, not just in AWS. Most of our customers live in multiple hyperscalers. Most of our customers still have an on-prem install that they're working through. Ensuring that you have that visibility across the estate, and the only way to do that is for your tool of choice for data protection to have visibility into your data, your most critical data, across the estate. So, the call to action for our group on Wednesday, when we do this in front of a large audience in one of the rooms there: contact your Rubrik rep to learn more. We'll have a nice QR code for people to take pictures of so that they can go to our website and learn a little bit more. I have done these Zero Hour sessions at customer sites directly with a single customer. I've also done them in a format where we have partners come in, we have multiple customers come in. Sometimes it's prospects mixed with customers, and that's really where the magic shines, right? Because you've got customers talking to prospects about experiences that they've faced, and it really is impactful, I think, when you get into a mixed group setting like that. So I tend to really like those events over dinner, over lunch. We typically try to do those co-branded events with partners. But again, I have done them for single organizations to bring IT and SecOps together, to be talking the same language for an hour. Those folks don't talk the same language all day. So getting them to come together and talk the same language for 60 minutes is really, really valuable for a single organization. So I'm happy to do those at any of these events. I've done these many times. And with that, that's all we had. So hopefully, you guys ... Thanks. Appreciate it.
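The credential isolation described in the wrap-up, backups living under an identity boundary that production credentials cannot administer, can be illustrated with snapshot sharing into a separate recovery account. A minimal sketch, assuming hypothetical snapshot, account, and profile names; encrypted snapshots would additionally need the KMS key shared with the recovery account:

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical IDs: a production snapshot and a separate, locked-down
# recovery account that production credentials cannot administer.
SNAPSHOT_ID = "snap-0123456789abcdef0"
RECOVERY_ACCOUNT = "111122223333"

# Grant the recovery account permission to use the snapshot...
ec2.modify_snapshot_attribute(
    SnapshotId=SNAPSHOT_ID,
    Attribute="createVolumePermission",
    OperationType="add",
    UserIds=[RECOVERY_ACCOUNT],
)

# ...then, from within the recovery account (separate credentials/session),
# copy it so the copy lives under an identity boundary the attacker never
# touched. The profile and region names are illustrative.
recovery = boto3.Session(profile_name="recovery-account").client("ec2", region_name="us-east-1")
recovery.copy_snapshot(
    SourceSnapshotId=SNAPSHOT_ID,
    SourceRegion="us-east-1",
    Description="credential-isolated backup copy",
)
```

The design point is the same one made for Horizon: even a fully compromised production identity provider gains nothing against copies it has no rights to delete or re-encrypt.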