We just sent you a verification email. Please verify your account to gain access to theCUBE + NYSE Wired: Zero Trust Cyber Series. If you don’t think you received an email, check your spam folder.
Sign in to theCUBE + NYSE Wired: Zero Trust Cyber Series.
To sign in, enter the email address you used to register for the event. Once completed, you will receive an email with a verification link. Open this link to automatically sign in to the site.
Register For theCUBE + NYSE Wired: Zero Trust Cyber Series
Please fill out the information below. You will receive an email with a verification link confirming your registration. Click the link to automatically sign in to the site.
You’re almost there!
We just sent you a verification email. Please click the verification button in the email. Once your email address is verified, you will have full access to all event content for theCUBE + NYSE Wired: Zero Trust Cyber Series.
I want my badge and interests to be visible to all attendees.
Checking this box will display your presence on the attendees list and allow other attendees to view your profile and contact you via 1-1 chat. Read the Privacy Policy. You can disable this preference at any time.
Select your Interests!
add
Upload your photo
Uploading...
OR
Connect via Twitter
Connect via LinkedIn
EDIT PASSWORD
Share
Forgot Password
Sign in to gain access to theCUBE + NYSE Wired: Zero Trust Cyber Series
Please sign in with LinkedIn to continue to theCUBE + NYSE Wired: Zero Trust Cyber Series. Signing in with LinkedIn ensures a professional environment.
Are you sure you want to remove access rights for this user?
Details
Manage Access
email address
Community Invitation
Kevin Tian, Doppel
Inna Tokarev Sela is the CEO and founder of Illumex. The platform enables companies to extract value from structured data, creating a virtual semantic graph for users to interact with in natural language. Illumex focuses on contextualizing data in real-time and offers built-in governance features. By partnering with major data platform providers, Illumex has increased data usage for customers. The company has raised $13 million and has a diverse workforce. Inna's leadership style is described as empathetic. Illumex envisions a future where data interactions are seamless and efficient. Overall, the company aims to lead the industry towards a more streamlined application-free future.
Welcome and introduction of Kevin Tian, CEO and co-founder of Doppel.
Navigating AI Threats: Kevin's Journey from Uber to Confronting Deepfake Identity Theft and Digital Security Risks
The variety of attack channels increasing due to deepfake technology.
Empowering Cybersecurity: Elevating Human Awareness, Combatting Impersonation Threats, and Defending Against Deepfakes with Doppel’s Innovative Platform
>> Welcome back to theCUBE. I'm Gemma Allen, here at our studio in the New York Stock Exchange, connecting Wall Street to Silicon Valley. One area that's getting a lot of attention lately is AI and the risk and threat of deepfakes. Joining me now in studio is a man who saw this threat before many of us even realized exactly what AI was all about. Welcome, Kevin Tian, CEO and co-founder of Doppel. Welcome to theCUBE, Kevin.>> Thank you so much for having me today, Gemma.>> Well, tell me, like I said, a lot of us really did not realize just how serious a problem this idea of deepfake identity thievery could become, right?>> Right.>> You were an engineer at Uber, I believe?>> That's correct.>> Saw this problem ahead of most of us. Tell me a little bit about what made you realize just how serious a problem like this could become and how you predicted the world we now live in, in 2025. Take me on the journey you've been on.>> Yeah. Well, I mean, the origin story does begin with Uber, right? It's where I met my co-founder, Rahul, our first day in July of 2016, and we always talked about starting a company together. And so, basically, what happened was in 2022, where he was roommates with one of the heads of research at OpenAI, he got to see a sneak preview of this little thing called ChatGPT, right? And at the time there's ChatGPT, there's the CLIP models, there's all this new deepfake technology getting generated. And it just became very evident that when we talk about AI risks and AI security threats, it's really the whole problem around digital authenticity and integrity that becomes the existential threat to humanity. 
So, if I'm a bad guy, if I'm a bad AI trying to destroy the world, it's the ability for me to go deepfake, impersonate, phish, commit fraud on any single channel possible.>> You have had a lot of success in the enterprise space, right?>> That's correct.>> Specifically, selling your platform into enterprises that have risks both to their brand and to their execs. Tell me a little bit about what you're seeing, even in the last number of years. How are these threats changing? What does a real-life threat look like for the CEO of a large bank, et cetera?>> I'd say there's three key areas in which the threats are changing. One is just pure volume. A lot of times security people talk about, "Hey, is the threat landscape really changing?" And one answer is like, yes, because the pure numbers that we're tracking at Doppel, where we're seeing the number of alerts go from thousands to tens of thousands to hundreds of thousands, to now basically millions every day, that's a huge factor. I'd say second is going to be around the velocity of these attacks where something will happen. For example, a CEO will go and do an interview with the NYSE or whatever organization. And the next day you'll see immediate attacks just based off that public event. And so, that velocity is just incredible now, and it's a lot easier because of AI. I'd say lastly, I talk a lot about the variety of these different attacks in which channels they're going after. In a sense, deepfakes have existed before, right? People have had the ability to build really high-fidelity synthetic clones at times, but to be able to do it at scale, to be able to do it across channels, whether it's via a phone call, whether it's via a LinkedIn message, whether it's the traditional email, that has rapidly exploded as well. 
And it's a key piece of our platform at Doppel, the fact that we are a multichannel social engineering defense platform.>> And I think at this point in time, because we've seen Sora, we've seen the success of some of these... Even us, you create cartoon animations of yourself, voice animations, et cetera. There's a novelty element to this, right? There's a novelty element to creating a little doppelganger of yourself online, but obviously, it comes with huge risk. And you talk a lot about the human layer of trust and cyber defense. But what does that mean? Does that mean that you're helping people understand, "That's not really your grandson ringing you looking for $20," or you're helping people understand, "Hey, your CEO, your exec team is being mimicked or copied online and it brings these said risks to your firm," or is it a bit of everything?>> Yeah, so I'll take it one step further. Doppel is the first social engineering defense platform that can detect these sorts of impersonations and deepfakes, that can take them down... So, we'll not only detect them, but we'll say, "Hey, Facebook, that's a Gemma impersonator. That needs to come down ASAP." And then, lastly, be able to even simulate them from both a security awareness perspective or a red teaming perspective. And so, that's the brand new product we just launched called Vibe Phishing, where you can tell an LLM agent, "Let's go attack Gemma," right? And let's go pretend to be Gemma's boss, or let's go pretend to be Gemma's peers. But that's exactly what we do at Doppel. We're the first platform that enables you to find these threats, take them down and simulate them.>> That's so interesting. And I want to talk about the red teaming, purple teaming and what that actually looks like in a scenario setting for a second.>> Yes.>> But first, in terms of the actual process of understanding, okay, this person is potentially impersonating you. How do you, as Doppel, actually execute on the back-end of that? 
Do you ring up these website providers, have these numbers removed? How do you actually do this?>> Right. So, there's two key pieces there. One is the pure detection piece, and a lot of that is essentially, especially in the past six months, we've seen huge breakthroughs in AI models, specifically the whole reinforcement fine-tuning framework that a lot of these foundational models are releasing. That's enabled us to basically train AI agents, just like we would train human interns, and have them go out and search for these impersonation threats, understand if this truly is a malicious threat, because sometimes we'll even see benevolent deepfakes now, like we see enterprises actually deploying deepfakes intentionally, but that's piece number one. Piece number two then is the actual take-down, which is exactly what you described, Gemma. It's our ability to go reach out to different registrars, web hosts, social media platforms, even telephone companies, even advertising platforms. And a lot of our work there is how do you build relationships? How do you build credibility over time that Doppel's a trusted reporter? And essentially, get access via either APIs or hotlines to shut down a lot of these attacks and protect these enterprises and executives from these threats.>> And you seem to have built those relationships very successfully in a short space of time.>> Yes.>> What have you learned? Because from an outside perspective, my day-to-day understanding was it's very complicated to have your name removed from... I've actually had that happen to me on Facebook before. How have you tackled that? How have you built those relationships so strategically, so fast?>> Yeah, I mean, I think a lot of it comes down to two things. 
One, you do just got to move really, really fast and how you're responsive to these providers, providing evidence, especially with the volume that we're doing now, where we're doing hundreds of thousands, millions of take-downs now, that's critical. And I think second, it's a lot about the team and the people that we have here at Doppel. A lot of our founding team at Doppel does come from those sorts of companies, like the Facebooks of the world, the YouTubes of the world, the Twitters of the world. So, a lot of our founding engineers came from those companies. And so, naturally, we've got a structural advantage there in terms of understanding how those systems exactly will work.>> And tell me about the technology layer to this. I assume you'd like to get to a point whereby that is even an automated process in and of itself, where you can actually contact through automated workflow and have certain accounts removed, et cetera. Tell me about the tech stack, the tech layer in the back-end of this and how that's, as well, constantly innovating because is it a little bit of a game of cat and mouse from the perspective of bad actors?>> It is. It is. It is. And so, that's the beauty of what we do. There's never a boring day at Doppel. We're constantly building out new detection techniques. We're constantly doing fine tuning with our AI agents. A big part of our tech stack is the automation at scale of this take-down operation or red teaming operation, for example, and I know we'll chat more about that later. We've actually just launched a case study recently with OpenAI that details exactly our AI agent architecture. How exactly do we train agents to scan all these different parts of the digital landscape, whether it's social media, whether it's marketplaces, whether it's traditional websites, and then figure out, "Hey, this is really a threat to a particular enterprise, their employees, their customers and their executives." So, specifically, that AI agent architecture. 
Of course, there's traditional filters, upstream rules-based filters, machine learning-based filters as well, but that AI agent architecture's been the key breakthrough to really scale this thing and essentially mitigate a lot of these threats in minutes instead of hours or days.>> Wow. So, tell me a little about the people, process, technology element of this, right? Because with cyber, there is of course always that human element, ensuring that people also understand their cognitive biases, how they think about their day to day from a risk perspective. We've seen some very serious situations with staff just not really fully understanding the actors at the end of these emails. But if you go out to private or public customers, what exactly is it that your customers are getting? Is it a dashboard? Is it some daily update to say, "Here's from a social listening and social monitoring perspective, what's happening around your brand, your key execs"? How are assets managed? Break it down for me.>> It's a great question. I think the easiest way to think about it is customers are paying us for outcomes, right? Outcomes, where we are reducing their risk to either their executives, maybe it's their customers and so it's a fraud problem. Maybe it's their employees, and that's where a lot of the phishing and social engineering happens as well. And how we measure those outcomes is in a couple different ways. One, it's our ability to, like you said, it's to do all the listening, it's to do all the scanning, it's to do all the detection work to find these threats. But the beauty of it is, we're not just here to find it, but we will actually shut them down. So, instead of that phishing email going out, we have already shut down that email server. We've already dismantled that particular domain. And so, that's where we can really measure how much risk we're helping them reduce from a security perspective. There's additional benefits. 
Well, of course, with AI automation, instead of you having to do this by throwing bodies at the problem, there's huge ROI to actually doing a full-automation solution. And then, of course, at the end of the day, we are a white-glove solution as well. We've got some of the best security experts in the world who have been doing threat hunting for decades and decades on end, and they're our customers' experts when it comes to, "Hey, is this really a threat or not to our organization?">> When you are talking to these large enterprises with some very high-profile executives, and I've seen you've got some very well-known customers, even from a red teaming, purple teaming perspective, is there a level of shock? Do you think that at this point in late 2025, a lot of folks don't fully understand still just how serious and crazy this is? Do you have any crazy story or anything you can share with us?>> Yeah. Well, I think the shock or the aha moment for almost every executive that we pitch is, of course with their permission, we'll actually do a deepfake of them as a voice agent. And why that's so powerful is that it's different than just a demo where you see a deepfake of your executive because it's actually a deepfake that can think, that can respond, and it can have a synchronous real-time conversation. And so, some really cool or scary things that we've done from a red teaming perspective for example is we've actually run an evaluation with a Fortune 200 company where our AI agents were actually talking to their help desk for 20, 30, 40 minutes. Imagine that, right? Where you've got people who, of course, their job is to be helpful. Their job is to pick up the phone, respond to customers, respond to employees, helping them reset passwords, things like that. And they're talking to an AI agent, thinking that it's a human for 20, 30, 40 minutes.>> That's insane.>> It blew my mind when I saw it. 
And I think that's what people don't realize is this technology still is getting better, it's not done. Literally in the past three to six months, the improvements we've seen in the ability for deepfakes to, again, not just be a good deepfake, but to actually think and respond. And for example, I'll even ask the deepfake agent to try to throw them off, "Who won the World Series this year?" And the deepfake agent will have a natural answer. They'll be like, "I'm not here to talk about sports. I'm here to-">> Is the bad actor science behind that distraction? Is it that the longer you keep somebody on the phone, the more they begin to trust you?>> Well, it's the more they begin to trust you, the more information they reveal. So, if you look at some of the biggest breaches over the past 12 to 24 months, it's been through social engineering where a bad guy, like a Scattered Spider, will call different parts of the help desk, collect different information from each agent, and then ultimately take the aggregate information to go compromise some of the biggest enterprises in the world. So, yeah, the conversation length is absolutely correlated with the information they're extracting.>> Wow. Well, Kevin, I know that a lot of customers see a lot of value in this. You also have some very exciting news, which is why you're here in New York, I believe. Tell us about what's going on for Doppel and the team this week.>> Yeah, so this week we're excited to announce our series C, just six months after our series B. So, we've tripled our valuation, and of course, that's a reflection of our success over these past six months going to market with enterprise. But we're very thankful that Bessemer Venture Partners, our already existing insider investor, has decided to lead this round, in addition to a lot of our other existing investors, including Andreessen Horowitz and new investors, including George Kurtz from CrowdStrike.>> Wow. Okay. 
And I have to ask, $70 million, what are you planning to spend it on? Where is all this investment going to go?>> That's a great question. Team, team, team.>> Love it.>> Right? We got to keep scaling our go-to-market. There's a lot more people that we can go out and help protect. Second, there's a lot more product that we want to build. There's either new products or new technology breakthroughs that we're integrating into our core tech stack to enable our customers to realize even more value, and ultimately, protect their organizations even more. But at the end of the day, it starts with a team and hiring the best in the world and that's our number one focus.>> Love it. Well, Kevin, close us out. I mean, clearly, you're somebody who saw a risk and had some futuristic vision far before many of us. Tell us what's your final call to some enterprise customers watching this?>> I'd say the biggest thing is that we're just getting started at Doppel. The bad guys are continuing to get better and our number one job is to work with you, right? Understand where your risks are today, where the risks are going forward, and how do we just keep building? I think in cybersecurity, there's always been established paradigms in terms of what the verticals are, what the products are. For me and Rahul coming from Uber as non-traditional cyber founders, we see an opportunity to build something that has never existed before, and ultimately, benefit our customers against the biggest threat with AI and that's social engineering.>> For sure. Well, Kevin, we wish you all the best. Hopefully, have you back here soon, maybe announcing another series->> Maybe in another six months, right?>> You're definitely breaking the record here.>> Yeah. Yeah. Well, thank you so much for having us, Gem. It's always a pleasure and always exciting to just be on the show.>> Thanks for coming on theCUBE.
I'm Gemma Allen, here at the New York Stock Exchange with theCUBE. This is our Cyber Leader series. Thanks so much for watching.
>> Welcome back to theCUBE. I'm Gemma Allen, here at our studio in the New York Stock Exchange, connecting Wall Street to Silicon Valley. One area that's getting a lot of attention lately is AI and the risk and threat of deepfakes. Joining me now in studio is a man who saw this threat before many of us even realized exactly what AI was all about. Welcome, Kevin Tian, CEO and co-founder of Doppel. Welcome to theCUBE, Kevin.>> Thank you so much for having me today, Gemma.>> Well, tell me, like I said, a lot of us really did not realize just how serious a problem this idea of deepfake identity thievery could become, right?>> Right.>> You were an engineer at Uber, I believe?>> That's correct.>> Saw this problem ahead of most of us. Tell me a little bit about what made you realize just how serious a problem like this could become and how you predicted the world we now live in 2025. Take me on the journey you've been on.>> Yeah. Well, I mean, the origin story does begin with Uber, right? It's where I met my co-founder, Rahul, our first day in July of 2016, and we always talked about starting a company together. And so, basically, what happened was in 2022, where he was roommates with one of the heads of research at OpenAI, he got to see a sneak preview of this little thing called ChatGPT, right? And at the time there's ChatGPT, there's the clip models, there's all this new deepfake technology getting generated. And it just became very evident that when we talk about AI risks and AI security threats, it's really the whole problem around digital authenticity and integrity that becomes the existential threat to humanity. 
So, if I'm a bad guy, if I'm a bad AI trying to destroy the world, it's the ability for me to go deepfake, impersonate, phish, commit fraud on any single channel possible.>> You have had a lot of success in the enterprise space, right?>> That's correct.>> Specifically, selling your platform into enterprises that have risks both to their brand and to their execs. Tell me a little bit about what you're seeing, even in the last number of years. How are these threats changing? What does a real-life threat look like for the CEO of a large bank, et cetera?>> I'd say there's three key areas in which the threats are changing. One is just pure volume. A lot of times security people talk about, "Hey, is the threat landscape really changing?" And one answer is like, yes, because the pure numbers that we're tracking at Doppel, where we're seeing the number of alerts go from thousands to tens of thousands to hundreds of thousands, to now basically millions every day, that's a huge factor. I'd say second is going to be around the velocity of these attacks where something will happen. For example, a CEO will go and do an interview with the NYSE or whatever organization. And the next day you'll see immediate attacks just based off that public event. And so, that velocity is just incredible now, and it's a lot easier because of AI. I'd say lastly, I talk a lot about the variety of these different attacks in which channels they're going after. In a sense, deepfakes have existed before, right? People have had the ability to build really high-fidelity synthetic clones at times, but to be able to do it at scale, to be able to do it across channels, whether it's via a phone call, whether it's via a LinkedIn message, whether it's the traditional email, that has rapidly exploded as well. 
And it's a key piece of our platform at Doppel, the fact that we are a multichannel social engineering defense platform.>> And I think at this point in time, because we've seen Sora, we've seen the success of some of these... Even us, you create cartoon animations of yourself, voice animations, et cetera. There's a novelty element to this, right? There's a novelty element to creating a little doppelganger of yourself online, but obviously, it comes with huge risk. And you talk a lot about the human layer of trust and cyber defense. But what does that mean? Does that mean that you're helping people understand, "That's not really your grandson ringing you looking for $20," or you're helping people understand, "Hey, your CEO, your exec team is being mimicked or copied online and it brings these said risks to your firm," or is it a bit of everything?>> Yeah, so I'll take it one step further. Doppel is the first social engineering defense platform that can detect these sorts of impersonations and deepfakes, that can take them down... So, we'll not only detect them, but we'll say, "Hey, Facebook, that's a Gemma impersonator. That needs to come down ASAP." And then, lastly, be able to even simulate them from both a security awareness perspective or a red teaming perspective. And so, that's the brand new product we just launched called Vibe Phishing, where you can tell an LLM agent, "Let's go attack Gemma," right? And let's go pretend to be Gemma's boss, or let's go pretend to be Gemma's peers. But that's exactly what we do at Doppel. We're the first platform that enables you to find these threats, take them down and simulate them.>> That's so interesting. And I want to talk about the red teaming, purple teaming and what that actually looks like in a scenario setting for a second.>> Yes.>> But first, in terms of the actual process of understanding, okay, this person is potentially impersonating you. How do you, as Doppel, actually execute on the back-end of that? 
Do you ring up these website providers, have these numbers removed? How do you actually do this?>> Right. So, there's two key pieces there. One is the peer-detection piece, and a lot of that is essentially, especially in the past six months, we've seen huge breakthroughs in AI models, specifically the whole reinforcement fine-tuning framework that a lot of these foundational models are releasing. That's enabled us to basically train AI agents, just like we would train human interns and have them go out and search for these impersonation threats, understand if this truly is a malicious threat, because sometimes we'll even see benevolent deepfakes now, like we see enterprises actually deploying deepfakes intentionally, but that's piece number one. Piece number two then is the actual take-down, which is exactly what you described Gemma. It's our ability to go reach out to different registrars, web hosts, social media platforms, even telephone companies, even advertising platforms. And a lot of our work there is how do you build relationships? How do you build credibility over time that Doppel's a trusted reporter? And essentially, get access via either APIs or hotlines to shut down a lot of these attacks and protect these enterprises and executives from these threats.>> And you seem to have built those relationships very successfully in a short space of time.>> Yes.>> What have you learned? Because from an outside perspective, my day-to-day understanding was it's very complicated to have your name removed from... I've actually had it that happened to me on Facebook before. How have you tackled that? How have you built those relationships so strategically, so fast?>> Yeah, I mean, I think a lot of it comes down to two things. 
One, you do just got to move really, really fast and how you're responsive to these providers, providing evidence, especially with the volume that we're doing now, where we're doing hundreds of thousands, millions of take-downs now, that's critical. And I think second, it's a lot about the team and the people that we have here at Doppel. A lot of our founding team at Doppel does come from those sorts of companies, like the Facebooks of the world, the YouTubes of the world, the Twitters of the world. So, a lot of our founding engineers came from those companies. And so, naturally, we've got a structural advantage there in terms of understanding how those systems exactly will work.>> And tell me about the technology layer to this. I assume you'd like to get to a point whereby that is even an automated process in and of itself, where you can actually contact through automated workflow and have certain accounts removed, et cetera. Tell me about the tech stack, the tech layer in the back-end of this and how that's, as well, constantly innovating because is it a little bit of a game of cat and mouse from the perspective of bad actors?>> It is. It is. It is. And so, that's the beauty of what we do. There's never a boring day at Doppel. We're constantly building out new detection techniques. We're constantly doing fine tuning with our AI agents. A big part of our tech stack is the automation at scale of this take-down operation or red teaming operation, for example, and I know we'll chat more about that later. We've actually just launched a case study recently with OpenAI that details exactly our AI agent architecture. How exactly do we train agents to scan all these different parts of the digital landscape, whether it's social media, whether it's marketplaces, whether it's traditional websites, and then figure out, "Hey, this is really a threat to a particular enterprise, their employees, their customers and their executives." So, specifically, that AI agent architecture. 
Of course, there's traditional filters, upstream rules-based filters, machine learning-based filters as well, but that AI agent architecture's been the key breakthrough to really scale this thing and essentially mitigate a lot of these threats in minutes instead of hours or days.>> Wow. So, tell me a little about the people, process, technology element of this, right? Because with cyber, there is of course always that human element, ensuring that people also understand their cognitive biases, how they think about their day to day from a risk perspective. We've seen some very serious situations with staff just not really fully understanding the actors at the end of these emails. But if you go out to private or public customers, what exactly is it that your customers are getting? Is it a dashboard? Is it some daily update to say, "Here's from a social listening and social monitoring perspective, what's happening around your brand, your key execs"? How are assets managed? Break it down for me.>> It's a great question. I think the easiest way to think about it is customers are paying us for outcomes, right? Outcomes, where we are reducing their risk to either their executives, maybe it's their customers and so it's a fraud problem. Maybe it's their employees, and that's where a lot of the phishing and social engineering happens as well. And how we measure those outcomes is in a couple different ways. One, it's our ability to, like you said, it's to do all the listening, it's to do all the scanning, it's to do all the detection work to find these threats. But the beauty of it is, we're not just here to find it, but we will actually shut them down. So, instead of that phishing email going out, we have already shut down that email server. We've already dismantled that particular domain. And so, that's where we can really measure how much risk we're helping them reduce from a security perspective. There's additional benefits. 
Well, of course, with AI automation, instead of you having to do this by throwing bodies at the problem, there's huge ROI to actually doing a full-automation solution. And then, of course, at the end of the day, where we a white-glove solution as well, we've got some of the best security experts in the world who have been doing threat hunting for decades and decades on end, and they're our customers' experts when it comes to, "Hey, is this really a threat or not to our organization?">> When you are talking to these large enterprises with some very high-profile executives, and I've seen you've got some very well-known customers, even from a red teaming, purple teaming perspective, is there a level of shock? Do you think that at this point in late 2025, a lot of folks don't fully understand still just how serious and crazy this is? Do you have any crazy story or anything you can share with us?>> Yeah. Well, I think the shock or the aha moment for almost every executive that we pitch is, of course with their permission, we'll actually do a deepfake of them as a voice agent. And why that's so powerful is that it's different than just a demo where you see a deepfake of your executive because it's actually a deepfake that can think, that can respond, and it can have a synchronous real-time conversation. And so, some really cool or scary things that we've done from a red teaming perspective for example is we've actually ran an evaluation with a Fortune 200 company where our AI agents were actually talking to their help desk for 20, 30, 40 minutes. Imagine that, right? Where you've got people who, of course, their job is to be helpful. Their job is to pick up the phone, respond to customers, respond to employees, helping them reset passwords, things like that. And they're talking to an AI agent, thinking that it's a human for 20, 30, 40 minutes.>> That's insane.>> It blew my mind when I saw it. 
And I think that's what people don't realize: this technology is still getting better; it's not done. Literally in the past three to six months, we've seen huge improvements in the ability of deepfakes to, again, not just be a good deepfake, but to actually think and respond. And for example, I'll even try to throw the deepfake agent off and ask, "Who won the World Series this year?" And the deepfake agent will have a natural answer. They'll be like, "I'm not here to talk about sports. I'm here to-"

>> Is that the bad actor science behind that distraction? Is it that the longer you keep somebody on the phone, the more they begin to trust you?

>> Well, it's the more they begin to trust you, the more information they reveal. So, if you look at some of the biggest breaches over the past 12 to 24 months, they've been through social engineering, where a bad guy, like Scattered Spider, will call different parts of the help desk, collect different information from each agent, and then ultimately take the aggregate information to go compromise some of the biggest enterprises in the world. So, yeah, the conversation length is absolutely correlated with the information they're extracting.

>> Wow. Well, Kevin, I know a lot of customers see a lot of value in this. You also have some very exciting news, which is why you're here in New York, I believe. Tell us about what's going on for Doppel and the team this week.

>> Yeah, so this week we're excited to announce our Series C, just six months after our Series B. We've tripled our valuation, and of course, that's a reflection of our success over these past six months going to market with enterprise. But we're very thankful that Bessemer Venture Partners, an existing insider investor, has decided to lead this round, in addition to a lot of our other existing investors, including Andreessen Horowitz, and new investors, including George Kurtz from CrowdStrike.

>> Wow. Okay.
And I have to ask: $70 million, what are you planning to spend it on? Where is all this investment going to go?

>> That's a great question. Team, team, team.

>> Love it.

>> Right? We've got to keep scaling our go-to-market. There are a lot more people that we can go out and help protect. Second, there's a lot more product that we want to build. There are either new products or new technology breakthroughs that we're integrating into our core tech stack to enable our customers to realize even more value and, ultimately, protect their organizations even more. But at the end of the day, it starts with a team and hiring the best in the world, and that's our number one focus.

>> Love it. Well, Kevin, close us out. I mean, clearly, you're somebody who saw a risk and had some futuristic vision far before many of us. Tell us, what's your final call to some enterprise customers watching this?

>> I'd say the biggest thing is that we're just getting started at Doppel. The bad guys are continuing to get better, and our number one job is to work with you, right? Understand where your risks are today, where the risks are going forward, and how we just keep building. I think in cybersecurity, there have always been established paradigms in terms of what the verticals are, what the products are. For me and Rahul, coming from Uber as non-traditional cyber founders, we see an opportunity to build something that has never existed before and, ultimately, benefit our customers against the biggest threat with AI, and that's social engineering.

>> For sure. Well, Kevin, we wish you all the best. Hopefully, we'll have you back here soon, maybe announcing another series-

>> Maybe in another six months, right?

>> You're definitely breaking the record here.

>> Yeah. Yeah. Well, thank you so much for having us, Gem. It's always a pleasure and always exciting to just be on the show.

>> Thanks for coming on theCUBE.
I'm Gemma Allen, here at the New York Stock Exchange with theCUBE. This is our Cyber Leader series. Thanks so much for watching.