Ken Exner, Chief Product Officer at Elastic, discusses the strategic vision and momentum of GenAI adoption. Elastic has evolved into providing semantic search and generative AI applications. Companies are transitioning from experimenting with generative AI to building their first applications and finding success. Elastic helps customers ground LLMs on their private data to power generative AI applications. The focus is on simplifying the development of RAG applications and providing out-of-the-box integrations for semantic search.
What has Elastic evolved into since its beginnings as a search engine, and how are customers now using natural language question answering and generative AI applications through their search platform?
What is the Elastic AI Ecosystem and how is it supporting developers in navigating the landscape of AI and helping them scale?
What are the conditions that make security and observability ripe for disruption by generative AI?
What is the importance of relevance in the context of building generative AI applications and search engines?
What are you excited about for 2025, both for Elastic and the future of enterprise AI?
>> Hi everybody. Welcome back to theCUBE's coverage of AWS re:Invent 2024. We're finishing the year strong and our next guest, he knows a little bit about Amazon, having spent more than a decade and a half at the company. Ken Exner has been the Chief Product Officer at Elastic for almost 30 months now. Elastic is a company that just came off a very strong fiscal year Q2. They beat expectations. They're making GenAI a tailwind with their hybrid search strategy. We were really happy to hear about several customer proof points and metrics on a very upbeat earnings call. It's all about AI ROI. Ken, welcome to the program. Good to see you.
Ken Exner
>> Hey, very good to see you. Thank you for having me.
Dave Vellante
>> You bet. All right, let's start. Big picture, how do you describe Elastic's strategic vision and this GenAI momentum that you're seeing?
Ken Exner
>> Sure. Elastic, of course, began as a search engine and we've been doing search for a long time, for more than 12 years. One of the things that we've seen is that search has evolved from lexical search or text-based search into semantic search, which is customers wanting to do natural language question and answering, to sort of what I call conversational search, or moving towards generative AI applications that use search in order to ground LLMs and use the power of LLMs to build search-powered applications. It's been an exciting time for us as we help customers take their data and use it to build generative AI applications using our search platform. It's been exciting not only to see our customers do this, but to be able to do this ourselves within our security and observability solutions.
Dave Vellante
>> Yeah. So you are seeing, as we said, strong momentum in GenAI adoption. Your vector database, you've got tools for building. You just mentioned semantic search. Search is like the killer enterprise app, and RAG, Retrieval Augmented Generation, apps. What's driving this momentum? How are you helping customers accelerate their GenAI initiatives and strategies? Because a lot of customers, frankly, are struggling.
Ken Exner
>> Yeah, I think last year was a lot about figuring out what customers wanted to do with generative AI. Everyone was scrambling, trying to figure out what was their generative AI strategy. They were getting budgets for starting to do some experimentation. I think what we saw this year in 2024 is companies starting to take some of those ideas and budgets and start building their first generative AI applications, starting with an experiment. Something internal or something like a customer service application or an internal workplace search application and starting to get their first forays into generative AI. We've seen them take these to production and start having success. What we expect going forward is to see this move across the enterprise and customers to start finding other use cases. Once they develop one thing that works well, repeat that, try something else. Move into marketing automation, move into sales automation. I think 2024 was the year of experimentation and getting something out. 2025 is going to see us move towards, moving across the enterprise and seeing more use cases for generative AI. And at Elastic, we're thrilled to help customers figure out how to use their private data to power these generative AI applications. One of the things that we do is we help ground LLMs using Retrieval Augmented Generation on companies' private data. So for a lot of companies that already use Elastic, this becomes a very simple thing for them. They already managed their data with Elastic, they already index their data. Now they can use that same data and that same Elastic search engine to ground their LLMs, start building generative AI applications.
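The grounding pattern Ken describes, retrieve relevant private documents first, then hand them to the model as context, can be sketched in a few lines. This is a toy illustration, not Elastic's implementation: the keyword retriever, the document list, and the prompt format are hypothetical stand-ins for an Elasticsearch index and a real LLM call.

```python
# Toy RAG grounding sketch: retrieve from "private data", then build a
# grounded prompt. In a real system, retrieval would hit a search engine
# such as Elasticsearch and the prompt would go to an LLM API.

PRIVATE_DOCS = [
    "Refund policy: customers may return items within 30 days.",
    "Shipping: orders over $50 ship free within the continental US.",
    "Support hours: weekdays 9am-6pm Eastern.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Score docs by naive keyword overlap and return the top-k."""
    q_terms = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q_terms & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_grounded_prompt(query: str, docs: list[str]) -> str:
    """Assemble a prompt that grounds the LLM on retrieved private data."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_grounded_prompt("What is the refund policy?", PRIVATE_DOCS)
print(prompt)
```

The point of the sketch is the shape of the workflow: no model training or fine-tuning is required, only retrieval over data the company already indexes.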
Dave Vellante
>> I think it's a really important point that you're making. The big five foundation model vendors, they're training their systems on the internet, and the internet data, it's all public data. But your customers, they want to train on private data, they want to fine-tune on that private data to drive competitive advantage. Ideally, that data's not going to seep into the public domain. And so it's really pretty straightforward actually to build a RAG experiment. But recently, I think in an effort really to simplify the development of RAG applications, you've launched what you're calling the Elastic AI Ecosystem. My understanding is you're featuring a curated set of technology providers. I wonder if you could elaborate on how this ecosystem is supporting developers in navigating the landscape of AI and particularly helping them scale. That's the really hard part about GenAI.
Ken Exner
>> Yeah, so one of the things that we've been doing over the last, I'll say 18 to 24 months, is working with our ecosystem partners on integrations so that customers and developers have out-of-the-box integrations ready to do RAG applications, ready to do semantic search. So we have spent a lot of time with the three CSPs integrating with not only their models but their tools. So we are integrated, for example, into Vertex AI from Google, and we're integrated into Azure OpenAI Studio from Microsoft, and you can use us as a vector database within those tool sets, those tool chains. Similarly, we have been integrating LLMs and other tools into our ecosystem, and we've been doing all of this work over the last two years. One of the things we realized is that we have this partner ecosystem of all these great integrations, and we wanted to celebrate that and sort of tell customers this story about all these integrations that we've made possible that allow customers to use us together with OpenAI, with Cohere, with Anthropic, with all the different tools that they love like LlamaIndex, all kinds of different tools that make it possible for you to build generative AI applications. Those integrations are already there for our customers. So the AI Ecosystem announcement was kind of a celebration of the partnership and the integrations that we have already been working on and already made possible, and our commitment to continue doing this going forward.
Dave Vellante
>> We love products here at theCUBE. Great products like the iPhone catch our attention, but we really love platforms. I mean, you've evolved, Elastic has evolved from its roots and of course as you mentioned, started in search with a great search product, but now you're really a platform. You've got observability, you attack security use cases in addition to what you're doing in enterprise search. So how do you see the product portfolio evolving over the next several years, particularly as it relates to the importance of AI, machine learning? You mentioned all the integrations, but how do you see that platform evolving?
Ken Exner
>> Yeah. One of the things I mentioned earlier is that we started in search, but we expanded into observability and security. One of the reasons is that customers started building these types of solutions on top of us, and we started seeing this pattern of customers building logging solutions or security analytics or threat hunting solutions on top of us. We realized there was an opportunity for us to help customers by giving them out-of-the-box solutions that did that automatically and were ready to use, so that they didn't have to go build an observability solution on top of us. It was ready to use, so practitioners could start using us. They still have the flexibility of the platform, meaning that they can drop down to the underlying APIs, they can drop down to the source code and customize and have flexibility to do all kinds of different things that the solution doesn't provide, but they have the ease of use of the solution. One of the things that we've been focused on over the past couple of years is starting to really take advantage of generative AI to automate some of the workflows and processes within the observability and security solutions. I am very, very bullish that we're going to see a lot of automation happening in both security and observability because of generative AI. If you think about it, the conditions are ripe for this, because you have a lot of specialized knowledge in both security and observability that's sort of built up over years, and you have a lot of things that are manually repeated over and over again. So manual workflows and specialized knowledge, I think all those things tell me that these two spaces are ripe for disruption by generative AI, which is going to take a lot of the specialized information, pull that from an LLM, and also take some of the manual pattern matching that typically happens in these two fields and use generative AI to do the pattern matching for people.
So we have been moving aggressively into generative AI in both observability and security and starting to automate some of the workflows in kind of magical ways that surprise-
Dave Vellante
>> You know, the technical side of companies and organizations, whether it's IT or developers, they're going to be driving automation first through their operations and their workflows and that we think is going to seep through to the broader enterprise, and that's really where we get this major productivity hit. I want to come back to observability. Markets like observability and security, they're crowded, they're very competitive. You've got major players vying for attention, a lot of go-to-market investments. What are Elastic's big differentiators from your point of view? How are you driving innovation to really stand out in these markets?
Ken Exner
>> Well, one differentiator is kind of what I was just referring to, which is using the power of our Search AI platform to start automating experiences within observability and security. I'll give you one example of something we did this past year. Earlier this year, we launched something called Attack Discovery in our security solution. What Attack Discovery does is take all the different alerts that a security analyst gets in a given day, and a given security analyst will typically have to go through a couple of hundred different security alerts that usually are false positives, and figure out: is this a real issue or not? Are any of these part of the same attack? What we do is feed all these different alerts and a lot of the context into an LLM using our RAG technology and then automatically plot out the attack chain. So we throw out all the false positives, and then we take all the other alerts and explain to the analyst how they're related and what the attack path is. It's kind of a magical experience for our customers, because we take hours of tedious work sifting through these alerts and automatically map the attack chain. I've actually seen analysts cry when they see this, because it's such a magical experience to see all that automation happen. So we're doing things like this to really, really change the game for observability and security using the power of our RAG and Search AI platform.
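The alert-correlation workflow Ken describes can be reduced to a very rough sketch: drop likely false positives, then link the surviving alerts that share an entity into an ordered chain. This is only the shape of the idea; Attack Discovery itself uses an LLM with retrieved context, and every field name below is hypothetical.

```python
# Highly simplified alert triage: filter noise, then order the remaining
# alerts for one host by time to approximate an "attack chain". A real
# system would let an LLM reason over the alerts and their context.

alerts = [
    {"id": 1, "host": "web-1", "time": 3, "false_positive": False},
    {"id": 2, "host": "db-1",  "time": 1, "false_positive": True},
    {"id": 3, "host": "web-1", "time": 1, "false_positive": False},
    {"id": 4, "host": "web-1", "time": 2, "false_positive": False},
]

def attack_chain(alerts: list[dict], host: str) -> list[int]:
    """Return the IDs of the host's real alerts, ordered by time."""
    real = [a for a in alerts
            if not a["false_positive"] and a["host"] == host]
    return [a["id"] for a in sorted(real, key=lambda a: a["time"])]

print(attack_chain(alerts, "web-1"))
```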
Dave Vellante
>> I get it. They get misty because the last thing they need is more false positives and paper cuts. My understanding is Elastic is, if not the most, one of the most widely downloaded vector databases in the market. And my sense is, correct me if I'm wrong, but it's because you treat it as a feature of the platform, not necessarily a separate market that you're trying to build around vector. But how do you maintain your edge, and what feedback are you hearing specifically from developers about how they're using your vector database in the broader GenAI context?
Ken Exner
>> Yeah, so we've been a vector database since 2019. That's the first time we started supporting the storage of dense vectors and running queries against Elasticsearch as a vector database. So we've been at this for a long time. And in that time, we've differentiated by making it possible to use our vector database together with all the other capabilities of a search engine. So if you want to use different connectors to different data sources, we have over 250 different connectors. We have a privacy and security model, we have RBAC and ABAC and audit logging, all the things that an enterprise expects from a vector database or any kind of database that they use. We also have been investing in performance, the performance of our vector database. Just this past year, we launched a number of different innovations in quantization that allow us to compress the vectors that are being stored in our vector database and get better memory utilization. So we're constantly pushing for better performance. But I think the most important thing is relevance. And this is where we kind of shine, because being a search engine, the most important thing for a search engine is relevance. And if you're trying to build a generative AI application and you want to pass context to an LLM, you want to ground it on the most relevant content. The consequences of not doing this right mean you give bad answers to your customers, or, if you're starting to build agentic AI, you're going to have your AI application act on bad information. So it's critically important that you pass the most relevant content, because relevance matters. And this is where we've been working on not only vector search, but also hybrid techniques. Being able to combine vector search with geospatial search, or being able to combine BM25 search with vector search, or combine graph traversal with vector search. And also things like re-ranking. So taking all the results and then re-ranking at the end.
All these different techniques that we've been working on provide for better relevant results, because we know that at the end of the day the thing that matters most to developers is what relevant result is going to be passed to an LLM for building their application.
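The hybrid technique Ken outlines, running a lexical ranking and a vector ranking separately and then merging them, can be sketched in pure Python. Reciprocal rank fusion (RRF) is one common way to combine BM25-style and vector results; the tiny corpus, the hand-made two-dimensional embeddings, and the naive term-overlap "lexical" scorer below are illustrative stand-ins, not Elastic's scoring.

```python
# Hybrid search sketch: fuse a lexical ranking and a vector ranking with
# reciprocal rank fusion (RRF). Each document gets 1/(k + rank) per
# ranking list; documents that rank well in both lists rise to the top.

import math

DOCS = {
    "d1": ("elastic vector database", [1.0, 0.0]),
    "d2": ("observability logging tools", [0.0, 1.0]),
    "d3": ("vector search relevance", [0.9, 0.1]),
}

def lexical_rank(query: str) -> list[str]:
    """Rank docs by naive term overlap (a stand-in for BM25)."""
    q = set(query.split())
    scores = {doc_id: len(q & set(text.split()))
              for doc_id, (text, _) in DOCS.items()}
    return sorted(scores, key=scores.get, reverse=True)

def vector_rank(query_vec: list[float]) -> list[str]:
    """Rank docs by cosine similarity of their embeddings."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.hypot(*a) * math.hypot(*b))
    scores = {doc_id: cosine(query_vec, vec)
              for doc_id, (_, vec) in DOCS.items()}
    return sorted(scores, key=scores.get, reverse=True)

def rrf(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Reciprocal rank fusion: sum 1/(k + rank) across ranking lists."""
    fused: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            fused[doc_id] = fused.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(fused, key=fused.get, reverse=True)

result = rrf([lexical_rank("vector search"), vector_rank([1.0, 0.0])])
print(result)
```

A final re-ranking stage, as Ken mentions, would reorder the fused top-k with a heavier model before passing context to the LLM.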
Dave Vellante
>> I couldn't agree more with your comments on relevance. And you mentioned agentic, there's a lot of this agentic buzz, and it's not going to work unless you have relevant data and trusted data. So totally aligned with that. When I talk to customers of Elastic, they talk about the scalability, the power, the versatility. I'm interested in user experience, especially for less technical users. How do you see addressing usability without compromising the historical features of the product and the appeal of that product that developers and power users love?
Ken Exner
>> I think our approach to this has always been to layer up from primitives. So start with the primitive capabilities, the primitive APIs that are very powerful and flexible, and then start providing abstractions on top of that for ease of use and simplification. By doing this, by layering up, you give the power of the underlying API primitives, but you also give the conveniences of higher-level APIs. An example of this is earlier this year we launched something called Semantic Text, which is a higher-level API for doing semantic search. If you look at the semantic search workflow, what you're typically going to do is ingest data. You're then going to chunk up that data into different chunks that you can run inference on, and then, after running inference, you're going to store the vectors and embeddings that are produced by that inference model in a vector database. But that entire process of choosing a chunking strategy, of choosing an inference model, of running that inference model, all of that is complexity that developers may or may not care about. For a developer who cares about it, they can drop down and use those underlying APIs. But for a developer who doesn't, they can use the Semantic Text API, which automatically takes care of all of that for them. That's our approach: layer up from these base primitives that provide the most flexibility, but provide conveniences on top that are an abstraction above. I think developers win both ways. They have a higher-level experience, but they can also drop down.
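The layering idea can be sketched as two APIs over one store: a low-level path where the caller controls chunking and inference, and a high-level convenience that hides both, loosely analogous to the semantic_text abstraction Ken describes. The fixed-width chunker and the character-based "embedding" are hypothetical placeholders for real chunking strategies and inference models.

```python
# Sketch of layering a high-level semantic-text API over low-level
# primitives. Chunking and "inference" here are deliberately naive.

def chunk(text: str, size: int = 40) -> list[str]:
    """Naive fixed-width chunking; real systems pick smarter strategies."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(chunk_text: str) -> list[float]:
    """Stand-in for an inference model producing an embedding."""
    return [ord(c) / 255 for c in chunk_text[:4]]

class VectorStore:
    def __init__(self):
        self.rows: list[tuple[str, list[float]]] = []

    # Low-level API: the caller controls chunking and inference.
    def index_vectors(self, chunks, vectors):
        self.rows.extend(zip(chunks, vectors))

    # High-level API: chunking and inference handled automatically,
    # but the low-level path stays available for developers who care.
    def index_semantic_text(self, text: str):
        chunks = chunk(text)
        self.index_vectors(chunks, [embed(c) for c in chunks])

store = VectorStore()
store.index_semantic_text(
    "Ground LLMs on private data with retrieval augmented generation.")
print(len(store.rows))
```

The win-both-ways point is that `index_semantic_text` is implemented entirely in terms of the primitives, so nothing is lost by using the convenience layer.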
Dave Vellante
>> I love that, and that's a great lead-in to the AWS discussion. But before I get there, I wanted to ask you: you've got cloud-native architectures, which have obviously taken off over the last decade, but you've got hybrid and on-prem deployments as well. How are you balancing your platform to serve both of those markets? What are the opportunities? What are the challenges you're seeing as organizations kind of rebalance and make those architectural shifts, and then maybe come back a little bit? Where are we at today and how do you help customers?
Ken Exner
>> Yeah, so we actually have three different ways customers can consume Elastic. They can use it in self-managed mode, which is they're going to download it, run it themselves on their own hardware or run it themselves in the cloud, but they're going to manage it. You can also use us with Elastic Cloud Hosted, which is a managed version of this. So we will provision instances, install Elastic and keep it patched, but it's a shared responsibility model where the customer is responsible for cluster health and for scaling. And then most recently, we launched Elastic Cloud Serverless, which is a fully managed version of Elastic. You can think of it as a SaaS-like experience. So it's completely our responsibility. It's versionless, we take care of the upgrades, we take care of cluster health. All of it is fully managed for you. And we do this across all three CSPs in more than 60 regions. When I say that we offer Elastic your way, I really mean it. We have more than 60 regions across three CSPs and three different ways of consuming Elastic. Now it's hard for us to do this. It is hard to deliver software that is downloadable, that is open source, and a fully managed serverless version of that as well. But that's been the engineering challenge for us over the last couple of years. How do we do this? We're very proud of the work that we've done to be able to deliver Elastic your way as a fully managed SaaS offering, as well as a hosted offering, as well as a self-managed offering.
Dave Vellante
>> With that consistent experience, nobody said it was going to be easy, Ken. So I want to get into the AWS partnership. I think it's actually symbolic that you spent so much time at AWS and now you're here at Elastic, you've had an evolving, I'll say, relationship with AWS. At one point, it was contentious, it moved from competitive to really now heavily collaborative. How's that partnership really shaped your strategy at Elastic? Maybe some of the key milestones, lessons learned along the way, and what's the state of that relationship today?
Ken Exner
>> Well, I guess a milestone would've been me joining.
Dave Vellante
>> Yeah, there you go.
Ken Exner
>> I spent 16 years at AWS, as you know, prior to Elastic. Amazon has a lot of customers and partners that they are in competition with. They're in so many different businesses that it's almost impossible not to compete somewhere with your partners and your customers. And they know that and they're able to compartmentalize, and I know that they know that because I used to be on the other side. So we have been moving towards a spirit of collaboration and are starting to really, really be able to co-sell and starting to work together. In my first year here at Elastic, we actually signed two strategic collaboration agreements with AWS, one on the co-selling side and one on the generative AI side, where we're doing work to integrate Bedrock with Elastic, us integrating with them and them integrating with us. So we have two strategic collaboration agreements that are moving this partnership forward. AWS is a big part of our business. A lot of our cloud sales happen through AWS. Each of the CSPs is an important partner for us, a really, really critically important partner for us. And we know that. They are a big part of our channel. A lot of our cloud sales happen through the marketplaces, so we need to make sure that this is always a viable channel for us to sell through and that we're working together with our CSPs. The relationship is incredibly strong right now. So I'm very proud that, despite the issues we've had in the past, AWS is a strong partner, and I think it's both ways.
Dave Vellante
>> I think a lot of times this gets lost in the media narrative, and as a bit of a historian in the industry, when you go back to the sort of '90s and the PC era, it was largely a zero-sum game. Intel won the microprocessor war, Dell the PCs, Seagate the disk drives, Oracle the database, and Microsoft applications and operating systems. But the cloud changed that. It used to be number one would take it all, number two maybe make a little money, number three would barely break even. And now you see the big three clouds, all huge, growing, making money. The relationship with Elastic is a great example, as are the Snowflakes of the world. Yes, they compete, but also the markets are now so big and there is so much opportunity for innovation. So I think about Elastic, you're deeply integrated within the AWS marketplace, within services like OpenSearch. How are you approaching the co-selling efforts specifically and what does that mean for customers?
Ken Exner
>> As I mentioned, the AWS marketplace, as with the other two marketplaces, is a significant portion of our cloud sales. We use that as a channel and we use our partnership with AWS in the marketplace to do co-selling. So we do quite a bit of co-selling to joint customers, and we're able to sort of separate where we compete and where we go to market together. We've had a number of different success stories. We've had a number of different large financial institutions, for example, that are joint customers, and we work together to make sure that they're successful running Elastic on AWS. And we're able to do that. Just because we compete in certain areas, like OpenSearch competes with Elasticsearch, doesn't mean we can't collaborate where we don't compete. We have a ton of different examples of that working extremely well over the past couple of years. So I'm optimistic that the cloud providers know that their partners matter and that they have to work with them. And we know that AWS and the cloud vendors matter to us and we work with them.
Dave Vellante
>> Yeah, and the whole credit system in all of these clouds just makes it transparent to the sales teams and the go-to-market. They love it and everybody wins, including the customer. We talked about how you differentiate in that crowded observability and security space. Do those differentiators, do they carry through, again, to the AWS ecosystem, similar differentiators? Because there, also, everybody's competing for attention. re:Invent is like the Super Bowl of events. So do those carry through? Are there other nuances that we should be paying attention to?
Ken Exner
>> It's hard to break through the noise at re:Invent, though. If you've found anyone who has figured out how to break through the noise at re:Invent, to rise above and get noticed, let me know, because it is such a busy week in the tech industry, and such a busy week where AWS is launching half the things that they developed in the year that week. So I think it's an exciting time for tech, but it's also a very noisy time. I think for us at Elastic, it's more about staying true to the things that our customers are asking us for: providing a more fully managed experience, providing a better, more performant, more relevant vector database, providing higher-level experiences that abstract away the pain, and continuing to march through that, continuing to constantly add things. I think this is a marathon, not a sprint. So we're going to be constantly trying to make sure that we're iterating on what customers are asking us for and launching things constantly.
Dave Vellante
>> And our secret, of course, Ken, to break through the noise is digital, right? Because everybody has an event, and then they think the event ends when the event ends. It doesn't. The digital carries that through. So that's why we so appreciate your partnership and your sponsorship, so we can serve our community. But I want to end by reflecting, if you will, on this past year. How would you characterize Elastic's progress in the broader tech landscape? And then looking ahead, what are you excited about for 2025, both for Elastic and the future of enterprise AI?
Ken Exner
>> Maybe going back to where we started the discussion, I think this year was where people started to experiment. We saw a lot of our customers, customers that we had been working with or completely new customers to Elastic, starting to build their first generative AI applications and starting to realize some success with that, starting to realize it wasn't that hard. It was actually a very tractable problem. It was actually easy to have a generative AI application built using private data without having to do model training or fine-tuning. You could do that with RAG, you could do that with a Search AI platform. So given that success that we're starting to see, given how we now have more than 1,500 enterprises using us as a vector database for these generative AI applications, what I expect to see is continuing to add to the number of customers, but also seeing those customers, over 2025, start to expand their use cases beyond their first prototypes. Starting to look at other ways to leverage generative AI now that they've gotten comfortable with it, now that they've gotten a taste of generative AI and have had their first success, starting to expand into other use cases. I think that's going to be an exciting time for us in 2025. And as agentic AI starts to take hold, as AI has agency, I think it's critically important, critically important, that customers ground their generative AI applications and have the most relevant results, because you don't want agentic AI making bad decisions. So I'm excited for the power of our Search AI platform to help tame that problem and make sure that our generative AI applications are safe and smart going forward.
Dave Vellante
>> Yeah, these systems of agency are a great opportunity for firms like Elastic and its customers. Ken, really appreciate the conversation. Thanks so much for coming on the program.
Ken Exner
>> Thank you, Dave. It's been fun.
Dave Vellante
>> Yeah, you're welcome. Okay, and thank you for watching everybody. Keep it right there for more great content on theCUBE's continuous coverage of cloud innovations and AWS re:Invent 2024. Be right back.