TheCUBE's coverage of AWS re:Invent, now in its 12th year, features Arvind Jain, CEO of Glean, discussing the importance of AI and data in creating value. Glean's product is built on AWS infrastructure, using Bedrock LLM engines and agent architecture for AI agent development. The key theme is the integration of AI into modern applications, making it as ubiquitous as databases. Inference is seen as a core building block, enabling developers to leverage AI capabilities seamlessly. Glean's platform connects with enterprise data to offer a permission-aware, horizontal AI layer for the enterprise.
>> Welcome back everyone to theCUBE's coverage here. As we wind down day zero, this is the first day of four days of CUBE coverage here at AWS re:Invent, our 12th year. We watched Amazon Web Services grow from the tinkerer place where you kind of put your own stuff together, and if you were a founder, you wanted to start a company, like renting out your living room or your bedroom: Airbnb, Dropbox, all the tsunami of that wave grew on the cloud. Now a whole other level with data and AI is going to create massive value. And one of those entrepreneurs here, multiple exits, a very successful entrepreneur, Arvind Jain's been on theCUBE before, CEO of Glean. Hot company, doing a lot of cool stuff in the enterprise area as well as just providing great value. Arvind, great to have you back on theCUBE. Last time you were with us at the NYSE. Not as good a backdrop here, but we're in this little cubby in the press area. Again, it's our little secret cove, but we're doing our part.
Arvind Jain
>> Thank you for having me.>> Give us the update. AWS, obviously you partner and you got some news with Amazon. Your customers, they want to find their data, they don't care where it's stored. I mean, talk about the relationship you have with AWS.
Arvind Jain
>> Yeah, so Glean, we built our product on top of the AWS platform. We are very multi-cloud, so we run on other places too, but the AWS infrastructure is actually critical to our technology and how it's built. Specifically, right now some of the new things that we've added is utilizing the Bedrock LLM engines as well as their agent architecture to tightly integrate Glean with Bedrock, so that end users can actually build really amazing AI agents either directly on the Bedrock agent framework or building those agents within the Glean no-code app builder platform.>> Yeah. And for the folks who haven't seen the video on YouTube, check out on theCUBE, we did a big piece talking about their technology. We can dig into it a little bit, but I think what's leading into this event is the anticipation around Matt Garman's keynote. I wrote a post where he sat down with me before the event. He said, "AI should not be viewed as a separate entity but as an integral component of modern applications. This allows their strategy to embed AI functionality seamlessly in the services, making AI capabilities as ubiquitous as databases or storage systems." And then the quote is, "I actually think they're just applications," he said. "Inference is the next core building block for AWS. It's about integrating into the fabric of what you're doing, data workflows, S3, EC2, databases all working together."
Previously he said, quote, "The way that we're building generative AI software will be completely reinvented," because the transformer brought things up from kind of the old neural network stuff and levels up everything. Okay, we were using kind of the stuff from the 80s and 90s. Now new stuff comes on top, and now new software's coming on top. So what he's teasing out is that there's new software coming to take advantage of this, and this correlation to databases is interesting. So if you're a developer, I mean, a database is like your lifeblood. You live by your database.
Arvind Jain
>> Yeah.>> So if inference becomes a new thing for developers, like what Serverless did,-
Arvind Jain
>> Yeah.>> Was an amazing thing. Inference could be that unleashing point of developer value for things. What's your reaction to that, his perspective, and what do you expect to hear from the keynote? And just if you believe that to be a core service building block, I should say not a service, a core building block like compute.
Arvind Jain
>> Yeah.>> So inference.
Arvind Jain
>> Yeah. Well, he's absolutely right. AI is going to be a core capability of every product that you buy in the future. I mean, it's as simple as that. We will expect every product that we buy to be smart. Right. And that smartness in those products is actually coming from those core AI capabilities that you are building within your product. And so he's right that inference becomes one of those things where, if you're a startup building a new product, you actually think from day one about how do I build the UI, the interfaces, in a way that those core capabilities of reasoning that you get from LLMs are part of your stack. And we see that all the time with our product too. If you think about our product, we didn't actually set out to build an AI application. We actually were first solving the problem of people can't find anything in their work lives. So we built a search product, and we were able to actually use inferencing as a core part of our overall product technology.>> Yeah.
Arvind Jain
>> Right. And that has allowed us to actually build a much better search and question and answering product,->> Yeah.
Arvind Jain
>> Now, instead of just surfacing the right information to our users when they're looking for information, we're able to actually answer their questions using all of their enterprise knowledge, by combining the power of inferencing with the data integrations and the access to enterprise knowledge that we have.>> You live in the valley, so you know how it is. For the folks that don't live in Silicon Valley, you go to parties and people talk about stuff. So I was talking to someone about your company Glean, like, "Well, John, why do you like Glean so much?" I go, "Well, they're not a search engine. I mean, they say they do search, but it's a whole nother thing now. It's not just search, it's finding what you're looking for, which is kind of search-like, but they do so much more." So now, why I bring that up is I want to get your thoughts, because you come from a search background at Google, but search is a core thing that people do. They search, an application will search for data, either automated, machine to machine. And I said to someone, "Remember back when Google started, you type in a keyword, it basically was a spell checker for you. Did you mean this?" So that was actually some reasoning it was doing, saying it learned from misspellings that you meant that word.
Arvind Jain
>> Yeah.>> And again, it's a trivial example, but it shows that early days of organic search, those things were being worked on, probably your group.
Arvind Jain
>> Yeah.>> But that brings the next question of it's doing work on your behalf for a better discovery navigation experience, whatever you're doing, what you're looking for. Are you looking for this? Did you mean this?
Arvind Jain
>> Yeah.>> If you take that to now, with all the data available, you're moving on from the enterprise search thing that you've done. Now you've got a whole nother layer. You built this from day one. There's intelligence behind it, reasoning, reinforcement. So there's a lot more capability than, did you mean this?
Arvind Jain
>> Yeah.>> It's much more. Take us through how you see this next progression because agents is going to be working on subtasks, vetting data. If inference becomes this building block, a lot of stuff can be done on behalf of the user, the application, the database. Take us through how you see this piece because if this happens, then new software will be written that we've never seen before.
Arvind Jain
>> Yeah, absolutely. So I think first let's talk about the search application itself. You're right that search has been the largest AI application in the world for more than two decades. But thinking about the search product itself, we can already see how it's actually becoming more capable every day with the power of these LLMs and inferencing. Right. Because number one, you come and ask a question, we are actually answering that question right away for you. We're not actually making you go through a 30-page document and try to find that answer within that document. LLMs can actually reason and sort of go one step further to help you. But if you think about our platform overall, the fact that Glean has connected with all of the enterprise information, and it has understood what information is accessible by what people inside a company. It understands people and their roles. It understands the knowledge at the semantic level. It understands the connections between them. All that building of a deep understanding of the enterprise has now actually allowed us to offer a platform. It's an AI platform that we offer to our customers as well as our ISV partners, like other SaaS applications, where you can actually tap into Glean to build all kinds of interesting AI applications where inferencing is again the core value. Like, connecting, I think you always have to remember it's inferencing on something, of course. Right. So when you think about the Glean AI platform, what we are delivering is the combination of access to your enterprise data in a safe and secure way, coupled with the power of inferencing, which can then serve as a really powerful horizontal building block for any AI application that you're going to build.->> Yeah. He mentions things like EC2, S3, data workflows, because if you have the intelligence to be smart enough to find something,-
Arvind Jain
>> Yeah.>> You could be smart enough to know which LLM to use. For example, LLM routing is now being discussed. I read a couple of papers on that in the past couple of weeks.
Arvind Jain
>> Yeah.>> This becomes abstracted away if it's just a function call, right? It's like inference. I'm a developer, just make it work.
Arvind Jain
>> That's right.>> I mean, I'm hand waving, but that's conceptually what you're driving to, right?
Arvind Jain
>> That's right. The AI platform has to be easy to use. I mean, as you said, it's a utility. You don't know with S3 what's happening behind the scenes. All we get is a simple API. I can put some data in that system. I can get that data back. Similarly, when you think about the AI platform, it's about, well, I should be able to send a command, to send a request. I should get the data back. Now, what LLM was used behind the scenes, what made sense for my current request?>> Yeah.
Arvind Jain
>> The system should pick the small model that is fast and quick and cheap, or determine that it's a complex task and it needs to use one of the bigger ones.>> I'm smiling because I'm kind of a data geek. One of my degrees was in databases back in the 80s, but back then the concept was data about data.
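The routing idea sketched in the exchange above, a small, cheap model for simple requests and a bigger model for complex ones, can be illustrated in a few lines. This is a hypothetical sketch only: the model names, the keyword heuristic, and the threshold are invented for illustration, not Glean's or AWS's actual routing logic.

```python
# Hypothetical LLM-routing sketch: route cheap, simple requests to a small
# model and complex ones to a large model. All names are illustrative.

SMALL_MODEL = "small-fast-model"        # hypothetical model id
LARGE_MODEL = "large-reasoning-model"   # hypothetical model id

def estimate_complexity(prompt: str) -> float:
    """Crude proxy: longer prompts and reasoning keywords imply complexity."""
    keywords = ("explain", "compare", "analyze", "plan", "why")
    score = min(len(prompt.split()) / 100.0, 1.0)
    if any(k in prompt.lower() for k in keywords):
        score += 0.5
    return score

def route(prompt: str, threshold: float = 0.5) -> str:
    """Return the model id the router would call for this prompt."""
    return LARGE_MODEL if estimate_complexity(prompt) >= threshold else SMALL_MODEL

print(route("What time is the keynote?"))  # -> small-fast-model
print(route("Compare the cost tradeoffs of GPU vs CPU inference."))  # -> large-reasoning-model
```

A real router would also weigh latency budgets and per-token cost, but the developer-facing contract stays the same: one call in, one answer out, with model selection abstracted away.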
Arvind Jain
>> Yeah.>> So we're living in a world where there's data about data, and so you have agents, and so this is what's happening. It's a data world where learning about the relationships between the data, it's not just a database thing, it's everything. So this is now almost a melting pot of analysis. It's kind of challenging. So as someone who's an Amazon partner, I'm sure you probably agree with that, but if I'm an Amazon customer, I'm not used to that. I did a lift and shift. Now I'm expanding to some high-level services. I want to then get my apps going. How are you helping those customers? If I'm the Amazon customer, what's my Glean solution look like? Take me through that, because I want to make the data work.
Arvind Jain
>> That's right.>> And I want,-
Arvind Jain
>> Yeah. So as an AWS customer, first you'll actually buy Glean and you'll run Glean within your AWS instance, within your VPC. And what you're doing is you're connecting Glean with all of your enterprise data and knowledge. So if you're an organization, you may be using systems like Salesforce and ServiceNow, Workday, Confluence, GitHub. Right. So you connect all of those systems to your Glean platform. Now you have this horizontal AI data layer where all of your enterprise data has been brought together. Glean has built an understanding of entitlements and permissions. It also understands who the people within your company are, what information they actually make use of versus not. And once this layer is built, now you can actually go about building different enterprise AI applications. You get one by default from us, which is a ChatGPT-like assistant, but more powerful because it can answer questions using company knowledge. But you're actually getting a platform which allows you to actually now build automation into a variety of different business processes. For example, you may want to automate how, when people file IT tickets in your enterprise, you get AI to actually answer most of them. How do you get AI to answer most of the HR tickets that your employee base is filing? Or make AI come and work on tickets that customers are actually reporting back to you. The list is endless. Think about your engineers,->> It's process.
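The permission-aware data layer described above can be sketched with a toy example. Everything here is hypothetical, the connector names, the ACL shape, and the `search` helper are invented to show one idea from the conversation: retrieval is filtered by the querying user's entitlements before any model sees the content.

```python
# Hypothetical sketch of a permission-aware enterprise search layer.
# Documents carry ACLs mirrored from their source systems; retrieval
# filters by the user's group memberships before anything is surfaced.
# All names and structures are illustrative, not Glean's real API.

from dataclasses import dataclass, field

@dataclass
class Document:
    source: str            # connector it came from, e.g. "confluence"
    text: str
    allowed_groups: set = field(default_factory=set)  # ACL from the source system

@dataclass
class User:
    name: str
    groups: set

def search(index: list, user: User, query: str) -> list:
    """Return only documents the user is entitled to see that match the query."""
    q = query.lower()
    return [
        d for d in index
        if d.allowed_groups & user.groups and q in d.text.lower()
    ]

index = [
    Document("confluence", "Q3 roadmap: ship the new search UI", {"eng"}),
    Document("workday", "Q3 compensation bands", {"hr"}),
]
alice = User("alice", {"eng"})

for doc in search(index, alice, "q3"):
    print(doc.source, "->", doc.text)
# Only the Confluence doc prints; the Workday doc is filtered out by its ACL.
```

In a real system the ACL check happens at the retrieval layer, so the documents that reach the LLM for question answering are already limited to what that user could have opened in the source system.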
Arvind Jain
>> Exactly. It's all these processes. Engineers, like, well, you want to make sure that your code quality is high. So whenever somebody uploads a new code commit in GitHub, you want AI to actually work on that and be the first reviewer of that code. Make sure that your style guides, your standards are being met. So all these applications that you can now build, and you can build them seamlessly, because you don't have to worry about how to bring all that data together and then sort of figure out how to actually make inferencing work on that data to power these applications. All of that stuff is already done for you through the Glean platform.>> I was joking with Dave Vellante. We were kicking it around on our CUBE podcast, every Friday we do. Check it out, CUBE pod, plug. It's like DevOps, it was infrastructure as code. It was like, oh yeah, and DevSecOps. That created cloud, APIs and all that good stuff. Now we're in business as code, because you look at the value proposition, it's more productivity.
Arvind Jain
>> Yeah.>> And then Dave says, "Well, we won't need developers." Like, okay. That's a good comeback, but let's take that through. Okay. Business as code means I could be a user and not have to know SQL, to say, give me a SQL query to find the database, or just command with a prompt to do something when I'm searching for an answer. I get that, but the role of the developer still will be around. So in your vision, what's the role of the developer as this progression of the business as code concept happens, where real people are going to start acting like developers and interacting with data and computing services? The real developers, what do they do? Do they go down to the machine level? So there still will be coding. Yeah, you got coding assistants that do debugging and the heavy lifting, boring stuff that nobody wants to do unless they like debugging, which I haven't met anyone who loves debugging, unless, whatever, I'll go there. But coding.
Arvind Jain
>> Yeah.>> I got coding assistance. So I'm accelerating my coding.
Arvind Jain
>> Yeah.>> What are they going to be working on? In your mind, how is the role of developer because they're going to unleash the value, they're going to write the new code,-
Arvind Jain
>> That's right. Well, I mean, see, first you have to think about, in the new world, you're making everybody a developer. Developer doesn't mean that you have to write C++ or Java code. You're building stuff. Right. And AI is doing one really good thing, which is it is actually giving the power to a business user who doesn't know how to code. You're allowing them to actually build complex applications, do complex data analysis, by just expressing those things in English. So think of them as the sort of new developers that are actually going to be coming on board, building interesting applications without necessarily knowing the traditional programming languages. And it's no different from how you think about developers today. We were talking about two or three decades back, when I started coding, I was actually writing code in assembly language and,->> Registers.
Arvind Jain
>> Yeah. And I was doing that, right,->> Word dumps.
Arvind Jain
>> Yeah. And in that time there were other developers who were actually writing code at a higher level. Right. Somebody was in C, somebody was in Java.>> Yeah.
Arvind Jain
>> And somebody was in Python, and somebody only did HTML and web scripting. So there have been levels, right? You basically have developers at all levels of complexity. And what I'm saying is that now you're adding one more layer on top, which is a developer that actually only writes in English or a natural language. Right. So that's one way to think about it. But now,->> There'll be tiers of developers, because some people go low level to get value.
Arvind Jain
>> Exactly.>> I mean, like memory. On a thread, I was kidding, but I wasn't kidding. I was serious, because there was a discussion on a Madrona thread, one of the managing directors, and we were talking about memory,-
Arvind Jain
>> Yeah.>> Memory of your prompts. I'm like, oh yeah. Remember back in the 80s, when memory management was the squeeze and you swapped memory out because you had limited memory?
Arvind Jain
>> Yeah.>> But the memory he was talking about was AI doesn't want to refactor the GPU cycles.
Arvind Jain
>> Yeah.>> So offloading is becoming a new thing for hardware. Everyone's talking about acceleration, but no one's talking about offload. Why would I want to waste my GPUs to pick up my train of thought, so to speak, in my open chat?
Arvind Jain
>> Yeah.>> Because why would I want to waste GPU cycles?
Arvind Jain
>> Yeah. And if you think about it, like, if you take all the developers in the world, it's a reverse pyramid. On the top, now you'll have the non-coders, who can actually build applications by expressing those applications, yeah, in natural language. And as you go down, ultimately you also have people who are actually going to be building these models, people who are going to be actually squeezing that last bit of compute power out of the GPU.>> Yeah.
Arvind Jain
>> Right. And building these efficient models, building efficient inferencing. So yeah, so you'll have the full range. But what I think is that a lot of developers will start spending more time on higher and higher level abstractions. They'll have coding assistants that can actually write the actual code for them, but,->> That's great insight. I'll tell you why. Because one of the things we've been riffing on is there are people going to that level to squeeze out more performance. Because, like I was saying to someone at the NYSE, a financial person, like, "John, I don't understand." I go, "Remember high frequency trading?" Like, "Oh yeah, put the data center as close to the trade as possible so I can get an edge to get the trade in." I go, "Yeah, that's exactly what I'm talking about. The high frequency value of having that data in the model is an insight opportunity."
Arvind Jain
>> Yeah.>> That's why they're doing it, and one, performance. And also, if they're model builders, the competitive edge for the value is low level, but that's not the average person.
Arvind Jain
>> That's right.>> This is what's happening.
Arvind Jain
>> Yep.>> Okay. Well, you guys are doing great. You just raised, how much did you guys raise total as a company?
Arvind Jain
>> We raised a little over $600 million so far.>> Yeah. So you guys got plenty of dry powder growing. How's business? Good.
Arvind Jain
>> Business is growing very fast. Yeah. We are fortunate to be at the time where the technology that we spent the last six years building is something that everybody's very eager to actually try out, and it also serves as a core foundation for any AI application that you're going to build in the enterprise.>> What are your goals for 2025? What do you got for next year, more go-to-market expansion, more technical innovation? What is your plan for next year?
Arvind Jain
>> So all of that. We're significantly growing our go-to-market team, and we're hopefully going to double our R&D team as well. The other thing that we're also focusing on is the international expansion side. So we have some efforts right now in Europe and Asia, and we're going to just significantly expand those as well. Overall, we find ourselves in a place where we have a really good product. We have a lead in the market when it comes to work AI and enterprise data connectivity. So you got to sort of,->> Keep sustaining that pace.
Arvind Jain
>> Yeah.>> Don't look back.
Arvind Jain
>> Yeah.>> Keep everyone in the rearview mirror.
Arvind Jain
>> Yeah.>> I mean, these super cycles, you want to extend the lead as much as possible. That's the goal.
Arvind Jain
>> Absolutely.>> In the first couple of years of these super cycles, the winners are decided. Arvind, thanks for coming on theCUBE. Appreciate you.
Arvind Jain
>> Thank you so much.>> Okay. I'm John Furrier here, wrapping up day zero. We'll see you tomorrow. The big day kicking off. Thanks for watching.