Lalit Kundu, co-founder and chief executive officer at Delty Inc., and Massi Genta, principal and chief executive officer of Metabob, join Howie Xu, chief AI and innovation officer at Gen, during theCUBE + NYSE Wired: Robotics & AI Infrastructure Leaders 2025 event to explore how AI is streamlining software development. The conversation highlights how intelligent assistants and code refactoring tools are changing the way engineers work.
Genta shares how Metabob targets debugging and legacy code challenges, while Kundu details how Delty’s AI staff eng...
Keep Exploring
What is Metabob and what does it do?
What are the advantages of using Metabob in addition to Cursor for dealing with complex code bases?
What role does Delty play in the software development process and what challenges does it address?
What is the distinction between the roles of junior engineers and staff engineers in the coding process?
What will the role of humans in coding look like in the future?
What role does Delty play in the technical decision-making process and how does it compare to traditional methods of architecture planning?
What are the main challenges and opportunities in the current landscape of AI development in software engineering?
>> Hello everyone. This is Howie Xu, chief AI and innovation officer at Gen. We do a lot of things. Among other things, we do the AI-native browser. If you are watching this show, send me a note. I'll give you a free activation code. But today, I'm so glad to talk to two entrepreneurs. In the last few days, we have been talking to founders and CEOs who have raised hundreds of millions of dollars in the past. Now, you are the two smallest startups, but rising stars: Massi from Metabob and Lalit from Delty. So good to meet you.
Massi Genta
>> Yeah, good to ->> Massi, why don't you start? Tell us a little bit about what Metabob does.
Massi Genta
>> Yeah, so thank you for having us. My name is Massi. I'm the CEO of Metabob. We're an AI tool that helps developers debug and refactor code. We specialize in legacy code. So whenever there is a large amount of code and very unique use cases that LLMs alone are not able to fix, refactor or generate new features for, that's where our tool comes into play. It can be deployed both on-premises and in the cloud. In terms of integrations, we're on VS Code, but obviously if you use Cursor or other code generation tools, most of the time you can use Metabob to just->> Well, I can't help asking immediately. If you have Cursor, why do we need Metabob?
Massi Genta
>> Well, I guess we get there immediately. Cursor is an amazing tool for code generation, but for people who use it, you notice that when you deal with complex code bases, not just large ones but very unique use cases, just because of the way LLMs function as sequential token predictors, they struggle to create new features or to identify things in the refactoring process. They kind of tend to get into loops. And so you usually need the developers to look into that and try to resolve the problem. That's where we come into play.>> So especially when there is a large code base?
Massi Genta
>> Not just large. It's not just the size, but more the unique use cases. That's also why right now we tend to function very well when it comes to legacy code, because with legacy code it's not just the size that matters, it's also the fact that it's very outdated. LLMs are mostly trained on modern code, so when they look at old code or something that doesn't fit their training, they again tend to get into loops and have a hard time figuring out how to build those features.>> So pretty much you want it to be the steroid for Cursor?
Massi Genta
>> We can say so. We act in the background. We provide different types of inputs to LLMs. We use a mix of graph neural networks and LLMs, but the GNN itself enables us to read code as a computer would. So our input is really the code property graph, which is something that LLMs don't do.
LLMs look at code the way humans write code, while we don't look at code in that way. We look at code as a computer would look at code, and that enables us to understand how data flows through the project, understand the full architecture, and what's more likely to change over time. As for why Cursor is not able to do that? Again, amazing tool, but our point of view is that the future of coding should be a multimodal application. It can't just rely on LLMs alone.>> Okay. We'll get to the technical side a little bit more.
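To make the code-property-graph idea concrete, here is a minimal sketch of what "reading code as a graph" can look like, using only Python's standard library. The graph construction and the simple def-use heuristic below are an illustration of the general technique, not Metabob's actual pipeline or the input format of its GNN.

```python
import ast

SOURCE = """
def total(prices, tax):
    subtotal = sum(prices)
    return subtotal + subtotal * tax
"""

def build_property_graph(source):
    """Toy code property graph: AST structure edges plus simple
    def-use (data-flow) edges for assigned local variables."""
    tree = ast.parse(source)
    edges = []  # (src_node_id, dst_node_id, kind)
    # Structural edges mirror the syntax tree
    for parent in ast.walk(tree):
        for child in ast.iter_child_nodes(parent):
            edges.append((id(parent), id(child), "ast"))
    # Data-flow edges connect a variable's definition to later reads
    defs = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.Assign):
            for target in node.targets:
                if isinstance(target, ast.Name):
                    defs[target.id] = id(target)
        elif isinstance(node, ast.Name) and isinstance(node.ctx, ast.Load):
            if node.id in defs:
                edges.append((defs[node.id], id(node), "dataflow"))
    return edges

edges = build_property_graph(SOURCE)
dataflow = [e for e in edges if e[2] == "dataflow"]
print(len(dataflow))  # 2: 'subtotal' is read twice after its definition
```

A model that consumes edges like these sees where values flow, not just a linear token stream, which is the property Genta credits for handling code that LLMs alone loop on.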
Massi Genta
>> Yeah. Sorry about that.>> Let's actually get to Lalit. Tell us about Delty. And in particular, I'm curious, because you just recently graduated from YC, right?
Lalit Kundu
>> Yeah.>> Congratulations on raising that round.
Lalit Kundu
>> Thank you.>> But there are so many companies out there already: Cursor, Augment. Even me, I've interviewed so many founders in the last few days. Why did you come into this field? But start with: what do you do?
Lalit Kundu
>> Yeah. So Delty is an AI staff engineer teammate. It helps you design systems, and it guides your junior engineers and coding agents. I want you to think of software development in three big buckets. Right in the middle is your coding part, where you change this verbose and esoteric syntax, you modify it, you create new syntax. Then there are activities to the right of code, which is how we build, deploy, test, and safely roll this out to our users. And then there's this big problem on the left, which is, hey, what code should we be writing? Just figuring out what software or what code we should be producing is going to be a big, big bottleneck to producing 10X the amount of software with the same number of people. The bottleneck is not producing syntax that compiles and works->> Well, it is the bottleneck today in the software industry, but you want that to no longer be the bottleneck.
Lalit Kundu
>> It won't be. It pretty soon won't be. The problem is going to be, can you decide what to build? Can you align on the architecture? Can you figure out how the product requirements will be satisfied? What technologies, frameworks, APIs, data models do you need to define? And all that work is not as simple as retrieving and re-ranking 25 code files and passing them to an LLM. I'm oversimplifying what Cursor does, but humans play a big collaborative role to the left of code in figuring out what code will be produced. And that's why we are betting big on having a teammate, a staff engineer that understands your architecture and code base, but in addition is fully aware of how your team does engineering and how it is evolving. Because a lot of this work happens in meetings, documents, whiteboard sessions and group chats. This is not the kind of work that happens in a code editor.>> So in your model, you talk about three segments. One is the what to build. The middle one is how to build it. And then you are saying that how to build is going to be free or frictionless in no time. And then there's another portion to operationalize it, to make sure things are working. So those are the three portions. So the middle one is what you're attacking, in the sense that you see that the staff engineer's job can be automated?
Lalit Kundu
>> Yeah, staff engineers do the middle part only 20% of their whole time. I was a staff engineer myself at YouTube. I was leading a team of 45. Including the other 10 engineers that I was leading, we all spent less than 20% of our time writing code or reading code. I was responsible for two million lines of code. I was telling people every day how those lines of code should evolve. I had never looked at those lines of code. I'm not capable of reading two million lines of code in my time, but that's the kind of job staff engineers and senior engineers do, which is mainly to the left of coding and a little bit to the right of coding as well, but not the coding. The coding is the job of a junior engineer, which is going to get automated away pretty fast. LLMs->> Wait a second. You said junior engineers, but at the same time you are saying that it does a staff engineer's job. How can I reconcile the two statements?
Lalit Kundu
>> Yeah, so junior engineers produce code, right? Staff engineers decide what code should be produced.>> Got it. So Delty is not automating away staff engineers, but automating away the junior engineers' job. That's how you look at it.
Lalit Kundu
>> Yeah. So we are augmenting all that work that you do to the left of code. We help you brainstorm different architectural choices. We'll help you make the right framework and data model choices, figure out how you can build the product that your product manager or your marketing team needs, and then we seamlessly hand off to coding agents like Cursor or Claude Code or Devin, for example.>> So you said you were a staff engineer at Google, you led 45 engineers, many of them maybe like L3, L4, L5, right? Level 5 engineers. What's going to happen to them once you have Delty everywhere?
Lalit Kundu
>> We are starting with augmenting. There's a reason staff engineers get paid $700,000 a year. You cannot replace them. You cannot fully automate them away when you have systems that are serving billions of users and making you billions of dollars. So there's going to be a phase where you start by augmenting. Pretty similar to how coding agents augment the work of a programmer, we are starting early, we have a head start. We are augmenting the work of a senior and staff engineer. But over time, to answer your question directly, every human engineer is going to evolve into a senior or staff role; the kind of work every human engineer will do looks like the job of a senior or staff engineer today. So everyone's going to go through this rapid cycle of upskilling where they're doing high-leverage work rather than the laborious job of producing syntax.>> So today's L4 engineers, you think you are going to promote them to be senior staff engineers once they have tools like Delty. That's what you're saying?
Lalit Kundu
>> Yeah. I don't know. I wouldn't claim that they would get paid as much as staff engineers, but the job will evolve to look like what I did at YouTube pretty soon for an L4.>> Cool. So Massi, you were talking about how your stuff is actually complementing a lot of those tools, the Cursors of the world. Now you've heard that the middle part, writing code, is going to be free or almost free. Where does Metabob fit into that framework?
Massi Genta
>> Well, actually, from what he was talking about, we have a similar point of view. Obviously, the hardest part when it comes to coding, and the part that's harder to automate, is really understanding the architecture, the design of the application. It's not necessarily writing the code. That's already mostly automated, and we're going in a direction where it's going to be fully automated. But just as he was mentioning, the challenge is understanding how that application is built. And how to build an application fully automatically, that's where we come into play. The goal in the future is to just provide straight guidelines in English to an agent, and then the agent will build the entire application. We're pretty far from there because, again, just as I was saying in the beginning, LLMs right now, especially for unique use cases or for complex applications, tend to go into loops, and most of the time you need an engineer, a staff engineer or senior person, to look at it and try to support it. In our case, to tell you a little bit about our approach: as I was referring to earlier, our tool looks at the code through the code property graph. We really look at how a computer would look at code, not in the form of text like LLMs do, but in the form of a graph. And that enables us to understand, first of all, what the most relevant areas of the code are, and to be able to predict what the next iteration of the code will be. An LLM, when you give it a task, doesn't really think ahead; it just looks at completing the task. We are a predicting tool. So we help them predict what will come next and what areas of the code are more likely to change over time, and that's really what->> So once the Cursors of the world, or Delty in the future, have that sort of additional enrichment data, they can do a better job?
Massi Genta
>> Yeah. Our goal is to enable them to build full applications just by providing guidelines in plain English. We are not an agent, but just as you say, we are supporting agents. It's a tool running in the background. You can see us almost like a database, but not just a large base->> Because with your graph neural networks, you have kind of richer data than a large language model is able to get.
Massi Genta
>> It's just different. I think we complement LLMs' weaknesses very well. I wouldn't say richer or not richer, it's just different. It's not token-based training. We look at different types of data than an LLM does. And so really what we send to the LLM, our nodes, is just to complement what we strongly believe are LLMs' weaknesses as of today, which prevent them from really building applications from scratch, right?>> Right, right, right. It's like the self-driving car is not perfect today yet. If you have a navigator, if you have the map, it can complement it, before the fully autonomous coding agent is perfect, which is still some time away.
Massi Genta
>> Yeah, of course. We help direct LLMs to exactly what to look for. That's exactly the way to look at us. Whenever LLMs struggle, specifically with unique use cases, and get into this loop I was talking about, we tell them what code areas they need to look at and to prioritize. Because LLMs cannot do that properly on their own. So yeah, that's the way we do it.>> So Lalit, you probably would agree, right? Cursor or you or those autonomous coding tools are not perfect yet. They're still some time away. But what's your vision? When do you think a coding agent is going to be really good enough that this middle thing, coding, is going to be free? When is that going to be?
Lalit Kundu
>> I think pretty soon, from what I've seen. I am not an expert in that. I would ask Michael from Cursor how far along they are. But if the laws of scaling hold, we are seeing the reasoning capabilities of foundation models getting better over time. So it's only a matter of maybe a few months, or six months to one year, before we see a wild increase in the capabilities of these agents. That's what we are betting on, in fact.>> With that, like you said, ask Michael and he's probably going to be bullish. Why do you enter this race at this moment? Because Cursor is everywhere, it's the talk of the town, there is Augment, there is Factory, there are so many other players. So why do you enter this race right now?
Lalit Kundu
>> Yeah. The answer goes back to 2008, when BlackBerrys were all the rage. They had these physical keyboards and they were the top of the market. It worked really well for a use case that people had at that point in time. But the disruptor's advantage that Steve Jobs had was he could take a huge leap into the future and design for the future. And that's what we are doing. We are betting on a future where human engineers do not open a code editor. They do their work collaboratively with other humans and machines in meetings, documents, group chats and whiteboards. And if you look at the current UX of all these tools, like Windsurf, Cursor and so on, it's primarily optimizing a flow that exists today, and that's the problem incumbents have. That's the problem BlackBerry had. You have millions of users, you have $500 million ARR, you need to satisfy the needs of today. And that's the innovator's dilemma: it doesn't allow them to take a huge leap into the future. Whereas someone like us can build something that will not only augment the flow of today but is ready for the future that will happen.>> In that future world, just for me to understand: Cursor is still optimized for developers, you still need the editor, but for you, you don't have an editor anymore.
Lalit Kundu
>> So humans will not use a code editor in the future. Two years down the line, the amount of time every software engineer spends in a code editor is going to be 20% of the time. You will have these copilots, which are staff engineers on your team that you go talk to with a problem at hand. Like, you're a product manager, you go to Delty, you're like, "We want to build this feature. Tell me what it'll take. What are all the right decisions I need to make?" Delty makes the decisions for you and then coordinates a team of coding agents to build the software under the hood. 20% of the time, things break down, and you have this expert, very lean team that goes and takes a look to figure out what is going on. But most of the time, humans are not going to be opening a code editor as they do today.>> So what's the skill set needed to drive Delty, your product, in the future? What's the skill set? Well, you need to know the language, right? The English language, for example.
Lalit Kundu
>> Yeah.>> What else do you need to know?
Lalit Kundu
>> Yeah. You need to understand code as architecture. You need to not-
Howie Xu
>> Do you need to have computer science 101, sort of the concepts?
Lalit Kundu
>> Oh, you're talking about people that interface with Delty?
Howie Xu
>> Yeah, yeah.
Lalit Kundu
>> Yeah. I think you need to be technical. You need to understand the implications of...
Howie Xu
>> Trade-off, latency versus-
Lalit Kundu
>> Performance. You also need to know your product and business really well. A lot of technical decisions are made based on what you think the product will need one month down the line, six months down the line. So we are going to see the evolution of this engineer-PM hybrid role, where you're a little bit technical, able to interface with a staff engineer like Delty, but you also have a really good sense of the product and the business needs and what the users want. How do you distill what your customers need to be able to interface with the staff engineer? So you're like the bridge.
Howie Xu
>> So the skill set I need is some technical skill set and then some product sense, a combo role, and then I'm going to drive Delty and get a lot out of it.
Lalit Kundu
>> Exactly, exactly. But right now, we are augmenting engineering teams. We have product managers using Delty as well, as sort of this deep technical expert on their team that understands the overall architecture, what the technical roadmap of the team is, and so on. But over time, that is exactly the evolution.
Howie Xu
>> Right. So yes, you talk about how a lot of the tools today still need a code editor, but it is also a trend that many of those tools have started moving toward English as the prompt rather than the code editor. From that point of view, are you all converging in a similar direction?
Lalit Kundu
>> There will always be an aspect of what is the scale of software you're building. If you look at tools like Replit, Lovable and so on.
Howie Xu
>> Replit's president was right here.
Lalit Kundu
>> Yeah. Yeah.
Howie Xu
>> And with Replit, English is the interface.
Lalit Kundu
>> Yeah. It's really great for building small-scale software apps. But when we talk about the scale of YouTube, as an example, you have 500 microservices. That's just one simple team of 100 engineers. At that scale, Replit by itself will not be able to operate. You need to augment human engineering teams. Right now, it is a full replacement for an engineering team; it's built for non-technical staff to be able to build software. And AI agents are quite far from that. There will always be a need for a human engineer to be able to make the right technical decisions.
Howie Xu
>> So you still need people who are good at computer science to drive Delty, right? Making decisions, working with the bots, right? Replit is going towards a different market. Knowledge worker, simpler software, but you are trying to get Delty to build a next generation YouTube.
Lalit Kundu
>> Yeah, enterprise-grade software. That's what we are going after. Our customers have been around for 10 years or so. Serious enterprise-grade software. That's where you need this sort of omnipresent staff engineer. I couldn't have kept up with the amount of software and product evolution that my team went through every week, but Delty can. That's the advantage of these AI agents.
Howie Xu
>> Okay. Let's switch the topic slightly. Within your own companies, you are producing these AI agents for coding, for debugging, but what do your engineers use? Have you already gotten to a point where engineers are not writing code but supervising the bots, or what stage are you at?
Massi Genta
>> Just as you say, I think the junior engineers on our team definitely rely a lot more on AI agents, like Cursor, and obviously Copilot. They're quite used. From a senior standpoint, again, it's a lot of review of those. So most of the time is spent-
Howie Xu
>> Well, they were doing reviews anyway. But before, it was human-generated code, and now it's bot-generated code.
Massi Genta
>> Yeah.>> So would you say that overall, for your engineering staff, including junior and senior, more than half of the code you generated in the last two weeks was by AI rather than by humans?
Massi Genta
>> That's a good question. Maybe half, maybe a little less.
Howie Xu
>> When do you think it will have moved to the vast majority? What's your crystal ball?
Massi Genta
>> Well, in terms of just extending it to PRs, probably we're already there, really, for the most part.
Howie Xu
>> Technology wise, we're already there.
Massi Genta
>> Yeah.>> But in terms of your team?
Massi Genta
>> In our team, that's what we're working on right now. I think we're doing it in a pretty unique way, because again, just as I was saying earlier, you don't have as much data that you can feed into LLMs in some cases, like for us. It depends what you're building. It really depends on the application you're building. For the most complex ones, LLMs still tend to struggle a little bit when it comes to generating proper code. For more simple-
Howie Xu
>> But you are moving towards the vast majority being written by AI in what? This year or in a year or two or...
Massi Genta
>> Well, this year already, we're getting there. In the next few years, as I was saying, it depends. For building full applications, we're still a little far from it, in my opinion. Like, several years down the line. We see agents getting better and better, substantially better. But obviously, most of our developers use our tool to debug LLM-generated code as well. Again, we need to get to a point where it's not just LLMs. I feel like we went through a time when LLMs were very new, and in the first couple of years, every time there is a new technology, there is a general feeling and sentiment that it is going to be able to do everything. People were like, okay, OpenAI or any single agent is going to, just by LLM, solve all the world's problems, really. But now we are at the point where, after a couple of years, we've seen leaders in the space starting to identify the weaknesses of LLMs and trying to tackle those weaknesses with, again, a multimodal approach, which is clearly the future. So we need to get to that point, and now the next step to get to what you're saying is: what models, what applications are going to complement LLMs better, and how do you build that architecture? And that's where we come into play. That's what we've been working on for the past several years. Before LLMs, if you talked to anyone in the space, they would tell you GNNs were the talk of the day. Then all of a sudden no one cared about them anymore. And now, people are getting back into that.
Howie Xu
>> It's a pendulum. So basically you are saying that with today's AI agents, you still need the steroid.
Massi Genta
>> LLMs alone are not going to be enough. They're amazing for the use cases where they work, by the way, but there are weaknesses. So now it's all about finding the right approach to complement the weaknesses. And it's not just going to be GNNs; it depends on the application, but there are going to be several different models and applications that go along with it. How far are we from that? Again, for simpler applications, probably pretty close. For very complex applications, I do believe it's definitely doable, but we're definitely further away from that.
Howie Xu
>> Lalit, so your company is relatively new, right?
Lalit Kundu
>> Yeah.
Howie Xu
>> Freshly minted YC company. What's the AI agent usage within your own team?
Lalit Kundu
>> Yeah. So I would tell you this: 90% of our code is written by Claude Code and a combination of Cursor. And 90% of the technical decisions around what code to build are made using Delty on our team.
For our customers, or even for us, I'll give you an example. It would've taken us maybe a few days to decide what the architecture is going to look like and how we want it to evolve in the future. But you come to Delty, and it understands your whole architecture already.
Howie Xu
>> So give me two examples of the technical questions you have for Delty.
Lalit Kundu
>> Yeah. So just yesterday, actually, I was brainstorming with Delty. We have sort of a bottleneck where we do code base indexing, and all our retries are in memory. Because this is what happens when you're quickly building an MVP. We needed a more robust approach. Typically, if you go to a staff engineer with this question, they would ask you, "Okay, what are your needs? What does your current architecture look like? What different technologies do you have available? How much time do you have to implement this?" There are a lot of trade-offs to be made. I just, using a voice command, talked to Delty about these things, and it told me four different approaches, very quickly helped me visualize what the new architecture is going to look like and what trade-offs are to be made. And it even recommended an approach that worked for our use case. And then I go to Claude Code and I'm like, "Hey, I made my plans in Delty. We have an MCP server." Claude Code immediately pulls in the whole set of plans that we've made and gets to coding.
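The handoff Kundu describes, with plans made in one tool and pulled into a coding agent over an MCP server, rides on the Model Context Protocol, which frames tool invocations as JSON-RPC 2.0 messages. A rough sketch of what such a request could look like; the tool name "get_plan" and its arguments are hypothetical illustrations, not Delty's actual API:

```python
import json

# Hypothetical MCP-style tool call that a coding agent might send to a
# planning server to fetch an approved architecture plan.
request = {
    "jsonrpc": "2.0",          # MCP messages are JSON-RPC 2.0
    "id": 1,
    "method": "tools/call",    # MCP method for invoking a server-side tool
    "params": {
        "name": "get_plan",    # illustrative tool name, not a real Delty tool
        "arguments": {"project": "indexing-service", "plan_id": "latest"},
    },
}

# Serialize for the wire, then decode as the receiving server would
wire_message = json.dumps(request)
decoded = json.loads(wire_message)
print(decoded["method"], decoded["params"]["name"])
```

In a real session, the client and server would first negotiate capabilities with an initialize handshake; this snippet only shows the shape of a single tool call.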
Howie Xu
>> So in the future, do you foresee your users still using Delty and Claude or Cursor at the same time, or do you see Delty being the one console that people need to use?
Lalit Kundu
>> The latter. I think that is our bet.
Howie Xu
>> But today, it's not there because you just started.
Lalit Kundu
>> Yeah.>> It already makes some architectural decisions for you, at least ones that you get to choose from, right?
Lalit Kundu
>> Yeah.>> And then you go to Cursor or the Augments of the world to code it. And then 90% of the code is already written by AI.
Lalit Kundu
>> Yeah.>> In some ways, I see, having talked to multiple founders of AI coding companies, it sounds like the future is already here, right?
Lalit Kundu
>> Yeah.
Howie Xu
>> I was actually talking to the AWS CEO, Matt, last week. He said, "Well, it's AI. Most of the code, 90% of code, is going to be written by AI." And I asked him, "Are you talking about two or three years, or are you talking about five years?" He said, "Two or three years." At that moment, I thought, okay, he's being pretty aggressive. But based on what you just said, it sounds like two or three years is not ambitious. It's actually going to be what? Later this year.
Lalit Kundu
>> Yeah. Absolutely. The bottleneck is that a lot of engineering leaders are rightfully skeptical. They have a big stake in the game. They need to carefully adopt AI. So for some of the customers that we go to, we slowly roll it out because we don't want to break their existing systems.
Howie Xu
>> So let's close off this session with advice to engineering leaders or executives. What do they need to do? Because clearly the technology is quite good, or almost there, but as a VP of engineering, as a CEO, as an executive of a company, what should they do in order to get to the future faster? What's your advice to them?
Lalit Kundu
>> Yeah, I'll go back to the BlackBerry example. Do not be stuck using BlackBerry as your mode of doing work. You have to play with tools that are clunky right now, because if you don't play with them soon enough or fast enough, your engineers will not re-skill, your engineers will not be productive enough.
Howie Xu
>> How hard is the re-skilling, in your opinion? Because you managed 45 people at Google. Imagine, just for them, their state two years ago versus re-skilling them. How hard? How fast?
Lalit Kundu
>> It's relatively hard. It's hard to teach old dogs new tricks. You need to dedicate some time, you need to play around with it. You almost need the curiosity of a child when it's handed a new toy. Even if the toy doesn't do what it's supposed to, you still play with it, you get familiar with it. And you need to dedicate time for your engineering teams to be able to learn these new tools.
Howie Xu
>> Very interesting. Because every engineer is already busy anyway. Now, you ask them to learn new tricks. It's not easy, it's not comfortable, but once you get out of the comfort zone, you feel like the future is here.
Lalit Kundu
>> Yeah, and you have to get on this gravy train or else you'll be stuck and get eaten up by disruptors like us. Yeah.
Howie Xu
>> Massi, close that off for us.
Massi Genta
>> Yeah, well, I agree with most of it. I think it also really depends on the industry. For instance, my company, we work with governments, we work with banks. And as I was saying, one of the core values of using our tool right now on-premises is that we can debug and refactor most legacy code. But to this point, obviously, you need to be willing to adopt. One of the limitations when it comes to AI, from what I've seen, is companies being willing to provide data to vendors to enable a better tool. We are at the very beginning of it. I can't name them, but we have very large enterprises that we work with, and the process to adopt the technology is pretty lengthy, and especially for big companies, it's a little painful sometimes.
Howie Xu
>> It's not just the model that's the bottleneck; the context is the bottleneck.
Massi Genta
>> Yeah. Things like compliance regulations, and really seeing the value of it. Obviously, again, it's a chicken-and-egg situation. I understand from a company standpoint, especially if you are in a sensitive area, which can be government or banking or finance, you want to make sure that your data is safe and secure. But you need to find, in my opinion, alternate ways to work with your vendors and your partners. Because the more data and the more support you can provide to your partner, the more the value of the AI tool will be able to show. We have some case studies we're going to publish with some of our customers that gave us full access, to really help our on-prem model be trained on their annotations or their code review history, and the results are astonishing. Especially when it comes to enterprise code, which open source code is perhaps not as close to; it's harder to just train general models for it. I know it's kind of a niche suggestion, but I think it's quite relevant right now for companies: try to innovate in that way, try to work with partners, and also increase the level of trust a little bit by implementing compliance measures to just speed up that process.
Howie Xu
>> And now, just to report: in the next few weeks, Metabob is going to release a new version of the VS Code plugin so that it can truly deliver this steroid for Cursor, right?
Massi Genta
>> Yeah. Yeah. So there's not-
Howie Xu
>> The reality is it's people's choice for... providing staff engineer-level technical decisions or whatnot. Thank you, Massi. Thank you, Lalit. In the last few sessions, we've talked to founders of AI coding agent and AI debugging software companies. This is the new world; this is the new world of software development. I think, as you said, the bottleneck is the mindset. The bottleneck is the data. So if we are willing to overcome that, we can actually get to the future very fast. Thank you everyone for listening. This is the AI-native world. If you need AI tools, embrace them. Thank you, everyone.