Metabob revolutionizes AI code analysis and optimization through innovative applications of cutting-edge technology. In this insightful session, Dave Vellante of SiliconANGLE Media hosts Axel Lönnfors, chief operating officer at Metabob, at the Rosewood for theCUBE + NYSE Wired event. Lönnfors discusses advancements in AI code analysis, providing a glimpse into Metabob's use of graph neural networks to streamline code optimization and refactor substantial legacy systems.
The Metabob platform leverages AI by integrating graph neural networks with large language models, effectively modernizing and detecting anomalies within extensive codebases. Co-hosted by theCUBE Research, the discussion explores how Metabob’s capabilities assist companies, ranging from government agencies to Fortune 500 firms, in managing their technical debt. Lönnfors details the enterprise-driven approach and the journey towards achieving product-market fit.
Key insights from the conversation include the importance of accurate anomaly detection and automated fixes for maintaining operational efficiency. Lönnfors emphasizes Metabob’s unique position, highlighting its focus on preserving code context to prevent issues such as 502 errors. They assert that customer satisfaction and value delivery remain the company's guiding principles, steering Metabob towards greater integration into AI-driven development environments.
Abraham Alvarez, Verkada
Clips from this session:
- Enhancing Safety: Verkada's Advanced Technology Solutions for Integrated Surveillance, Access Control, Alarms, and Environmental Monitoring
- Streamlining Security: Addressing Market Demand for Integrated Systems and Enhancing Event Management through Effective Communication and Insights
- Importance of AI and Cloud-Based Solutions in Modernizing Security Infrastructure for Better Responsiveness
- Revolutionizing Intelligent Systems: Integrating Edge Computing and Cloud for Enhanced Safety and AI Performance with Proprietary Technologies
In this enlightening video from theCUBE series at NYSE Studios, Abraham Alvarez, Vice President of Product at Verkada, shares insights into the convergence of physical artificial intelligence and robotics. This discussion is part of theCUBE's exploration of how AI transforms security and safety in physical environments.
Alvarez, a leading expert in product development at Verkada, discusses the innovative approaches Verkada takes to enhance safety and security. Under his leadership, Verkada focuses on creating interconnected systems that enhance com…
Keep Exploring
- What is the focus and mission of Verkada as a company?
- What are the challenges customers face with fragmentation in security systems, and how is AI helping to address these issues?
- What are some examples of recent incidents that highlight the need for improved security technology?
- What advantages does a company gain by having full stack control over its cameras in the context of addressing customer needs in rapid modernization and accelerated computing?
>> Welcome back everyone to theCUBE's coverage here at the NYSE CUBE Studios. Of course, we have our studio in Palo Alto, California, connecting Silicon Valley tech with Wall Street and money as the physical AI robotics series kicks in. We're continuing coverage of leaders building out the physical-digital convergence of our world. Physical AI is hot right now; as agentic builds out, physical's right behind, but certainly robotics is there. Abraham Alvarez, VP of product at Verkada, is here. Thanks for coming on, and congratulations, you guys just closed a new round of funding this month: $700 million total, a $5.8 billion valuation, a billion and a half or so increase, a step function up. Congratulations.
Abraham Alvarez
>> Thank you, John. Excited to be here. Thank you for having me.
John Furrier
>> You guys are doing some pretty cool things. We love the edge piece; we've been teasing that out. It's going to be a big part of our coverage. You guys play a big part of the hybrid distributed computing architecture, with computer vision, with software, with cloud. So cloud meets AI is coming super fast. Set the table. You're the VP of product. You've got the keys to the kingdom. What is the product? What's the value proposition?
Abraham Alvarez
>> Yeah, absolutely. So Verkada is a company that really focuses on solving a super important problem in the world today, which is safety in the physical world. And the way that we're building and trying to solve that is that we are making the communities where we work and where we live safer. And so we're building products that make all of these places safer to live in, safer to be in. And we started with cameras at the very beginning. And then from there, we started to expand into things like access control, right. Being able to open like door locks and we expanded into alarms and environmental sensors and intercoms, workplace, you name it. Essentially, we are building the operating system and connecting all these things together to make it really easy for our customers to be able to better secure their places and their facilities.
John Furrier
>> Talk about the physical aspect of this because there's a security piece, there's a safety piece, but there's also the proliferation of cameras. You see Ring doorbells being used as evidence on events that people want to get data on. You're starting to see cameras that have been on light poles and streetlights. These have been kind of one-off siloed. I don't want to say siloed, but they've been basically more of an IOT point solution. As we get more connected. How do you guys see that piece of it? Because you're starting to see more of a distributed computing paradigm for cameras and software. Explain this nuance 'cause I think this is a transition we're going to see with certainly computer vision.
Abraham Alvarez
>> I think you hit the main problem and the main challenge that a lot of our customers are facing, which is that fragmentation tax. For decades, security meant logging into five different systems just to try to understand an event. And those days are over now. Our vision is to be able to manage the perimeter from the inside of the building all the way to the perimeter of that building and make it super easy to be able to understand what is going on with a single click. Things that used to take like months or weeks of investigations, we've made it really, really simple through the use of AI and other tools for our customers to get value from. And I think that fragmentation that you mentioned is a really big challenge that everyone is facing where they're trying to have that unified view, but they're finding challenges when they're going through separate companies or different products that don't communicate well with each other and not working in tandem to give them those outcomes that they so want.
John Furrier
>> You know what's interesting is, obviously on the news, there's heavy coverage of incidents where, if they had good camera safety, things could have been avoided. But also post-activity, whether it's a horrific event or whatever, we saw that places like Brown University had a similar challenge. You see the collateral damage of not having the data in time. And this is where I think AI could be a real value. At AWS re:Invent this year, you saw AI being much more tailored to domain-specific data. You're starting to see now the thinking around, "Hey, we could actually use data differently." And obviously computer vision has its own challenges on the ingestion of data. This is a technical opportunity and a problem to overcome. Talk about how you see that, because if you get the architecture right, the unification of data, the availability, that could also be part of certainly agentic capabilities and added-value AI capabilities. Talk about this piece of it, because you're starting to see real-world examples where people are starting to scratch their heads saying, "Look, we need a better solution."
Abraham Alvarez
>> Absolutely. I think it's a little bit unfortunate that the products that we are building are becoming so important in the world today. You mentioned some of the headlines that we've seen over the last couple of weeks or the last couple of months, right. We're all familiar with the heist at the Louvre, right. Two guys just coming in and taking millions of dollars from a museum with an outdated system, something that could have easily been prevented. We saw that unfortunate, tragic killing in an office in New York, right. There's organized retail crime rings that our retailers are struggling to figure out how to face and react to. And then there's been the countless senseless shootings. And so all of these headlines, I think, underscore just how needed better technology is and how customers really should look at modernizing their security infrastructure to make sure that they can better respond to the threats that they're seeing today.
John Furrier
>> When you look at the technical problems, you mentioned you guys are cloud-based, but also cloud is also hybrid. So you're seeing on prem and edge, which we love. We can talk about that until the lights go out tonight. But talk about the architecture of hybrid and why you guys have the solution that works across because the data domains and the data estates or the data gravity will be dependent upon what the workflow looks like. For example, when you've taken video images in, that's fresh data, hasn't been trained yet, but certainly you can do inference on it. You're starting to see the interplay between the AI models and then what's available in say the database or the architecture. How do you frame that? How do you talk to customers who are on this rapid modernization or accelerated computing path?
Abraham Alvarez
>> Absolutely. And honestly, I think that's actually our unfair advantage, which is that we have full stack control of our cameras. And what does that mean? Well, that means that we actually design our own cameras from scratch. We're going in and looking at the, not just the outside, but also the inside, choosing the chip set, choosing the imager, the processor. We write the firmware on all of our devices, which means that we have full control of the device on that edge. And because we selected the top of the line, AI processing chips for our cameras, that gives us the power and the ability to be set up not just for the AI that exists today, but the AI that's coming in the future that we may not even realize what that's going to be. And one of the big things that we actually found is that when you start to run these big models, we're all familiar with the LLMs or the VLMs and all the large language models that are so popular today. We found that when you're running these really big models on the cloud and you're simply taking in a simple video stream and running on that, it actually doesn't work very well. But what we found is that when you're actually running smaller models and you're running that directly on the edge, and by that I mean they're running directly on the camera itself where they are installed and you combine that with the big models in the cloud, that's where we've actually seen the best outcomes and the greatest accuracy for all of our customers. And because we've designed our own cameras and because we have the low level control of the firmware right and how we're taking in those pixels, we can really optimize our cameras to provide the best possible outcomes for our customers. 
And that's why I think that's one of our big unfair advantages that we have over folks that are either focusing on purely trying to create as many camera SKUs as possible or folks that are on the other side that are honestly some of the more innovative folks that are really trying to focus on the AI and the tooling around how do we give you actionable insights from this data, but they have zero control over the camera, they're just getting a generic video stream. And being able to combine those two is I think where the magic really happens and where we can actually make a huge difference. And our customers have been able to realize that value as a result of coming on board and using our system.
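The edge-plus-cloud split Alvarez describes (small models filtering on the camera, large models in the cloud analyzing only what gets escalated) can be sketched in a few lines. This is a hypothetical illustration of the pattern, not Verkada's implementation; every name and threshold below is invented.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    camera_id: str
    motion_score: float  # 0.0-1.0, produced by a cheap on-camera model

def edge_filter(frames, threshold=0.6):
    """Run the inexpensive on-device check; keep only frames worth escalating."""
    return [f for f in frames if f.motion_score >= threshold]

def cloud_analyze(frame):
    """Stand-in for the heavyweight cloud model (e.g. a large VLM)."""
    return {"camera_id": frame.camera_id, "label": "person_detected"}

def pipeline(frames):
    # Only frames the edge model flags ever cost cloud compute or bandwidth.
    return [cloud_analyze(f) for f in edge_filter(frames)]

frames = [Frame("lobby-cam", 0.9), Frame("lobby-cam", 0.1), Frame("dock-cam", 0.7)]
results = pipeline(frames)
print(len(results))  # only the two high-score frames reach the cloud
```

The point of the sketch is the division of labor: the edge model never needs to be accurate, only cheap and high-recall, because the expensive cloud model gets the final word on everything it forwards.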
John Furrier
>> Well, it's obvious you guys got validation: the funding numbers, obviously the valuation and the success. If you look at the success of this AI wave, all the winners, if you go back just say three or four years, have engineered down to the silicon level. Basically the alpha coders would go deep, lower in the stack, to not only squeeze performance but also tie in the software stack. NVIDIA has been doing this for over a decade with CUDA. Now you've got CUDA-X. I mean, that's obvious in the AI side of it, but that's the trend. Speak to the product roadmap that you guys have. What's next? Because we think the edge will hyperconverge with wireless spectrum, both unlicensed and licensed. You're going to see probably AI factories sistering up against a wireless mesh transceiver-like capability, which will bring bandwidth, intelligence, and an AI factory-like capability to devices connected to that kind of hyperconverged edge. Take us through, first, do you believe that? And two, what does your roadmap look like as you guys continue to have that low-level software stack integration?
Abraham Alvarez
>> I absolutely believe that. And I think, again, I think that's where the difference is going to be. The folks that are just simply trying to use generic video streams and trying to make sense of those are going to be very limited because they do not have that low level control directly from the chipset or the image quality or the tuning of the device itself. It's really bringing all those different pieces together, that full stack control that I was mentioning from the chip all the way to the cloud and really optimizing that entire flow, that's what really is going to create the biggest difference. And I feel that that's going to be the expectation of a lot of customers when they're expecting these things to work. I think there's a lot of hype today with AI in terms of like what things can and can't do. And one of the biggest things that we really focus here and that I push on my teams is it's really all about actionable data, right. If it's actionable, our customers get value from that. We don't want to have just a checkbox feature. We don't want to create all these features that nobody's really using. Let's really focus on the features and products that are going to create the most value for our customers and make sure that they're actually using them and getting that value and that's how we win. And so the only way to do that is by being able to bring in that convergence that we were talking about earlier of optimizing the actual device itself on the edge and every process in between all the way to the cloud to be able to provide those outcomes for our customers.
John Furrier
>> Well, you guys clearly have the North Star easy to use and deploy, make that highly functional through the intelligence, which is awesome. I have to ask you about kind of where the puck is going or connecting the dots. As you look at some of the architectural shifts, let's just take in agentic, you're starting to see things like certainly on large scale AI factories that we cover, which kind of is adjacent to the physical AI and robotics, the idea of networking becoming a very core fabric of connected systems, KV cache, Dynamo for NVIDIA, you see, like I mentioned, wireless spectrum, whether it's spectrum or unlicensed for backhaul, the wireless and networks can do different things. It might coordinate between radios and cameras, integrate with models, maybe create a mesh backbone between radios. These are things that we're seeing. These are new emerging capabilities. How do you see that affecting your edge devices on the roadmap? Do you see them becoming more smarter, tokenizing? What do you see that next wave coming technically? Because these are things that are under development. There's a lot of reference implementations of how to take advantage of more intelligence in a network configuration.
Abraham Alvarez
>> Yeah, absolutely. And I think that's where the entire industry is moving, where it's no longer about having a single product. It's about how that product really operates and works together to give you that actionable information. And usually that's going to require very, very heavy integrations or some things that just are not going to work across the board. And so that's where we really focused on: if I'm a user, how do we make it as simple and as easy as possible to get value from the system? I like to say to our team, if it requires training, let's go back and revisit that to make it easier, so that anyone without training can get value from our system, because again, that's how we're going to be able to go ahead and win. And it's really what you were talking about earlier, right, around how do we make sure that we are starting to bring in different sensors, different nodes? You talk about architecture, but as we look at the entire building, and I know you guys are covering a lot of robotics and things like that, it's almost as if you're creating the building as a robot, where all of the pieces of the building are now starting to work together in tandem and you can add intelligence on top of that so that your systems can give you those actionable insights. And what do I mean by that? Well, earlier on, as I mentioned, we started as a company with a couple of cameras, but now you start to be able to pair those cameras with your access control, right. Being able to open a lock, open doors, close doors, get that information around what's being opened, whether people are tailgating, you can actually start to do some really interesting things. You pair that now with environmental sensors to be able to have better visibility into manufacturing, for example, right. Even formaldehyde and different things that are happening, you can actually do some really interesting things that you could never imagine before.
And that's only because you're bringing in all those things together and allowing them to work for you instead of just being those separate silos. And I think that's the future of where this industry is going. And I think trying to do that or build that with all the different companies and all the different pieces is a really, really tough proposition because that data sharing or even the API platforms are not as robust as they should be. And that's really the main reason why we actually ended up starting to build all that stuff ourselves because we'll play fine with everyone, right. I mean, we'll take in any third party camera or anything like that and try to make it smart, try to use some of that AI that everybody loves from our end and try to bring that capability onto them as well. But it's really when you bring those things together that we see that magic happen and we think that that's, again, one of our competitive advantages is that it just works. Our customers know that when they come in, they start using the products, the building actually starts working for them instead of them having to spend weeks, if not months, trying to configure all these systems, trying to get them to talk to each other and not really getting the results that they expect.
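The tailgating case Alvarez mentions (pairing a camera's person count with access-control badge events) is a concrete example of this cross-sensor correlation. Here is a minimal sketch of that idea; the data shapes, field names, and five-second window are all assumptions for illustration, not Verkada's actual API.

```python
def detect_tailgating(badge_events, camera_counts, window_s=5):
    """Flag door openings where more people entered than badges were swiped.

    badge_events:  list of (timestamp, door_id) badge swipes
    camera_counts: list of (timestamp, door_id, people_seen) camera observations
    """
    alerts = []
    for ts, door in badge_events:
        # Sum the people the camera saw at this door near the swipe time.
        seen = sum(n for (cts, cdoor, n) in camera_counts
                   if cdoor == door and abs(cts - ts) <= window_s)
        if seen > 1:  # one badge swiped, more than one person entered
            alerts.append({"door": door, "time": ts, "people": seen})
    return alerts

badges = [(100, "door-a"), (200, "door-b")]
counts = [(101, "door-a", 2), (201, "door-b", 1)]
alerts = detect_tailgating(badges, counts)
print(alerts)  # only door-a trips the alert
```

Neither sensor alone can raise this alert: the badge reader saw a valid swipe and the camera saw people walking through a door. The signal only exists once the two streams share a timeline, which is the "building as a robot" point.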
John Furrier
>> It's a great product, great architecture. Love the platform. I love the building aspect of it, but you guys are not only doing good work in just buildings, such as cameras and doors and alarms, other safety things, you're making the physical environment smarter, more intelligent and safer, but it's not just buildings. I saw some things with fleets, anything with a sensor you guys can play with and play into that world, buses, anything in the physical world. Where is the limit right now for you guys in terms of the product? And you guys can hit anything that's connected to the internet or got connectivity. How do you guys look at your space in terms of customers? 'Cause tracking buses is one thing, doing buildings is another, but is that just agnostic to you? How do you look at that physical world because essentially you're physical AI, which is everything?
Abraham Alvarez
>> We are, and that's where it's super important to make sure that you maintain the focus, because if you try to do everything, you end up sometimes not doing anything well. And so we are really hyper-focused on making sure that we are securing people and places in a privacy-sensitive way. And that means that our main customer is really a customer that has many distributed facilities or buildings, and they're trying to secure that and have that unified view. For those customers, I think we can be the best in the world in that space. And that's where we've been really focusing on how do we bring all of those things together to be able to give them that best possible experience. And it's really only through that customer feedback, right. Our customers talk about the challenges and the pain points that they have with trying to manage their facilities and security and the disparate systems, and we continue to expand and provide some of these additional capabilities and different verticals that we've been going into as a result of, again, just trying to provide that value for our customers. Now, where that limit is, I don't know. But I know that we've been, again, really hyper-focused on the actual buildings and the facilities themselves. Although now, as you mentioned, we're starting to get into buses, just because a lot of our school systems have been clamoring about how they are really missing a tool that can integrate with their internal and outdoor security cameras and let them see all of that in a single place. And so as a result of that, we're starting to get into some verticals that traditionally we would not have gotten into, but at the end of the day, it all comes back to securing people and places in a privacy-sensitive way. And that's our mission, that's what drives us, that's what allows us to make the decisions in terms of, should we go into this market or not?
And at the end of the day, we look into: is this something that we can be the best in the world at? Right. 'Cause there's a lot of players out there, there's a lot of companies, and they're all doing really great things, but the biggest question is, can we do it better? Can we enable our customers in ways that do not exist today? And does bringing those things together with the things that we already have make one plus one equal three, right, the sum greater than the individual pieces? And if the answer is yes, then of course we're going to be interested in looking into that, because there's obviously a lot of opportunity in this industry.
John Furrier
>> So when I say fleets, that would maybe play into ... We used to have a term back in the old days called the extended enterprise, and that was essentially connect to a remote office or do something. But if I had, say, a distribution warehouse, I could secure that. And then if I have a fleet that's part of that enterprise, that would be a fit for you guys, right?
Abraham Alvarez
>> Absolutely. We are fortunate where we're across industries, right. It's not just schools, it's not just workplaces, right. And we're actually fortunate that over a hundred of the Fortune 500 companies use our product today. We have over 17 offices around the world and we're continuing to expand just because of the result of our customers seeing the value and we're in hospitals. Anywhere that you've really frequented, right. Even concerts or big stadiums we're in as well. Anywhere again that you have those challenges and those problems of, how do I secure the facility and the folks that are inside and how do I bring in together all these different sensors through AI and software to be able to get those actionable insights? That's where we really shine. And that's where, again, I think that we can absolutely be the best in the world. And that's the reason why we've been fortunate that we've been able to expand from the US to worldwide to be able to go ahead and do some of these cool things that we get to work on.
John Furrier
>> Well, I really appreciate the mission, great experiences and the safe environment, love the mission. I have to ask you a final question. What's the coolest things you're working on right now that you can share? Give a little taste of some of the cool projects or stuff you're building.
Abraham Alvarez
>> There's so much stuff. So if you haven't gotten a chance to take a look at the unified timeline that we just recently announced, I'll definitely urge you to do that because this is where people think this is science fiction, right. This is what our customers have been asking for, forever, which is, "How do I have a unified view of an individual coming into my premises from the time they arrive on the edge of my property to the time that they go inside and leave?" And what we end up doing is creating this thing that you simply just go into the video and you click on a person or you click on a car and that automatically gives you all the unified clips of that person, not just across a single camera, but across your entire fleet. And that brings it together with a map to be able to show you where that individual or that person was seen to give you that investigative thing in literally one single click. Now there's an insane amount of AI and technology that went into creating that, but the value is that it's super easy to use and it's now accessible to all of our customers to get value from that. And so you can see that that's sort of like the foundation and the building block that's going to enable us to get where we're going, which is how do we, again, bring all of these things together and provide you additional value so that you don't have to spend all this time trying to put all these things or disparate pieces into one cohesive story.
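The unified timeline Alvarez describes (one click on a person or car pulls every clip of that entity across the whole camera fleet, in time order) reduces, at its simplest, to a filter-and-sort over entity-tagged detections. The real system plainly does far more (re-identification across cameras, map placement), but the core query can be sketched as follows; the detection fields here are assumptions.

```python
def unified_timeline(detections, entity_id):
    """Return every clip of one tracked entity across all cameras, time-ordered."""
    clips = [d for d in detections if d["entity"] == entity_id]
    return sorted(clips, key=lambda d: d["start"])

# Hypothetical per-camera detections, each already tagged with an entity ID
# by an upstream re-identification model.
detections = [
    {"entity": "person-7", "camera": "gate",  "start": 30},
    {"entity": "person-7", "camera": "lobby", "start": 95},
    {"entity": "car-2",    "camera": "gate",  "start": 40},
    {"entity": "person-7", "camera": "hall",  "start": 60},
]
timeline = unified_timeline(detections, "person-7")
print([c["camera"] for c in timeline])  # gate, then hall, then lobby
```

The hard AI work is producing the stable `entity` tag across different cameras and angles; once that exists, the one-click investigation is exactly this kind of cheap query.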
John Furrier
>> Great example of the software stack working with the devices, rolling it up with some AI. Super cool. Abraham, thank you for coming on theCUBE, part of our physical AI and robotics series. Again, the edge is going to get more functionality, so more and more cool things should be popping up and keeping us safe. Appreciate your time.
Abraham Alvarez
>> Thank you for having me. It's great.
John Furrier
>> All right. I'm John Furrier, host of theCUBE, part of the NYSE Wired program, a CUBE original here, part of the NYSE Wired community, an open community of leaders working together to create a great future. Thanks for watching.