Sviat Dulianinov, Bright Machines & Benji Barash, Roboto AI & Brian Gerkey, Intrinsic & Lindon Gao, Dyna
Sviat Dulianinov
Chief Strategy Officer, Bright Machines
Benji Barash
CEO, Roboto AI
Brian Gerkey
CTO, Intrinsic
Lindon Gao
CEO, Dyna Robotics
Sviat Dulianinov, chief strategy officer at Bright Machines Inc.; Benji Barash, co-founder and chief executive officer at Roboto Technologies Inc.; Brian Gerkey, chief technology officer at Intrinsic; and Lindon Gao, chief executive officer at Dyna Inc., join theCUBE’s Dave Vellante during theCUBE + NYSE Wired: Robotics & AI Infrastructure Leaders 2025 event to explore the evolution of robotic software and intelligent automation. The conversation covers everything from data infrastructure to foundation models for real-world robots.
Dave Vellante
>> Hi, welcome back to
our Palo Alto studio. My name is Dave Vellante, and
John Furrier is also here. This is theCUBE + NYSE Wired: Robotics & AI Infrastructure Leaders. This is our media week.
We've been going all week. I just flew in, John's
been holding down the fort. We've taken some deviations from robotics, but this time we're going deep into it and really excited to
have Sviat Dulianinov, who is the Chief Strategy
Officer at Bright Machines, and then Benji Barash is Mr. Roboto, the co-founder
and CEO of Roboto AI. Brian Gerkey is the CTO of
Intrinsic, which is an AI and robotics software platform. And Lindon Gao is with Dyna Robotics. They do foundation models for robots. We're going to get into
it, and we're going to focus on the future
of robotics software. Robots are all the rage. We see them all in Jensen's talk and we see them at Mobile
World Congress or MWC, and everybody gets excited. It wasn't that long ago
that robots actually couldn't even climb stairs. We've come a long way in the
last 10 or 15 years, haven't we? What is the state of robotics and robots? It feels near-term, and it's here. Robots that do this are really, really near-term and good. Robots that are humanoid, I'd love to hear from you guys on that. But what is the state of robotics? Why don't you start?
Sviat Dulianinov
>> Yeah, happy to. I think we had a small discussion on this before we started the panel. I think we need to differentiate by use cases and applications. For some things, you see Waymo on the streets of San Francisco, and if you consider it a robot, you're like, oh, that's super impressive, right? It navigates and drives itself. On the other side, you have the humanoids that you mentioned, and I think that's a little bit more nascent. And then you have different use cases, more precise, less precise. I believe the more precise applications are going to take more time to develop to the state where they're going to impress us. By more precise, I mean you can do surgery, you can do assembly of electronics with several-micron precision. The less precise ones, hopefully, are going to move faster. And you can see a lot of, for example, robotic appliances at home that clean everything from windows to the floors. I think we just need to differentiate what we're looking at, and I think the other panelists can add to this.
Dave Vellante
>> And Benji, we're all
expecting a big step up from the experiences that we have today. What's your take on all this?
Benji Barash
>> Yeah, I think it's a
really good question. I always come back to this concept of Moravec's paradox in robotics, where people just have
very high expectations for what robots should be able to do. It's very easy for me to sit here and gesticulate, use my hands
and pick up a mug and drink and do things, but it's
still surprisingly difficult for robots to do a lot of things that we take for granted as humans. There's a bit of a reality gap there. But it's definitely the case that robots now are getting
much better at doing what we would call more general-purpose activities. Things that before might've been hard-coded, where you just teach the robot how to do one single thing well. Now, there's some generalism that robotics companies are able to start baking in, where the robots can do more things, more adeptly.
Dave Vellante
>> Brian, a couple of years ago we were at an analyst conference and Jensen was schooling us. And he said, "Well, there's
only so many movements that a human can make. It's not really infinite."
And that surprised me. I was like, whew, there's
kind of an infinite number of movements that we can make. Because I thought he was
kind of trivializing it, but we might argue with Jensen. What are your thoughts on that?
Brian Gerkey
>> Yeah. I take the point,
there's some limits to what a human body can do and what a human mind could understand. But I think that even if
that's true, that still mapping that onto a robot is a huge task, and that's going to
take a long, long time. I think that there's a lot
of promise in, for example, the humanoid form factor, and there's a lot of promise
in taking those general purpose systems and applying them. But that's going to be a long journey. And I think what's going to be interesting along
the way is figuring out, as a community, as an industry, what are the reusable pieces that are going to fall out as we pursue that? Let's say we take a reusable humanoid robot as our north star, let's just say that's the goal. Then, what we want to have happen as we go there is that we get these useful-today applications. We get the things that fall out that we can go put into
factories, warehouses, eventually homes that are going to do useful things for people today. I think that's the challenge
for us as a community.
Dave Vellante
>> And Lindon, you guys build
foundation models for robots. Of course, you need a model, and it's obviously a specialized model. Let's go around and dig into
what each of you all do. Maybe give us a pitch.
Lindon Gao
>> I think foundation models for robotics are literally the most frontier part of foundation models right now. You start with language models, and then you go into multimodality, which is image and video generation. And embodied AI is a combination of images and videos alongside all the sensors that are interacting with the physical world. This is one of the most complicated parts of robotics research right now. And the really interesting aspect of the state of embodied AI today is that, in general, we lack a lot of data in this space. It's not like a language model or a video model where you could go on YouTube, you could go to Google, and you could scrape the entire web for the data. Right now, embodied AI severely lacks that kind of data. The current state of foundation models is actually understanding what the different approaches are that we could potentially take to extrapolate and generate as much of this data as possible. There are a couple of approaches. First is teleoperation, where humans operate the robot to collect the data in the physical world. There are a lot of people trying the simulation route. Most recently, world models are another very hot topic, where, leveraging foundation models in combination with robot-collected data, we are able to extrapolate and generate additional data that we haven't collected, generating it directly in the world model. All of us are taking various approaches, and hopefully we'll be able to accelerate the growth in foundation models very soon.
Dave Vellante
>> Thank you. Brian, explain
what Intrinsic does. I'm curious as to, when I think of robots, I think about a company like Amazon and I think about supply chain. They have the resources and they build specialized capabilities. My thinking here is this is
robotics for all, not just purpose built for Amazon. But tell us about Intrinsic, and I wonder if you could -
Brian Gerkey
>> Sure. Intrinsic, as you mentioned
earlier, we're building an AI and robotics software platform. I'm a robot software guy. I've been doing this for 25
years in various guises building a lot of open source
software that's been used by a lot of robotics companies. And the way that we think
about at Intrinsic going after robotics is that it is
primarily a software problem. That there's certainly a lot of work to be done on the hardware side, no doubt. At the same time, there's a lot of good hardware that exists today. And there's a lot of... If you go into, if you look at industrial robots, you look at collaborative
robots, you look at the sensors that are needed to do a
lot of even high precision assembly tasks, I would argue in many cases
the hardware is pretty good. And what the missing piece
is the software that lets you effectively and efficiently
build an application. And that's the problem
that we're trying to solve. The way that we think
about our platform is we're targeting an application builder. We want to provide something
to a developer ecosystem where they have the domain knowledge, they understand very deeply
the problem they're trying to solve and the hardware exists, and what they're missing
is the right software tools that let them develop and
deploy that application. What we focus on a lot
is the infrastructure, the developer tools. We also build a lot of first-party capabilities in-house, including foundation models, but we're not provincial about that. We fully expect that folks like Dyna are going to come along with their own great models, and we'd like to be there to help them deploy those into the world.
Dave Vellante
>> Okay. Benji, this brings me to data, which is your wheelhouse. How should we think about the
data that's fueling robotics?
Benji Barash
>> Yeah, absolutely.
- And talk about what your company does as well.
Benji Barash
>> Sure. Yeah. I think
the best way to start that discussion is this: Lindon mentioned before that robotics has a data problem. The data problem he's talking about is that there's a shortage
of good diverse data to train very large
scale foundation models to do generalist tasks and policies. The problem we're solving actually is that there's also too much data right now in robotics as well. A lot of the robotics
companies we work with, they start scaling up
their fleets of robots, and robots very quickly
generate sometimes terabytes of data in just a few hours of operation. And that's because they have
all the sensors on them. There might be cameras, LiDARs, radars, inertial measurement units,
actuators, batteries, all of that stuff is constantly
producing data on these robots. And unfortunately, a lot of the time in the real world, robots don't work. They have issues, they have edge cases. If you work in a safety-critical or reliability-critical setting and a robot fails, it could hurt people. I used to work on the drone project at Amazon. If our drones fell out of the sky, bad things would happen. What we're really trying to
solve at Roboto is robotics companies actually are producing
a lot of robotics data. And they have to be able to
find the problems, they have to be able to analyze it,
search it efficiently. And that's a really new set of problems because, actually, all the
existing data platforms today don't work well with robotics data. They can't work with this very large-scale multimodal sensor
data that gets collected. And this all feeds back into the kind of work that Lindon's company is doing. Once you have your data under control, once you can search it and
analyze it efficiently, you can extract slices of
it to, say, further fine-tune and improve models that you might use.
Dave Vellante
>> Did you work on a Saildrone,
Benji? Was that one of your-
Benji Barash
>> No, no. Prime Air, the drone
delivery project at Amazon.
Dave Vellante
>> Ah, yeah, yeah, yeah.
Okay. Okay. And then Sviat, tell us more about Bright Machines.
Sviat Dulianinov
>> Sure. We've been on the
market for seven years, and we've historically applied software and hardware together, what
we call the full stack. We use machine learning, AI-enabled software to orchestrate and run what we call a micro factory, or a line that builds something, some kind of product. We've always focused on complex electronics assembly. And right now we focus on what we call the AI backbone, or everything that goes into the AI infrastructure piece, the data center. Mostly CPU- and GPU-based servers, networking, and data storage equipment that goes into what Stargate is building, or what Amazon or Microsoft are building. And that is higher-precision assembly. I fully agree, by the way, that robotics is developed enough to be good, but the secret sauce is software. You need to react in a live manner when changes happen. And actually, on the line in real manufacturing production, minor things happen, and humans are not that good at adapting; they make mistakes and they're not as precise. But if you fix that with smart, software-driven automation and really high-quality robotics, you can get your quality to really high levels and do these 75-micron-precision operations, building really high-value items like GPU servers that could cost you $300,000. That's what we focus on. We build servers with robotics and AI-enabled software.
Dave Vellante
>> Do these robots and the software that powers them comprise multiple agents? Or are they the agent itself? How is that evolving?
Benji Barash
>> I can take that. That's
a really good question. It's evolving as we speak right now. In robotics, up until recently, you'd have lots of discrete subsystems on your robot. You'd have a planning subsystem, navigation, controls, perception. And actually, these days the trend is moving towards having an end-to-end system, an end-to-end model that's deployed on the robot and is actually able to perceive and understand the world and actuate and move through it, all with a single model that's deployed.
Brian Gerkey
>> It's a very live question. I would agree with that completely. I think that there was a time
when in the early days of AI and robotics, you would've
built a monolithic system. You would've had a system where
you said, look, we're going to take in all the sensor inputs. We're going to construct
a model of the world. We're going to make a plan, and then we're going to decide
what to do with that plan. That was horrifically
slow and inefficient, and it was almost
impossible to make a robot that could do anything reasonable. That was decades ago. And then, in the '80s into the '90s, people started saying, "Well, what if we construct the software as a distributed system with asynchronous parts that are talking to each other? We can close some fast loops here, close some slower loops up here." And we got much, much better behavior.
Now, we're potentially coming full circle and taking that traditional
decomposition approach and just smooshing it all together into this end-to-end system. That shows a lot of promise, but it's also very hard to introspect. It's hard to know what's
going on inside it. When it does something wrong,
how do you know what it did? How do you validate it? These
are questions that we're going to tackle because the potential
upside of having that end-to-end system, in terms of the performance that you in principle
can get is pretty high.
Dave Vellante
>> You're saying it's a bit of a black box. When something goes wrong, you've got to get some PhD in a lab coat to figure it out. Is that the current state? Is that-
Benji Barash
>> That's what we're trying
to help with at Roboto, basically, because these
systems are going to do a lot of things and you're
going to have to try and figure out why they did them. And understand, if a problem happened, whether it's going to happen again, and whether you can find a way to make it not happen again as well.
Dave Vellante
>> Thinking about foundation models for robots, how hard are these things to secure? And I think of the enterprise,
nobody wants leakage into other LLMs or other models. I want to keep my data for myself. How hard is it to secure these things?
Lindon Gao
>> This is a really interesting question, because securing foundation models is a field in itself; there's actually a foundation model safety community out there focused on securing foundation models. There are two ways to think about it. One is at the data layer: how do we make sure that the data we use to train the model doesn't get leaked? And the second part is that securing also means safety, which is, how do we make sure that your robot arm doesn't just take a knife and stab you? I think ultimately it really just comes down to securing the data. The more control you have over your data, the more security you can apply at the data level, such that whatever data is used to train the foundation models doesn't get tainted with unexpected behaviors. And that's really the most important piece.
Dave Vellante
>> How self-sufficient are they? Do you assume always
that they're connected, or do you assume they're not connected? What's the fundamental principle there?
Brian Gerkey
>> It depends on the use case.
In many cases, you don't get to decide that as the person
building the system, it's going to end up being your customer
who has the requirements on whether they're okay with
it being connected or not. I would assume Sviat has
the same challenge in manufacturing settings. In a lot of cases, if somebody puts something into their factory or their warehouse, they don't want it connected. It's a bug, not a feature, for it to be connected to the internet. They prefer that those systems be air-gapped for industrial security reasons, and they're not used to having continuously pushed updates and a cloud-backed approach. Now, that's starting
to change as they start to see the benefits of what
you can get with a system that is at least intermittently connected. Because then you can start to say, okay, well if it's intermittently connected, when you're connected we'll
give you an update to that model that you're running
inference on at the edge, and that's going to
improve your performance so you can start to
drive in that direction. But in a lot of the places where robots can be
effectively deployed today, and they're not yet, it's pretty challenging in my experience to ensure that you have a constant connection to, say, a cloud system.
Benji Barash
>> Especially in ag tech as well. A lot of the ag tech robotics
companies we work with, their robots are out in a field in the middle of nowhere, and there's no reliable internet connection for those devices.
Dave Vellante
>> When I think of a Tesla or a Waymo, I would presume a lot of that data doesn't go back. Maybe it's 5%. Or, I don't
know what percentage it is. If a deer runs in front of
the car, that's an event and it gets sent back,
otherwise it's ephemeral. Is that the right way to think about it?
Benji Barash
>> Yeah, that is definitely
right. Yeah. But they are probably on the far end of maturity there. Most robotics companies still are not able to determine when something
interesting might be happening on their vehicle or robot, and to be selective about when they should send the data back up to the cloud. You're definitely right that Waymo and Tesla have
got that figured out.
Dave Vellante
>> What do you guys think about Waymo? Is that a miracle that is
happening, or is it not? Because it's this brute force
thing with a bunch of LiDAR, and Elon's trying to create
a miracle with cameras and neural processing units, and there's two schools of thought there. Have you guys been in them?
Have you guys ridden a Waymo?
Brian Gerkey
>> I haven't ridden in a modern
Waymo, I have to tell you.
Dave Vellante
>> You tried it or never tried?
Sviat Dulianinov
>> I tried it. Yeah, I tried it, but it's not only Waymo. There are other things, like Zoox, in San Francisco right now. You can call Tesla half autonomous.
Benji Barash
>> There's Wayve in London as well. They just started -
Sviat Dulianinov
>> Wayve in London. Yeah, we went in a Wayve; it went pretty well. I think it takes a lot of training, and they don't have enough data to train the system. The car itself is the same technology that existed before; it's more about the software that runs the car. I think we can still look at this as a miracle. It was entertaining. You saw robots navigating before that, not out on the streets but in other places, like cleaning facilities or cleaning your house, so this is the next scale. The way I look at it is as the next level of that. It's awesome, but it has existed for some time; it's just getting more developed.
Dave Vellante
>> Waymo is a precursor to
actually a robot that's going to mow my lawn and clean my house and do my dishes. Is that-
Lindon Gao
>> Technically, it's actually
a much harder problem.
Dave Vellante
>> The latter or the-
- Actually the latter.
Dave Vellante
>> Yeah, yeah. I would think so.
Lindon Gao
>> Yeah, because typically for self-driving, you make sure you stay on course, you make sure you don't run into anyone. And I think that part is very difficult to make safe.
Dave Vellante
>> Versus open-ended. I mean, it's-
Lindon Gao
>> When it's open-ended it has
to clean your house, it has to wash your dishes, it
has to fold your clothes, that's when it becomes
really, really difficult. And that's the part where we still need a lot more data to solve it.
Brian Gerkey
>> Don't forget you've
got robots in your house that wash your dishes and do your laundry, they're called a dishwasher
and a washing machine. You've actually got machines
that do a lot of that. You're asking for another
robot that will tend to that existing robot, right? Because the loading and unloading is the part that you don't want to do.
Dave Vellante
>> The folding. I hate the folding.
Lindon Gao
>> Yeah, yeah. We actually
do folding very well.
Dave Vellante
>> Really? Explain that.
How did you train that?
Lindon Gao
>> Yeah. We built a foundation model. Our thesis from day one, actually, when we started building Dyna, was that having a robot that is mediocre at everything is not nearly as useful as having a robot that is good at a couple of things. That's very, very critical. And one of the really interesting tasks that we started out with is soft-body manipulation, because it's inherently a very, very difficult thing to do with traditional machine learning, because a soft body has unlimited states. But interestingly, for foundation models it's actually something that's much easier. And what we have realized is that the most important thing is not just folding a shirt well, it's also the speed at which you fold it and the ultimate quality of the output. And also, when you have millions of shirts out there, being able to adapt to various different kinds of shirts is very, very interesting. And a function of that is really just having a massive amount of data, and a high-quality dataset that you use to train the model. And the model trains itself over time.
Dave Vellante
>> What's the north star you're going for? Are you going for a general-purpose robot that does a lot of different things for individuals, or are they going to be specialized? The laundry robot, the dishwashing robot, the outdoor weeding robot.
Where are you guys headed?
Lindon Gao
>> Well, the general goal for everyone is to get to general-purpose robots. I think that's the most inspiring, the most interesting part. But actually, when you compare robots to language models, a language model is very useful if it knows everything. But a robot is not always most useful if it knows everything; it just needs to know a select few things. In your house, it's just your house chores. In a factory, it might be just assembling a few components. Our general thesis is that being good at a few things, and being very good at them with high throughput and high quality of output, is the most important thing right now as we think about the early stages of landing embodied AI, and that's what we focus on.
Dave Vellante
>> When I think of
software-defined systems, I think well it's just generic white boxes and then the software is
where all the brains are. Is that how we should think about software-defined manufacturing, or is it different?
Sviat Dulianinov
>> It's more about... Look,
manufacturing is a really interesting industry overall. And talking about general purpose, I think that's a great aspiration. My personal belief is we're
going to solve 75, 80% of the tasks with
use-case-based applications and use-case-based robotics, and then 20% with maybe
humanoids and general purpose, because it might just be easier to do it this way. In manufacturing, automation has existed for some time. Think about automotive and how you build cars, but historically it was just pre-programmed. You just sit down, write a program, and it does this repetitively in the same manner, one task. Software-defined means that it adds flexibility. The way we think about the line, we don't program it; we more teach it what the product is, what the components are, and what the action is, or what we call a skill: how you assemble it. And that means when the machine thinks and sees a new server, whatever it is, it knows that it needs to put together CPUs, memory modules, heat sinks, and other things. You don't necessarily pre-program it; you more teach it. And that defines software-defined manufacturing versus standard programming, where you define each step from the very start, and then the next time you need to change something, you need to bring in your team and reprogram the system.
Brian Gerkey
>> I think that's really key. A lot of manufacturing tasks can be automated today. In principle, if you've got enough money and time, you could automate
just about anything. You could build crazy Rube
Goldberg machines which will automate just about anything. And that's what you do see
sometimes in very high-volume manufacturing, you
build totally custom machines because you're going to run
them for a very long time. Where that breaks down is when you've got higher-mix situations, where you want to have variety. Even in the AI server case,
you might have different SKUs that you want to run on the same line. I think that's something
that you guys support. Being able to deal with that kind of variability is something
that is, frankly, only going to come in a cost- efficient way from having
software essentially define what the capabilities of the system are.
Dave Vellante
>> And that has huge implications
for the scale economies. What does the price point have to be for a consumer robot to actually be adopted? I'm sure you're thinking
about it all the time. It's like, remember when
flat-screen TVs first came out, they were ridiculously expensive, and now they're a dime a dozen.
Lindon Gao
>> This is something that we
actually thought a lot about as we were thinking about
building our robots. And generally, where we always end up landing is low five digits for consumer robots to even pick up remotely enough traction. Because when you actually think
about the amount of chores, the amount of things that you
could automate, these tasks on a standalone basis, for example, for a robot that cleans your house. We could also hire a crew that could come and clean our house every week for a couple thousand dollars.
Dave Vellante
>> Got to be that crossover point.
Lindon Gao
>> Yeah, exactly. That's when you start thinking, okay, so if I hire a cleaning crew that comes to my house four hours a day every week for $1,000, and extrapolate that over 52 weeks, you probably land in the
range of mid-five digits. And that's where we think it needs to be.
Brian Gerkey
>> I think where you're going
to start to see these roll out is probably more in situations that are not directly
in your home or my home, but rather in situations
that are like homes. They've got a decent
amount of variability, but you've also got a good
density of people there. And you might have trained staff, so think about more like
facilities, offices, hospitals, other places where basically
you can get more return on a single robot that's being
installed there than you can if you're asking every individual
consumer to buy their own.
Dave Vellante
>> What about on-shoring? There's all this tariff talk and bringing manufacturing back. How feasible is that in the context of how much automation is doable? You hear Peter Navarro say, "Oh, we should build iPhones at home." And then you do the math and it's like, that doesn't make any sense. Where does it make sense?
Sviat Dulianinov
>> I think it does make sense for a number of products. That's one of my favorite topics this year, actually, obviously because we're in the right mix and the right timing right now. But the problem in the US is: yes, you want to bring it back for many reasons, including, by the way, security and IP. Because if you build something as expensive as a GPU server, which is $300,000, you don't want to lose IP to some country in Asia. You bring it here. The question is, you don't have enough people that are skilled; they haven't been assembling things for the last 50 years, so they don't know exactly how to do it. And the second problem is, you don't have the scale of skilled personnel. The scale, meaning you're not in China or India; you don't have a billion people that could do the job, so you have to turn to automation, robotics, and software. And, to our previous discussion, we believe that robotics is really close. You have a number of brands and robotic hands that you can use for assembling and building those products. And then you have to improve and mature the software piece and AI models to enable that robotics to replace a portion of the people, and then upskill the other people who are going to run those lines. I think for products, maybe not like the iPhone, but let's say what we focus on, servers, there is a way to build a new generation of factories here in the United States that are going to be powered by software and by robotics. And that will enable building similar volumes, and actually better quality, here in the US using robotics and fewer employees versus China, Taiwan, or any other country.
Lindon Gao
>> I also agree with this. And this is the really exciting part about foundation models in robotics, which is that, because of foundation models, we're now able to use a lot more low-cost hardware. Traditionally, your typical UR or KUKA arms, cobots, would cost $50K each, and a pair would cost over $100K. Now, with foundation model robots, it's a couple thousand dollars per arm, and that actually unlocks a different dimension of things that we could potentially leverage robots to do. Maybe for high-precision things it might be a little more difficult to use low-cost hardware, but for general things like packaging, folding boxes, even very simple assembly, putting screws into devices, those things are starting to become possible with foundation models. And that part is very exciting.
Dave Vellante
>> Have you guys sized the TAM? It's got to be many, many
trillions of dollars. It's almost just
mind-boggling how big it is. You probably don't care
because it's just huge. But are there TAM figures on this?
Benji Barash
>> Jensen at GTC said it's the next multi-trillion dollar market.
>> Yeah, he did say that.
- What you just said. And he would know, I guess.
Dave Vellante
>> Agentic enterprise software is probably multi-trillion, so I would think that robots could be, and software-defined robots could be an order of magnitude even greater than that, because you're talking about the physical and digital worlds coming together. Let's end with each of you, and we'll start, Lindon, with you. Imagine and share with us your vision of a steady state, where you'd like it to be. And how long do you think it's
going to take to get there?
Lindon Gao
>> What do you mean by
steady state? How long-
Dave Vellante
>> Your north star for your
company and your industry, and what does that look like? And is it near-term, midterm, long-term?
Lindon Gao
>> Well, I think in the very imminent term, which is in the next couple of months, what we're aiming to do is deploy a lot more foundation model robots into production environments to start adding value immediately. And when I start extrapolating a year or two years out, I'm actually extremely optimistic that we could get a lot closer to general purpose than we originally believed. If we look at how rapidly GPT, and OpenAI, has been developing over the last three years, we went from GPT-3 to GPT-4, to reasoning models, test-time compute, optimization, and so forth. And embodied AI is going through the same curve right now. Within the next two years, I think we'll see some major, major developments in the industry. And that's what I'm most excited about: you're not just going to see this in a few applications, you're going to see robots a lot more in the field.
Dave Vellante
>> Brian, share your vision.
Brian Gerkey
>> Sure. At Intrinsic, our north
star is democratizing access to robotics. We want to make it possible
for basically anybody who has a problem that
they would like to solve with some kind of robotic
system, to have access to the tools that they
need to solve that problem. And we think that's achievable
in the next several years. What I'd like to see is for it to be possible for somebody, probably in a business context. They've got a business they're running, they've got some physical
work that they need done. They're probably facing a
labor shortage, which a lot of the customers and partners
we're working with are. And they're trying to figure
out, how do I get this work done given the resources
I've got available? And they're able to take
the existing robot hardware, which we talked about
earlier, is doing pretty well. They can take software, they
can take great foundation models, and they can basically
use the developer tools and the infrastructure, for
example, that we've built. And they're able to put together
that application, test it, confirm that it works well, and then deploy it and operate it. And we should just see
a huge proliferation of these kinds of applications. Because the one thing that
roboticists, if I may say, are pretty bad at, it's figuring out what robots are good for. We're good at building the robots, we're good at building
the software for them. At least, speaking for myself, I'm terrible at picking,
what should the robot do? What I like to do is
put the tools out there and let other people figure that out.
Dave Vellante
>> Yeah, that's interesting.
I remember when I first met Dean Kamen. You guys know Dean Kamen, right? And he had these competitions. And they're like, "What are you going to do with all this stuff?" But he helped get it all started. All right, Benji, presumably you're trying to democratize robotics data.
Benji Barash
>> Yeah, absolutely. All the
robotics companies that we work with right now are going through a lot of the same challenges. They've built some number of
robots, maybe a small fleet, and they're trying to
have a much larger fleet. They want to get their
robots deployed in more environments at bigger scale. And actually, the dirty secret of robotics still is it's
the hardest part to do that. To actually scale up a robotic deployment and have many, many robots actually all working is really challenging. We saw how difficult it was at Amazon. Amazon's got a million deployed
robots at this point in fulfillment centers, and also some last mile delivery capabilities. But just going through that
choke point, that's the chasm where most robotics
companies still die and fail. We're trying to help these
robotics companies go from a few robots, a few deployments,
up to many deployments and scale up with confidence so they build a reliable system.
Dave Vellante
>> Do they have to achieve
Byzantine fault tolerance?
Benji Barash
>> In some cases, it really
depends on the industry. Of course, if they're working in surgical robotics, it's got to work, right? If it's drones, they've got to work, they can't fall out of the sky. If it's self-driving cars, you
don't want to have a crash. If there's a safety and
reliability connotation, it has to be a pretty high number
of nines of reliability. But if it's a Roomba in
your house, if it's a robot that's going to fold your
T-shirts for you, if it's going to load the dishes, the
failure cases may not be as important or as impactful.
Dave Vellante
>> Sviat, why don't you bring
us home with your vision?
Sviat Dulianinov
>> Thanks. Thinking about Bright Machines, our vision is we want to see
a few factories here in the US that are enabled by the
technology that we bring to the table with software
and robotic lines. This is what we call the AI factory of the future. Historically, we've deployed more than 100 lines around the world in different factories, but the vision is that you can run the full factory with it. And I just want to add that it's not only about us; I think there are a lot of other models and companies and robotics that are going to be part of it. We're going to use AGVs, we might use humanoids. Who knows what you'll use when you build stuff? It's going to be a mix. I'd love to see, in several years, holistic work and cooperation among different types of robotics: use-case-based applications, the moving vehicles, the humanoids, producing a great result economically speaking, and great value for the industry and the community here in the US, and in other countries in the future as well.
Dave Vellante
>> The work that you guys are doing is life-changing, it's industry-changing. Thank you for that, and
congratulations on getting your companies up and off the ground. Thanks for coming on theCUBE,
really appreciate it. All right, and thank you for watching. This is Dave Vellante for John Furrier, theCUBE + NYSE Wired: Robotics
& AI Infrastructure Leaders. This is a media week, day three. We'll be right back right
after this short break.