theCUBE + NYSE Wired: Physical AI & Robotics Leaders
Chris Stephens, field CTO of Groq, joins theCUBE's coverage of the NYSE Wired Robotics and Artificial Intelligence Media Week to discuss the company's approach to leveraging AI and robotics technology. Stephens shares insights on Groq's mission to democratize AI inference and highlights its plans for global outreach, focused on scaling and making inference power accessible worldwide.
In this session of theCUBE, Stephens discusses Groq's exponential growth and its impact on the generative AI industry.
>> Welcome back everyone to theCUBE's coverage here at the NYSE. This is our East Coast studio, access point. Some say access point for our subnet here in New York. Of course, we've got Silicon Valley, and there's the bell closing the option market here. Get the bells here, this is what I love about theCUBE. I've got a great lineup here of robotics leaders, AI leaders, all part of the NYSE Wired and theCUBE collaboration. Again, all the experts, got over 30 interviews. Chris Stephens is here, Field CTO at Groq. Chris, great to have you on. We were just-
Chris Stephens
>> Great to be here.... >> riffing before we came on camera. Wish I had the camera rolling. You like the bell?
Chris Stephens
>> Yeah, the bell was great. I feel like they rang the bell for my being here, so thank you very much for doing that.>> I mean, I timed it all wrong. Had I got it right, I would have had you, "And welcome my guest, Chris Stephens," and then the bell would go off. That would have been more timely.
Chris Stephens
>> Or the mic drop at the end, right?>> Well, great to have you on. We love chatting with Groq. Jonathan has been on theCUBE two years ago when Groq was in the inner circle. People knew what you guys were working on. The TPU at Google, we knew about that. Insiders knew about Groq and the industry's learning more. Since then, it's been quite a growth tear for Groq and the industry.
Chris Stephens
>> Sure has.>> It's been amazing. What's the update? You got the one and a half billion dollars, we covered that on SiliconANGLE a month ago. You guys are shipping basically data center sized racks. 747s.
Chris Stephens
>> Basically data center sized data centers. Right, loading up 747s, shipping them to the Middle East. We had that thing up and running in no time.>> All right, talk about what you do. What's your role at Groq?
Chris Stephens
>> Yeah, so I'm a Field CTO, so our field engineering technical go-to-market resources are on my team, and then I'm a liaison with Groq engineering for customers.>> What does that day-to-day look like? Sales with big customers, you're selling to small enterprises, you're big integrations.
Chris Stephens
>> All of those. I mean, the beauty about Groq is it's all of those things. My day-to-day, sure, that's more on the enterprise scale customers, but you've seen the stats on Groq's viral growth and a million developers on Groq we got there last month. We're really trying to bring the power of inference essentially to everybody. This is part of Jonathan's vision about scaling the company is to make inference available to everyone globally. Part of the impetus behind tying all these threads together, right? The work in the Middle East is another region of Groq reaching 4 billion people that are within a reasonable distance of the Middle East for innovation and development. But yeah, I spend most of my time working with bigger customers or helping. We've got a lot of startups.>> What's the engineering liaison? You meet requirements? Do you work with them on certain projects? What's the engineering like? Obviously chips are chips, you guys make fast, it's like very big systems.
Chris Stephens
>> Yes, we do. But on top of that, the primary go-to-market for Groq is Groq Cloud: serving all this inference power by way of an API, a mode that people are familiar with interacting with. My team spends a lot of their time really just helping customers cross the chasm into production. There are three factors that play into why we need deep technical people on the ground with our customers right now. First, the industry is still so new, right? I mean, we're two years into this generative AI industry in large enterprises, so teams are still figuring out what it is they're supposed to do, how to bring these systems to life. Secondly, the customers themselves are just learning. You have data science teams that were working in probabilistic systems 18 months ago that are now trying to bring large language models to life, and what does that mean? How do you monitor these things? How do you ensure quality out of these systems? And so on. And then Groq, we're an overnight success story eight years in the making, we like to say, right? We launched Groq Cloud about a year ago, and as everybody knows, it's been incredibly viral and wonderful for us since then, but that's about a year ago. You have these three factors: Groq's a new commodity, a new concept in the market. Customers are still trying to figure out what to do, and the industry is still new. So these three things are all moving at the same time. My team are highly technical, forward-deployed engineers helping customers. How do I evaluate models? How do I ensure I'm getting quality? Because these things are all moving so quickly.>> What are some of the things you're seeing with the uptake? Obviously, by the way, congratulations on the developers. We covered that too. It's a real sign of success. You can't just buy your way into the engineering developer market.
Chris Stephens
>> Right.>> It's pretty much organic, everyone knows that.
Chris Stephens
>> And it has to work.>> Yeah, they'll tell everybody.
Chris Stephens
>> Yeah, exactly.>> The first ones to complain and also, they'll be loyal too.
Chris Stephens
>> Exactly.>> It's the best thing about developers. If you're good, they're good. What are they doing? What are some of the use cases? What do you see the innovation creativity coming from? Are they implementing it into their systems? Are they using it for their apps? Are they consuming it? Are they producing to it? Take me through the relationship.
Chris Stephens
>> I mean, it runs a really broad spectrum. You can imagine a lot of your Global 2000 are building knowledge management systems, internal chat systems doing search and document summarization. We have customers in financial services. You can imagine the firms that are paying attention to the bell ringing here today, right? The massive amounts of data and information they have. A lot of that is stored in documents that sat in permafrost for decades inside of big companies, and mining across that for patterns and insights. Imagine you're running a large portfolio and you just want to ask questions of your portfolio, right? There was a terrible earthquake in Southeast Asia last week. What's our exposure to industries that might be affected by the earthquake? Just asking your portfolio those questions. We're seeing a lot of that type of use case. Yes, financial services is the example I'm giving, but you can extrapolate that into just about every other industry. That use case is pretty standard. There are a lot of really interesting things happening right now in multimodality. We just launched a partnership with PlayAI to bring text-to-speech to Groq, so now you can have a full voice-to-voice experience using Groq. As a customer, I want to be able to engage, maybe again it's with a portfolio, but I want to be able to engage naturally with voice, and I want to have my response come back to me naturally in voice. So we're seeing, it's really a wide spectrum.>> I mean, it's a wide spectrum with the fact that those numbers, not everyone has the same general-purpose use case, because AI is very much a customized thing for people and their environments, beauty's in the eye of the beholder.
Chris Stephens
>> That's right.>> If I've got an app, I'm doing certain net workflows, I might have a different data model, I might have certain completely different things, but just using the service I think is the key thing. I want to get your thoughts on something that Jensen Huang, the founder of NVIDIA, said to the analysts, only brief, we had one-on-one with him. He said, "Inference people think is easy, but it's not, it's really, really hard."
Chris Stephens
>> Yeah.>> They think training's harder than inference. They always do inference. NVIDIA does training. I mean, I never said that, but I think I've heard people think that, I mean it sounds easier. There's no real training. I don't use a lot of GPUs.
Chris Stephens
>> Yeah.>> So they think instantly cost equals hard.
Chris Stephens
>> Yeah.>> Inference is maybe less costly depending on how you look at it, but it's hard. Can you scope the levels of difficulty with inference beyond just prototyping? We're talking about real inference, scale, maybe it's edge, it's going to be intelligent. What are the levels of difficulty to make it work great?
Chris Stephens
>> If you just think about it from Groq's perspective, though the same would apply for the rest of the inference space: Jonathan and team spent years optimizing at the silicon level, and then up from there, and now we're optimizing at the software layer to make that available. You go all the way from silicon all the way back up to this cloud stack and an API, so that a customer can have, in our opinion, a very easy experience: connect an API and get your inference. But hidden inside of there are hundreds of thousands of Groq LPUs networked together with proprietary physical copper networking and fiber optics on the backend. You get into the hardcore, almost like we were talking before, old-school systems engineering. Again, it's at the chip level and up. For someone my age, I've been at this technology thing for 25 or 30 years or so. You didn't spend a lot of time thinking about that. I'm a data scientist in my background, so you thought about higher up the stack, and now you're really understanding how these systems work, why they're designed the way they're designed, what it takes to put a model onto Groq and the complexity of that, and understanding that we're trying to abstract that away from a customer so they don't have to worry about it. But it is terribly complex.>> I mean, it's definitely accurate. I mean, I think you're right on. First of all, great approach to do that. We complimented them on theCUBE many times, but we've been saying that and seeing evidence that the best AI companies are getting closer to the hardware, the lowest level possible, silicon on chip. I just talked to a great entrepreneur, photonics on the chip, machine learning on the chip. Another company we've talked to this week, and they're hardcore. I mean it's not like, look how much code I just wrote. They write small code, because they're squeezing advantages out of it.
That proves as a shift just because if you look at the PC industry, you had to have hardware, you had to have an operating system like Windows. Then you had applications like Office and Word. Now, Microsoft had a monopoly. They had the system software in Windows and then had the Office suite. We all know that monopoly. Try to get broken up. It did get broken up a little bit, but underneath was all the engineering involved in the motherboard. Dell did a lot of stuff there, HP, IBM. There's that system, hardware. Windows, if you break that down, it's totally over, wrong analogy. But if you compare the concept of hardware in AI era, you guys have to do all that.
Chris Stephens
>> Yeah. Customers have to understand it at least to some level of depth. If you're running technology teams or application teams in big companies, the deployment mechanism now becomes a factor again, there certainly will be applications that run on CPUs and maybe for a long time, if not forever, there'll certainly be applications that run on GPUs. There'll be applications that should run on technology like Groq. I was on a panel talking about quantum a couple of months ago. As that comes online, right, there'll be applications. So now you have this router that your teams have to understand that this is a workload that really belongs here in the stack and this one belongs on Groq and this one belongs somewhere else. I don't think for a while that that's going to be just one unified stack underneath like we saw in the last 15 years with the cloud, everything was x86 and we abstracted all that stuff away. Like your laptop analogy, right? We abstracted enterprise tech away, and now you have to get back into that level of depth again to understand that if you're going to be successful.>> That's why Dave Vellante and I were talking about this all the time on theCUBE, AI factories is working. I mean that positioning and mental model, oh, I get it. I should have a factory of my business to produce outcomes. But then they go, how do we do it? I have all this pre-existing stuff and cloud's getting a lot of the AI workloads. I mean Jensen Huang said that, people want it as a service.
Chris Stephens
>> Sure.>> But when you want to go in the enterprise where the IP is, data, I mean crown jewels are in the data.
Chris Stephens
>> That's right.>> That's got to be on-prem, not going to see anything not off-prem if you have a choice.
Chris Stephens
>> A lot for sure. But I mean, again, Groq's primary go-to-market mode is through Groq Cloud. There are ways in which that can either be on-prem or mimic an on-prem deployment, but for us it's about bringing massive-scale, parallelized inference compute.>> Okay. So how do you serve an enterprise? Through your cloud?
Chris Stephens
>> Generally through the cloud, that's right.>> So what's their onboarding look like? Take me through a day in the life. I'm a customer, "Hey, I want to use Groq."
Chris Stephens
>> I mean, the beauty is, this will sound like marketing speak, but it's actually not marketing speak. The API that we have is modeled after the OpenAI API. It's three lines of code to switch. Imagine you've built this complex application, on-prem, in the cloud, however that might be. You've got your RAG embeddings and your vector DB, all this complexity. In there are inference calls that happen somewhere today. In order to switch those to Groq, you're calling the Groq API, it's almost that simple.>> Do I have to change any of my pre-existing configuration, or am I just calling Groq for inference?
Chris Stephens
>> You're calling Groq for inference. You're not moving any data. Groq doesn't manage or store any data.>> By the way, if I'm doing vector embeds a certain way, do I have to use your embeds for inference?
Chris Stephens
>> You don't.>> Okay, that's interesting. I did not know that.
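The "three lines" Stephens mentions come down to the endpoint, the key, and the model name, since the Groq API follows the OpenAI chat-completions shape. A minimal sketch of that switch, with an illustrative model ID (check console.groq.com for the current catalog); no request is actually sent here, only built:

```python
# Sketch of the OpenAI-to-Groq switch. The endpoint paths follow the
# OpenAI-compatible convention; model names are illustrative examples.
import json
import urllib.request


def build_chat_request(base_url: str, api_key: str, model: str, prompt: str):
    """Build an OpenAI-compatible chat-completions request (not sent)."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url=f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


# Before: the application points at OpenAI.
req = build_chat_request("https://api.openai.com/v1", "sk-...", "gpt-4o", "Hello")

# After: identical application code; only the endpoint, key, and model
# name change -- the payload shape stays the same.
req = build_chat_request(
    "https://api.groq.com/openai/v1", "gsk-...", "llama-3.3-70b-versatile", "Hello"
)
print(req.full_url)
```

Switching back, or to any other provider exposing the same API shape, is the same three arguments.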
Chris Stephens
>> Yeah, so Groq->> Learned something today. Training, it's a little training session here.
Chris Stephens
>> Yeah.>> I can infer from that later.
Chris Stephens
>> Yeah. You asked me what I do every day. This is what I do every day, right? And so yeah, for a customer, we're trying to make it as easy as possible. People are familiar with accessing systems through APIs. That's the general impetus behind Groq Cloud and the API.>> What's the coolest thing right now for customers that they get value out of? Because first of all, the ease of use is critical. Three lines of code, piece of cake, not a lot of disruption to the existing system. I love that piece. It sounds like it's very easy to use, but what am I pointing at? What am I using inference for? Am I using it to whatever I want it to do? Is there certain things that you see now coming out of the Groq value that people tend to harness on now?
Chris Stephens
>> Yeah, like I said a second ago, we're seeing a lot of work now in multimodal use cases.>> Like what?
Chris Stephens
>> Speech. For example, and Wendy's is not a customer. You've probably seen what Wendy's is doing with their drive-throughs so it's like speaking to an AI and it handles the order and all that kind of stuff in the restaurant. A use case where you're speaking to the AI, and that the latency necessary to have a reasonable customer experience where I'm speaking, the AI is inferencing something, might be also calling an LLM or other agents. And then speaking back to me, you can't have 5, 10, 20 second inference cycles in a use case like that. It has to literally be instantaneous.>> Low latency. Yeah, instantaneous, real-time.
Chris Stephens
>> Exactly.>> Near real-time as possible.
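To put rough numbers on that latency point: a voice-to-voice turn chains speech-to-text, LLM inference, and text-to-speech, and the per-stage budgets have to sum to something conversational. The figures below are illustrative assumptions for the sketch, not measured Groq numbers:

```python
# Illustrative latency budget for one voice-to-voice turn. Every number
# here is an assumption for the sketch, not a measured figure.
PIPELINE_MS = {
    "speech_to_text": 150,    # transcribe the user's utterance
    "llm_inference": 300,     # generate the response text
    "text_to_speech": 150,    # synthesize the reply audio
    "network_overhead": 100,  # round trips between stages
}

total_ms = sum(PIPELINE_MS.values())
print(f"turn latency: {total_ms} ms")

# A single 5-20 second inference stage would blow the whole budget,
# which is why the LLM step has to be near-instantaneous.
assert total_ms < 1000
```

Stretch the `llm_inference` entry to 5,000 ms and the assertion fails, which is the point being made about conversational use cases.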
Chris Stephens
>> Exactly. And the other thing we talk about build fast at Groq, and of course Groq is fast. We could do a demo and everyone's like, "Wow, that's incredibly fast. I can't believe Groq can run so fast."
But fast means other things I think to large enterprise teams as well. We're big participants in the open source community and we believe in open source and open weight models. And so how do I keep up as an enterprise development team with this constant horse race among the models, right? Today, it was one model, then DeepSeek comes out and then it's not DeepSeek anymore, it's somebody else and there's this constant back and forth.>> Well, you guys do that.
Chris Stephens
>> And we do that for you so those models are available all on Groq. So if you need to choose between a different model, then you can just switch the API endpoint and call a different model on Groq. That's another part of going fast, is the optionality that that brings.>> Well, it's also too, if you think about the enterprises, they're never going to have the levels of sophistication, except for the high-end ones.
Chris Stephens
>> Sure.>> There's enterprises that are just so large that I don't have to put them in super enterprise category.
Chris Stephens
>> Yeah, sure.>> Those guys will have core competency around data. They're going to do stuff differently. But the average enterprise market can't keep up with the cost, and the risk to them is operational risk.
Chris Stephens
>> Of course.>> They create black boxes, someone who's doing it leaves, you got a black box right there, or just use a service. You guys take care of the five wheel of innovation, whether it's models.
Chris Stephens
>> That's another part of building fast, right? If you think about, again, this enterprise team that you're describing, they make a bet on a certain model, right? I mean at this pace, three weeks later, that model is obsolete, right?>> Well, and models also have different characteristics. Some are really great at reasoning, some aren't. Some are better at first token out.
Chris Stephens
>> Exactly.>> Some might-
Chris Stephens
>> And so then you're looking at, sometimes you want a mixture of experts. Sometimes you want speculative decoding techniques. Sometimes you're doing JSON output, sometimes you're doing tool calling. Sometimes you have a LoRA fine tune that you've done to a model that you want to run, so you're going to have this suite. Imagine a Fortune 50 enterprise, Fortune 500, big, big enterprise. They're likely to have a portfolio of models that they've determined that this model's good for this use case and so on. Let's imagine it's 10 models in that portfolio. Each one of those 10 are subject to all the things you just said. One becomes obsolete one day, there's an update that's necessary for another one, I need to swap this other one out. And having the operational risk associated with that, that's one of the things that we're trying to help you build fast with, is to ease that burden, track that.>> Chris, I love how you just laid that out because that was a nice elegant way to talk about the ops, I fine-tuned over here, I'm using this. Those are specific use cases that require the characteristics of the certain things the models can bring to bear.
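The portfolio Stephens describes can be pictured as a use-case-to-model mapping that an operations team maintains; because every model sits behind the same API, retiring an obsolete model becomes a config change rather than an application rewrite. The model IDs here are illustrative, not a current catalog:

```python
# Hypothetical model portfolio for a large enterprise: each use case is
# pinned to a model ID, and application code looks models up by use case.
MODEL_PORTFOLIO = {
    "summarization": "llama-3.3-70b-versatile",
    "fast_chat": "llama-3.1-8b-instant",
    "speech_to_text": "whisper-large-v3",
}


def pick_model(use_case: str) -> str:
    """Resolve a use case to the currently approved model ID."""
    return MODEL_PORTFOLIO[use_case]


# When one model in the portfolio becomes obsolete, only the mapping
# changes; every caller of pick_model() is untouched.
MODEL_PORTFOLIO["summarization"] = "newer-summarization-model"
print(pick_model("summarization"))
```

Centralizing the mapping is what keeps the operational risk he describes contained to one place.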
Chris Stephens
>> That's right.>> And that's tuned into the outcome you want. It's like always tapping, making sure the water flows nicely, make some tweaks. What you said after was even more complicated because, okay, that's got to be engineered, but if the models are changing, that's another dimension of risk.
Chris Stephens
>> Yeah.>> Because now I figured out how to tune it and do the knobs and the buttons for the perfect output I want.
Chris Stephens
>> That's right.>> Mix my models together and then it changes.
Chris Stephens
>> Exactly.>> So that is the risk.
Chris Stephens
>> That's a huge risk.>> Is that a major value that you guys sell to the customers, or is that more of they understand that already? I mean, it's hard to tell that story.
Chris Stephens
>> I think it's an immensely valuable thing that, again, if we wanted to talk about why is Groq great, it's easy to show you that Groq is fast. It's easy to show you that Groq can be cheap, right? That we're trying to keep costs down to really democratize AI inference. But those are the easy parts. When you talk about operational risk like you're describing in a large enterprise, fast and price performance is important of course. But all these other operational risks, and that's why again, if you look at console.groq.com, you see all the models that are available on Groq and they're from all the different open model providers. There are speech models, there are text models, classic LLMs, there are different architectures, transformers and others. You have that optionality and choice to fit the right model to your use case because it's not going to be one, or at least it's very highly unlikely that it's going to be one.>> You guys must put a priority then on education, getting people educated on that. I just think it's a great service that you guys can do that. Tying into robotics, I want to bring the robotics angle in here. We're talking about robotics and AI obviously. I've said on theCUBE, robotics is the north star for AI because to get robotics to work, your AI's got to be tight. I mean, some of the stuff they're doing is the precision and latency has to be milliseconds.
Chris Stephens
>> Yes.>> I mean, talk about flying vehicles.
Chris Stephens
>> Right.>> Okay.
Chris Stephens
>> Yeah. These are sub-millisecond latencies.>> This is like there's no near real-time option. Some near real-time is second or whatever, sub-second, then some can be later, but robotics needs to have a lot of speed. Are you seeing a lot of that robotics inference coming in as a service? Because there's levels of robotics.
Chris Stephens
>> Sure.>> There's some that can require some latency, some might like super low latency.
Chris Stephens
>> Yeah. Right now, Groq is, we're not focused on edge computing, put one LPU on an edge device. These are scaled solutions. But behind a lot of these robotic systems are scaled Groq implementations to deliver that, like you said, millisecond latency. But you wouldn't put a Groq chip on a phone, you wouldn't put a Groq chip on a single edge device.>> So console.groq.com.
Chris Stephens
>> Console.groq.com.>> Okay.
Chris Stephens
>> Sign up, free API access.>> Already signed up.
Chris Stephens
>> You're free on the developer tier.>> I like the new logo. What are you working on? You did some teaching.
Chris Stephens
>> I do teaching. Yeah.>> Name some of the things you've got going on. You've got an interesting background.
Chris Stephens
>> Yeah, I've done all the things. I started my career as a practitioner, as a data scientist. We didn't use that word back then actually. I went and worked at SAS Institute. You remember SAS?>> Of course, yeah. In North Carolina.
Chris Stephens
>> I worked at SAS. I worked on some product teams there.>> Dr. Goodnight.
Chris Stephens
>> Dr. Goodnight, yep. The visionary at the time in a lot of ways. And one of the most fascinating ways, we're not here to talk about SAS, but.>> I love SAS.
Chris Stephens
>> Company culture-wise, the things that they provided to their employees 30 years ago.>> Yeah.
Chris Stephens
>> You go to a Silicon Valley office now and there's snacks and food and catering and all these things like that, that was de facto 30 years ago for them. Right? So anyway, so I went there->> Well you know a little trivia on SAS, that's what I might as well talk about because I like SAS, because I think they're a great example of how to run a company, mission-driven.
Chris Stephens
>> Yeah.>> The word has it Google copied them with the Googleplex because Larry and Sergey loved the SAS campus-
Chris Stephens
>> Yeah, the campus down.... >> and what they did, how they took care of their employees in a way that was some say, cradle to grave. But I then asked, because I worked at Hewlett-Packard back in the late '80s and early '90s for nine years, but it turns out, he was very impressed with Hewlett-Packard.
Chris Stephens
>> Interesting.>> So if you look at HP, the old HP, very similar cultural vibes.
Chris Stephens
>> Yeah.>> They don't say permanent employment, but pretty much full employment.
Chris Stephens
>> Yeah.>> Great company benefits, very conservative, but yet aggressive on profit.
Chris Stephens
>> It's another example of sometimes it's like everything old is new again, right?>> Yeah. But we need to bring back that mission of the company, and you're starting to see it now. Again, in this sector that we're in, there's a lot of tech for good because first time in my career I've seen, I've never seen this before, where you had tech innovation, finance and I won't say philanthropy, but for good, a cultural impact. Values that are aligned. So a lot of entrepreneurs are doing things for good and it sounds like a philanthropist pitch. Yeah, we're going to make money.
Chris Stephens
>> I'm not going to make a blatant philanthropy pitch, but when Jonathan talks about driving the cost of inference to zero, a big part of that is opening the aperture of participation in this AI revolution. My personal opinion is that this is about the equivalent of electricity in terms of its impact on humanity. So if you take that as your analogy and you can say it's the internet or fire, some people say that, but either way.>> I like electricity, go back to electricity.
Chris Stephens
>> This is a big thing, right? By driving the cost to zero and the things that we're doing scaling globally, a big part of that is widening the participation. We want developers all around the world to be able to participate in this new AI economy to be innovating. We want to make sure that it's not, just in the hands of the few and the powerful.>> Democratize 100%.
Chris Stephens
>> Right.>> Yeah. I mean, what's great about some of the decentralized architectures we had also blockchain trailblazers the past two weeks, is that we have an environment now where you can actually have pure capitalism, fully transparent. So if it's fully transparent, it's laid out there so it shouldn't be a bad word.
Chris Stephens
>> That's right.>> Right? And by the way, with AI, it looks like there was more contribution to society.
Chris Stephens
>> Yeah, exactly.>> That supports some of the radical ideas. Some say it's like basic income. Well, if you can throw off all that value, why work as hard?
Chris Stephens
>> Yeah.>> I mean, unless you really love it. People, I like to work hard.
Chris Stephens
>> Sure.>> But that brings up different mindset of I don't have to be rich and then donate and then do good.
Chris Stephens
>> I agree with you. The thing that I worry about people you ask, what do I do with my time? So I do teach at Carnegie Mellon and one of the things people ask me all the time like, "What do you think is going to happen?"
And God forbid we don't blow each other up or whatever, but what I think is going to happen in all seriousness is something that looks like Wall-E, you remember the movie Wall-E?>> Yeah, I do.
Chris Stephens
>> Right? We're becoming capable of automating the toil out of our lives. I just hope we don't automate the humanity out of ourselves essentially is the takeaway that I got from Wall-E anyways.>> I mean, humanity is key. I mean, I was working with John Mack, a local philanthropist. He's got a project called Notes to Humanity, and he has a project where he's trying to get people to participate in actually writing down with their hands. I'm like, "Do an app, scale it, get millions of people."
Chris Stephens
>> Right.>> Postcard to humanity. In other words, you write down a couple of prompts and actually write in a postcard and he ingests it and collects all the responses.
Chris Stephens
>> Interesting.>> And it's how to save humanity. What can we do to preserve humanity in the digital era?
Chris Stephens
>> Right, interesting.>> And the responses are really different, but they're kind of the same.
Chris Stephens
>> Yeah.>> They want the human affect not to go away.
Chris Stephens
>> We don't want to lose the humans again.>> Yeah. And I think that, the humanity thing, is huge, but how do you do that? Obviously human plus AI, everyone knows that.
Chris Stephens
>> Well, I have a thought on that and it ties together two things. Again, back to what do I do with my time? It connects to Carnegie Mellon, but also I have five kids. My daughter's an artist, and so she studies art. She's actually in Italy studying art right now. I think about the importance of those fields. For the longest time you saw liberal arts departments closing in universities, and when people talk about keeping the humanity in society as these technology advances bring powerful change, I think that liberal arts, that humanities-level thinking and education, is going to be really important as we move forward.>> I was chatting with Greg earlier, I mean, Matt Rogers earlier, he's the founder of Nest. He's running a company called Mill and he's building a wastebasket for the kitchen that looks beautiful like a Nest device, but it automatically decomposes everything. It's awesome, great vision. But he did Nest. He was working at Apple, did the iPod, iPhone, so he's one of those guys. He and I were talking about this because he thinks AI will completely change the waste stream or the waste management area, which makes a lot of sense. He lays out a good case.
Chris Stephens
>> Sure.>> The media we're in, that's being decimated by business model failure, the deplatforming of people. So the arts, these are areas that are very strong, and not only to preserve art. Art tech is huge right now.
Chris Stephens
>> Right.>> Most of the art that's collected is paintings, so the entire art community doesn't even know what to do with digital art.
Chris Stephens
>> Yeah.>> They think, oh, NFT, the monkey thing or whatever.
Chris Stephens
>> Right.>> No, there's other artists actually creating digital art.
Chris Stephens
>> Yeah. I was at an event and the CIO, I think is her title, maybe CTO, of the MoMA was there, Diane Pan, I think is her name, Diana Pan. Anyways, she was talking about this very topic, and it was absolutely fascinating, some of the stuff they're doing there. This isn't a plug for them, it's just because you brought it up. Fascinating.>> Yeah, I saw a demo that was kick ass. Basically, someone had indexed or digitized all the art and put it in a data form. And now AI algorithms render all the data from all the artists-
Chris Stephens
>> She showed us that.>> Oh, you saw that demo?
Chris Stephens
>> Yeah.>> Okay, so we saw the same demo.
Chris Stephens
>> Yeah.>> That's incredible.
Chris Stephens
>> It was fascinating. So it was AI generated art.>> Yes.
Chris Stephens
>> Having learned from all of these artists.>> Yeah, but their training was the masters who are no longer alive.
Chris Stephens
>> Right.>> Well Chris, great to have you on. Final question. What are you doing? What are you optimizing your time for? What are you focused on? What's the big thing that you're driving right now?
Chris Stephens
>> Professionally or personally?>> Both.
Chris Stephens
>> I mean, professionally, I'm spending a lot of my time in the Middle East making sure we get our business on solid footing and really growing it there. We're finding massive global demand that we're serving from there, but also a lot of local demand in that region, so that's a big opportunity for Groq, naturally. And personally, like I said, I've got five kids.>> And you're teaching at Carnegie Mellon.
Chris Stephens
>> I teach at Carnegie Mellon, and I try to play a little guitar on the side and unwind that way.>> Yeah, that's cool. Carnegie Mellon, which course are you teaching over there?
Chris Stephens
>> Yeah, so I do a couple of things. I teach in some master's and executive-level, or exec ed, programs in data and AI. For example, we have a chief data officer certification program, and I was one of the instructors and faculty members that built up that program->> All right, train the next generation.
Chris Stephens
>> Train the next generation, exactly.>> Well, thanks for coming on theCUBE.
Chris Stephens
>> Yeah, it's been great.>> Really appreciate your time. Great to get the updates, and a fascinating discussion.
Chris Stephens
>> Appreciate it.>> We went on a couple of rabbit holes here. That was really fun, thanks.
Chris Stephens
>> Cool, yeah, right on. This was fun.>> All right. I'm John Furrier, host of theCUBE. We are here at the NYSE. We're here for five days in New York City with robotics and AI leaders. The wall-to-wall coverage continues, and we'll bring all the data to you. It's all free, streaming live on siliconangle.com and thecube.net. Thanks for watching.