Join us for an insightful episode featuring Raj Verma, CEO of SingleStore, as he shares his expertise in the rapidly evolving landscape of data infrastructure and Artificial Intelligence. Hosted by theCUBE's John Furrier, this conversation takes place at the prestigious New York Stock Exchange, spotlighting SingleStore's strategic position and the innovative partnerships shaping the future of AI infrastructure. Verma's insights reveal the dynamic intersection of Wall Street and Silicon Valley.
In this episode, Verma discusses the transformative role of databases in AI development and the critical importance of modernizing data estates to capitalize on new AI capabilities. According to Verma, integrating data effectively can significantly enhance AI's operational efficiency, emphasizing the need for organizations to harness their own data. theCUBE analysts explore the future of enterprise technology, echoing Verma's predictions for AI-driven disruption across various industries. Don't miss out on the key takeaways from this engaging discussion. Learn more about SingleStore here: [SingleStore](https://singlestore.com). #AI #Cybersecurity #DataInfrastructure #SingleStore #NYSE
Stay connected with the latest in tech innovation by following the full series with theCUBE at NYSE Wired.
00:00 - Intro
00:06 - Launching into New Ventures: A Market and Partnership Overview
04:31 - AI Evolution: Infrastructure Trends and Applications Across Markets
08:57 - Modernizing Data Estates for the Future of AI and Agents
11:58 - Challenges with AI Hallucinations and Data Reliability
16:11 - Advancements in Data Technologies and Enterprise AI Integration
19:32 - Shifts in Enterprise Data Usage for AI
23:30 - The Future of System Software and Applications
31:24 - Disruption in Professional Services and SaaS Models
35:15 - Navigating the Future: AI, Innovation, and Strategic Roadmaps
Kevin Cochrane, Vultr
Clips from this interview:
- Enhancing AI Capabilities: Building Critical Infrastructure and Developer Ecosystems Through Strategic Partnerships and Investment Growth
- A recent hackathon that showcased innovative AI applications
- The future wave of IPOs driven by companies developing in the AI space
- Challenges of providing fast, efficient, and compliant AI infrastructure for enterprises
In this interview from theCUBE + NYSE Wired: Mixture of Experts series, Kevin Cochrane, CMO at Vultr, joins theCUBE's John Furrier to unpack how a developer-first platform and an open partner ecosystem are powering next-gen AI infrastructure. Cochrane outlines three growth vectors: global data center capacity (spanning 32 regions) with NVIDIA and AMD GPUs, a best-of-breed partner stack, and an expanding developer community. He shares takeaways from RAISE Paris, where Vultr convened partners and ran a LabLab hackathon that produced 60+ submissions and five winners.
>> Welcome back, everyone, to theCUBE. I'm John Furrier here in our New York Stock Exchange studio, the Buttonwood podium overlooking the trading floor behind me. They're actively trading here, of course, part of the NYSE Wired program that we're powering. Of course, the mixture of experts are coming through. We'll get them into our index. Just like AI, we've got a mixture of real experts. Kevin Cochrane is here. He's the CMO of Vultr, friend of theCUBE, whom I just saw at the RAISE conference in Paris. Partnering, great partner ecosystem developing, building real critical infrastructure for GenAI. Kevin, great to see you. Thanks for coming back on theCUBE. Appreciate it.>> Great to see you again, John. Always happy to talk to you.>> I feel like it was just yesterday we were in Paris, but that was July. Great event on the international scene. Now we're back in New York. I'm going to be hosting a panel with you and the CMO of DDN, Jyothi, where you're starting to see the infrastructure, where you've got storage fabrics, network fabrics. NVIDIA's got a slew of new announcements coming out to make everything better, faster, and more performant. You're in the middle of it. You have many partners.>> Yes.>> Give a quick background for the folks that don't know Vultr, what you guys do. But you're definitely moving the needle on providing critical needed infrastructure.>> Yeah, so Vultr is what I like to call the best kept secret in tech. This is a company that launched its platform in 2014. The only company that I know of in tech, and someone, please correct me if I'm wrong, that grew north of $100 million worth of revenue profitably without a dime of outside capital. And very importantly, not a single person in sales and not a single person in marketing.>> That is a hall of fame stat. Just saying, as an entrepreneur myself, the top of Mount Everest of entrepreneurship is to do that.>> Right, right.>> So, hats off to the Vultr team.
If you're watching, props to you guys, that's how it's supposed to be done, but making people money is not bad. So you took money->> Of course, but I mean, let's get back to this real quickly. I mean, the credit does go to the engineering team. This is a company, just engineers, that built a better platform and truly lived up to the value proposition of being easy to use and super reliable, best performance, greatest cost efficiency, and simply having the right tools for the developers to get the job done. So, it was a product that just sold itself. And that is something that I think is even more critical here in this day and age of AI infrastructure. There's lots of acceleration on different vectors of roadmaps: storage roadmaps, networking roadmaps, GPU roadmaps and more. And what's critical is that we get the infrastructure right and we make it easy for developers to very quickly spin up the infrastructure that they need so they can invent the future. This is what Vultr has always historically done, and that's what we're doing now here today in AI infrastructure.>> It's like the expression, "Do the right thing when no one's watching.">> Correct.>> It's an old adage, but it's about integrity. But you guys got the product right early.>> Yes.>> Okay. So, what does that mean for you guys now? Because you look at the stats. I mean, right now obviously all the headlines, huge growth, major capital build out in AI, that's the foundation layer of all the growth that's coming behind it. We can't go a day without reading something in the Wall Street Journal or anywhere else ... Of course, theCUBE, that's all we talk about. But this is a table-setting moment. And you guys got the front end. What is this going to be enabling? What's some of the action you got going on now?>> Right, so let's look at three vectors of growth.
There's one vector of growth that everyone's focused on, which is the capital expenditure, and how you're building out new data center capacity, how you're provisioning those new data centers with new top-of-the-line GPUs from NVIDIA, or in our case both NVIDIA and AMD, which we'll talk about a little bit later on. The other two vectors of growth, however, are, number one, the ecosystem. It takes an ecosystem to build a platform. And the ecosystem is absolutely critical, because enterprises today need freedom and flexibility and choice to compose a radical new application stack for new AI-native applications. What we are doing here at Vultr is we're building an open ecosystem, a vibrant ecosystem, so that our customers can work with best-of-breed technology partners across the AI stack. The third vector of growth is the developer ecosystem. You have to remember that the AI engineers we have today will not be the AI engineers that we have tomorrow. There are millions of existing developers that are retooling and re-skilling and learning how to build new AI-native applications. Vultr has always been a platform built by developers for developers. And we have a mission to democratize access to best-in-class AI infrastructure for developers all around the planet, and to enable them to learn new skills and to basically invent the future. That's what we did at RAISE. RAISE was an amazing event. You were there, it was fantastic. But at RAISE there were two critical parts of what made our experience there so special for us. The first is that second growth vector. We were joined by all of our partners. All of our partners were there in our booth showcasing their newest innovations to a packed audience. It was absolutely amazing to see. And then we were also joined by the third growth vector. We ran a hackathon with our friends at LabLab, and we had hundreds of AI engineers in Paris submit some of the most amazing applications, over 60 applications.
We had such a hard job judging a winner that we wound up naming five winners at the end of the day. But these applications were amazing. That's the power of what you can do if you bring the ecosystem together. You're investing not just in the data center capacity, not just in the GPUs, but you're investing in the developers and what they need to be successful.>> I mean, I'm huge on this point. People who watch theCUBE know that I'm hardcore about eco. I love ecosystems, because ecosystems are the proof that platforms work.>> That's correct.>> If you say you're a platform and can't show the ecosystem, there's no proof.>> That's correct.>> And that is the outcome of a platform. It enables others.>> That's correct.>> And that's what's happening.>> That's correct.>> And I would also point out that if you look at the IPOs here at the NYSE as the window opens up, it's crypto and companies like Figma. They're not a real AI company, but they're using AI. But behind these successful companies is the wave of IPOs coming.>> That's correct.>> Those are the people that are building on this new infrastructure.>> That's correct.>> What are some of the use cases? Can you share some anecdotes? You don't need to name names, but what are some of the things that are popping out of the infrastructure enablement that you guys are having?>> Yeah, so a few things. So first, we will see a wave of IPOs in the future once we get through this initial wave of build out of AI infrastructure and core platform services that enable startups to move fast and reinvent every single application that we use today on the planet. CRM in five years is going to look very different from CRM today. Pick an enterprise application, it will be disrupted. It will be disrupted either by the existing incumbent player taking a radical approach to rethinking their technology stack, or there will be a startup that displaces them.
I mean, you have to look at this as the moment that we made the shift from client-server to web. Traditional leaders in client-server applications, some of them didn't retool and reinvent their tech stack fast enough, and then you wound up having new people come on board and take that leadership position. So, you're going to see over the course of the next two, three, four, five years the continued acceleration of IPOs for new people that are just taking advantage of the AI infrastructure that's getting laid down today.>> We're going to have Jyothi, the CMO at DDN, a company we follow very closely, with all the founders on theCUBE. I think that speaks to some of the partnerships: their storage layer, fabric capabilities, that's being re-architected.>> Correct.>> So, the theme we're seeing is, get the architecture right and software will be fungible.>> Right.>> Don't screw with the architecture.>> That's correct.>> I think he'll say the same thing. These are the kind of partnerships. Are there other partnerships that are similar to, say, DDN? What are other examples? Obviously, network fabric, storage, memory.>> That's correct. You have to have partnerships at all layers of the stack, from everyone from the data center operators themselves, all the way up through to the higher levels of the application tier. To talk specifically about DDN, you're 100% correct. Storage is one of those industries which is getting completely reinvented. The traditional storage architecture for archiving and securing data for the long term is very different from the data infrastructure to mobilize data at high speeds to train, tune and infer models. It's just a very different storage architecture, a very different storage network.>> And actually, the storage vendors or companies, they don't even want to be called storage. They're data platforms. Because they're doing more than storing the data.>> That's correct.>> It's not wrong.>> Exactly.
I mean, it's almost like the word storage is used to think about how you'd have files and you would have them in filing cabinets and you'd ship them off to Iron Mountain way back in the day. And then you started digitizing all of those things and you needed to store them on physical server machines, and you were worried about long-term archival and the security and the lockdown. And the records manager would tell you that this is a very different world. This is mobilizing data. This is mobilizing all of your digital content and streaming it as fast as you possibly can to train models, to tune models and to infer from models. So, it's a very different fabric. It's a very different network. It's a very different storage architecture. But the partnerships extend beyond even the storage layer as well. They extend to, again, like I said, all layers of the stack.>> Kevin, I like how you said files and storage. In fact, what files were to storage, storage now is to the data layer, the data fabric, because it's just an advancement.>> 100%.>> No one talks about files.>> No one talks, exactly.>> They talked about storage in the old days; now it's like no one talks about storage. It still exists.>> That's right.>> But it's not like they say, "Hey, where's the storage box?" It's on an instance or whatever. It's there.>> It's there. It's so funny, because I remember my first job out of college, I was a management consultant and I was doing the evaluation of a large studio. And we would have to basically go to their video archive, which was all these physical tapes back in the day, tapes that were stored in this long-term facility. And you think about those days, because I'm a little bit older->> Blow off the dust.>> It was absolutely amazing. And that's a very different world that we lived in 30, 35 years ago. The world today is all of that digital content.
It is the raw intelligence that's feeding new models for new generative AI capabilities.>> Talk about where you guys are at on the growth. You mentioned the build out, and it's not obvious to the average investor, or maybe even a participant in the industry, or even just the lay person, that revenue is back-loaded.>> Yeah.>> Right, so revenue is, I mean, you're not monetizing aggressively. You're building out aggressively, with monetization to follow. You're making money, there is revenue; it's not like there's no revenue. But I would imagine, correct me if I'm wrong, the focus would be back-end. Could you share your thoughts on that? Because somebody might say, "Whoa, what are they doing for revenue?" That seems to be a knee-jerk response.>> Yeah.>> A lot of these investments pay off when they hit; there's a crossover threshold.>> That's correct. That's correct.>> Can you share how you guys think about that?>> Yeah, so first and foremost, I think it's important to note that we think about the investment in a little bit broader scope than other vendors in our space, the so-called neoclouds. First, we do worry about data center capacity and data center expansion. But not just here in North America; we look at it as a global requirement. Because innovation's going to happen everywhere, and particularly for enterprises as they start broadly adopting agentic AI in particular, when they're actually deploying new applications, they are going to have to have AI infrastructure, the capacity, in all regions in which they operate. First and foremost, it's about building the new data center capacity, but also maintaining that geographic reach. We also think about the investment that's required in all of the core networking to connect all those data centers and make sure that we have the high-speed data throughput with safety and security across an entire fabric of GPU resources and GPU clusters that span the entire globe.
We also think about the investment that's required in security and compliance. Because at the end of the day this is mission-critical data, and it's going to be feeding mission-critical models that are changing the way businesses operate. And so, the security and compliance can't be, and we've talked about this before, can't be that afterthought, which is what we traditionally do in tech when we move fast and break things. And I've said this before on the show: this time we need to move fast and create things, not break things. And so, when we look at the investment, we look at a little bit of a broader picture. And more specifically, when we're focused on investment, we're focused on investment in the full range of compute infrastructure that's required for enterprises to truly unlock AI. That's not only your top-of-the-line GPUs from NVIDIA, but that's also your top-of-the-line GPUs from AMD. So, you have freedom, choice and flexibility to best match your workload and secure the best price to performance with the variety of different GPU options we provide. And again, it gets back to investing in the developer ecosystem and the partner ecosystem to unlock the value in the capital investments we are making.>> Well, it's an impressive business you guys have built. And again, hats off to the founders and the engineers. I have to ask, since it's such a highly efficient team, what are they working on now? What's the mindset? Because you brought in compliance. I'm sure that wasn't on the agenda when they were trying to make everything great and seamless and easy to use, but it does add an engineering task.>> Of course.>> What are these bright minds working on now? How are they thinking about the problem space? Take us through some of the inner thinking.>> The great news about our engineers is they work hand in glove with our customers and our partners all day long. So, we always look at it as a process of co-creation.
And just at a top level, because I was literally just on all the Slack channels 30 minutes ago before getting on here, there are a few exciting things coming down the line. So, stay tuned for an upcoming announcement on the first-ever availability of AMD's latest GPU, the MI355X. We think it's going to be a groundbreaking GPU for enterprises that are looking for a price-to-performance edge as they're looking to scale their new enterprise applications. Secondly, we're also doubling down on our compliance. So, stay tuned here in Q4, where there might be some exciting news about FedRAMP, not to tease->> Public sector.>> Public sector does matter. So, stay tuned. That should be coming up in the October timeframe. Looking forward to that. There's a lot of hard work underway there. And then thirdly, just today we announced our new global load balancer. So you can set up availability regions for your AI infrastructure so that when you're inferring from your models, you have redundancy and resiliency, as well as data residency and sovereignty when you're going global. So, do take a look at the news of our new global load balancer today. It actually just shipped an hour ago.>> Okay, so now challenges. I'm reading a slide here that I saw this morning. It's called Challenges with AI Infrastructure in Production. It's got to be fast, interoperable, strong integration, scalable, efficient, and cost-effective, lower cost, not up cost. Thoughts on that equation? It obviously makes sense, but it's hard. Talk about how hard it is to do all those things, or is it easy for you guys?>> I think it is actually very hard. And it's hard for everyone, because if it's not hard, you're not doing it right. I think the most critical aspect of it is, when you're dealing with an enterprise, you're dealing with an entire buying committee: you're dealing with the CTO, the CFO, the CIO, the CISO.
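As a rough illustration of the residency-aware failover described in the global load balancer announcement above, here is a minimal sketch. The region codes, health data, and the route() function are all invented for illustration; this is not Vultr's actual API or product behavior.

```python
# Hypothetical sketch: route an inference request to the first healthy region
# that satisfies the tenant's data-residency policy. All names are invented.

RESIDENCY = {
    "eu-tenant": ["ams", "fra", "par"],   # EU data must stay in EU regions
    "us-tenant": ["ewr", "ord", "sjc"],
}

# Pretend health-check results: Amsterdam is down, everything else is up.
HEALTHY = {"ams": False, "fra": True, "par": True,
           "ewr": True, "ord": True, "sjc": True}

def route(tenant: str) -> str:
    """Return the first healthy region allowed by the tenant's residency policy."""
    for region in RESIDENCY[tenant]:
        if HEALTHY.get(region, False):
            return region
    raise RuntimeError(f"no healthy region satisfies residency for {tenant}")
```

With these assumed inputs, an EU tenant fails over past the unhealthy Amsterdam region to Frankfurt without ever leaving its residency boundary, which captures the "redundancy and resiliency, as well as data residency" point in one place.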
And each of these C-suite actors, each of their teams, has very hard requirements for what's really required for an enterprise to be able to successfully deploy new infrastructure. I think what we do, unlike anyone else, at Vultr is we understand the CTO, we understand the CIO, the CFO, we understand these actors. And we're the ones that love it when the CISO shows up in the room, because that's always the funnest conversation. So, when we talk about->> You got all the answers to the test.>> We've got all the answers to the test. But you have to understand that the entire buying committee together is going to be making the decision for what is the AI infrastructure that they're going to be basing their platform on going forward. And unless you understand them, talk to them, co-create with them, you're going to be missing a critical part of the requirements. And so, the requirements you highlighted, they're all coming from different stakeholders within that buying committee. And so, it's a very accurate list, because each one of those stakeholders has a different lens that they're looking through at the investment they're making in their new AI infrastructure provider.>> My final question for you: we've got Hot Chips next week. You've got OCP, the Open Compute Summit, and Supercomputing 25 coming up. The term gigascale AI has been kicked around. What's your reaction to that? How do you define gigascale AI? Is it clusters? Is it multiple data centers? How do you guys frame that? Because that's where the puck is now and where it's going.>> Yeah, so you have to look at gigascale, again, on different vectors. There's gigascale within a particular availability zone. Then there's also gigascale when you're doing a production deployment and you're an enterprise. You need to be able to train a large-scale model on a large-scale data set. You've got your storage powered by DDN, so you're able to get that rapid throughput of high-quality data to feed that model.
But at the end of the day, when you're actually deploying, you're not just deploying in one region and inferring from one region, you're deploying across all of our 32 regions globally. We had one customer, a very large networking customer. I can't name them without their permission right now, but it's one of the top three networking companies. They spun up an entire new application on our infrastructure just last week. They spun it up in 23 regions overnight. That's gigascale. When you're really building and scaling AI, it's not just the scale within one data center, one availability zone; that's one thing. But across all of those availability zones, that's when you really get to gigascale, because that's really when you're doing an enterprise application that's being utilized by millions, hundreds of millions of people on the planet.>> And I would just point out that there are economies of scale that go into the expertise of doing that. This isn't just a Johnny-come-lately standing up a new data center, putting a new region out there; even then, the silos have got to be tied together.>> That's right. And this stuff is very complex. This stuff is very hard. It's like the security and compliance. It's not something that you can turn around and just do overnight. This is the benefit of us having 10 years of operating history. This is the benefit of us having deep, broad partnerships with the likes of Broadcom and Juniper, working closely with them on open Ethernet standards, and kind of the next generation of Ethernet to support AI infrastructure. It's that 10-year operating history. It's that process of co-creation. It's having learned by doing, year after year, customer after customer, how to scale. That makes a huge difference. And that's something that can't be replicated overnight.>> Yeah. I've been riffing with Dave Vellante on our podcast around the old Wayne Gretzky expression, "Skate to where the puck is going." That implies you're not there.
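The "23 regions overnight" anecdote above boils down to repeating one provisioning step per region and fanning it out concurrently. Here is a minimal sketch of that pattern; the region codes and the deploy() call are illustrative stand-ins, not a real cloud provider's API.

```python
# Hypothetical sketch of a multi-region rollout: the same application image is
# provisioned into every region in parallel. deploy() is a stand-in for a
# provider's create-instance API call, not a real endpoint.

from concurrent.futures import ThreadPoolExecutor

REGIONS = ["ewr", "ord", "sjc", "ams", "fra", "par", "nrt", "sgp"]  # illustrative subset

def deploy(region: str, image: str) -> dict:
    # In a real system this would call the cloud API; here it just records
    # what would be provisioned where.
    return {"region": region, "image": image, "status": "active"}

def deploy_everywhere(image: str, regions=REGIONS) -> list:
    # Fan out one provisioning call per region concurrently; an overnight
    # global rollout is the same call repeated across every region.
    with ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(lambda r: deploy(r, image), regions))
```

The design point the sketch makes is the one in the conversation: gigascale is less about one heroic data center and more about the same workload landing identically in every availability zone.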
You guys are skating with the puck.>> Right.>> So, it's kind of like when you're pioneering categories, the puck is, you don't have to wait for the puck.>> Right.>> It's sitting right there. Just get it and skate with it.>> Exactly.>> What do you think about that? Is that a good->> In my opinion, I think we are the puck.>> Great to see you. Thanks for coming on.>> Great to see you. Always great to see you.>> Great insight again from the doers making it happen. They are the infrastructure leaders, okay. The mixture of experts here on theCUBE and the NYSE Wired program. John Furrier, your host. Thanks for watching.>> Thank you.