In this interview from theCUBE + NYSE Wired: AI Factories – Data Centers of the Future event, Glean co-founder and CEO Arvind Jain joins theCUBE’s John Furrier to unpack what’s really working in enterprise AI today and what comes next. Jain explains why knowledge access remains the first successful AI use case at scale and how Glean’s enterprise search brings AI into everyday work. He details the past year’s lessons with AI agents – from the need for guardrails, security, evaluation and monitoring to democratizing agent building so business owners (not just data scientists) can create production-grade agents.
The conversation dives into Glean’s vision of the enterprise brain powered by an enterprise graph, highlighting the importance of deep context, human workflows and behavior to reduce “noise” and drive outcomes. Jain outlines core building blocks – hundreds of enterprise integrations and a growing actions library – that let agents securely read company knowledge and take actions across systems (e.g., CRM updates, HR tasks, calendar checks). He discusses how organizations are standing up AI Centers of Excellence, prioritizing “top 10–20” agents across functions like engineering, support and sales, and why a horizontal AI data platform that unifies structured and unstructured data – accessed conversationally and stitched together via standards like MCP – sets the foundation for AI factory-scale operations. Looking ahead, Jain says Glean’s upgraded assistant is evolving from reactive tool to proactive companion that anticipates tasks and accelerates productivity.
Jeff Clarke, Dell Technologies
In this theCUBE + NYSE Wired "AI Factories - Data Centers of the Future" segment, Jeff Clarke, vice chairman and chief operating officer of Dell Technologies Inc., joins theCUBE's John Furrier and Dave Vellante to break down how AI factories are reshaping modern infrastructure and enterprise strategy. Clarke shares notable demand signals, including Dell's record $12.3B in Q3 orders, $30B year to date, updated full-year guidance of $25B, and more than 3,200 customers that have bought Dell AI Factories.
>> Welcome everyone to theCUBE here at our New York Stock Exchange studio. I'm John Furrier with Dave Vellante. This is part of our AI Factories series, and we've got a special guest here: vice chairman and chief operating officer of Dell Technologies, Jeff Clarke, a legend in the industry. Jeff, great to have you back on theCUBE here in our new studio. Thanks for coming in.
Dave Vellante
>> Yeah, appreciate you coming down.
Jeff Clarke
>> Thanks for having me, fellas.
John Furrier
>> Iconic location. The AI factory is the hottest content we have right now. It was a great vision, we started with you guys on this. It's now basically large scale systems, it's the whole discussion. And so, you just wrote a predictions post for 2026, I'm sure that's in there. What's your take on this market right now as the year comes to a close? AI Factories are hot, edge is coming fast. Speed game.
Jeff Clarke
>> Well, we just finished our Q3, where we announced $12.3 billion in orders in a single quarter, which was a record, $30 billion year to date. Pretty extraordinary compared to last year and the year before. This is year three. Year one we shipped a billion and a half dollars, year two, $10 billion, and we just updated our guidance for this year to $25 billion. I think that's indicative of the demand environment. Our five-quarter pipeline continues to grow, and it's growing across all types of customers, from neoclouds, to sovereigns, to enterprises. The number of customers that have bought Dell AI Factories is over 3,200; in our pipeline, there's over 6,000. So, I think momentum's pretty good.
Dave Vellante
>> That is good.
John Furrier
>> Are there any cracks in the demand picture? I mean, and is it across the board? Is it compute, storage, networking, the whole thing?
Jeff Clarke
>> Well Dave, the last time we were together we talked about this memory thing.
John Furrier
>> Yeah. Well, okay.
Jeff Clarke
>> This memory thing is an unknown.
John Furrier
>> So, that's going to create just more demand than supply, right? I mean, more of a balance then.
Jeff Clarke
>> I don't think we know yet. The AI sector has never been through such a change in input cost. The others are seasoned veterans at it. But when we ultimately get to it, and I talk about it in my predictions, token growth continues to explode, and you've got to have token processors. Token processing machines are GPUs and the computational assets that go with them. So, it's hard to imagine it slows down, but it's something we're certainly going to watch. You asked what could be a change; there's one.
John Furrier
>> You guys made a great bet. At Dell Tech World last year, we chatted on theCUBE. I quoted this with Michael too when he was on just recently. You guys made good bets, and you highlighted that on theCUBE. The numbers are good, revenue's up. It's not like you're cutting costs and just driving your profitability; the top line's booming. But the AI factory and the demand for tokens, that's funding the new scale data centers. Small, medium, and large, all data centers are growing. Hybrid, another great bet. The bets keep rolling in. 2026, what's the prediction bet that wins this year? What's going to come home for you guys? Is it hybrid? Is it-
Jeff Clarke
>> Well, look, I talk about five different things in terms of what I think are the logical evolution predictions that go along with this, and first and foremost is speed, what AI does with speed, and how it becomes the new competitive advantage. And I think about our company, so I can give relative examples of how we're using AI and what it's speeding up. We now have digital twins in our engineering environment. Our ability to simulate and understand multiple iterations of how to design the cold plate in these things, the CDU in these things, has accelerated, along with our ability to understand that at greater detail. I think about the speed at which our sales force can access information. We built an internal tool called Sales Chat, which is basically prompt-based: ask it a question, and you access all of the company's information about its products and services, and it serves that up in a crisp answer to our sales force. We've saved them hours a week in searching for information. Speed, speed, speed. And it's reshaping, I think, the competitive basis, and inference is at the core of that. And I think you have to have a flexible architecture inside enterprises to take advantage of that, because the tools are changing. Another example would be AI coding assistants. We've deployed one and we already know we need to upgrade it, and it's with new tools that are coming at an accelerated rate. You need a flexible architecture that's agile. Pull that out, put a new one in and just keep going.
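The Sales Chat pattern Clarke describes, a prompt in and company knowledge out, is in broad strokes retrieval over an internal corpus. A minimal sketch of the retrieval step (all document names are hypothetical; a production system like the one described would add vector embeddings, permission-aware indexing, and an LLM to compose the final answer):

```python
# Toy keyword retrieval over an internal corpus, standing in for the
# retrieval stage of a "Sales Chat"-style tool. Illustrative only.

def score(query: str, doc: str) -> int:
    """Count query terms that appear in the document (crude relevance)."""
    terms = query.lower().split()
    text = doc.lower()
    return sum(1 for t in terms if t in text)

def retrieve(query: str, docs: dict[str, str], k: int = 1) -> list[str]:
    """Return the titles of the k best-matching documents."""
    ranked = sorted(docs, key=lambda title: score(query, docs[title]),
                    reverse=True)
    return ranked[:k]

if __name__ == "__main__":
    corpus = {
        "PowerEdge overview": "PowerEdge servers support GPUs for AI inference",
        "Travel policy": "Employees book travel through the internal portal",
    }
    print(retrieve("which servers support GPUs for inference", corpus))
```

A real deployment replaces `score` with embedding similarity, but the shape, rank the corpus against the question and hand the best context to a model, is the same.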
Dave Vellante
>> When you meet with customers, I know you guys, you and Michael and the executive team said, "We're doing this. We're all in. We don't do it, we're going to get disrupted." When you meet with customers, are they as tuned in to the urgency and the mandate? And how hard was it to do what you just described? Your sales, I know you guys would work on things like next best action. I mean, some fairly sophisticated AI capabilities that may be tougher for mainstream enterprises. What are you seeing out there?
Jeff Clarke
>> Well, I think you have to look at it: the largest, most sophisticated companies in the world see this, see the opportunity, are embracing the technology and are rapidly deploying it and getting return on investments. Inside our company, we've embraced it. We've talked about the modernization of the Dell company. We're well into that, two and a half years into it, and we're seeing real return on the investments, to the point that we will accelerate more capability here. And we have yet to scratch the surface, which I think is the next level, which is agentic. And then, what we're seeing is increased proofs of concept by enterprises, and they're walking away seeing that there's a there there, that there is an opportunity to see a return on their investment, an opportunity to ultimately drive higher levels of efficiency, productivity, or, back to what I described, speed. One of the ways Michael describes this, and I'll be directionally right, I won't remember the exact numbers: the global economy is roughly $120 trillion. Two-thirds of that is roughly services-based, so we'll call that $70-plus trillion. How much would you spend to have 10% productivity on $70 trillion, with a benefit of $7 trillion? Likely quite a bit.
Dave Vellante
>> Yeah. A few trillion anyway.
Jeff Clarke
>> That's the prize that's out there across the smallest businesses to the largest multinationals in the world.
John Furrier
>> On the enterprise side, one of the things we're seeing is obviously the AI factory has been great for the large scale deployments, and the enterprise is going to open up really fast this year with agents. Thoughts on predictions there on on-prem activity, any signaling you're seeing? Fast acceleration, mid-year? Certainly the enthusiasm and confidence right now are much higher than they were even a year ago.
Jeff Clarke
>> Sure. I think the first step, along the lines of what AI is driving, is a fundamental infrastructure change in enterprises. There's the traditional way we've computed: traditional workloads, stable workloads, generally block data sets that are very stable, and we've become very efficient at managing that. Now we look at what I call accelerated computing, the AI era, and it opens up with containers, it opens up bare metal, it's unstructured data. The data is not very predictable. It's created all over the place. So, the way we view this is there is a partitioning of the architecture, and within the accelerated computing side you're going to see a continuum of cloud-based tools to on-prem-based tools. In other words, hybrid. And you'll see data sets that are very important, need to be secured, need to be triaged in real time where latency matters. There'll be a tendency to do those on prem, and there'll be data sets that are less sensitive, that are massive scale, that need that compute power of the cloud, and they will use the cloud. And that continuum I think is very logical, and it extends all the way to the edge. That's how we believe it's going to be built out over the next handful of years, and now we've got to talk about agents. Agents are exciting, they drive a lot more tokens, and there's this circle that it's ... Man, I've been at this a long time, we've talked about that. I've never seen anything drive as fast as this. So, you ultimately have a new architecture driven by this incredible new capability where the more you use it, the more you want it. And the more you want it, and the more you use it, you come up with new use cases. And the more use cases you look at and you extend out, you look at agentic now, and the currency is tokens, and now you're connecting the substrate of companies together, or of your company together and its processes. It's unbelievably powerful.
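Clarke's continuum, sensitive or latency-critical workloads on prem, massive less-sensitive ones in the cloud, the rest at the edge, amounts to a placement policy. A toy version (the thresholds and field names are illustrative assumptions, not Dell guidance):

```python
# Toy placement policy for the hybrid continuum described above:
# on-prem when data is sensitive or latency-critical, cloud when the
# compute demand is massive, edge otherwise. Purely illustrative.

from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    sensitive: bool           # regulated or proprietary data
    latency_budget_ms: float  # how quickly results are needed
    scale_tflops: float       # rough compute demand

def place(w: Workload, latency_cutoff_ms: float = 50.0,
          cloud_scale_tflops: float = 1000.0) -> str:
    if w.sensitive or w.latency_budget_ms < latency_cutoff_ms:
        return "on-prem"         # secure it, triage it in real time
    if w.scale_tflops >= cloud_scale_tflops:
        return "cloud"           # needs hyperscale compute
    return "edge"                # small, local, close to the user
```

Real placement decisions of course weigh cost, data gravity, and compliance too; the point is only that the continuum can be reasoned about workload by workload.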
John Furrier
>> It's a great innovation flywheel, for sure.
Dave Vellante
>> Given that continuum, that spectrum that you just mentioned, which seems like getting richer, and richer, and richer, and widening, there's of course this narrative around GPU cycles that will compress, which is not necessarily a bad thing for Dell, but what are your assumptions on that? I mean, given that it's such a rich continuum, will today's training infrastructure become tomorrow's inference infrastructure? People are just going to dispose of the existing infrastructure and accelerate refresh? What do you think is going to happen?
Jeff Clarke
>> Well, I think mileage varies by our customers. It's very much business model specific, depreciation cycles play into this and there is no one answer that I could give. What I can tell you is these have long lives. Again, my best source of examples are our own company. Our AI coding assistants are running on Dell PowerEdge servers with A100s in them.
Dave Vellante
>> Yeah, okay. So, there you go. You're not going to just throw those out, right?
Jeff Clarke
>> It's a six-year-old part that is incredibly capable, that runs our AI coding assistant infrastructure. Now, there's more than one of them. So, there's a few of them over there in the data center that I didn't have to go retool that we created space for, had the cooling and power for, and it runs our coding assistant infrastructure.
Dave Vellante
>> And that's a depreciated asset, and it's still driving utility. So, you're getting productivity off of that.
Jeff Clarke
>> Absolutely.
John Furrier
>> It's great.
Jeff Clarke
>> So, that's an example of I know for our company what we've done.
John Furrier
>> It shows the new kind of stack that's emerging, because some use cases might not need GB300s. Some could have nice compute, good capabilities, and then the GB300s are going to open up those new token use cases. Any insight into those use cases? What's the big gear being used for the most in the factories? The big token machines? They're pumping out the tokens and they're processing-
Jeff Clarke
>> Well, it's clearly large scale training of these foundational models that is compute intensive. And then, inference of that at different companies is tremendously large scale. But for most of the use cases for enterprises, again, I look at our company. We're, I think, very indicative of that. You're going to run a use case for your developers, you're going to run a use case for your sales force, you're going to run a use case for your service organization, in our case content, and our fifth use case is the supply chain. And you can begin to look at those use cases, which starts with the data. Then you have to look at the architectural considerations of network, storage, and compute, build a balanced architecture, get the data set, look at the workload, and then begin to ultimately run inference, which I call AI in production, to get the outputs or outcome you're looking for. That's how companies, enterprises, are going to deploy. It's not "go pile me up a bunch of clusters," or a big cluster with a lot of GPUs in it, to be clear about it; it's what's my use case? What's my computational workload? What's my storage requirement? How fast do I need to feed it with my network infrastructure? Okay, I understand that. That's how we built our use cases.
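The "start from the use case" sizing Clarke walks through, workload, storage, network feed rate, can be caricatured as back-of-envelope arithmetic. All rates below are illustrative assumptions, not benchmarks for any particular GPU or fabric:

```python
# Back-of-envelope sizing in the spirit of use-case-first planning.
# Per-GPU throughput and read sizes are made-up illustrative numbers.

import math

def gpus_needed(target_tokens_per_s: float,
                tokens_per_s_per_gpu: float) -> int:
    """GPUs required to sustain a target inference throughput."""
    return math.ceil(target_tokens_per_s / tokens_per_s_per_gpu)

def feed_bandwidth_gbps(reads_per_s: float, bytes_per_read: float) -> float:
    """Network bandwidth (Gbit/s) needed to keep the GPUs fed with data."""
    return reads_per_s * bytes_per_read * 8 / 1e9

if __name__ == "__main__":
    # e.g. a support-assistant use case: 100k tokens/s sustained,
    # assuming ~3k tokens/s per GPU for the chosen model.
    print(gpus_needed(100_000, 3_000), "GPUs")
    print(feed_bandwidth_gbps(1_000, 1e6), "Gbit/s of storage feed")
```

The real exercise is far richer (model size, memory, KV cache, redundancy), but this is the shape of "what's my workload, what's my storage, how fast do I feed it."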
John Furrier
>> Yeah, I love to throw back and look at some of the old Dell Tech Worlds. Go back to maybe four years ago, when you guys had it right on cloud, core, edge; you had the whole end-to-end thing. But now, if you modernize where you are now with the factories, everything's changed, everything's scaling up. So great vision, you checked the boxes on those. I have to ask you, because I asked Jensen Huang this last Thursday, and Michael also talked about it when he was here: the edge is going to produce some intelligence, and AI Factories produce intelligence with tokens. Smaller LLMs are out, you're seeing people work with all different kinds of language models, foundation models, but the edge is where the action is for the access, end users, retail outlets, a lot more edge factory use cases coming. What's your prediction and vision on the intelligent edge? I mean, you could almost see a Dell AI Factory plugging into a retail outlet: connect all the wireless, hyper-converge the edge, and you now have full factory intelligence.
Jeff Clarke
>> So, that notion of continuum that you referenced four years ago, I think is still the architecture in today's AI world. You got cloud, you got on prem data center, and you have the edge. And as we look at the edge, it's a great example just over our shoulders or over your backs here of edge environments. So, we think these micro large language models or small language models, whatever you prefer to call them-
John Furrier
>> Specialty models. Yeah....
Jeff Clarke
>> Well, they get task specific. So, that's why I think specialty is the right way to look at it. They get very task specific. So, you have a task-specific model that's optimized to do one thing and one thing very well, and it tends to be open source in nature. And you begin to craft a software developer's world, a mechanical engineer's FloTHERM world. You can begin to build that model and that capability specifically for that task. That's where I think this goes; all the signs point to that. So, you have this computational need on the edge, and if you really want to extend that, then we can have a number of agents on your computer, on the edge, and those agents are going to be local, requiring an NPU or some form of accelerator to do that, plus the language model work. I mean, we just keep on coming up with more use cases-
John Furrier
>> They're just computers. They're computers that are high performing. Yeah.
Jeff Clarke
>> They're just computers, and the need to compute is growing. Again, token growth is nuts, it continues that way, and it extends all the way to the edge, to the PC.
Dave Vellante
>> And so, two years ago at Dell Tech Summit, you laid out the case for what I called at the time enterprise AGI. In other words, not chasing the messianic, all-knowing AGI, but really, in the enterprise, smaller language models, more efficient, specialized models.
Jeff Clarke
>> Correct. Doing largely inference and some micro-tuning or fine-tuning.
Dave Vellante
>> On your proprietary data. Okay. You're doing that; presumably your customers are doing that. What do you need to do that? You mentioned open models, open source, open weights. I don't know, do you need training data? What do customers need to actually bring in and train those models, ensure that it's proprietary, govern it so it doesn't leak, keep it secured? Where are we at in that whole cycle?
Jeff Clarke
>> Well, you're going to love this given our many storage conversations in the past. Where's my darn data? What is it? Is it clean? How do I ingest it in these things? And then, how do I use it to get the outcome I'm looking for?
Dave Vellante
>> Who does that at Dell? Have you just got a storage team? You got a bunch of data scientists? Is it line-of-business people?
Jeff Clarke
>> It's a combination of all of those. And so, we have a three-pronged storage strategy: private cloud; cyber resilience, to jump to the other one; and the middle one, which I skipped because I want to talk about it, is the work we're doing around the AI data platform. So, think of the AI data platform as our unstructured assets, which you would know as ObjectScale and PowerScale. You would know it as our new parallel file system that's now in the hands of customers in beta, Project Lightning. The work that we're doing with NVIDIA around KV Cache to speed up inferencing. And then the data lakehouse, which is how we ultimately help customers ingest their data. That's a combination of capability around our product businesses and our service organization.
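The KV cache work Clarke mentions rests on one idea: when a model generates token t+1, the per-token states it computed for tokens 1 through t can be reused rather than recomputed. A toy caricature of that bookkeeping (real KV caches store transformer key/value tensors; here the "state" is a stand-in hash, and the cache assumes each call extends the previous prefix):

```python
# Toy illustration of KV caching: only newly appended tokens pay for
# a "state" computation; the cached prefix is reused. Illustrative
# only; real caches hold per-layer key/value tensors on the GPU.

class KVCache:
    def __init__(self):
        self.states: list[int] = []  # one cached state per processed token
        self.recomputed = 0          # counts the "expensive" computations

    def _compute_state(self, token: str) -> int:
        self.recomputed += 1         # stands in for a full attention pass
        return hash(token)

    def process(self, tokens: list[str]) -> list[int]:
        """Extend the cached prefix, computing states only for new tokens.

        Assumes `tokens` extends the previously processed sequence."""
        for tok in tokens[len(self.states):]:
            self.states.append(self._compute_state(tok))
        return self.states
```

Without the cache, generating n tokens costs work quadratic in n, since every step reprocesses the whole prefix; with it, each step pays only for the token it adds, which is why it matters so much for inference speed.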
John Furrier
>> Talk about the physical AI piece, because in your roadmap you guys presented at the investor meeting, I think last month or the month before, I can't remember which month it was-
Jeff Clarke
>> It was October....
John Furrier
>> AI, agents and physical AI was the-
Jeff Clarke
>> That's the progression. Yes, sir. ...
John Furrier
>> three-stage progression. Physical AI is right around the corner, robotics is a super hot category. There's low-hanging-fruit use cases. Where does the physical AI start to click in? How do you see that progression transitioning or connecting?
Jeff Clarke
>> Look, as physical AI evolves, whether it's these types of robots or those types of robots, or types of drones, or other types of automated carriers or things, I think our role is when they're done for the day and they plug in and say, "I did all of this today and here's what I learned today," and the model that they're built from gets updated, gets retrained, and then spews back, "Here's what you'll do tomorrow from what you learned today." That computational side of it, the training side, the tuning side, the learning side: storage, compute, networking. That's where we play. So, imagine in the autonomous driving world: cars plug in, here's what I learned today, here's the mistakes I made, it gets translated. That same sort of physical infrastructure is going to be needed for physical AI, and I think that's a role we can help many companies with.
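The plug-in-at-night loop Clarke sketches, each device reports what it learned, the shared model is updated, and the update is pushed back out, can be caricatured in a few lines. The "model" here is a single averaged parameter and the update rule is a made-up illustration, not any particular fleet-learning algorithm:

```python
# Toy fleet-learning loop: devices report daily experience, a central
# "model" (one number, purely illustrative) is nudged toward the mean
# of the reports, and every device receives the new version.

def aggregate(model: float, reports: list[float], lr: float = 0.5) -> float:
    """Move the shared model partway toward the mean of the fleet's reports."""
    if not reports:
        return model           # nothing learned today; model unchanged
    mean = sum(reports) / len(reports)
    return model + lr * (mean - model)

if __name__ == "__main__":
    model = 0.0
    for day in range(3):                 # each night, fold in the day's reports
        model = aggregate(model, [2.0, 4.0])
    print(model)
```

Real physical-AI retraining involves full model updates over logged sensor data, which is exactly the storage/compute/networking load described above; the loop's shape is the point.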
John Furrier
>> Are you going to have an OEM deal with the car manufacturers for a little chassis data centerpiece in the chassis?
Jeff Clarke
>> Oh, you're killing me.
John Furrier
>> You are working on that. I mean, we saw, I mean, you're starting to see cars being retrofitted for computing, networking. I mean, they have all the instrumentation. I mean, it's a rolling factory at this point. It's a driving-
Jeff Clarke
>> I think the way to look at this is the technology is going to be immersive. It's going to be in many platforms. It's going to change the way we live, change the way we work, change the way we entertain ourselves, ultimately shop, buy, all of those things. I'm sure it'll find its way into many different products and services.
Dave Vellante
>> So, you know a little bit about PCs. You talked at the financial analyst meeting that you guys are going all in on consumer. I wonder if you could talk about why that's important to Dell, number one. And number two, I mean, I remember when HP decided to split and I said, "Oh, that's interesting. Now there's one company that has an end to end supply chain." Does that matter? Why does it matter? What role do PCs play?
Jeff Clarke
>> Well, PCs are a very important business to the company, and it's our most capital efficient business if you don't know. By far.
Dave Vellante
>> Really?
Jeff Clarke
>> Oh, by far. Its working capital dynamics are extraordinarily good. And when you think about it, it's also a scale business. Scale matters. Give or take, there's 280 million units in the PC industry. It's really hard to be in a scale business if you're not really working in half of it, and consumer is roughly 45% of the units. So, if you're looking for scale, and scale is a differentiator, and scale's a differentiator in cost, it makes sense to play in a broader swath of the market than a narrower swath of the market. And while we have prided ourselves on being in the premium space and in the commercial space, there's still a lot of space, by definition, that we didn't cover. That's why talking about being in the consumer business is important. It's scale, it's capital efficiency, it's the brand reaching new users. New users that are going to make their way into the workforce need to see a Dell.
John Furrier
>> You can move faster too on a product lifecycle too. You can move faster on changing product cycles-
Dave Vellante
>> Oh, without question. ...
John Furrier
>> and that's a huge risk management piece.
Jeff Clarke
>> Test technology, get technology in, get it out, get to the next innovation, drive the innovation in consumer at a much faster pace-
John Furrier
>> So, your competitors miss a quarter or miss a decision on product, miss the market, that takes them sideways, you guys have that ahead.
Jeff Clarke
>> Sure. Any of us are susceptible to the same challenge, opportunity, or perhaps a mishap. For me, it really is about driving the scale business. If you're going to be in the PC business, then you need to compete. You need to compete in all of it, not some of it. If you're going to compete in all of it, that means you're in consumer. If you're going to be in consumer, you need to play to win. If you're going to play to win, you need to move with speed, you need to innovate, you need to have access to broad product coverage, you need to have a simplified brand strategy and communicate your value proposition with marketing. That's what we're focused on, that's what we're doing, and it really fills out the edge, because there are going to be task models on consumer PCs. Some of the best content creators in the industry are working on consumer-class PCs.
John Furrier
>> Yeah. I have to ask you, just going back a few years: I remember the conversations we were involved in with you guys around the AI PC. It wasn't obvious back then what the use cases would be for that. What do you know now, going into 2026, about what the role of the AI PC will be? Because it's evolved. There are clear lines of sight. You mentioned a few of them. You're going to be doing calculations on your PC device, you'll probably be talking to a model, maybe in a hybrid environment. What have you learned on the AI PC that's now clear that maybe wasn't clear two years ago?
Jeff Clarke
>> Well, I think in coding and development, on the engineering side, the hard sciences side, you can actually see the use of AI in models and in some of the engineering and development tasks. The broader application, we've not got to that killer app. That's what we need. Now what's interesting, or what I think is important, is we're seeding the marketplace with a bunch of AI-enabled PCs waiting for that killer app. They have NPUs in them, they have very capable accelerators in them, so how do we now utilize that? Again, I've been at this a while. It's not uncommon-
John Furrier
>> The browser wars are back, but they're in the PC. Perplexity's new browser is pretty good.
Jeff Clarke
>> It's not uncommon that hardware leads software. So when the killer app is there, you have an install base now of increasingly AI-capable PCs. I'm excited by that.
Dave Vellante
>> Maybe it goes beyond the browser, to your point. And then, if you look at the productivity comments you were making before about Michael's, whatever, $115 trillion global GDP: if you can take what we're getting with coding and then bring that to other disciplines, that's where you're going to get that productivity hit.
Jeff Clarke
>> Yeah. Now think of add natural language search.
John Furrier
>> Yeah.
Jeff Clarke
>> Gosh, wouldn't you like to talk to the darn thing and it finds your stuff?
Dave Vellante
>> Yeah, I would.
Jeff Clarke
>> And does it like that? Versus what did I write in 2008? Last week?
John Furrier
>> Well, AI infrastructure is super hot right now, you guys continue to do well. Congratulations on the business performance and the product leadership. Again, we're thrilled to partner with you guys, and thanks so much for your support.
Jeff Clarke
>> Thanks for having us, I appreciate it. Thanks, fellas.
John Furrier
>> All right.
Dave Vellante
>> Thank you for coming in.
Jeff Clarke
>> Thanks for having me.
John Furrier
>> Jeff Clarke in theCUBE at our new NYSE studio, part of theCUBE Studios in Palo Alto, California, and also here in New York, connecting tech and Wall Street. I'm John Furrier with Dave Vellante, thanks for watching.