In this segment from theCUBE + NYSE Wired’s “AI Factories – Data Centers of the Future” series, theCUBE’s Dave Vellante sits down with Rob Biederman, managing partner at Asymmetric Capital, to unpack a disciplined approach to early-stage investing amid AI-scale infrastructure shifts. Biederman explains Asymmetric’s founder-first model: writing $1–$10M checks (often via SAFEs), joining boards as they form and helping operators with go-to-market, operations, finance and strategy (not product/engineering). He shares why the firm avoided 2021’s lofty SaaS multiples in favor of backing proven builders earlier (single-digit pre-money), and highlights portfolio execution such as a cash-efficient LATAM e-commerce company scaling from ~$1-2M to about $50M in revenue. The discussion also explores Asymmetric’s subscale buy-and-build plays (e.g., pool cleaning in San Diego, sleep apnea clinics in Houston), where density, tech-enabled services and platform ops expand margins and enterprise value.
Biederman weighs in on AI economics as enterprises race to “AI factories,” cautioning that not every AI workload creates ROI and that overbuilt compute assumptions could face a reckoning. He argues that winners will prove a clear 10× value equation and avoid scaling go-to-market before product-market fit. Additional insights include early liquidity discipline (returning $0.20 on the dollar before the fund’s third anniversary), portfolio survivability (34 of 35 companies still operating; three positive exits), and guidance to founders: make your value proposition relevant, credible and differentiated. Tune in for candid perspective on how capital efficiency, ownership discipline and anti-thematic sourcing intersect with a world where GPU-dense data centers and AI-scale software are reshaping enterprise infrastructure and economics.
Jerry Tang, TigerDC & Atlas Cloud
In this AI Factories – Data Centers of the Future segment from the New York Stock Exchange, theCUBE’s Dave Vellante sits down with Jerry Tang, founding partner at VCV and founder of TigerDC and Atlas Cloud, to unpack how AI factories are redefining data center strategy. Tang explains why TigerDC is developing AI data center sites in South Carolina and North Carolina, with a first phase targeting 300 megawatts of IT load and an ultimate goal of two gigawatts, and how land, permits, power agreements and grid design turn each build into a multi-year effort.
Keep Exploring
What factors contributed to the decision to start VCV and what businesses does it encompass?
What are the primary functions of TigerDC and Atlas Cloud, and what industries do they focus on?
What services does TigerDC provide in relation to data centers?
What are the unique requirements and concerns of financial services regarding AI cloud solutions?
What are the key companies involved in your partnerships?
>> Hey, everybody. Welcome back to the New York Stock Exchange. We're here in the Buttonwood Podium, overlooking the Options Exchange. My name is Dave Vellante and this is our AI Factory series, the NYSE Wired, plus theCUBE. We're super excited to have Jerry Tang here. He is the founding partner of VCV. Jerry, thanks for coming in. Good to see you.
Jerry Tang
>> Thanks, Dave.
Dave Vellante
>> So, founding partner. So, tell me about the firm, VCV, why you started the firm and what's your schtick?
Jerry Tang
>> Sure. So, I'm engineering-trained, turned investment banker. I worked as an investment banker for 14 years. Actually, I would say my first entrepreneurial experience was in the bank. I was able to increase the size of the lending operation in the US by 20X, from $500 million to $10 billion a year, over the span of 2014 to 2019.
Dave Vellante
>> Wow, fast. I mean, as an intrapreneur, right?
Jerry Tang
>> As an intrapreneur in a large organization; it has over 20,000 employees globally. And I always thought to myself I was going to quit my job in 18 months. 18 months became 14 years. I was treated pretty well because of the successes.
Dave Vellante
>> Yes, good culture.
Jerry Tang
>> In 2021, I finally quit to start out on my own. So, VCV is a holding company. There are two businesses under the VCV umbrella related to AI: TigerDC is an AI data center development company, and Atlas Cloud is an AI cloud company focused on the financial services vertical.
Dave Vellante
>> Exclusively financial services?
Jerry Tang
>> Exclusively.
Dave Vellante
>> Okay. So, let's see. So, TigerDC essentially helps build data centers, is that right?
Jerry Tang
>> It builds the physical and power infrastructure. So, we have sites in South Carolina and North Carolina, potentially scaling up to two gigawatts across those two states. And we are targeting hyperscale customers. We have been in very active discussions with multiple of them.
Dave Vellante
>> Helping them build out, so you're-
Jerry Tang
>> So, we provide the space, the power, the cooling and cabling, the PDUs. And then, they can just come in and rack up the servers.
Dave Vellante
>> So, you need land, energy and water?
Jerry Tang
>> Exactly.
Dave Vellante
>> And so, you've got that and then that's how you're attracting these hyperscalers.
Jerry Tang
>> Yeah.
Dave Vellante
>> And where are you at? Are you actually to revenue yet or is this-
Jerry Tang
>> We just broke ground a couple of months ago. Yeah, the first phase will likely be 300 megawatts of IT load.
Dave Vellante
>> Okay. And then, Atlas Cloud is a GPU cloud... is it an AI cloud specifically for financial services?
Jerry Tang
>> AI cloud specifically for financial services.
Dave Vellante
>> So, what is the unique requirement in financial services that you guys are trying to attack?
Jerry Tang
>> There are a lot of concerns around compliance, a lot of concerns around data privacy. For example, I spoke to some of my former colleagues in investment banks. They're all worried about having to upload data to the public cloud or to the public GPT providers. So, that's a big unmet need. And if they deploy privately themselves, they have outdated models that don't work really well from an inference point of view.
Dave Vellante
>> So, is Atlas Cloud, don't hate me for this, is it a colo or is it like a neocloud, or a hybrid of those?
Jerry Tang
>> I would say it's a neocloud, except we aim to solve enterprises' need for AI from the bottom up: the GPU hardware, the cloud layer, the AI inference layer and even the agent layer.
Dave Vellante
>> So, Jerry, you're hitting both of the growth vectors because we know that much of the AI is occurring in the hyperscaler clouds and the neoclouds, but the neoclouds are this new emerging vector of growth, which you're playing part of, but it's also serving enterprises who don't necessarily want to put their data into the hyperscaler clouds, which are more general purpose, or like you said, they have privacy concerns. How about latency? Is that a factor? Is that something that you address or do you have that same sort of challenge?
Jerry Tang
>> That's one of the things we can provide to clients. There are a lot of complaints about AI models taking longer than clients want. So, we have fine-tuning capability and inference engine optimization capability to help reduce that latency.
Dave Vellante
>> What's the sustainable value proposition? On the one hand, if you have data center capacity and you can light up GPUs, that's a win. But assume that doesn't last forever as a competitive advantage, what is that unique advantage that you bring with whether it's TigerDC or Atlas Cloud to the customers?
Jerry Tang
>> I mean, you hit the nail on the head. I think the power constraint is real; it's probably going to be there for the next 5, 10, 15 years. So, it will be a while before there's an abundant amount of power capacity to power all the GPU servers. Secondly, if you are focused on one particular vertical, you're going to gain a lot of knowledge and expertise in the processes and integration with the enterprises in that sector. So, that's our hope. That's how we want to build our moat going forward.
Dave Vellante
>> Okay, so energy's a constraint. I presume you're GPU-constrained, like everybody else. Compute. Everyone's compute constrained.
Jerry Tang
>> I think the energy is more constraining than the GPUs right now-
Dave Vellante
>> That's what Satya said the other day.
Jerry Tang
>> Yeah, yeah. He said that he's got tons of GPUs sitting in a warehouse without the power or racks to plug them into, right?
Dave Vellante
>> Yeah, right. He's waiting to light them up and... Well, that's a depreciating asset, that's not a good thing. Okay. So, we're here at the AI Factory series. Jensen uses that term, he's created it as NVIDIA's own. What do you think about an AI factory? What does that term mean to you?
Jerry Tang
>> I think it's an integrated system to provide the intelligence that individuals and companies can use readily. That means you have the power to drive the AI factory, which is the physical infrastructure that TigerDC can develop, build and deliver. Then there are the GPU servers, the networks and the maintenance, on the cloud layer, of multiple thousands or tens of thousands of GPUs in a coherent way. Plus, above that, inference frameworks and engines that are efficient enough for various enterprise and individual usage. I'll give you an example. Right now, video generation is very hot. If you can fine-tune or optimize the inference engine, you can reduce the cost from 5 cents per second of video to probably 2 cents per second at the same quality.
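The inference economics above can be made concrete with a back-of-envelope sketch. The 5-cent and 2-cent per-second rates are the figures quoted in the conversation; the workload size is a hypothetical illustration.

```python
# Back-of-envelope: what a 5c/s -> 2c/s inference optimization saves on a
# video-generation workload. Per-second costs are the figures quoted above;
# the 1M-second workload size is hypothetical.

def video_cost_usd(seconds_generated: float, cost_per_second: float) -> float:
    """Total serving cost, in dollars, for a batch of generated video."""
    return seconds_generated * cost_per_second

baseline = video_cost_usd(1_000_000, 0.05)   # 1M seconds at 5 cents/s
optimized = video_cost_usd(1_000_000, 0.02)  # same output at 2 cents/s
savings = baseline - optimized

print(f"baseline ${baseline:,.0f}, optimized ${optimized:,.0f}, "
      f"saved {savings / baseline:.0%}")
```

At the quoted rates, the software optimization cuts serving cost by 60% for identical output, which is the "same quality, lower cost" point Tang is making.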
Dave Vellante
>> And you do that through software?
Jerry Tang
>> You do that by looking at the algorithm for serving the AI models. Yes, exactly, software.
Dave Vellante
>> Okay. So, power is a constraint, is the constraint really, right now.
Jerry Tang
>> Yes.
Dave Vellante
>> Is the network underutilized today?
Jerry Tang
>> Depending on which network you're talking about: the intranet, the cluster network within the data center itself, or the external-
Dave Vellante
>> Let's unpack that. So, let's say the scale-up network, the interconnection of the GPUs and I guess there's scale-out, which is ethernet and then, of course, which is across AI factories. Let's start with the scale-up. Is that network underutilized?
Jerry Tang
>> I don't think so.
Dave Vellante
>> No? Okay.
Jerry Tang
>> Yeah, I think that's fully utilized. It can be IB, InfiniBand, or it can be ethernet with 400 gigabits, 800 gigabits of bandwidth.
Dave Vellante
>> Okay. So, that's not a problem. Maybe is the scale-out network underutilized?
Jerry Tang
>> The scale-out? I wouldn't say it's underutilized. It's highly geographic-dependent. So, let's say you have your data center in the middle of nowhere, yes, that can be a constraint for your data center. But you-
Dave Vellante
>> Maybe underutilized is the wrong term. Let me try a different question here. If power is the constraint and you can improve the power efficiency with the next generation, whether it's a Blackwell or a Rubin or whatever it is, if you can improve it by, what is it, 3X, 4X?
Jerry Tang
>> 3X, 4X, 10X.
Dave Vellante
>> So, then that means that essentially if you can monetize that AI factory, that means the next generation is a no-brainer from an ROI standpoint for an operator. Is that the right thinking? I call it Jensen's law. Buy more, make more or save more. Does that law hold true in your world?
Jerry Tang
>> I agree with Jensen. If you can reduce the cost, OpEx of the GPUs, let's say you reduce by 3X, 4X, your demand could go up by more than 3X, 4X-
Dave Vellante
>> Without having to add more power. Now, you can do more work. You can generate more tokens.
Jerry Tang
>> Exactly.
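The "demand grows by more than the cost falls" claim above is, in effect, a price-elasticity argument. A minimal sketch, assuming a constant-elasticity demand curve; the elasticity values are illustrative assumptions, not figures from the conversation:

```python
# Jevons-style sketch: if the unit cost of compute falls by a factor k and
# demand has constant price elasticity e (Q proportional to P**-e), demand
# scales as k**e. e > 1 means total spend grows even as unit prices fall.
# The elasticity values below are illustrative, not from the conversation.

def demand_multiplier(cost_reduction: float, elasticity: float) -> float:
    """Demand growth when unit cost falls by `cost_reduction`x."""
    return cost_reduction ** elasticity

for e in (0.5, 1.0, 1.5):
    growth = demand_multiplier(3.0, e)   # compute gets 3x cheaper
    spend = growth / 3.0                 # resulting change in total spend
    print(f"elasticity {e}: demand x{growth:.2f}, spend x{spend:.2f}")
```

Only in the elastic case (e > 1) does a 3X cost reduction raise total compute spend, which is the scenario both Tang and the "Jensen's law" framing assume.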
Dave Vellante
>> And your cost per token per watt goes down. Now, come back to the network. If the network is fully utilized, then you don't get the benefit. But if the network is utilized at say 60%, 70% and you can take it to 75% or 80%, that's also a benefit, is it not?
Jerry Tang
>> I would say the intranet, the cluster network is fully utilized.
Dave Vellante
>> It is? Okay.
Jerry Tang
>> It is kind of a constraint. For example, if you have a lot of data to transmit among the GPUs, one GPU needs to wait for another GPU to transmit data to it before it can do the compute, so there's a wait time. So, arguably, if you can enlarge the intranet bandwidth from 800 gig to even more, you can increase the utilization rate of the GPUs, and the GPU is the most expensive item in this whole ecosystem.
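The wait-time effect Tang describes can be sketched with a toy compute-versus-communication model. The per-step compute time and payload size are hypothetical assumptions; 400 and 800 Gb/s are the link speeds mentioned in the conversation.

```python
# Toy model of GPU utilization limited by the cluster interconnect: each
# step does `compute_s` seconds of math, then exchanges `payload_bytes`
# over the link before the next step can start. The 10 ms step and 1 GB
# payload are hypothetical; the link speeds echo those discussed above.

def gpu_utilization(compute_s: float, payload_bytes: float, link_gbps: float) -> float:
    comm_s = payload_bytes * 8 / (link_gbps * 1e9)  # transfer time, seconds
    return compute_s / (compute_s + comm_s)

for gbps in (400, 800, 1600):
    u = gpu_utilization(0.010, 1e9, gbps)
    print(f"{gbps} Gb/s link -> {u:.0%} GPU utilization")
```

Under these assumptions, doubling the link from 400 to 800 Gb/s lifts utilization from a third to a half: faster interconnects buy back idle time on the most expensive component in the system.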
Dave Vellante
>> Okay. So, maybe that's why NVIDIA makes the marketing claim that the network is essentially free. I like to say it's economically neutral; maybe that's a better way to say it, maybe a more honest way to say it, but free is good marketing. When you think about the stack and how the stack is changing: you're an engineer, you understand the general-purpose CPU stack of CPU, network, storage. Now, everything's changing. How is that stack changing to accommodate what I'll call extreme parallel processing? What parts of the stack are changing the most and how are they changing?
Jerry Tang
>> Yeah, so you're talking about a CPU and a GPU, and what's the difference? There was an online video on YouTube that I think explains it really well. They're shooting a whiteboard with a bunch of paint guns. With CPUs, you need to shoot serially; it took like five minutes. With GPUs, you have 100 of them-
Dave Vellante
>> Scatter plots.
Jerry Tang
>> Scatter plots, you can paint in one second. And in order for the GPUs to do that, you need this hyper-connected network either InfiniBand or ethernet, right?
Dave Vellante
>> Right. Well, how do you see that playing out? That's funny, somebody said to me today, "InfiniBand, is that still around?" I'm like, "Well, actually NVIDIA's revenue on InfiniBand doubled." And they were like, "Wow." They were kind of blown away by that, so there's a place for each. But how do you think about InfiniBand, where it fits? Is it the, what you call, intra-network, maybe for training, and then ethernet for the scale-out and scale-across? Is that how you see it?
Jerry Tang
>> Yeah, so ethernet is more for inference. You don't need thousands or tens of thousands of GPUs to be interconnected with each other. And InfiniBand is for training, because in order to compress the time to train a model, you want to have every GPU talk to every GPU in the same cluster.
Dave Vellante
>> Right. Okay. So, what about partnerships? Who are the key companies that you're partnering with? I presume NVIDIA is one because-
Jerry Tang
>> NVIDIA for sure, because at TigerDC we purpose-built our data centers for the GPU servers NVIDIA manufactures. And we teamed up with the MEP design firm from Stargate, and we teamed up with one of the largest general contractors in the data center space.
Dave Vellante
>> So, what are the assumptions that you're making about this market? Energy is a constraint. It sounds like you feel it's going to be a constraint for some time. What other fundamental assumptions, what other bets are you making?
Jerry Tang
>> Yeah, energy in the US, or most of the Western world, is just going to take longer to develop, given the regulatory constraints and the land constraints. The other side is really demand. Our belief is that as GPU cost, compute cost, goes down exponentially per Jensen's law, demand will increase by more than the corresponding price decrease.
Dave Vellante
>> Jevons paradox?
Jerry Tang
>> Yes.
Dave Vellante
>> Satya likes to quote, right? Price per-
Jerry Tang
>> Another analogy is this: do you know what percentage of energy a brain consumes as-
Dave Vellante
>> They say it's 10%, but then I've read that's-
Jerry Tang
>> I think it's like 20% if you're actively thinking all day. So, my analogy is right now data center power consumption is about 5%, 6% of total power usage. That should go up to 10%, 20% easily.
Dave Vellante
>> I think, Jerry, it depends whether you have a CPU brain or a GPU brain. Okay. So, I wonder if I could ask you come back to the land and the regulatory constraints. What does it take for you to actually get to the point where you can break ground? And then what's the cycle look like?
Jerry Tang
>> It's going to take some time, and that's why I'm bullish about data center supply-and-demand dynamics. For example, in South Carolina, we started looking at this land parcel about 18 months ago. We hired a team from X, Google, Meta and Microsoft to help us project manage it. It took us 12 months to get an initial design and get the initial permits. And it is also taking us 18 months to get the power agreement in place.
Dave Vellante
>> And then, you work on the regulatory pieces in parallel?
Jerry Tang
>> Yes, we work the air permits and the land permits all in parallel. But once you have all these things ready, it will take you another 15 to 18 months to finish the building.
Dave Vellante
>> So, you've got to start with the design. You're working on the regulatory approvals in parallel, getting the building permit essentially. And then, it's 18 months you got the power in place. So, you're working that in parallel as well, right?
Jerry Tang
>> Yes.
Dave Vellante
>> Okay. So, that adds six months to the 12 months. Okay. So, it's 18 months, and by the end of that 18-month period, once you have the power in place, do you have the regulatory approvals or not necessarily?
Jerry Tang
>> We have initial permits already, we just need the tenants, whoever is going to come in, to tweak the final design before we submit for the final permits.
Dave Vellante
>> Okay. And then, so that's another-
Jerry Tang
>> That would be another three months, four months.
Dave Vellante
>> Okay. And then, you got to build it.
Jerry Tang
>> Then, we got to build it.
Dave Vellante
>> So, you got to get the contractors, you got to get the electricians, you got to get the plumbers. You've got to get the supply chain in order.
Jerry Tang
>> We pre-lined up a bunch of the contractors already ahead of time. One of the contractors started working with us last summer actually. So, when we started the project, they're ready to go.
Dave Vellante
>> So, start to finish, how long does it take to go from initial design phase to producing tokens? Is it three years?
Jerry Tang
>> I would say three years.
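Summing the phases quoted in the exchange above roughly reproduces the three-year figure. The parallel treatment of design and the power agreement follows the conversation; taking the high end of the quoted ranges is an assumption.

```python
# Rough schedule from the phases quoted above. Initial design/permits and
# the power agreement run in parallel; final permits and construction are
# taken at the high end of the quoted ranges. Overlaps are approximate.

design_and_initial_permits = 12   # months
power_agreement = 18              # months, runs in parallel with design
final_permits = 4                 # months, after a tenant tweaks the design
construction = 18                 # months

pre_construction = max(design_and_initial_permits, power_agreement)
total_months = pre_construction + final_permits + construction

print(f"~{total_months} months (~{total_months / 12:.1f} years)")
```

That lands at roughly three and a third years, consistent with the three-year answer and with the claim that the long pole is the power agreement rather than the build itself.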
Dave Vellante
>> Yeah, okay. And then, you said you're in South Carolina?
Jerry Tang
>> South Carolina, yeah.
Dave Vellante
>> Awesome state. Where in South Carolina?
Jerry Tang
>> Upstate South Carolina, in the Greenville-Spartanburg metropolitan area.
Dave Vellante
>> Oh, you're in the city?
Jerry Tang
>> Yes.
Dave Vellante
>> Or close to the city?
Jerry Tang
>> We are very close to civilization, yes.
Dave Vellante
>> Oh, explain that choice. I would think you'd be out in the hinterlands, find a spot in the Columbia River. So, why'd you choose that location? How do you get access to power? Are you on the grid? Are you doing your own power?
Jerry Tang
>> It's a great state to do business in, in general. We are within a very large industrial park that has access to transmission lines and gas lines. So, it's strategically positioned, and we actually met with the governor; he's very supportive of our project. The state is providing some very favorable tax incentives to us.
Dave Vellante
>> But ultimately, you're going for two gigawatts of capacity? Is that what you said?
Jerry Tang
>> That would be the ultimate goal, yes.
Dave Vellante
>> And the local grid can power that?
Jerry Tang
>> It would be a creative solution. There's a grid. There's a new power plant. There will be some transitional micro-grid solution.
Dave Vellante
>> Okay.
Jerry Tang
>> Yeah.
Dave Vellante
>> That you guys will build on your own to support your data centers-
Jerry Tang
>> Yes. On our own and bringing partners who have done very large projects.
Dave Vellante
>> Yeah, but quasi-dedicated capacity or energy for your data centers or-
Jerry Tang
>> Yes. It will be a dedicated capacity.
Dave Vellante
>> How did you fund all this? People want to know, entrepreneurs, they start a firm. How'd you fund it?
Jerry Tang
>> I mean, if you look at the news these days, there are these announcements from Blackstone, BlackRock, and all these firms announcing tens of billions, hundreds of billions of investment-
Dave Vellante
>> So, you're saying it wasn't hard?
Jerry Tang
>> I don't think it's hard. There's a lot of capital chasing data center deals.
Dave Vellante
>> But you have VCs? You have private equity?
Jerry Tang
>> The development company itself is self-funded. On the project level, it'll be PE. We'll bring in joint venture partners to fund the project.
Dave Vellante
>> I see. So, the structure is a holding company orchestrates everything and you funded that yourself and then it's project funding, based upon the business plan and everything else. People are excited about this industry. I'll give you the last word. Where do you see this industry, not next year, not in two years... How far out can you go? Let's go three years because your data centers will be built, okay? What are you going to be walking into when you're generating that intelligence and those tokens? What's the world going to look like to Jerry?
Jerry Tang
>> It'll be amazing. I think it will take away a very large percentage of the tedious part of human workers' work and increase productivity tremendously. For Atlas Cloud, as I said, we're focused on financial services. Every year there's a trillion dollars of expenses on staff, and that's our TAM.
Dave Vellante
>> Yeah, scaling without labor is the future that I think we're going to see. Jerry Tang, thanks so much for coming on theCUBE-
Jerry Tang
>> Thank you, Dave.
Dave Vellante
>> NYSE Wired. It was great to have you. And thank you for watching the AI Factory series. I'm Dave Vellante with John Furrier. We'll be right back from the New York Stock Exchange Wired, plus theCUBE, right after this short break.