In this interview from the Nvidia GTC AI Conference and Expo, Kannan Soundarapandian, vice president and general manager of high voltage power at Texas Instruments, joins theCUBE + NYSE Wired's Gemma Allen to discuss why power delivery is the critical bottleneck standing between today's AI ambitions and tomorrow's gigawatt-scale data centers. Soundarapandian explains how Texas Instruments is investing in gallium nitride (GaN) switching technology and 800-volt architectures to shepherd massive amounts of energy from the grid to the GPU in denser, more efficient form factors. With data center power demand escalating at least 40% year over year, he outlines why higher voltage delivered closer to the point of consumption is the only path to sustaining the next generation of AI workloads.
The conversation also explores the concept of "blast radius" — the cascading failure risk when a single power converter goes down and takes an entire AI workload with it — and why extreme reliability has become a zero-tolerance engineering challenge. Soundarapandian details how decades of automotive-grade qualification work gave Texas Instruments the foundation to meet these demands, drawing a direct line from 800-volt regenerative braking systems in Le Mans racing to today's AI rack power delivery. He also shares insights on the company's domestic manufacturing expansion, including seven new 300-millimeter wafer factories on US soil, and a newly announced grid-to-gate reference solution that reduces power conversion to just two stages from 800 volts down to the sub-one-volt rail where GPUs operate. From the physics of cooling in space-based data centers to the practical realities of decluttering racks for safe high-voltage routing, Soundarapandian provides a grounded roadmap for how power infrastructure will shape the trajectory of AI at scale.
Kannan Soundarapandian, Texas Instruments
Gemma Allen sits down with Kannan Soundarapandian, Vice President and General Manager, High Voltage Power at Texas Instruments during NVIDIA GTC '26 at the San Jose Convention Center in San Jose, CA.
Vice President and General Manager, High Voltage Power, Texas Instruments
How critical is power delivery to the future AI/inference ecosystem, and how is Texas Instruments addressing the escalating power and voltage requirements for GPUs, LPUs, and other processors?
Why is higher voltage important for power delivery in large-scale data centers, and how does the choice of voltage affect copper usage and overall efficiency?
Why has rack-level power delivery evolved from 12V to 48V and now to 800V, and what challenges does bringing 800V into a server rack present?
What did you demonstrate at GTC regarding your "grid-to-gate" power solution, and how does it work?
>> Welcome back to theCUBE, here on the ground in San Jose. It's NVIDIA GTC 2026, and I'm here at the Texas Instruments booth, where the energy is palpable. Joining me now is Kannan, VP and GM of high voltage power at Texas Instruments. Welcome.
Kannan Soundarapandian
>> Glad to be here.
Gemma Allen
>> So we have heard so much this week about GPUs, LPUs, the future of AI and inference, but one thing we know for sure is that if power can't reach these GPUs, then we don't have any ecosystem to begin with, and that is Texas Instruments' business. Break it down for me.
Kannan Soundarapandian
>> You said that really well. That's exactly right. The expansion that is needed to go off and feed the next generation of AI is real, and the next generation of AI is scaling up at a clip that we've never seen before. That's why at Texas Instruments, we've been investing in very new technologies like GaN, for example, very high voltage isolation, high precision metrology. All of these things are going to become very important to shepherd that power from the grid all the way to where the CPUs, GPUs, and other LPUs live.
Gemma Allen
>> So we hear a lot about energy shortages. It dominates the media, it dominates the markets at times. There is this view that we have a real crisis on our hands here. When we think about the inference era, is there an even higher requirement from a power and electricity perspective than there is in the days we're living in right now? How do things change futuristically for you and the team at Texas Instruments?
Kannan Soundarapandian
>> So for me, whether it is feeding a training GPU, an NVL, a training GPU type of a rack, or an LPU type of a rack, the power needs, escalating as they are, are a fact of life, and there are certain fundamentals that we have to go off and invest in to make sure we're able to feed that. So from my perspective, heading into higher voltages, as you mentioned, going to 800 volts, and then making sure that we can deliver that amount of power in a highly dense form factor, all of that remains the same, no matter the flavor of training or inference at the end of it.
Gemma Allen
>> So you talk about voltage at scale. It's not about the current, it's about the voltage. What does that actually mean in practice? Talk to me about the technology and the process behind this.
Kannan Soundarapandian
>> So to make it a little bit more real, the reason we say voltage at scale is important is that it has a direct relationship with current. The reason I say that is current technology delivers power into those server racks at about 48 to 54 kind of volts, and today, to do that for a single rack, you're looking at about 200 kilograms of copper if you want to go to the next generation of energy delivery on that voltage rail. Now, if you scale that up to gigawatt kind of data centers, you are talking about 200,000 kilograms, just very large amounts of such materials that you actually need. That's just one part of the problem. Going to higher voltages means all that need can actually be fed with significantly less use of copper to start with, and more importantly, also in a much more highly efficient manner, going back to your point about the need for energy and the supply being a constraint.
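To make the copper arithmetic concrete, here is a minimal Python sketch of the relationship Soundarapandian describes, using a hypothetical 1 MW rack; the numbers are illustrative assumptions, not TI figures:

```python
# Illustrative sketch of the voltage/current/copper relationship described
# above. All numbers are assumptions for illustration, not TI figures.

def bus_current(power_w: float, voltage_v: float) -> float:
    """Current needed to deliver a given power at a given bus voltage (I = P / V)."""
    return power_w / voltage_v

RACK_POWER_W = 1_000_000  # assume a hypothetical 1 MW rack

for volts in (48, 800):
    amps = bus_current(RACK_POWER_W, volts)
    print(f"{volts:>4} V bus -> {amps:,.0f} A")

# 48 V -> ~20,833 A; 800 V -> 1,250 A.
# For the same absolute resistive loss over a fixed cable run
# (P_loss = I^2 * R, with R = rho * L / A), the required copper
# cross-section A scales with I^2, so a ~16.7x drop in current
# permits a ~278x smaller cross-section at equal loss.
```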
Gemma Allen
>> Let's talk about the energy sources. Again, a very interesting time right now. Lots of conversations around where the future of energy will live. We hear about the potential for nuclear, we hear about solar. Where I'm from, we hear a lot about wind. What are you actually seeing and what are your predictions?
Kannan Soundarapandian
>> So depending on where in the industry we look, infrastructure builders, data center infrastructure, just people designing and building those and planning for the future, the escalation and need for data center power is massive. The lowest number I've ever heard is year over year, it's going to be 40%. So if you think about that, at this point, the need for these AI tokens is a given. It's not going away anytime soon, and that is an escalating part of our economy. It's going to drive the world economy itself, so energy will be pulled from anywhere it can be. That includes existing sources, existing ways of distribution, and completely new sources, and completely new ways of distributing it. That includes, like you said, nuclear, it includes wind, it includes what we call renewables, and just about any economically viable path to get energy into a data center will be needed in the future.
Gemma Allen
>> So we know it's a global problem. We also know there's a lot of pressure here in the US state by state to look for solutions for this that meet, I guess, citizens' requirements and also meet the demands of commerce. What are you seeing from the perspective of your own footprint, your own growth, globally and here in the US? Where are you doubling down?
Kannan Soundarapandian
>> So mostly, if you look at Texas Instruments as a whole and what we publicly announced at this point, large expansion of capacity to be able to source the materials needed for this expansion in AI data center, for example, are being built out right now. In fact, I believe we are one of the largest investors in analog and power nodes, exactly the kind of silicon you need to supply a GPU with the power it needs. Most of this, we are, I believe, building about seven new 300 millimeter wafer factories in the US alone, on US soil, and that is going to become a critical part of the infrastructure of the nation itself, if you will. So there is a lot of investment happening right now, and one of the best parts of how we've done it in TI, you may have heard the saying, when's the best time to plant an oak tree? The answer is 10 years ago. The best time to put 300 millimeter capacity in the soil was 10 years ago, and we did it. So it makes me excited to be part of this particular point in time, if you will. We're going to be able to meet that moment with the capacity we're actually building out here in the US.
Gemma Allen
>> I want to talk about the rack itself, but before we go there, let's talk about what's happening under the hood here from a technical perspective. So you guys talk a lot about GaN, you talk about blast radius. That sounds kind of terrifying. What does that actually mean? Maybe break it down for us.
Kannan Soundarapandian
>> Sure. So again, the very problem that we are solving necessitates engineering challenges that we've never faced before. For example, some of the elements that you actually see on our booth today, there's an 800 to six volt converter that runs at about two kilowatts per cubic inch. That's the amount of delivery capability it actually has. So if you think about all that power compressed into such a small volume, it has to be perfectly built to withstand that kind of enormous pressure over its lifetime. So reliability, for example, becomes supremely important, and that's where the blast radius becomes a problem, because today in AI data centers, building redundancy into your infrastructure is a lot more difficult than it used to be. So if, for example, there's a failure in one power converter somewhere that's in the pathway of power to the GPU, you lose an entire workload. That's where that term comes from. So if one server goes down, it's not just the one thing that ends up losing the work it's been doing. It's an entire radius around it where you lose that workload, and you've got to restart it.
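One way to see why reliability becomes supremely important here is to compound hypothetical per-converter failure rates across the many converters in a workload's power path. A sketch under assumed numbers; the converter count and failure rates are invented for illustration:

```python
# Hypothetical sketch of why per-converter reliability compounds.
# The converter count and failure rates are illustrative assumptions.

def survival_probability(annual_failure_rate: float, units: int) -> float:
    """Probability that none of `units` independent converters fails in a year."""
    return (1.0 - annual_failure_rate) ** units

CONVERTERS_IN_PATH = 500  # assumed converters in the power path of one workload

for rate in (0.01, 0.001, 0.0001):
    p = survival_probability(rate, CONVERTERS_IN_PATH)
    print(f"per-unit annual failure rate {rate:.2%} -> workload survives the year with p = {p:.3f}")

# Even at a 0.1% annual failure rate per converter, there is a ~39%
# chance some converter in a 500-unit path fails within the year,
# taking the whole workload with it.
```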
Gemma Allen
>> And the cost impacts too-
Kannan Soundarapandian
>> Massive....
Gemma Allen
>> are massive, more so than ever before.
Kannan Soundarapandian
>> Yeah.
Gemma Allen
>> So the customer expectations are at a zero tolerance level, I'm sure.
Kannan Soundarapandian
>> Absolutely. And the problem of the day is to be able to get that power into these GPUs, but the problem that we are not necessarily talking about as much is once you implement it, it has to be alive and ticking at extreme low levels of failure, and that becomes the engineering problem. Whoever gets that right pretty much wins in this space.
Gemma Allen
>> And one thing that's unique about your value prop at Texas Instruments is you talk about having very lean, decluttered racks, this idea of having power direct, bringing it as close as it can possibly be to where it's needed. Talk to me a little bit about the evolution of that, and where do you go from here?
Kannan Soundarapandian
>> I'll give you a little bit of a history lesson over here. So we used to be able to deliver power into the racks on a 12 volt rack plane. That was when power needs were significantly lower. Then it was 48, and that's the current state right now. Now it is 800. There's a reason for this. We're not doing that just for performance reasons or even cost reasons. There's an incontrovertible fact that the only way to deliver more and more power into a smaller and smaller volume is to bring higher and higher voltage closer to the point of consumption. Now, if you are going to be bringing 800 volts into a rack, necessitated by the amount of power you need to deliver, that better be completely decluttered. It better be completely clean, because these are dangerous voltages that now have to be safely routed as close to the GPU as possible. This is no small problem, and it has to be done safely, securely, and it has to last a lifetime. These are the problems of the day that are the most fun to work on.
Gemma Allen
>> But these problems, Texas Instruments is obviously moving in a great direction to solve them, but they're industry-wide. We had a time in tech where there was a level of academic input and there was time and space for that. In this generation and this era we're in, it feels as though there is no time or space to really-
Kannan Soundarapandian
>> It is accelerated.
Gemma Allen
>> Right? So how do you think about that as well from the perspective of industry? Even back to the earlier point around energy sources, do you think that the industry's doing a good job of coming together to solve for problems in a holistic way?
Kannan Soundarapandian
>> The industry will get our report card here soon in a very compressed timeline, I'm sure. But you bring up a very interesting point. One of the things that Texas Instruments has been doing for the past few decades is very heavily investing in automotive technologies and industrial technologies, and quite honestly, that is the precursor and that's what made available all the materials needed to get into the data center space right now. So if you're looking at quality needs, any car, for example, stranded on the road, nightmare scenario, nobody wants to see it, so we've already been doing decades of work to get the reliability of our products up to a level that is suitable for automotive. Now, I would tell you, with the compression and the increase in power density needed for the data center space, those needs have escalated, they haven't gone down. But I'd say that the preparation we've had over the past few decades to get to that point, understanding how to qualify these products, understanding reliability over lifetime, all of these things are going to be absolutely key.
Gemma Allen
>> And just because you mentioned the example of the car, we also hear a lot about AI on the edge. Does the, I guess, unique output or the unique commercial requirement of these GPUs or of these chips in any way impact how you think about it from the perspective of power to the source? Are there varied tiers, for example, in terms of what's needed for a GPU versus an LPU, versus what the kind of expectations and metrics are, or is everything one and the same?
Kannan Soundarapandian
>> So I would actually say, you're right, there are different levels of problems that we need to solve, so power at the edge is going to be a very different problem there. It's going to be about how do you sip the least amount of energy, because in some cases, I can imagine there can be energy harvesting elements over there that are supposed to power whatever metrology is happening at the edge. Very different problem where now you're squeezing down how little power you can use, versus at the center of it all, you're talking about GPUs and LPUs, and I wouldn't necessarily separate the two. There are differences, but here, it's all about massive power consumption, massive power density, and the ability to cool all that. Because the other thing that we don't talk about, which is as important, is if you are delivering the insanity of one megawatt of continuous power into something the size of a refrigerator, you have to get that power out too. So it's every single decision we make in terms of efficiency of the products that we create, which is why we've been investing in GaN, for example. That is the best switch for this particular application space right now, so the less heat you generate while doing this, the less heat you end up having to remove. It's just this entire ecosystem that goes around and around.
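The cooling point can be made concrete with a little arithmetic: whatever the conversion path does not deliver becomes heat that must be removed. A rough sketch with assumed end-to-end efficiencies:

```python
# Rough sketch: waste heat produced by the power path at a given
# end-to-end efficiency. Efficiencies are assumptions for illustration.

def waste_heat_w(delivered_w: float, efficiency: float) -> float:
    """Heat dissipated by the conversion path while delivering `delivered_w`."""
    return delivered_w * (1.0 - efficiency) / efficiency

DELIVERED_W = 1_000_000  # the ~1 MW continuous figure from the conversation

for eta in (0.95, 0.97, 0.99):
    kw = waste_heat_w(DELIVERED_W, eta) / 1000
    print(f"{eta:.0%} efficient path -> {kw:,.1f} kW of heat to remove")

# 95% -> ~52.6 kW, 97% -> ~30.9 kW, 99% -> ~10.1 kW: every point of
# efficiency from a better switch is heat the cooling loop never sees.
```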
Gemma Allen
>> It's a cycle.
Kannan Soundarapandian
>> Yeah. And you mentioned automotive. There's a very cool thing over there. If you think about it, 800 volt elements, designs, chips, they all actually have an automotive origin point, in my mind at least. The Le Mans race is actually very familiar to all of us, I'm sure. When electric cars go down the Mulsanne Straight at about 230 miles per hour and then you brake, regenerative braking alone broke every 400 volt system available at that time. They had to go to 800 volts to be able to tolerate that energy recovery. That's where it started, so everything is connected. The journey that started over there, again, on a race track, is basically what accelerates data centers today, and we've been part of that whole story.
Gemma Allen
>> Well, speaking of the edge and speaking of cooling, I have to ask you about data centers in space.
Kannan Soundarapandian
>> Yeah.
Gemma Allen
>> We hear a lot about it. We obviously hear about Starcloud and the H100 in orbit right now. What are your thoughts from the perspective of how that will impact your industry and your business? Is it opportunistic? Is it a little bit unnerving?
Kannan Soundarapandian
>> It is. To be very honest with you, this is extremely new. I tend to look at all of these problems in terms of energy, being able to supply the energy, and as we just talked about, being able to remove that energy once it's actually used. Now in space, let's assume for a moment we figure out how to harvest that energy from a solar perspective. The bigger problem that I don't yet know that we've solved is how do you get rid of the excess heat after you're done using the energy? That has to go somewhere. We take for granted the air we breathe. This is so important for contact cooling. You can cool anything in the world because there's always a blanket of air around it. You can put other stuff to remove that heat and transfer it into the atmosphere. In space, there is none. The only way you can remove heat from an element generating it is by radiation, by radiant methods, which are typically a lot less efficient. Anyway, I say that to say that there are certain fundamental problems that we don't yet know how to solve. It's an interesting idea. We just have to keep thinking about it.
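The radiative-cooling constraint he describes can be sized with the Stefan-Boltzmann law. A back-of-envelope sketch, with assumed radiator temperature and emissivity, ignoring solar loading and view factors:

```python
# Back-of-envelope sketch of radiative heat rejection in space using the
# Stefan-Boltzmann law. Radiator temperature and emissivity are assumed,
# and solar absorption and view factors are ignored for simplicity.
SIGMA = 5.670374e-8  # Stefan-Boltzmann constant, W / (m^2 * K^4)

def radiator_area_m2(heat_w: float, temp_k: float, emissivity: float = 0.9) -> float:
    """Radiator area needed to reject `heat_w` purely by radiation."""
    return heat_w / (emissivity * SIGMA * temp_k**4)

HEAT_W = 1_000_000  # reject the ~1 MW discussed earlier

for temp in (300, 350, 400):
    print(f"radiator at {temp} K -> {radiator_area_m2(HEAT_W, temp):,.0f} m^2")

# ~2,419 m^2 at 300 K, ~1,306 m^2 at 350 K, ~766 m^2 at 400 K:
# with no air for convective cooling, rejecting a megawatt takes
# radiators on the scale of football fields.
```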
Gemma Allen
>> Okay. Well, it's Monday of next week, we're back in the office in Dallas after GTC. Everyone's a bit tired but motivated. What's ahead for you and the team at Texas Instruments for the year out?
Kannan Soundarapandian
>> So it's been a great GTC for us. We have actually been able to show you, it's right here, an entire grid-to-gate solution, which means basically, we're able to take power from the grid, AC power that comes in, and then transfer it into an 800 volt bus, do a hot swap solution that allows you to safely access that 800 volt bus when you remove and take servers out, and then use a down converter. We have just two stages of conversion that we've enabled, and as of yesterday, we went public with it. Basically, two stages of conversion from 800 volts all the way down to the sub one volt that the GPU lives on, so this is very exciting for us. We've now shown what that entire scale looks like. This solution is available and is ready to go today.
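Why two stages matter: end-to-end efficiency is the product of the per-stage efficiencies, so every stage removed stops multiplying losses. A hypothetical comparison with assumed per-stage figures, not measured numbers from the TI solution:

```python
# Hypothetical sketch: end-to-end efficiency as the product of stage
# efficiencies. Stage names and figures are illustrative assumptions,
# not measurements of the TI reference solution.
from math import prod

two_stage = {"AC grid -> 800 V bus": 0.98, "800 V -> sub-1 V rail": 0.96}
legacy_chain = {
    "AC grid -> 400 V": 0.97,
    "400 V -> 48 V": 0.97,
    "48 V -> 12 V": 0.96,
    "12 V -> sub-1 V rail": 0.94,
}

for name, chain in (("two-stage", two_stage), ("legacy multi-stage", legacy_chain)):
    print(f"{name}: end-to-end efficiency {prod(chain.values()):.1%}")

# two-stage: ~94.1%; legacy multi-stage: ~84.9%. Every conversion stage
# removed is loss that never has to be generated or cooled away.
```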
Gemma Allen
>> Wow.
Kannan Soundarapandian
>> That is what animates us, is what's kept us going for the last months over here, so we're very happy to be here and show you that solution, and Monday, it's all about getting right back to work on it.
Gemma Allen
>> Meeting those purchase orders.
Kannan Soundarapandian
>> Correct.
Gemma Allen
>> Well, Kannan, thank you so much for talking. That was a fascinating conversation.
Kannan Soundarapandian
>> Thank you so much.
Gemma Allen
>> So much to unpack there, but let's see. Let's see what the world is like a year from now.
Kannan Soundarapandian
>> Let's do that.
Gemma Allen
>> Thanks so much.
Kannan Soundarapandian
>> Thank you so much.
Gemma Allen
>> I'm Gemma Allen here at theCUBE, at NVIDIA GTC, at the Texas Instruments booth. Thanks so much for watching.