Paresh Kharya & Kevin Deierling, NVIDIA | HPE Discover 2020
Paresh Kharya, director of product management for data center and cloud computing at NVIDIA, and Kevin Deierling, senior vice president of marketing at NVIDIA, talk with Stu Miniman for HPE Discover 2020. Visit thecube.net for our full catalog of interviews.

https://siliconangle.com/2020/06/24/qa-how-nvidias-bluefield-intelligent-data-processing-unit-helps-optimize-computing-at-data-center-scale-hpediscover/

Q&A: How Nvidia’s BlueField intelligent data processing unit helps optimize computing at data-center scale

The first business computers were huge contraptions that took up rooms. Technology advanced, and computers became boxes that sat on a desk or in a rack. Now the confluence of artificial intelligence, 5G networks that enable the internet of things, and super-powerful processors is forcing computers out of the box once more.

“The new unit of computing is really the data center,” said Kevin Deierling (pictured, right), senior vice president of marketing at Nvidia Corp. “That’s the scale of the types of problems we’re solving.”

Deierling and Paresh Kharya (pictured, left), director of product management for data center and cloud computing at Nvidia, spoke with Stu Miniman, host of theCUBE, SiliconANGLE Media’s livestreaming studio, during the HPE Discover Virtual Experience event. They discussed the BlueField programmable data processing unit, the evolution of data processing, advances in AI, and Nvidia’s partnership with Hewlett Packard Enterprise Co. (* Disclosure below.)

[Editor’s note: The following content has been edited for clarity.]

AI is obviously a megatrend. Can you describe where Nvidia sits and what the market is?

Kharya: We are witnessing massive changes across every industry, driven by the confluence of three things: one is, of course, AI; the second is 5G and the IoT; and the third is the ability to process all of the data that we have.
In AI, we are seeing really advanced models, from computer vision to understanding natural language to the ability to speak in conversational terms. In terms of IoT and 5G, there are billions of devices sensing and inferring information, and now we have the ability to act and make decisions in various industries. And finally, with all of the processing capabilities we have today, in the data center and in the cloud as well as at the edge, with GPUs and advanced networking, we can now make sense of all of this data to drive industrial transformation.

When you look at some of these waves of technology, there are a lot of new pieces, but architecturally some of them remind us of the past. What’s the same and what’s different about this highly distributed edge-compute, AI and IoT environment?

Deierling: When you move to the edge, instead of having a single data center with 10,000 computers, you have 10,000 data centers, each of which has a small number of servers processing all of the information that’s coming in. But in a sense, the problems are very similar whether you’re at the edge or you’re doing massive high-performance computing, scientific computing or cloud computing. And so we’re excited to be part of bringing together the AI and the networking, because together they allow optimizing at data-center scale across the entire stack.

So, obviously, we know CPUs. When we think about GPUs, we think of Nvidia. Google came out with Cloud Tensor Processing Units. But what are the data processing units Nvidia just announced? Is this just some new AI thing or a new architectural model?

Deierling: There are three key elements of this accelerated, disaggregated infrastructure that the data center is becoming. One is the CPU, which handles traditional single-threaded workloads. But for all of the accelerated workloads, you need the GPU.
The GPU provides massive parallelism and deals with massive amounts of data. But to get that data into the GPU, and also into the CPU, you need an intelligent data processing unit, because of the scale and scope of GPUs and CPUs today. These are not single-core entities; there are hundreds or even thousands of cores in a big system. So you need to steer the traffic exactly to the right place, and you need to do it securely, virtualized and with containers. ...

Watch the complete video interview below, and be sure to check out more of SiliconANGLE’s and theCUBE’s coverage of the HPE Discover Virtual Experience event.

(* Disclosure: TheCUBE is a paid media partner for the HPE Discover Virtual Experience. Neither Hewlett Packard Enterprise Co., the sponsor for theCUBE’s event coverage, nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)