Pradeep Sindhu, CEO & Co-Founder at Fungible joins Dave Vellante for theCUBE on Cloud 2021.
#theCUBE #CUBEOnCloud
https://siliconangle.com/2021/01/21/data-processing-unit-offers-new-architectural-solution-cloud-data-center-networks-cubeoncloud/
Data processing unit offers new architectural solution for cloud data center networks
SPECIAL COVERAGE: THECUBE ON CLOUD BY MARK ALBERTSON
One of the maxims in the technology world is that much of what powered innovation over the past decade will likely be replaced by something new in the current decade. The rise of the data processing unit provides a prime example of this reality.
By more efficiently executing data-centric computations within server nodes, the DPU offers the potential for significant improvement in next-generation cloud architectures. This will be an important development because the critical functions of network, storage, virtualization and security have outstripped the capabilities of general-purpose central processing units.
“CPUs are not good at executing these data-centric computations, and in a compute centric cloud architecture, the interactions between server nodes are very inefficient,” said Pradeep Sindhu (pictured), founder and chief executive officer of Fungible Inc. “What we are looking to do at Fungible is to solve these two basic problems.”
Sindhu spoke with Dave Vellante, host of theCUBE, SiliconANGLE Media’s livestreaming studio, during theCUBE on Cloud event. They discussed the need to address processing issues within CPU architecture, the use of high-level code as a solution, improving cloud network efficiency and how the DPU is different from other processor offerings in the enterprise.
Playing traffic cop
At the heart of the workload processing dilemma is the reality that cloud data center servers are built using general-purpose x86 CPUs. This architectural model depends on being able to scale out identical or near-identical servers, all connected to a standard IP ethernet network.
That might have been adequate in a time before cloud computing required the processing of data-heavy workloads, but artificial intelligence applications, which rely on vast amounts of information, have changed the game dramatically. CPUs are now being asked to run applications and direct traffic for I/O, according to Sindhu.
“The architecture of these CPUs was never designed to play traffic cop,” Sindhu said. “You’re interrupting the CPU many millions of times a second. It’s critical that in this new architecture where there is a lot of data, a lot of east-west traffic, the percentage of workload which is data-centric has gone from maybe 1% to 2% to 30% to 40%.”
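The cost Sindhu describes can be sketched with a toy back-of-the-envelope model (the numbers below are illustrative, not Fungible's measurements): if a core fields millions of interrupts per second and each one burns cycles on context switching and cache pollution, a large share of the core is consumed before any application work runs.

```python
# Toy model of a CPU "playing traffic cop": the fraction of cycles lost
# to servicing per-packet interrupts. Numbers are hypothetical.

def cpu_overhead(interrupt_rate_hz: float, cycles_per_interrupt: int,
                 cpu_freq_hz: float) -> float:
    """Fraction of CPU cycles consumed by interrupt handling alone."""
    return (interrupt_rate_hz * cycles_per_interrupt) / cpu_freq_hz

# Assumed figures: 2 million interrupts/s, ~1,000 cycles each
# (handler plus cache and pipeline disruption), on a 3 GHz core.
overhead = cpu_overhead(2e6, 1000, 3e9)
print(f"{overhead:.0%} of the core is spent on I/O housekeeping")
```

Under these assumed figures roughly two-thirds of the core does no application work at all, which is the motivation for moving that housekeeping off the CPU.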
Fungible’s solution dramatically increases the number of threads, or streams of high-level code, that a processor can execute concurrently. There are at least 1,000 threads inside the DPU to handle concurrent computations, according to Sindhu, and the company has also improved the efficiency of its chip transistors.
“Our architecture consists of very heavily multithreaded general-purpose CPUs combined with very heavily threaded specific accelerators,” Sindhu said. “We’ve improved the efficiency of those transistors by 30 times.”
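Why so many threads? A simplified model (not Fungible's actual pipeline) shows the idea: data-centric work spends most of its time stalled on memory, and with enough hardware threads interleaved, some thread always has work ready while the others wait.

```python
# Idealized round-robin multithreading model: while one thread stalls on
# memory, the others keep the pipeline busy. Cycle counts are illustrative.

def pipeline_utilization(threads: int, compute_cycles: int,
                         stall_cycles: int) -> float:
    """Fraction of cycles doing useful work, assuming perfect interleaving.

    Each thread alternates compute_cycles of work with stall_cycles of
    waiting; utilization saturates at 1.0 once demand covers the stalls.
    """
    demand = threads * compute_cycles
    return min(1.0, demand / (compute_cycles + stall_cycles))

# One thread: 10 cycles of work, then a 90-cycle memory stall.
print(pipeline_utilization(1, 10, 90))   # pipeline busy only 10% of the time
# Ten interleaved threads hide the stalls completely.
print(pipeline_utilization(10, 10, 90))  # pipeline fully busy
```

In this idealized model a memory-bound workload that keeps one thread 10% busy saturates the pipeline once ten threads are interleaved, which is the rationale for building the DPU around heavy multithreading rather than faster single threads.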
Boosting network utilization
In addition to building a suitable replacement for the CPU, technologists must also address another issue inherent in current IP ethernet-based networks: utilization rates are often low, according to Sindhu. His company is pursuing a solution using its Fabric Control Protocol.
As described in a patent filing, FCP sprays individual packets for a given data flow across multiple paths in a data center switch fabric.
“We were trying to solve the specific problem of data-centric computations and improving node-to-node efficiency,” Sindhu said. “When you embed FCP in hardware on top of a standard IP ethernet network, you end up with the ability to run at very large scale where the utilization of the network is 90% to 95%, not 20% to 25%.”
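The spraying idea can be illustrated with a minimal sketch (the real protocol is implemented in hardware and differs in its details): packets of one flow are distributed round-robin across every available fabric path instead of pinned to one, and the receiver restores the original order from sequence numbers.

```python
# Hedged sketch of per-packet spraying: distribute a flow's packets
# round-robin across all fabric paths, then reassemble by sequence number.
from itertools import cycle

def spray(packets, paths):
    """Sender side: assign each packet to the next path in round-robin order."""
    assignment = {p: [] for p in paths}
    next_path = cycle(paths)
    for seq, pkt in enumerate(packets):
        assignment[next(next_path)].append((seq, pkt))
    return assignment

def reassemble(assignment):
    """Receiver side: merge per-path arrivals back into sequence order."""
    arrived = [item for pkts in assignment.values() for item in pkts]
    return [pkt for _, pkt in sorted(arrived)]

flow = [f"pkt{i}" for i in range(8)]
sprayed = spray(flow, ["path_a", "path_b", "path_c", "path_d"])
assert reassemble(sprayed) == flow  # order restored despite spraying
```

Because no single path carries the whole flow, load spreads evenly across the fabric, which is how this class of design pushes link utilization far above what flow-pinned hashing achieves.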
The role of the DPU in potentially solving issues around network efficiency and computational power has similarities to another trend in processor technology for the enterprise world. The growth of smart network interface card, or SmartNIC, architectures has been a trend over the past several years. The SmartNIC is an embedded microprocessor that offloads functions from the host.
At least 10 vendors have launched SmartNICs since 2017, and VMware Inc. has made them central to its re-architecture of the hybrid cloud through Project Monterey. However, Sindhu is careful to note the difference between the two technologies.
“A SmartNIC is not a DPU,” Sindhu said. “It’s simply taking general purpose Arm cores, putting in a network and PCI interface, integrating them all on the same chip and separating them from the CPU. It solves the problem of the data-centric workload interfering with the application workload, but it does not address the architectural problem of how to address data-centric workloads efficiently.”