Thomas Sohmers, Founder, REX Computing, at Open Compute Project Summit 2015 with theCUBE's Jeff Frick
https://siliconangle.com/2015/03/27/rex-computing-to-build-the-worlds-most-power-efficient-processor-ocpsummit15/
Rex Computing to build the world’s most power-efficient processor | #OCPSummit15
When Rex Computing first opened its doors, their plan was to create the most power-efficient supercomputers in the world, said Rex Computing CEO Thomas Sohmers during an interview with theCUBE host Jeff Frick at OCP Summit 2015. To realize that goal, the folks at Rex Computing had to take processor design into their own hands. “We thought at that point that it may have been possible to use other people’s processors, and as we were developing that and the system we showed last year — we realized there were a lot of fundamental issues with how processors are currently designed and built,” recounted Sohmers. In response to this obstacle, he decided to see if his company “can do a bit better.”
Rex Computing is now setting its sights on “a new processor architecture and instructions to that new core design,” according to Sohmers. Specifically, he stated that Rex’s current idea is “based on this concept that memory movement is expensive and doing the actual computation is much cheaper.” The company is sticking to “the basic computer route,” according to Sohmers, and is “developing a 256 core version of this processor and hoping to have this chip taped out in the next 12 – 18 months,” he elaborated. Sohmers considers this work essential to making supercomputing more power efficient.
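Sohmers’s premise — that moving data costs far more energy than computing on it — can be illustrated with a rough energy budget. The per-operation picojoule figures below are illustrative assumptions for a chip of that era, not numbers from the interview:

```python
# Illustrative sketch: compare the energy of arithmetic against the energy
# of fetching operands from off-chip memory. The picojoule figures are
# rough assumptions, not figures from the interview.

PJ_PER_FLOP = 20.0        # assumed cost of one double-precision operation
PJ_PER_DRAM_BYTE = 80.0   # assumed cost of moving one byte from off-chip DRAM

def kernel_energy_pj(flops: int, dram_bytes: int) -> float:
    """Total picojoules spent on a kernel's compute plus its memory traffic."""
    return flops * PJ_PER_FLOP + dram_bytes * PJ_PER_DRAM_BYTE

# A dot product of two 1,000-element double-precision vectors:
# 2,000 flops, but 16,000 bytes streamed in from DRAM.
compute_pj = 2_000 * PJ_PER_FLOP       # 40,000 pJ of arithmetic
memory_pj = 16_000 * PJ_PER_DRAM_BYTE  # 1,280,000 pJ of data movement

print(memory_pj / compute_pj)  # -> 32.0: memory traffic dominates
```

Under these assumed costs, the data movement outweighs the arithmetic by more than an order of magnitude, which is the imbalance Rex’s architecture aims to attack.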
Why power is a big deal in supercomputing
To explain why power usage is so important in supercomputing, Sohmers pointed to the enormous task facing the United States Department of Energy (DoE). Charged with maintaining the United States’ nuclear stockpile, the DoE handles “weapons testing and simulations in addition to making sure that the current warheads are still safe,” according to Sohmers.
To accomplish these tasks, the DoE operates “some of the world’s most powerful super computers,” including Titan, the world’s second most powerful system. By executive order, the DoE’s power budget for these machines is capped at twenty megawatts. Titan alone, according to Sohmers, delivers “about seventeen petaflops of sustained computing power.” Furthermore, he added, “the DoE wants to get that up to one exaflop – one thousand petaflops,” without exceeding the twenty-megawatt budget. Currently, Sohmers said, the DoE is achieving about three to four gigaflops per watt; to make its objective feasible, he explained, it needs to reach fifty gigaflops per watt.
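The fifty-gigaflops-per-watt figure follows directly from the numbers Sohmers cites. A quick unit-conversion sketch, using only the targets from the article:

```python
# Back-of-the-envelope check of the DoE efficiency target described above.
# All figures come from the article; this is unit conversion only.

TARGET_FLOPS = 1e18     # 1 exaflop = 1,000 petaflops
POWER_BUDGET_W = 20e6   # 20 megawatts

# Required efficiency in gigaflops per watt.
required_gflops_per_watt = TARGET_FLOPS / POWER_BUDGET_W / 1e9
print(required_gflops_per_watt)  # -> 50.0

# Against today's roughly 3-4 GFLOPS/W, that is a 12-17x efficiency gap.
current_gflops_per_watt = 3.5  # midpoint of the quoted range
print(round(required_gflops_per_watt / current_gflops_per_watt, 1))  # -> 14.3
```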
Shifting the compute paradigm
Part of embracing a new approach to power efficiency is accepting that “the way that the x86 and ARM processors — all processors we use today, including GPUs — are built is for an old paradigm,” said Sohmers. The technology world has changed drastically since cluster computing first emerged in the 1990s. While “parallelization” still makes sense “in terms of cost per flop, of doing the actual job,” technology’s constraints “aren’t the same today,” said Sohmers. While shifting constraints have freed technology in some respects, Sohmers noted that “we have different problems” as well.
A race against time to improve HPC
HPC (high-performance computing) is “very similar to embedded in the sense that with an embedded system you have a lot of constraints on power, size, and it’s really meant to be doing a specific task. HPC is basically just the warehouse-sized version of that,” Sohmers stated. He noted that the problem solving is similar both in terms of the difficulties faced and the solutions required.
Sohmers explained further, using Amazon.com, Inc. as an example: “The mainframe is focused on IO, and Amazon is focused on having many different tasks and being able to spread the compute function to different things dynamically,” he said. The upshot is that “the big iron HPC, the things that are developed there, flow into Amazon and the very large distributed scale,” according to Sohmers. Right now, he remarked, “we’re facing a nice problem in HPC at the top one percent.” Some large-scale companies are already feeling the pinch, and he predicted that pinch will tighten within a “two to five year time frame.” After five years, Sohmers said, “everyone else” will begin to feel it too.
Sohmers said his company is doing its best to address these difficulties before they worsen: “by focusing on the problems of the top 1% of computers right now, we’re going to be affecting the design and development of all computers in the future,” he stated emphatically.