Brett Rudenstein, WANdisco, at BigDataSV 2014 with Dave Vellante and Jeff Kelly
@thecube
#BigDataSV
Wikibon’s CEO Dave Vellante and Principal Research Contributor Jeff Kelly co-hosted a theCUBE segment dedicated to WANdisco technologies. Brett Rudenstein, Senior Product Manager of Big Data for WANdisco, was interviewed during Day Two of the Strata Conference 2014 in Santa Clara, California, where he talked about the traction the product has gained and delivered a live demo.
“Last year we announced our Non-Stop Hadoop, and people looked at our technologies wondering, ‘Is that really possible?’” smiled Rudenstein. “This year it’s clear that it is really possible; everyone is defining their use cases, and they are really excited about our offering.”
“What kind of questions are they asking now?” wondered Vellante.
“The discussions revolve around being able to maximize resource utilization. Typically, people bought an insurance policy. Placing $100,000 worth of equipment in a disaster-recovery data center is the insurance policy against a zero-availability event; if the primary site is down, suddenly this equipment has value,” said Rudenstein. “What’s different now is that everyone wants to be able to take that idle resource and do something with it.”
Nowadays, clients can run secondary jobs in that disaster-recovery site, jobs the primary data center otherwise wouldn’t have the bandwidth to run, explained Rudenstein.
Big Data suits up
Noticing the predominance of suits over hoodies at this year’s Strata Conference, Vellante asked Rudenstein who his audience mainly consists of.
“It’s mixed,” replied Rudenstein; “it’s engineers, developers, C-level execs, CIOs and CTOs, people trying to understand how it fits into their environment and how it benefits them at a more holistic, global scale.”
Dave Vellante prompted a short HBase 101 discussion, and Brett Rudenstein obliged: “HBase is effectively storage for big data applications; some people call it a key-value store, but the fundamental principle behind it is being able to store billions and billions of rows of data and, at the same time, have (near) real-time access to that data,” explained Rudenstein.
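The model Rudenstein describes is essentially a sorted map: rows are kept ordered by key so that point reads and short range scans stay fast no matter how many rows exist (HBase sustains this at billion-row scale by splitting key ranges across region servers). The toy sketch below is not the HBase API; it is a minimal, self-contained illustration of that sorted key-value idea, with hypothetical stock-quote row keys as example data.

```python
import bisect

class ToyKeyValueStore:
    """Minimal sketch of an HBase-style sorted key-value store.

    Rows are kept sorted by row key, so a point read is a dict lookup
    and a range scan is a binary search plus a contiguous slice.
    """

    def __init__(self):
        self._keys = []   # sorted row keys
        self._rows = {}   # row key -> {column: value}

    def put(self, row_key, column, value):
        if row_key not in self._rows:
            bisect.insort(self._keys, row_key)   # keep keys ordered
            self._rows[row_key] = {}
        self._rows[row_key][column] = value

    def get(self, row_key):
        return self._rows.get(row_key)

    def scan(self, start_key, stop_key):
        lo = bisect.bisect_left(self._keys, start_key)
        hi = bisect.bisect_left(self._keys, stop_key)
        return [(k, self._rows[k]) for k in self._keys[lo:hi]]

store = ToyKeyValueStore()
# Hypothetical row-key design: entity, ticker, then date, so one
# ticker's history is a contiguous key range.
store.put("stock:AAPL:20140211", "quote:close", "527.55")
store.put("stock:AAPL:20140212", "quote:close", "535.92")
store.put("stock:MSFT:20140211", "quote:close", "37.17")

print(store.get("stock:AAPL:20140212"))                    # point read
print([k for k, _ in store.scan("stock:AAPL:", "stock:AAPL:~")])  # range scan
```

Careful row-key design matters in real HBase for the same reason it matters here: keys that should be scanned together must sort together.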
Why HBase?
“It’s a popular database,” agreed Vellante, “but why did you pick HBase?”
“From a database perspective, the reason that it’s often picked is because of the level of scale that it’s able to achieve and also because it is fundamentally a Hadoop database. Because HBase stores its log files into HDFS, the first thing that you need is a hardened HDFS whereby you can withstand failure,” answered Rudenstein.
“The first thing we announced last year with our Non-Stop Hadoop, was an Active/Active replication of a NameNode, and geographically aware data center Hadoop. When you have that solid underpinning, you can take on an application like HBase and give it the same characteristics. Oftentimes, when people look at the kinds of NoSQL solutions that are available, they are looking at consistency and lower availability versus eventual consistency and high availability. By taking our Active/Active replication technology, putting it on to HBase, we not only allow HBase to be strongly consistent, but also continuously available,” specified Rudenstein.
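The trade-off Rudenstein names is the classic one: many NoSQL systems accept eventual consistency to stay available, while active/active replication aims to keep every site writable yet strongly consistent by having replicas agree on a single global order of writes before committing them. The sketch below is a deliberately simplified toy model of that idea (WANdisco's actual engine is a Paxos-based coordination system, which this does not implement); it only illustrates the majority-agreement property, where writes succeed and stay identical across sites as long as most replicas are up.

```python
class Replica:
    """One site in a toy active/active cluster."""
    def __init__(self, name):
        self.name = name
        self.log = []        # agreed-upon, totally ordered writes
        self.alive = True

class ActiveActiveCluster:
    def __init__(self, names):
        self.replicas = [Replica(n) for n in names]

    def propose(self, write):
        """Accept a write from any site; commit only with a majority.

        Rejecting a write outright (instead of applying it to a
        minority) is what keeps surviving replicas strongly
        consistent with each other.
        """
        alive = [r for r in self.replicas if r.alive]
        if len(alive) * 2 <= len(self.replicas):
            raise RuntimeError("no majority: write rejected")
        for r in alive:
            r.log.append(write)   # same write, same position, every site
        return True

cluster = ActiveActiveCluster(["us-west", "us-east", "eu-west"])
cluster.propose("put row-1")
cluster.replicas[2].alive = False    # one site goes down
cluster.propose("put row-2")         # still commits: 2 of 3 alive
print(cluster.replicas[0].log == cluster.replicas[1].log)  # True
```

With a majority gone, `propose` fails loudly rather than letting sites diverge; that failure mode, not the happy path, is where the consistency guarantee lives.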
“Is there a pre-requisite in order to take advantage of the full Non-Stop HBase technology?” asked Vellante.
“A best practice would be to be able to have that reliable, continuously available data storage – meaning HDFS and Non-Stop NameNode, the Non-Stop Hadoop product.”
“You could do it without a Non-Stop NameNode, but then you would make Hadoop the weak link,” commented Vellante. “So your advice is: go Non-Stop NameNode, and then apply Active/Active to HBase.”
Brett Rudenstein agreed: “That is the right approach: you want to make sure that the foundation of your house is in good order before you start building on top of it, and that’s exactly what we’ve done.”
Jeff Kelly, Principal Research Contributor with Wikibon, asked Rudenstein to talk more about the implications for the enterprise, now that mission-critical applications can potentially run on HBase instead of a costly, proprietary database.
“I think it changes some of the applications that can now participate in the use of HBase; you can use it for applications that are mission-critical, such as streaming stock quotes. Before, if HBase became unavailable, you suddenly lost that continuous availability and the business continuity. This opens up all those possibilities for continuous availability that these mission-critical applications require,” concluded Rudenstein.