Patrick Toole and Christopher Rocca, Hadapt, at Big Data NYC 2013 with John Furrier and Dave Vellante
Harnessing the power of Big Data is no small feat, requiring organizational commitment, substantial investments in infrastructure and talent, and clear project objectives. There's also the matter of putting that power in the hands of users, a challenge that Hadapt has set out to solve. Based in Cambridge, Massachusetts, the startup offers a relational abstraction layer for Hadoop that lets analysts unlock the value of their data without having to go through Hive.
Hadapt engineering head Christopher Rocca and pre-sales manager Patrick Toole dropped by theCUBE at SiliconANGLE's Big Data NYC 2013 gathering to discuss how their firm makes information more consumable. The company's software provides access to data in HDFS via SQL rather than MapReduce or Java, which are more complex and not as common among everyday business users.
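The appeal of that SQL layer is easiest to see side by side. The toy sketch below contrasts the same aggregation expressed declaratively in SQL against hand-written map, shuffle and reduce phases of the kind raw Hadoop programming resembles; the table and column names are purely illustrative, not Hadapt's actual schema or API.

```python
# Toy contrast: one declarative SQL statement vs. explicit
# map/shuffle/reduce phases over the same clickstream-style data.
# (Illustrative only -- not Hadapt's product or Hadoop's Java API.)
import sqlite3
from collections import defaultdict

clicks = [("home", 1), ("search", 1), ("home", 1), ("checkout", 1)]

# SQL approach: one statement, familiar to any analyst.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE clicks (page TEXT, hits INTEGER)")
conn.executemany("INSERT INTO clicks VALUES (?, ?)", clicks)
sql_result = dict(conn.execute(
    "SELECT page, SUM(hits) FROM clicks GROUP BY page"))

# MapReduce style: the programmer spells out each phase by hand.
mapped = [(page, hits) for page, hits in clicks]              # map
shuffled = defaultdict(list)
for page, hits in mapped:                                     # shuffle
    shuffled[page].append(hits)
mr_result = {p: sum(v) for p, v in shuffled.items()}          # reduce

assert sql_result == mr_result  # both yield {'home': 2, 'search': 1, 'checkout': 1}
```

Both paths compute identical totals; the difference is who can write them, which is the gap the SQL abstraction closes for everyday business users.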
According to Toole, Hadapt bridges the data knowledge gaps that plague traditional enterprises by eliminating the need for complex ETL processes. The company helps customers unify their analytics environments, abstract away management complexity and set clear goals for delivering actionable insights.
Asked about the differentiators that set Hadapt apart from Cloudera Impala, a rival SQL-on-Hadoop solution, Rocca says that his firm's software is smarter and more agile.
"Our fundamental architecture allows us to scale very wide, push queries down to the data nodes, get a lot of parallelism out of the cluster [and] deal with very large datasets not limited to any kind of memory scaling issue," Rocca elaborates. "You want to not just have basic SQL operations on normalized data, you really want a more flexible analytics platform where you can start to bring in data from some other sources," specifically clickstream information and other unnormalized workloads.
Toole adds that Hadapt provides value beyond this core functionality with machine learning algorithms and a flexible schema that can process both structured data and multi-structured information such as text and key-value pairs.
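The flexible-schema idea Toole describes can be sketched in miniature: records that share a few fixed structured columns while each carrying its own free-form key-value attributes. All names below are hypothetical illustrations, not Hadapt's actual data model.

```python
# Sketch of a flexible schema: fixed structured columns plus
# per-record key-value attributes, as in clickstream data.
# (Hypothetical example -- not Hadapt's real schema or API.)
records = [
    {"user": "a1", "event": "click",  "attrs": {"page": "home"}},
    {"user": "b2", "event": "search", "attrs": {"query": "tv", "results": 14}},
]

# Structured part: every record exposes the same columns.
structured = [(r["user"], r["event"]) for r in records]

# Multi-structured part: key-value pairs flattened into
# (user, key, value) triples so SQL-style tools can query them.
kv_pairs = [(r["user"], k, str(v))
            for r in records
            for k, v in r["attrs"].items()]
```

The point of the flattening step is that varying attributes per record no longer break a fixed relational layout.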