01. Amit Satoor, SAP HANA Vora, Visits #theCUBE!. (00:20)
02. What Are You Doing With Spark. (00:42)
03. What Are Your Customers Telling You They Want. (01:36)
04. Can You Get Customers Where They Want To Be Gradually. (02:50)
05. What Parts Of The Old Business Suite Have Been Upgraded In Terms Of Training. (04:30)
06. Is There A Distinction Between Being Experienced And The Business Process. (07:57)
07. Is The Business Forecasting Process Stable For Regulatory Reasons. (09:20)
08. What Are You Doing With Social Platforms To Extract Data. (10:22)
09. Is Spark Allowing You To Have Much More Robust Data Points For Companies. (12:42)
10. Will We See Spark As An Analytic Engine Within HANA. (14:04)
Track List created with http://www.vinjavideo.com.
--- ---
Inundated with data? The key to integration | #SparkSummit
by Timothy Walden | Jun 7, 2016
There’s nothing more overwhelming than being bombarded with multiple streams of data. In only a few years, we have gone from gigabytes to terabytes, and the information just keeps coming. As technology weaves itself into the fabric of everyday business, how can organizations most effectively process all the data being generated?
Amit Satoor, senior director of product and solution marketing at SAP SE, talked with John Walls and George Gilbert (@ggilbert41), cohosts of theCUBE, from the SiliconANGLE Media team, during Spark Summit 2016 about data processing and how to make sense of it all.
Where does it all go?
With so much information being generated, many companies are wondering how to integrate it all. Apache Spark, an open-source data processing engine, acts as a framework that can help process Big Data prior to integration for enterprise use, according to Satoor. It’s all about two challenges: “The size of the data and what sort of process to use.”
Spark is like a bridge between the data and big enterprises. It can process information before it’s needed and make the integration a much smoother process. “You want to make sure the experience is seamless,” said Satoor.
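As a rough illustration of that bridging idea (not SAP's actual pipeline), the point is to clean and filter raw records before the enterprise system ever sees them. The sketch below uses plain Python to mimic the map/filter style of a Spark job; the field names and records are invented for the example.

```python
# Illustrative sketch: pre-process raw event records before handing
# them to an enterprise system, in the map/filter style of a Spark job.
# All field names and values are invented for this example.

raw_events = [
    {"id": 1, "payload": "  order:widget  ", "bytes": 120},
    {"id": 2, "payload": "",                 "bytes": 0},
    {"id": 3, "payload": " order:gadget ",   "bytes": 95},
]

def clean(event):
    # Normalize the payload so downstream integration sees one format.
    return {"id": event["id"], "payload": event["payload"].strip()}

# Drop empty records, then normalize the rest -- the "bridge" step.
prepared = [clean(e) for e in raw_events if e["bytes"] > 0]

print(prepared)
```

Because the data arrives already cleaned and in one shape, the integration step on the enterprise side has far less to do, which is the "seamless experience" Satoor describes.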
For the nostalgic consumer or business that wants a feeling of familiarity, many of the integration processes remain the same even though the experience changes. As computing evolves and moves toward pattern-based learning, users will have a more interactive and useful experience. All the moving parts, such as Spark, SAP HANA and others, must be gradually integrated in order to create more efficient and effective data processes.
From old to new
As the complexity of Spark increases, more data from the past can be integrated into current algorithms. It can “shine a light” on old research and development projects where data was previously going untouched. With usage patterns from the Internet of Things, inputs that already exist can be put to new uses. For example, sensors in airplanes across the globe can be used to gather information that will improve maintenance scheduling and safety regulations across all airlines, according to Satoor.
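The airplane-sensor example can be made concrete with a small sketch. Nothing below comes from SAP or any airline; the readings, field names, and the 0.8 threshold are all invented, and plain Python stands in for what would be a distributed Spark aggregation at real scale.

```python
# Illustrative sketch: group existing sensor readings by aircraft and
# flag those whose average vibration exceeds a maintenance threshold.
# Data, field names, and the 0.8 threshold are invented for this example.
from collections import defaultdict

readings = [
    ("plane-A", 0.71), ("plane-A", 0.95), ("plane-A", 0.88),
    ("plane-B", 0.40), ("plane-B", 0.52),
]

# Group readings per aircraft (the "shuffle" step in a real Spark job).
by_plane = defaultdict(list)
for plane, vibration in readings:
    by_plane[plane].append(vibration)

# Average per aircraft, then flag anything above the threshold.
averages = {p: sum(v) / len(v) for p, v in by_plane.items()}
needs_check = sorted(p for p, avg in averages.items() if avg > 0.8)

print(needs_check)  # → ['plane-A']
```

The pattern is the point, not the toy data: inputs that already exist (sensor streams) are grouped and summarized to answer a new question, in this case which aircraft should be scheduled for inspection.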
Technology is always changing and adapting, and so too must the data scientists and platforms companies use. Spark’s framework and variety of modules help to improve a user’s experience and produce new results from old problems, Satoor said.
“Spark is always evolving,” Satoor added, and it is that evolution that keeps everyone a step ahead.
#SparkSummit
#theCUBE