01. Mike Williams, Fast Forward Labs, Visits theCUBE. (00:15)
02. Supervised Machine Learning. (00:57)
03. Legal Framework Not Keeping up with Development. (02:52)
04. Assessing and Sustaining Machine Learning. (07:26)
#theCUBE #FastForwardLabs #BigDataSV #SiliconANGLE
--- ---
Can machine learning create liability issues for businesses? | #BigDataSV
by Nelson Williams | Mar 31, 2016
In the new digital era, a business needs a store of data to help inform its decisions and interactions, but data is useless unless someone acts upon it. Unfortunately, it’s very easy to collect more data than any human team could possibly sort through, let alone put into practice. That’s where machine learning comes in. These systems learn from stored data, finding patterns and deriving rules at machine speed. Machine learning is becoming a valuable, perhaps even necessary, part of business infrastructure.
To gain some insight into machine learning, Peter Burris (@plburris) and Jeff Frick (@jefffrick), co-hosts of theCUBE from the SiliconANGLE Media team, joined Mike Williams, research engineer at Fast Forward Labs, during the BigDataSV 2016 event in San Jose, California, where theCUBE is celebrating #BigDataWeek, including news and events from the #StrataHadoop conference.
Rules of thumb
What machine learning does, Williams said, is uncover general patterns from historical data. These patterns, and how the machine works with them, are called “rules of thumb.” The problem, he continued, is these rules of thumb might not always be correct. This raises the possibility of the system doing harm, be it to the business, a product or even a person.
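The idea of extracting rules of thumb from historical data can be made concrete with a toy sketch. The example below is hypothetical, not from the interview: it learns, for a single feature, the majority outcome seen in past loan decisions, which is roughly what a very simple classifier does.

```python
from collections import Counter, defaultdict

# Hypothetical historical loan decisions (feature dict, approved?)
history = [
    ({"employed_years": 5, "past_default": False}, True),
    ({"employed_years": 1, "past_default": True},  False),
    ({"employed_years": 3, "past_default": False}, True),
    ({"employed_years": 2, "past_default": True},  False),
    ({"employed_years": 4, "past_default": False}, True),
]

def learn_rule_of_thumb(history, feature):
    """For one feature, record the majority outcome observed for each value."""
    outcomes = defaultdict(Counter)
    for record, label in history:
        outcomes[record[feature]][label] += 1
    return {value: counts.most_common(1)[0][0] for value, counts in outcomes.items()}

rule = learn_rule_of_thumb(history, "past_default")
print(rule)  # {False: True, True: False} -- "approve unless past default"
```

The learned rule is only as good as the history behind it: if past decisions were biased or the pattern is coincidental, the model faithfully reproduces the error, which is exactly the risk Williams describes.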
Machine learning, he said, can set in stone biases drawn from the historical data, such as racial bias. The people who deploy machine learning models need to be aware of the legal issues this could cause.
Breaking the rules
The machine learning community, Williams said, still lacks a concise set of rules, something one could write on a postcard, for certifying that a machine learning system is safe. In the meantime, one option is to censor or adjust the data to remove unneeded variables that could bias the model.
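Censoring the data might look like the following sketch: dropping sensitive fields from each record before it reaches the model. The field names here are hypothetical examples, and note that simply removing a column does not remove correlated proxies, which is part of why this is only one option rather than a complete fix.

```python
# Hypothetical protected variables (and a common proxy) to strip before training
SENSITIVE = {"race", "gender", "zip_code"}

def censor(record, sensitive=SENSITIVE):
    """Return a copy of the record with sensitive variables removed,
    so they are never fed to the model."""
    return {k: v for k, v in record.items() if k not in sensitive}

applicant = {"income": 52000, "zip_code": "94103", "gender": "F", "employed_years": 4}
print(censor(applicant))  # {'income': 52000, 'employed_years': 4}
```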
Fresh and updated data is a vital part of creating an accurate, safe model. Companies want data scientists to say there’s a problem before a liability appears, Williams said. For now, humans are still part of the process.
@theCUBE
#BigDataSV #StrataHadoop