Jay Yonamine, Head of Data Science for Global Patents at Google, sits down with Jeff Frick at Chief Data Scientist, USA in San Francisco, CA.
https://siliconangle.com/2016/11/23/bringing-data-tools-to-googles-legal-department-cdsusa/
Bringing data tools to Google’s legal department | #CDSUSA
Disruption of traditional practices has become a big selling point for various technologies in recent years. But no matter how committed to an ideal a company may be, it will likely run into severe difficulties when trying to apply it to firmly established aspects of the enterprise experience, particularly in the legal department.
At the recent Chief Data Scientist, USA event, Jay Yonamine, head of Data Science for Global Patents at Google, sat down with Jeff Frick (@JeffFrick), co-host of theCUBE*, from the SiliconANGLE Media team, to discuss how the many data-crunching tools at Google’s disposal are being applied to the centuries-old process of patenting. Yonamine also discussed the biggest difficulties Google has encountered in doing so.
Culture clash
Yonamine began by noting that on the “customer-facing products” side of Google, almost everything is cutting-edge, with “machine learning, artificial intelligence, data-driven efficiency,” and so on. But on Google’s internal legal side, things aren’t quite as accelerated.
“Legal, as an industry, is sort of traditional, entrenched in established processes,” Yonamine stated, pointing out a tension stemming from the legal and patent departments’ staff wanting as much access to Google’s disruptive technologies as other departments have.
“The challenge we face is demonstrating that the new tool, the new machine learning algorithm, the new application, is actually helping us in an observable, measurable way, relative to the sort of entrenched status quo,” he explained.
As Yonamine sees it, a lot of time is spent simply figuring out how to evaluate performance of the existing processes so that a baseline exists for comparison to the newer technologies that could potentially be applied. “That mindset, of a really data-driven approach, not just to building tools, but to measuring the ROI of the tools we build, is a challenge we’re working on,” he said.
Clarity and comparisons
Yonamine also identified two big areas of opportunity for improved data-handling within the patenting process. The first, “the actual prosecution,” covers filing the patent and managing the back-and-forth with the patent office, including textual adjustments, as well as determining whether the value of the patented invention justifies the cost of prosecution. For Google, this process involves tens of thousands of applications on an ongoing basis.
The second is “transactions,” with companies buying, selling and licensing portfolios, and the contingent assessment of the relevance of those portfolios’ contents to the products of the involved companies.
In each of these areas, time and knowledge constraints make it infeasible for people to assess the value of these portfolios manually, but Yonamine sees plenty of room for machine learning and algorithms to help with the determinations.
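Yonamine does not describe Google’s internal tooling, but the kind of automated relevance triage he alludes to can be illustrated with a toy sketch: scoring how closely a patent abstract matches a product description using bag-of-words cosine similarity. Everything below (the texts, the function, the names) is hypothetical and far simpler than any production system; it only shows the general shape of the idea.

```python
import math
from collections import Counter

def cosine_similarity(text_a: str, text_b: str) -> float:
    """Cosine similarity between two texts' word-count vectors."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Hypothetical texts: a patent abstract scored against two product blurbs.
patent_abstract = "A method for ranking search results using machine learning models."
product_blurb = "Our search product ranks results with machine learning."
unrelated_blurb = "A recipe collection for seasonal vegetable soups."

# The related product should score higher than the unrelated one.
print(cosine_similarity(patent_abstract, product_blurb) >
      cosine_similarity(patent_abstract, unrelated_blurb))  # prints True
```

A real system would use far richer representations (embeddings, claim-level analysis, citation graphs) than raw word counts, but the principle is the same: a cheap automated score lets a small team rank tens of thousands of patents for human review instead of reading each one.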
In addition to the patent-analysis tools Google hosts, he also pointed to the “tremendous push to machine-readability” in patenting data as a major step in the right direction, though he acknowledged a few lingering data streams that still deliver that data in PDFs, making it “a bit clunky to get at the underlying text in a way that’s machine-readable.”
The big picture
Taking a big-picture view, Yonamine picked out cost at scale as a huge driver of improving the patenting process, but emphasized the importance of establishing a clear model of operations against which improvements could be measured.
“To the extent that ‘better’ is subjective, it tends to be a losing proposition to prove the ROI of the machine-learning application if no one can agree on how much better the final result was. So that’s kind of the cultural challenge in a nutshell,” he explained.
Watch the complete video interview below, and be sure to check out more of SiliconANGLE and theCUBE’s coverage of the Chief Data Scientist, USA event.
*Disclosure: Corinium Global Intelligence and other companies sponsor some Chief Data Scientist, USA segments on SiliconANGLE Media’s theCUBE. Neither Corinium Global Intelligence nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.