In this interview from KubeCon + CloudNativeCon EU, Christopher "CRob" Robinson, CTO of the Open Source Security Foundation (OpenSSF), joins Greg Kroah-Hartman, Linux kernel maintainer, to talk with theCUBE's Rob Strechay and Paul Nashawaty about how the explosion of AI-generated bug reports is reshaping open source security — and the major industry coalition mobilizing to address it. Robinson details a new OpenSSF initiative backed by Anthropic, AWS, GitHub, Google, Google DeepMind, Microsoft and OpenAI to secure the rapidly expanding AI ecosystem. Kroah-Hartman reveals that after months of obvious "AI slop," a recent shift in tool quality means maintainers are now receiving legitimate AI-generated vulnerability reports, creating an unprecedented volume of work for core infrastructure projects. The new funding aims to provide developers with token credits, integrated tooling and AI-assisted triage to manage the flood.
The conversation also explores the European Cyber Resilience Act, which requires manufacturers to perform vulnerability management and ship software bills of materials starting September 2026. Kroah-Hartman explains how OpenSSF's best practices badge gives downstream companies a reliable signal that an open source project meets security and compliance standards — a critical differentiator as regulatory pressure intensifies. Robinson highlights how the convergence of CRA compliance obligations and AI-accelerated discovery could overwhelm maintainers with thousands of duplicate patches from organizations facing severe financial penalties. OpenSSF is responding on multiple fronts, from publishing an MLSecOps white paper and launching a free class on secure vibe coding to designing AI-powered advisors that consolidate similar pull requests and surface the strongest candidates for review. From educating upstream developers on identity, access and data handling fundamentals to helping enterprises avoid the security pitfalls of rushing agentic AI into production, the discussion underscores why open source governance must evolve at the same velocity as the tools reshaping it.
Christopher "CRob" Robinson, OpenSSF & Greg Kroah-Hartman, The Linux Foundation
Rob Strechay and Paul Nashawaty host a conversation with Christopher Robinson, CTO, OpenSSF, and Greg Kroah-Hartman, Linux kernel developer, The Linux Foundation, as part of theCUBE's coverage of KubeCon + CloudNativeCon EU 2026 from Amsterdam, Netherlands.
CRob Robinson
CTO, OpenSSF
Greg Kroah-Hartman
The Linux Foundation
Rob Strechay
>> Hello and welcome back to KubeCon + CloudNativeCon EU, live from Amsterdam. Having a great week. I mean, this has been where the community comes together, understands what's really going on, and connects on things. Paul Nashawaty joining me again here. Again, Paul, this has been so much fun. There's been so much going on. AI obviously dominates things, but security is a big thing too.
Paul Nashawaty
>> Security is a big thing. There's a lot going on in security, and it's clear across the show floor that that's top of mind.
Rob Strechay
>> Yeah. So again, very excited to be joined by two experts in this area, because I think when you start to look at what's going on in security, we have CRob, who's the CTO of the OpenSSF.
CRob Robinson
>> Correct.
Rob Strechay
>> So it's the Open Secure...
CRob Robinson
>> Open Source Security Foundation. We love acronyms.
Rob Strechay
>> I always screw it up when I try to break it down from an acronym perspective. And then we have Greg...
Greg Kroah-Hartman
>> KH, just say Greg KH. It's all good.
Rob Strechay
>> Greg KH, who's a Linux kernel maintainer. So this is fantastic because we get to see from the top level to the kernel where the action's really happening or hopefully not happening if it's secure. So again, I covered it last week. You guys had a huge announcement.
CRob Robinson
>> Yeah, we did.
Rob Strechay
>> Everybody's coming in to help from a perspective of securing AI, which to me, is one of the most or least talked about things. Kind of help us understand what that announcement was about.
CRob Robinson
>> Well, fun fact, AI is hot. If you haven't wandered the floor here-
Rob Strechay
>> Really?
CRob Robinson
>> I know, it's new. I was surprised myself. But there's been accelerated use over the last year especially. AI has been around for decades; it used to be called machine intelligence and machine learning, and it's kind of evolved into more buzzwordy terms. It's been around forever, but in the last year in particular, the growth has accelerated and the different variables and techniques and tools have exploded. A year ago, agentic was just a thought in some developer's head. It wasn't a thing. And now, almost every vendor out there is selling an agentic solution. It's crazy. And when you have this much marketing push and pressure and this much profit to be made, everybody's very focused on closing that deal, delivering that feature. They're not necessarily thinking about the more traditional, boring cybersecurity basics, the 101 stuff: thinking about identity, thinking about access, thinking about how data is touched and manipulated. And because of that, my industry peers, the frontier model folks, the hyperscalers, and other members from around the ecosystem, have come together saying, "We recognize this problem." Especially in the frontier model space, developers and researchers and just lay people are using these tools and they're finding a lot of information. It's not always great, and Greg can talk about the quality of the reports he gets, but they're finding it, and not the way it used to be. It used to be a onesie-twosie thing where a researcher would go off for weeks and kind of unlock a puzzle. Now they're doing this stuff in minutes or hours. And then they're submitting it upstream to folks like Greg, who has his own job, his backlog, fixing bugs and doing security stuff, but there's exponentially more traffic coming at him.
And this coalition that we're putting together, this program we're developing, is going to try to help address this both from an upstream developer perspective, giving developers access to these tools and techniques to do it securely, but then also to try to help influence some of these systems. Like, you might not have done everything exactly the way you wanted to around the agent identity. Maybe we can help coach you to fix that a little bit.
Rob Strechay
>> Yeah. I mean, just to clarify, I mean, it's Anthropic, AWS, GitHub, Google, Google DeepMind, Microsoft, and OpenAI. I mean, when you start to look at those names coming together, I would say that gives it a lot of weight that this is a problem.
Paul Nashawaty
>> But take those names, right? Those names are very powerful to kind of put in there. But just not too long ago, CRob, we were on at Open Source Summit.
CRob Robinson
>> Yes.
Paul Nashawaty
>> And we were there when Google donated the A2A to the CNCF. That was the day it happened.
CRob Robinson
>> Yep.
Paul Nashawaty
>> Right? And that was, it seems like eons ago, right? And it wasn't that long ago, right?
CRob Robinson
>> It was six, eight months ago, if that.
Paul Nashawaty
>> Right. And look, to Rob's point, look at all the adoption that's happened since then, right? And Greg, same thing, right? It's like this thing is just growing like crazy. So let's talk about what you said you were going to do at Open Source Summit, what you did from then to now, and what's happening today with the announcements Rob was talking about. But also, I'm interested in what's happening next as well.
CRob Robinson
>> Okay. Well, since the A2A announcement, the Linux Foundation has spun up a whole foundation focused around agentic AI, and a lot of the members that put together this funding are founding members of that group. And it's interesting. Since then, we've got the MoltBook and MoltClaw nonsense where the agents have really gone crazy. But what we have done is we have an AI/ML working group, and we're working with different organizations, both within the LF but also externally, organizations like CoSAI and other groups, where we're trying to help get people in a room and talk about how we do these things securely. And we've published a white paper on MLSecOps, how to integrate this stuff into your workflows. And that's more for developers like Greg, or if you're in a commercial enterprise. And now, with the unlocking of this funding and the backing of the creators of this amazing technology, hopefully we can kind of brainstorm on how we get these tools to the people that need them, to help them both deliver whatever features and ideas they have through coding assistants, but also use them to help improve the security of the project, whether it's, "Hey, I noticed that you've got a dead branch of code or you have a potential SQL injection," or actually finding that you're legitimately vulnerable to this known CVE. It's a lot of kind of mom and apple pie basic community working group stuff. And with the delivery of this funding, I expect we're going to go a thousand miles an hour, we're going to go ludicrous speed here and really kind of accelerate things.
Rob Strechay
>> So your backlog is just getting deeper every day.
Greg Kroah-Hartman
>> Well, so I'm part of the Linux kernel security team, so we get bug reports all the time, right? Like Daniel from the cURL project publicly said they get lots of AI slop reports, and he's had to change how they handle security reports. In the kernel, we were getting AI slop too. It was obvious. It's like, this is a joke, it's not working. But something switched a month ago, and I don't know what, the tools got better or whatnot. We're getting AI-generated vulnerability reports that are real. And talking to the other open source maintainers of core infrastructure projects, we're all getting them. So everybody's getting these bug reports because the tools are good enough at finding these bugs. They're low-hanging fruit. Like Anthropic published, "Hey, we found 500 bugs." They showed what they did with Firefox, they showed what they did with Ghostscript. And these are easy, simple, tiny things. I mean, AI in this method is pattern matching. It's like, look at this previous bug that was fixed, see where this bug could apply everywhere else. And AI does that great because, again, pattern matching. So these tools are running. We've had static analysis in the past, but we have lots more people running these tools now. We're getting lots of bug reports. So we have an onslaught of bugs. In the kernel, we can handle this. We have enough people working on this. We can distribute it out, but we've all noticed we're getting a lot. And the kernel's a big project, but there's lots of other much smaller projects, and this funding is going to help the maintainers of those other projects get the tools that they need to help manage this. And one example that's actually public now, Linus talked about this last year in Japan: Google has an AI-based code review tool. Now, that's public. It's been donated to the Linux Foundation. It's running on the Linux kernel public mailing list on the patches that are sent.
And it's tied in with AI coding tools and a bunch of work that Facebook and Meta have done before, with a lot of the reviews and rules public and whatnot. systemd is also involved. So we're using these good pattern matching tools to help maintainers do the review, process patches faster, and hopefully get those fixes merged quicker. And that's a good thing.
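[Editor's note: Greg's framing of these tools as pattern matchers can be illustrated with a deliberately simple sketch. The snippet below is purely hypothetical and is not the tooling discussed in the interview; it uses Python's difflib to flag code regions that look textually similar to a known-buggy pattern, whereas real AI tools reason over much richer representations.]

```python
import difflib

# A known-buggy pattern (hypothetical): dereferencing an allocation
# without checking for NULL first.
known_bug = "ptr = kmalloc(size, GFP_KERNEL);\nptr->field = value;"

# Candidate code regions scanned from elsewhere in a tree (invented examples).
candidates = {
    "drivers/foo.c:120": "buf = kmalloc(len, GFP_KERNEL);\nbuf->len = len;",
    "drivers/bar.c:88":  "if (!x)\n\treturn -EINVAL;",
}

def similar_regions(pattern, regions, threshold=0.6):
    """Return (location, score) pairs whose text resembles the buggy pattern."""
    hits = []
    for location, text in regions.items():
        ratio = difflib.SequenceMatcher(None, pattern, text).ratio()
        if ratio >= threshold:
            hits.append((location, round(ratio, 2)))
    # Highest-similarity candidates first, for a human reviewer to triage.
    return sorted(hits, key=lambda h: -h[1])

print(similar_regions(known_bug, candidates))
```

The workflow is the same one Greg describes, in miniature: take a fixed bug, search for lookalikes, and hand a ranked list to a maintainer rather than a raw flood of reports.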
Paul Nashawaty
>> Yeah, absolutely. And I think that it is a good thing. I mean, having visibility into the CI/CD pipeline and accelerating it is valuable. One of the things I would ask, maybe to help the audience understand here, is that there's a lot of pressure on developers, right, as you know. Everything's shifting left and shifting everywhere, right? And the challenge is those pipelines are growing and the checklists for those pipelines are growing. So from a DevSecOps perspective, I'd love to hear your take on how you can kind of go slower to go faster, so to speak, right? How do you accelerate the pipeline while keeping developers happy, in the sense that they can get their job done and meet their business KPIs, but not have to work 16-hour days to do it?
Greg Kroah-Hartman
>> Well, a lot of that is good reporting. So through the CRA, the law that's coming into effect here in Europe, part of the rules is that anybody who uses open source software and ships it in a product has to report the security bugs that they find to the open source developers.
Paul Nashawaty
>> And that's by September 26th, this year.
Greg Kroah-Hartman
>> This year. Yeah. September 11th this year.
Paul Nashawaty
>> September 11th, right?
Greg Kroah-Hartman
>> Yeah.
Paul Nashawaty
>> And then December 2027, the following year, it applies in full.
Greg Kroah-Hartman
>> The following year, then the maintainers and the community, the open source stewards, as it's called, the foundations, we will be reporting to the public all security bugs that we fix. From a community point of view, these are things all projects should be doing anyway. OpenSSF has a great one-page paper on here's what community members need to do. They've had the best practices badge for a decade. And so the good thing is, if you're integrating open source software into your product, go to the best practices badge and see what that project says they're doing. And then you can know that, oh, for the CRA, I'm going to be fine, because these developers are already reporting their security bugs to the proper places, they're already using this in the tool pipeline. And if you're an open source developer, go down that checklist and say, "Here's all the things that... Oh, we could be doing this better, and OpenSSF has all the tools and CNCF can plug it in." A simple one: have your project create a software bill of materials. All products that are going to be shipping in the EU have to have a software bill of materials now. We have the tools to generate that. We've added that now to the kernel; the patches have been out there for a little while. So all software projects in the stack should be able to generate that. Then when you're a company, you can just ingest that easily and spit it out. So pick a project that provides that. I mean, that helps them know that you're taking software that's actually managed by somebody, a group that can handle this type of stuff. Open source, we don't know who uses our software and we can't dictate use. But on the flip side, it would be really nice as a developer to find out who's using our stuff and what problems they have.
And this way, users can feel a little more comfortable that, "Oh, it isn't a developer who just threw something over the wall on GitHub that isn't maintained and isn't going to be managed or whatever. Maybe I shouldn't build that into my product." Or, "Oh, hey, look, this has a gold level on the OpenSSF best practices badge. These guys know what they're doing. Let's take from them." That's a good thing to have.
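[Editor's note: the software bill of materials Greg describes is normally produced by dedicated tooling, but its shape is easy to show. The snippet below hand-assembles a minimal SPDX-2.3-style document in Python purely as an illustration; the package entry is invented, and real projects would rely on their build system's SBOM generator rather than writing this by hand.]

```python
import json

# Minimal SPDX-2.3-style SBOM document, assembled by hand for illustration.
# Field names follow the SPDX 2.3 JSON format; values here are hypothetical.
sbom = {
    "spdxVersion": "SPDX-2.3",
    "dataLicense": "CC0-1.0",
    "SPDXID": "SPDXRef-DOCUMENT",
    "name": "example-product-sbom",  # hypothetical document name
    "packages": [
        {
            # One entry per ingested component, so downstream
            # manufacturers can ingest and re-emit it for the CRA.
            "SPDXID": "SPDXRef-Package-zlib",
            "name": "zlib",
            "versionInfo": "1.3.1",
            "licenseConcluded": "Zlib",
            "downloadLocation": "https://zlib.net/",
        },
    ],
}

print(json.dumps(sbom, indent=2))
```

Because the format is plain structured data, a company building a product can merge the SBOMs of every project in its stack mechanically, which is exactly the ingest-and-spit-it-out flow Greg describes.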
CRob Robinson
>> And just to stick with the CRA for a minute. Again, manufacturers are legally obligated to do vulnerability management and they're encouraged to report upstream, which is great. And there are some very severe financial consequences for those manufacturers. So what makes me afraid, and what is going in parallel with this rise of AI slop reporting and the use of these tools, is that not only are we going to have all this traffic we're seeing today, which is not just trending up and to the right, it's almost a straight vertical line. We're going to have all of that, but busier. And then we're going to have thousands of manufacturers that have billions of euros on the line, and they're going to use AI to create a patch and say, "Hey, Greg, I got a patch for you." And oh, wait, there's 1,000 other organizations sending a slightly similar thing to you, and what you notice-
Greg Kroah-Hartman
>> I would love to have that problem.
CRob Robinson
>> What you notice with the Anthropic stuff is, again, these models are trained with a snapshot of time and some of the bugs that got reported were already fixed.
Greg Kroah-Hartman
>> Oh, yeah. That's very true.
CRob Robinson
>> So there's going to be duplicative effort. So again, part of the program we're thinking about putting together with this funding is not only giving developers access to the tools, giving them token credits, but also having folks come in and help integrate those tools, and potentially having the robot be an advisor to Greg, saying, "Hey, you got 1,000 PRs, five of them are similar. I think these are your best choices." And then you put your expertise on top of that and say, "Yeah, these four are crap, this one's good enough. I can start the actual patch from there potentially."
Greg Kroah-Hartman
>> Maybe. I'd love to see that.
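[Editor's note: the advisor CRob sketches, which consolidates near-duplicate submissions before a maintainer sees them, can be shown in miniature. This is a hypothetical toy, not the planned OpenSSF tooling: it greedily groups patch descriptions by raw text similarity using Python's difflib, where a real system would compare the diffs themselves and rank candidates by quality.]

```python
import difflib

# Hypothetical incoming patches targeting the same tree (contents invented).
patches = {
    "pr-101": "fix: check return of kmalloc before use",
    "pr-102": "fix: check return value of kmalloc before use",
    "pr-103": "docs: update README badges",
}

def group_similar(items, threshold=0.8):
    """Greedily group texts whose similarity to a group's first member
    exceeds threshold; each group is one 'issue' for the maintainer."""
    groups = []
    for key, text in items.items():
        for group in groups:
            representative = items[group[0]]
            if difflib.SequenceMatcher(None, representative, text).ratio() >= threshold:
                group.append(key)
                break
        else:
            groups.append([key])  # no similar group found; start a new one
    return groups

print(group_similar(patches))
```

Instead of 1,000 near-identical submissions, the maintainer would see one cluster per issue with a suggested best candidate, and apply their own judgment from there.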
Rob Strechay
>> Well, I mean, I look at it as helping from a product ownership perspective.
Greg Kroah-Hartman
>> Yeah, very much so.
Rob Strechay
>> I think that way, because again, I put my product hat on from when I was running product for a company. To your point, we always reported out to our customers what was going on, the bugs and what we were patching, but there were hundreds, to your point, and it became a prioritization exercise. I mean, I look at it and say the entire aperture of what needs to be covered is expanding so fast, and MLSecOps is one of those things that I don't think people have put enough thought into. How do you see this, all the way from the top down to the kernel, with people starting to have to take a different approach? How are you helping them understand best practice and the direction to go in?
CRob Robinson
>> From a pure upstream perspective, the OpenSSF has created a class on how to securely use vibe coding techniques and...
Greg Kroah-Hartman
>> Really?
CRob Robinson
>> We did.
Greg Kroah-Hartman
>> Okay.
CRob Robinson
>> Wheeler put it out. And then we're also working on a more expansive class to touch on other techniques beyond vibe coding: how to effectively and securely use LLMs, how to use agents. So that'll be a forthcoming class later this year, but the vibe coding class is out today. And when you think about it, people developing upstream are not necessarily living within a corporate role. Within a corporation, you have rules and policies and you must use these types of tools and these procedures. And a lot of times, developers go upstream because they kind of relieve themselves of those constraints. They're able to express themselves, solve problems how they want to. So they're not necessarily following the same pattern of rules that you might see in a bank or a hospital or whatever. So again, we're working on trying to educate the developers: "Here are the threats that are coming. Here are some techniques you can use, or some tools."
From downstream, I absolutely agree. The whole MLSecOps idea in our white paper is not widely used enough. People are sprinting forward in this race and they are just grabbing tools off the shelf and throwing them in and they're not thinking about the security problems they're introducing for themselves, the fact that you potentially are exfiltrating sensitive data, that these things don't necessarily have identity or access controls in place. So again, we're trying to kind of work both up and down to educate those constituents and provide guidance. This is how you can get to better outcomes for yourself.
Rob Strechay
>> Yeah. I mean, I think to me, that, A, is a great place to leave it because I think from... I just was talking to a friend who's a CISO at a company and the conversation was, "I don't even know how we do a tabletop at this point." He's like-
CRob Robinson
>> I can help.
Rob Strechay
>> Yeah, I know, but it gets to a point where they're looking for the resources. And I think you guys have a lot of-
CRob Robinson
>> And education....
Rob Strechay
>> stuff that you're putting out and education to go and do this. I think, again, this is someplace that I think is critical for people to take a look at. So I really appreciate you both coming on board today. Thank you for coming in.
CRob Robinson
>> Good to see both of you.
Greg Kroah-Hartman
>> Thanks for having us.
Paul Nashawaty
>> My pleasure.
Rob Strechay
>> And thank you for watching this episode of theCUBE live from KubeCon+CloudNativeCon EU. Stay tuned. We got a lot more coming on.