At Google Cloud Next '26, Mikhail Vink, vice president of business development at JetBrains, discusses artificial intelligence-native development, open platforms and developer governance. Alison Kosik and John Furrier of theCUBE Research host the conversation, which focuses on multi-model orchestration, avoiding vendor lock-in, agentic AI, cloud-native infrastructure and the evolving role of developers. Vink outlines JetBrains' approach to integrated development environments, or IDEs, governance tooling and the integration of diverse AI providers.
>> Welcome back to Google Cloud Next '26. We are streaming live right here in Las Vegas. I'm Alison Kosik, joined by John Furrier. We've been interviewing some amazing guests over the past three days, and we're about to bring in somebody who works for a company that believes you shouldn't be locked into just one platform. It's a sort of out-of-the-box way of thinking.
John Furrier
>> Yeah, open always wins, as we always say on theCUBE. And the developer environment right now is so hot because of the enablement coming in from the AI infrastructure, more computing power, more GPUs, and the rise of agentic has really set the table. I mean, coding is now breaking through in the enterprise, and that's going to unleash all kinds of new things. This is going to be a great segment.
Alison Kosik
>> Okay, well let's get into it. I want to bring in Mikhail Vink. He's the vice president of business development with JetBrains. Welcome to theCUBE.
Mikhail Vink
>> Thank you. It is a pleasure to be here.
Alison Kosik
>> So, let me start off by asking you this. The future of development, according to you, isn't about one AI; it's about orchestrating many without losing control. So, this seems like a big bet on being an open platform. First of all, what does that mean? Define it. Why now, and why does that matter?
Mikhail Vink
>> Yeah, totally. So, JetBrains is known mostly for its IDEs. JetBrains' IDEs are used by more than 15 million developers worldwide, for Java and many other languages. We see that the pattern of software development is changing very rapidly, and everyone is using... Well, everyone. Ninety-plus percent of developers at this point have at least tried AI assistants, agentic AI and all of those things. So, that has already changed the software landscape, how we develop software. But we see that right now there is just no single tool you would use. If you look at agentic, there are so many agents and so many models, and every other month a new frontier model comes out that is better than the previous one from a competitor. So, we see developers switching back and forth between different tools. And we see that if you start working with just a single vendor at this point, that vendor lock-in can be very bad for you as a company. That's why we believe that within our tools, and that goes both for the IDEs and editors developers use and for the team level, for the management of all of those collaborations with AI, you need to work with all the leading providers.
Alison Kosik
>> And you say it's bad because those leading providers are changing so quickly?
Mikhail Vink
>> Yeah. So, the leading providers are changing quickly. And also, we see that there is less loyalty. Before, a developer would choose a tool and use it for 20 years. If you look at AI, this month it's something from Anthropic, next month it's something from Gemini. And as a developer, you really need to stay on top of all of those things to get the best from the market.
John Furrier
>> The market, I mean, obviously the coding tooling has just been amazing. I would say close to 100 percent; anyone who's doing anything relevant is getting a coding assistant. Human in the loop is a big discussion, and we'll get to that in a second, but I want to get into this whole model choice, because what you're basically getting at is what Google and others are saying here: "Hey, build the scaffolding around the models, get the tooling right and do your job." So, the tooling has changed.
Mikhail Vink
>> Yeah.
John Furrier
>> What has been the biggest, I don't want to say blocker, challenge? Is it the tools that are changing, the models that are changing? Because people might lock in with Cursor, they might like something else. So, what are you seeing with developers... Or does it matter? What's your view on that? The whole coding tool choice, and then open models.
Mikhail Vink
>> Yeah. So, I think the tools are evolving. The tools are going to change, and that is not really coming from the AI itself or the models or anything like that. It's coming from the interface to the developer, because right now it's all moving from actual coding to AI coding, vibe coding... Well, vibe coding is probably too extreme at this point. You cannot really build serious applications that way yet, but you really start working with agents. As a developer, your role has changed. You orchestrate the agents, you run them, and then you fix the work of the agents. But what you need to understand as a developer right now is that with this change, all the accountability is still on you. And the biggest challenges in adoption have been the return on investment, plus security and quality considerations.
John Furrier
>> You mentioned vibe coding, and I smile because vibe coding... Everyone talks about vibe coding, but vibe coding is like a night out on the town. It's fun. I mean, it's cool and fun, but to get a production workload at scale, you've got to have the security, you've got to have the right hooks, and you can't really vibe code your way there. So, that brings us to, OK, what's the steady state for production? I mean, developers love their own tools. It's like their favorite pet, right? I like this over that.
Mikhail Vink
>> Always.
John Furrier
>> I mean, we used to have the Emacs and vi debate all the time back in the day, and that same thing is going on now with developers. But the rise of the developer is not slowing. I mean, the ecosystem around developers is booming. Share your thoughts on the order of magnitude of the portion of developers leaning in and building AI-native, and are they building on cloud-native? What are your thoughts on that one?
Mikhail Vink
>> Yeah. So, it's definitely going much faster than we honestly anticipated, the entire development of models and agents. It's transforming everything. I think that for cloud-native, just looking back, adoption was much slower. Right now, a lot of enterprises already have teams adopting AI. That is already happening, but there's a very long tail of companies that haven't adopted it yet, for different reasons. And what we've noticed from conversations with a lot of customers is that one of the missing pieces is not generating the code or preparing all of those things, but actual governance on top. That's how we started building the governance platform, which is assessing the costs, analyzing what you're actually doing and who is accepting the changes, and basically checking which models each developer has access to. There are a lot of enterprise considerations, and that slows down adoption. I think with cloud-native that was a bit easier, because you would go to a specific cloud, let it be Google Cloud or AWS, and you would get the full stack of all of those things there. But as we discussed, right now that's not possible with a single vendor.
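[Ed. note: the governance loop Vink describes, assessing cost, recording who accepts changes and enforcing per-developer model access, could look something like the following minimal Python sketch. Every class, field and number here is a hypothetical illustration, not JetBrains' actual platform.]

```python
from dataclasses import dataclass, field

@dataclass
class AiUsageEvent:
    """One AI-assisted change: who used which model, at what cost."""
    developer: str
    model: str
    tokens: int
    cost_per_1k_tokens: float
    accepted: bool  # did a human accept the generated change?

    @property
    def cost(self) -> float:
        return self.tokens / 1000 * self.cost_per_1k_tokens

@dataclass
class GovernanceLedger:
    """Records AI usage and enforces which models are approved."""
    allowed_models: set[str]
    events: list[AiUsageEvent] = field(default_factory=list)

    def record(self, event: AiUsageEvent) -> None:
        # Enforce the model-access policy before recording the event.
        if event.model not in self.allowed_models:
            raise PermissionError(f"{event.model} is not approved for use")
        self.events.append(event)

    def total_cost(self) -> float:
        return sum(e.cost for e in self.events)

    def acceptance_rate(self) -> float:
        return sum(e.accepted for e in self.events) / len(self.events)

# Illustrative usage with made-up model names and prices.
ledger = GovernanceLedger(allowed_models={"model-a", "model-b"})
ledger.record(AiUsageEvent("alice", "model-a", 12_000, 0.01, accepted=True))
ledger.record(AiUsageEvent("bob", "model-b", 8_000, 0.03, accepted=False))
```

The point of the sketch is only the shape of the data: cost, acceptance and access are tracked per change, which is what makes the enterprise questions answerable at all.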
John Furrier
>> Yeah. And AWS, which basically made the market; it was slow growth, but now it's kicking up because the foundation's set. Talk about the impact of cloud-native and AI-native, because at KubeCon Amsterdam I wrote a post, and my first sentence was, "Cloud-native, meet AI-native." In other words, we're friends. They go together, right? So, it's not one or the other; they're kind of different personas, but they're still developing. So, talk about that intersection of cloud-native engineers and AI-native engineers.
Mikhail Vink
>> So, I think they are indeed friends, but there is no 100% overlap. At this point... Where would you put cloud-native on the scale? Maybe 80%, 90% at this point. And with AI-native, we are just starting up. And not every cloud-native shop can switch to AI-native very quickly, because if you're a very hardcore, air-gapped enterprise environment, you might have a private cloud, but getting a private model of OpenAI or Anthropic Sonnet or Opus quality is going to be very difficult at this point. So, there are going to be these gaps. I would say the switch to AI-native is also going to help companies become more cloud-native, because it's very difficult to sustain all this infrastructure on your own side as well.
John Furrier
>> I mean, I remember back in the day, you recall, shift-left was a huge movement that brought security into DevOps; now we have DevSecOps. So, a similar kind of thing is happening. It's not really a shift-left, but it's more of, OK, that's going to make the CI/CD pipeline better for the coders. That rose right up top. Kubernetes and containers were really critical in how that evolved. That was really strong. Now, you're seeing containers being the key element and Kubernetes running a lot of the agentic, because there's a shim layer, not my word, but maybe more of a layer of agentic emerging where those engineers look a lot like DevOps. They're thinking about plumbing. And then you've got your hardcore AI-native engineers who are just playing with the models and just coding away. Do you agree with that pattern, and what does the makeup look like for that developer? Just draw a distinction between the agentic infrastructure engineer and the hardcore AI-native engineer slinging code left and right.
Mikhail Vink
>> Yeah. So, I would say that at this point, all the cloud-native infrastructure we've built over the years, Kubernetes, cloud runners, things like that, helps a lot in the current AI infrastructure build-out. But then we need to realize there are a few things we need if we really go AI-native. It's not only about the model or a single agent. You need a bunch of agents, a swarm of agents as they call them, and you need to control them. But then you also need to pass the data, pass the context, pass the memory; you need to connect them to MCP to get context to the agents, because otherwise you're going to be missing out on real-world, structured data. Then, on top of that, you need to connect all of that to the tools, and then you need to adopt the tools. So, you end up needing to configure a lot of things for your developer environment to be sustainable at this point.
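[Ed. note: the wiring Vink lists, a swarm of agents sharing context and memory, with tool connections standing in for MCP servers, can be sketched in a few lines of Python. The agent behaviors are stubs and none of the names refer to a real framework.]

```python
class Agent:
    """An agent is just a name plus a handler: (task, context) -> result."""
    def __init__(self, name, handle):
        self.name = name
        self.handle = handle

class Swarm:
    """Runs agents in sequence, threading shared memory and tools through."""
    def __init__(self, tools):
        self.tools = tools   # stand-in for MCP-style context/tool servers
        self.memory = []     # shared memory passed between agents
        self.agents = []

    def register(self, agent):
        self.agents.append(agent)

    def run(self, task):
        context = {"tools": self.tools, "memory": self.memory}
        for agent in self.agents:  # each agent sees prior agents' output
            result = agent.handle(task, context)
            self.memory.append((agent.name, result))
        return self.memory

# Illustrative usage: a planner agent feeds a coder agent via shared memory.
swarm = Swarm(tools={"db": lambda query: f"rows for {query}"})
swarm.register(Agent("planner", lambda task, ctx: f"plan: {task}"))
swarm.register(Agent("coder", lambda task, ctx: f"code for {ctx['memory'][-1][1]}"))
history = swarm.run("add login page")
```

Even in this toy version, the configuration surface Vink mentions is visible: the tools, the memory hand-off and the agent ordering all have to be wired explicitly.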
John Furrier
>> Alison and I were talking yesterday about this. In cloud-native, I think I heard the word database maybe a handful of times over the years. You can't not talk about databases now, because the database plays a much bigger role.
Mikhail Vink
>> Yeah, it's a layer. Yeah.
John Furrier
>> Data pipelines. I feel like the big data world has collided with cloud-native, but there's still engineering. It's not like the old-school days of data pipelining; it's not a data scientist exercise. Talk about that, because you look at Postgres, and the rise of Postgres with vibe coding has been phenomenal.
Mikhail Vink
>> So, yeah, I would say that generally with the switch to AI-native, everything becomes bigger. Well, first of all, your electricity bill, up to the level that a lot of companies right now are building their own power plants and looking at small nuclear reactors and all of those things; it's a super tough market right now. So, everything becomes bigger. You have a lot of data, you have a lot of code. And in our domain, for example, if you look at the pipeline of code generation, yes, all the agents are going to tell you, "Yes, we're going to generate this code."
Almost no agent is going to tell you, "OK, can you please remove this million lines of code? Let me refactor it for you." That's not really happening at this point. So, the code bases are bigger, the databases are bigger. You want mock data? You just go out, and in 10 minutes you have hundreds of millions of mock records mimicking what you have. Everything is just scaling, and that's putting a lot of pressure on the infrastructure as well.
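[Ed. note: the "hundreds of millions of mock records in minutes" point is easy to make concrete. Below is a hypothetical sketch of schema-driven mock-data generation; the field names and type set are invented for illustration.]

```python
import random

def make_mock_rows(schema, n, seed=0):
    """Generate n mock rows whose fields mimic the given schema.

    schema maps field name -> type tag ("int", "str" or "float").
    Seeded so the output is reproducible.
    """
    rng = random.Random(seed)
    generators = {
        "int": lambda: rng.randint(0, 1_000_000),
        "str": lambda: "user_" + str(rng.randint(0, 99_999)),
        "float": lambda: round(rng.uniform(0, 100), 2),
    }
    return [{name: generators[kind]() for name, kind in schema.items()}
            for _ in range(n)]

# Illustrative usage: a thousand rows shaped like a hypothetical users table.
rows = make_mock_rows({"id": "int", "name": "str", "score": "float"}, n=1000)
```

Scale the `n` up and this is exactly the pressure on storage and infrastructure the conversation describes: the data is trivial to produce and expensive to hold.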
John Furrier
>> What about the cost side? We were having some FinOps conversations yesterday; that's been around the cloud for a while. And refactoring code is one thing, but if I switch models, I've got to test that. The testing costs go up. So, there are a lot of other, I won't say hidden landmines, but considerations.
Mikhail Vink
>> Yeah.
John Furrier
>> What are some of the things you're seeing with developers as they're switching around, coding and playing with the data? What's jumping out at you?
Mikhail Vink
>> So, one of the first things is something we think about a lot at JetBrains, because we need to operate in AI, and one of the things we believe is that it has to be a sustainable business model when we're talking about AI. One of the things we see in the market, and it has been talked about a lot because there have been so many questions and discussions about the business models of AI vendors and cloud vendors, is that right now everyone more or less operates at a loss. Being a private company, that is something we prefer not to do, because we want to build a sustainable business model. And it's very tough in the age of AI, because everyone is operating at a loss. So, the first thing is that almost no one in the market on the customer side understands at this point how much those AI transformations are going to cost. That's not the deployment, that's not the infrastructure; that is easier to calculate. But, OK, you start replacing some of the work developers are doing with AI. You move those developers to code reviews and more advanced architectural work; they're still there, but AI is going to generate a lot of cost for your company. It's very difficult to estimate right now, and a lot of folks just don't understand that it might very well be more expensive than actually having a developer.
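[Ed. note: the estimation problem Vink raises can be framed as a back-of-envelope model: AI-generated work costs tokens plus human review time, versus plain developer time. Every number and rate below is a made-up assumption, purely to show the shape of the comparison.]

```python
def ai_monthly_cost(tokens_per_day, cost_per_million_tokens,
                    review_hours_per_day, reviewer_hourly_rate, workdays=21):
    """Monthly cost of AI-assisted work: token spend plus human review."""
    token_cost = tokens_per_day / 1_000_000 * cost_per_million_tokens * workdays
    review_cost = review_hours_per_day * reviewer_hourly_rate * workdays
    return token_cost + review_cost

def developer_monthly_cost(hourly_rate, hours_per_day=8, workdays=21):
    """Monthly cost of a developer doing the same work directly."""
    return hourly_rate * hours_per_day * workdays

# Illustrative numbers only; change any assumption and the answer flips.
ai = ai_monthly_cost(tokens_per_day=5_000_000, cost_per_million_tokens=15.0,
                     review_hours_per_day=4, reviewer_hourly_rate=80.0)
dev = developer_monthly_cost(hourly_rate=80.0)
```

The model is deliberately crude; the takeaway matches the transcript: the answer depends entirely on inputs (token volume, review time) that most teams cannot yet estimate, which is why the comparison is so hard to make.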
John Furrier
>> What about the coding tools? You guys put out a state of the AI developer market report. You did some digging on this. What were some of the results?
Mikhail Vink
>> Yeah, so some of the results. Of course, the most obvious one is that everyone is playing the AI game at this point. And I think the most astonishing thing for me was that everyone is probably still concerned about the quality. Almost everyone is concerned about the quality. There is no way to beat that at this point except using classic deterministic tooling, because if you use LLM agents at scale to verify the quality, well, there's a very high chance they're going to approve a lot of things, and that is still-
John Furrier
>> And they're going to tell you, "Hey, great job."
Mikhail Vink
>> "Great job." Absolutely. "That is such a good thing."
John Furrier
>> "Yeah. You're good. Ship it."
Mikhail Vink
>> Yeah, "Ship it." But in reality, you need a lot of deterministic algorithms. That is where we see a lot of gaps in the market right now: this intersection of AI and non-AI tooling, and also AI tooling using non-AI tooling to verify the work of the agents. That is one of the things we see. But yeah, from the research, quality concerns and security concerns are probably the biggest ones.
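[Ed. note: the deterministic gate Vink contrasts with LLM self-review can be sketched concretely: instead of asking another model "is this good?", run hard checks on the agent's output. The specific checks below, parseability, no bare except, a length cap, are illustrative stand-ins for real linters, type checkers and test suites, using only Python's standard `ast` module.]

```python
import ast

def deterministic_gate(source: str, max_lines: int = 500) -> list[str]:
    """Run deterministic checks on agent-generated Python source.

    Returns a list of failure messages; an empty list means the change
    passes the gate. Unlike an LLM reviewer, the same input always
    produces the same verdict.
    """
    try:
        tree = ast.parse(source)
    except SyntaxError as exc:
        return [f"syntax error: {exc.msg}"]
    failures = []
    if len(source.splitlines()) > max_lines:
        failures.append("file too long")
    for node in ast.walk(tree):
        # A bare `except:` silently swallows every error, a classic
        # pattern in hastily generated code.
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            failures.append("bare except clause")
    return failures

# Illustrative usage: one clean snippet, one agent-style sloppy snippet.
good = "def add(a, b):\n    return a + b\n"
bad = "try:\n    risky()\nexcept:\n    pass\n"
```

The design point is the one made in the exchange above: a deterministic check cannot be flattered into saying "great job, ship it."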
Alison Kosik
>> Looking ahead, what do you think the best engineering teams will be doing differently a year from now?
Mikhail Vink
>> A year from now? So, I think they would know what the return on investment is.
Alison Kosik
>> And what is?
Mikhail Vink
>> No one knows for now. That's probably what this year is for, for a lot of us to figure out: from the deployment of AI, from becoming AI-native, what is the actual business outcome you are getting as a team, as a development team?
John Furrier
>> It's interesting. When I was in college in the late '80s getting my computer science degree, when you got a job, your title was software engineer. And then it turned into software developer. What's interesting is that the engineering side of things, I intentionally use that word, really becomes important, because of the human in the loop here. You mentioned some of the things that are coming down, and you've got to figure out, one, the business model, the outcome. You have to engineer the cost structure, where the payout is. So, the role shifts from just writing code to code reviews, checking the governance. That's a new kind of engineering. How would you describe it if someone says, "Hey, what's the role of the engineer?" Obviously, the human in the loop, we all know what that means. But the role of the engineer, the software developer, software engineering, changes. Are they curating? Are they orchestrating? Describe in your words: how would you define this new developer role as they free up their time?
Mikhail Vink
>> Yeah. So, interestingly enough, the role of the developer right now is much wider than it used to be. You know this concept of the T-shaped individual: one very deep domain and then a lot of other domains which you barely touch. Right now, you can touch more domains with AI, and you need to. You need to orchestrate, yes; you need to do code reviews for the AI, you need to guide the AI, you need to engineer the most significant pieces of it, and that is tough. So, I would say it's a bit of leveling up, and you're managing not only other developers and the quality-assurance process-
John Furrier
>> It makes me think of the old QA role back in the day. QA was quality assurance. What you mentioned, the quality, seems to be a common thread. It's not so much QA in the old sense. It's like, I know what good is if I'm an engineer. I'm steering it, I'm making sure it's ready. That's engineering. What skills are required? A little cloud-native, a little bit of... How would you describe that skillset? Because they do have to ensure the quality; that's the human piece of it. OK, I approve it, or I'm going to put my name on it, whatever happens.
Mikhail Vink
>> Yeah. So, I would say that the most critical thing there is critical thinking. It's not just approving what AI gives you, what the agents generate. It's going really deep and understanding how the system works. The technical engineering capabilities are still very, very important, and probably going to be even more important than they are right now, because before, the code just wouldn't compile. Here, it's generated, it compiles, but it doesn't do what you want, and you really need to dive deeper to understand what is happening. And debugging is very difficult when you're working with agents, because they generate and create the code and make changes that a human might not approach in the same way.
John Furrier
>> Yeah, back in the day when SaaS was booming, and it's evolving to be AI-native, with more functionality and more capabilities, the term 10X engineer was a buzzword. Remember that, 10X engineer? One engineer can do the work of 10. And with agents, it's almost limitless; you have a multiplier-
Mikhail Vink
>> Yes, but there are problems.
John Furrier
>> Lay that out because this is where it's not guaranteed. You can get a multiplier-