In this interview from Appian World, Jason Adolf, vice president of global public sector at Appian, joins theCUBE's Dave Vellante and co-host Alison Kosik to discuss how federal agencies can close the AI readiness gap by building trust and transparency into mission-critical workflows. Adolf argues that most agencies defaulted to building chatbots to satisfy AI mandates — the least disruptive option available — but that approach falls well short of what mission-heavy government operators actually need. He explains that users such as grant administrators and bank examiners require full visibility into how an AI agent reached its conclusion, not just the result itself. Appian's heritage in business process management, Adolf notes, provides the foundation for that auditability — tracking AI actions down to the nanosecond, the same way it has always governed human-driven workflows.
The conversation also explores how AI sovereignty is reshaping the global market, with nations increasingly choosing to insource their model infrastructure rather than rely exclusively on US-based providers. Adolf details how Appian re-architected its low-code platform to support model-agnostic deployments — replacing a Claude-only foundation with an open layer that accommodates models such as Mistral, enabling wins in European government markets. He also unpacks Appian's Kubernetes-based approach, which allows the platform to run in highly secure, air-gapped and edge environments — including military devices in the field and tactical awareness kits on Android hardware. From addressing AI bias in government claims and benefits decisions to hardening its platform for top-secret deployments via Appian Defense Cloud, Adolf makes the case for why governed, explainable AI is becoming the decisive differentiator in mission-critical government operations.
Jason Adolf, Appian
Dave Vellante and Alison Kosik sit down with Jason Adolf, VP Global Public Sector, Appian, at Appian World 2026 in the JW Marriott Orlando, Grande Lakes in Orlando, FL.
>> Welcome back to Appian World '26. We are streaming live here, in Orlando. I'm Alison Kosik, alongside Dave Vellante. And it's been a great conference, right? We're on day three.
Dave Vellante
>> Yeah, day three, we're getting deep into the Appian Kool-Aid injection, and it's really making sense. I like their messaging, and the whole notion of taking agents to governed action, that's hard, and enterprises need that, as do governments.
Alison Kosik
>> Yeah, and there's growing pressure on federal agencies to show progress with AI, an issue we're going to dig into right now with Jason Adolf. He's the vice president of Global Public Sector with Appian. Welcome to theCUBE.
Jason Adolf
>> Thank you so much for having me.
Alison Kosik
>> Let's talk about this, federal agencies are under pressure to show AI progress, but you've talked about this readiness gap. What does that gap look like inside government today?
Jason Adolf
>> I think the level of hype versus reality in government has reached a level that is not necessarily sustainable for the agencies themselves, and so what you had was significant mandates to, let's say, do AI. And how did everybody interpret that? "I'm going to declare victory by building a chatbot," right? That was the least disruptive thing that they could do, and so everybody went out and built a chatbot. But I think as you all have seen at the conference these couple of days, that's not really where the actual value would be delivered for our mission customers. They want to do things that are more deterministic. And honestly, I don't think yet that the policies are in place at most federal agencies to be able to govern things like agents. I think you hear a lot of talk about it, but I'm not sure that the policies are there for it.
And I also don't think that if you look at our customer base being very mission-heavy, our customers are the operators of the system. They're the owners of the case management solution or the logistics solution. Their burden for what they're willing to accept from AI is very different than if I was an IT person accepting AI into my organization. And so I don't think you've crossed that gulf yet, where the operators, the researchers, the bank examiners, the grant administrators, the contract administrators that use our tools are quite ready to get replaced in some ways by those tools.
Alison Kosik
>> What's it going to take to get there?
Jason Adolf
>> I think it's all about trust and confidence, and so my talk track when I travel has been all about certainty and transparency in AI. And I think what we're seeing more of is that the burden is on the vendor: when we go into these engagements and have to explain what AI is going to do for them, we have to show it. We have to show it in a way that says, "You are going to perform these actions, and now I'm going to have an agent do it, and the agent is going to do it the way you were going to do it, but I'm also going to show you the output. I'm going to allow you to audit it, and give you some confidence that it is doing things the way that you would have done them had you designed this agent yourself." That's really where I think the biggest gap is.
Dave Vellante
>> I think you framed it very well. In the early days of generative AI, people plugged an LLM into a vectorized database, and they said, "Oh, see, we built a RAG-based chatbot." And at the same time, you had this other end of the spectrum, with these frontier model companies talking about AGI and living forever, and so that freaked a lot of people out, and so governments tried to say, "Well, we have to control this." And they started to try to implement policy, and what they missed is there was all these steps in between, like reasoning and agentic. And now we're deep into that, and I think you're right on. It's like, "Wow, we don't really know what to do with this stuff. We don't trust this stuff." And that seems to be the challenge that you guys are going after. So as we step back a little bit, you and I were talking off camera about governments globally trying to take control of their own sort of AI stack. Enterprises are building their own AI stacks, they're not just relying on the cloud. Cloud's a key piece of it, as are organizations and governments around the world, and state and local governments.
Jason Adolf
>> Yeah.
Dave Vellante
>> So, everybody wants to control their AI future, so what's your role in helping them understand and come up with some kind of consistent policy? It's like herding the cats.
Jason Adolf
>> Yeah. I mean, as we were talking about a little bit before, the sovereignty conversation has come up everywhere. And so I do a good deal of travel with Appian, present on topics like this, particularly around risk and governance and AI, and more and more, what you are seeing is nations attempting to insource their models. So a few years back, when we had ChatGPT, and that was like a one-hit wonder, everybody would just use that. Now, I think people are realizing both the data sovereignty aspect of AI, but also the economic impact of AI. So, a lot of countries don't want to funnel all of their AI tools into US-based companies anymore, and so we're seeing, particularly in France, we get a lot around... They have a model called Mistral, and so we have a big... We won a contract with an organization there, in government, and they prefer to use that. And so we've adapted to that by saying, "Originally, all of our services were based on Claude, and so we've now re-architected that so that when you're in the low-code platform, you're actually using our services, but what's underneath that can now be something other than Claude." And I think at all of those different levels of government, that's what they're asking for, because today, Claude is the best at certain things, but as you all have seen, at that pace, tomorrow it could be something different, or to solve a specific problem, it could be something entirely different in six months.
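The re-architecture Adolf describes, platform services that no longer hard-code a single model vendor, can be sketched as a thin provider interface. This is an illustrative sketch, not Appian's actual API; the class and function names are hypothetical, and the `complete` bodies are placeholders for real API calls.

```python
from abc import ABC, abstractmethod

class ModelProvider(ABC):
    """Common interface the platform codes against; the concrete model is configuration."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class ClaudeProvider(ModelProvider):
    def complete(self, prompt: str) -> str:
        return f"[claude] {prompt}"  # placeholder for a real Claude API call

class MistralProvider(ModelProvider):
    def complete(self, prompt: str) -> str:
        return f"[mistral] {prompt}"  # placeholder for a real Mistral API call

PROVIDERS = {"claude": ClaudeProvider, "mistral": MistralProvider}

def get_provider(name: str) -> ModelProvider:
    # Deployment configuration (e.g. a sovereign-cloud setting) picks the
    # backend; platform services never reference a specific vendor directly.
    return PROVIDERS[name]()

# A French government deployment could select Mistral without any code change
# in the services layered above.
svc = get_provider("mistral")
print(svc.complete("summarize this case file"))  # → [mistral] summarize this case file
```

The point of the indirection is that swapping Claude for Mistral (or whatever leads in six months) becomes a configuration change rather than a re-architecture.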
Dave Vellante
>> Well, to that point, Meta just released a new model this morning, ahead of its earnings, because it's been spending so much money, it had to show some progress. It's trying to catch up. You mentioned Mistral, there's companies like Cohere, which are doing some amazing work, and then there's the leading frontier models. You've got Elon buying Cursor. So things change so quickly, it's very, very hard to predict, so you, as the software vendor, have to provide optionality and not just access, like a Hugging Face library, you've got to actually do some integration with those models, don't you?
Jason Adolf
>> Yeah. And I'll tell you, if you look at our portfolio of customers, we tend to be very mission-heavy, and if you look at the space that we play in, a lot of our competitors came from building COTS applications and have now become low-code vendors, right? And our thing has always been to be the most flexible. That's been a key tenet of ours, which was we want to land in an infrastructure and allow our customers to use the things they like, to take advantage of the data sources that they have. And so what we have done a very good job of is bringing all that information in, allowing our customers to build very complex systems on top of it, but then still allowing that flexibility and choice, because we know that tomorrow, for those key missions in an agency, somebody could come through and decide to make a significant change to what their constraints are.
Dave Vellante
>> I'll ask this in a weird way, but how biased are you toward the American tech stack? I mean, obviously you're a US-based company. You've got Jensen running around saying the world should be on CUDA. I know it's self-serving, but the American tech stack, let's broaden it beyond CUDA, and that's a good thing obviously, for the country. At the same time, you've got China in particular, doing all these really amazing, wonderful open weight models. You've got other competitors typically... I mean, what OpenAI did with open source is like, "Okay, we're going to deprecate a model and put it out in open source." Enterprises want that open model, so there's some really interesting dynamics here. How do you play that? I mean, do you try to help put your thumb on the scale for US-based companies? You have to be kind of agnostic and a trusted vendor, right?
Jason Adolf
>> So my personal view, and frankly, where Appian has gone with this. If you think about a lot of our customers, they are operating in... We have some customers operating in self-managed environments. We have customers operating in very high security environments. And so our position has been, and one of the key advantages that we have is, we've spent a lot of engineering effort on creating parity between our cloud offering, our standard AWS tech stack, and a Kubernetes-based architecture, okay? So where that benefits us is, in any country, if you choose not to use the US-based tech stack, we've built a standard Kubernetes-based architecture that you could also run right next to whatever AI solutions you have in country. You can run it on whatever hardware you want to run it on, and we don't really care. We'll support it regardless. And I think what you've seen is, if I were to look across the portfolio of our government customers, we're very heavy in defense, we're very heavy in intelligence, law enforcement. These are areas where they need and desire that flexibility, and I think, to be honest, that's been one of our biggest advantages. Because we are a Washington-based company, and I know you were talking about East Coast AI, but being a Washington company helps, because if you look around the conference and all the speakers, all of these people have done government work. I mean, our founders started by doing work for the Army, right? And so when we take requirements to them for government, they don't look at us like we're speaking Klingon, right? They think about it, and they say, "Okay, well, that makes logical sense that the government would need to do the things you're describing." So I don't look at it as a disadvantage. I look at it like we've created an advantage for ourselves by being able to give people that choice.
Dave Vellante
>> So, there are some obvious front runners, I don't want to call them winners yet, but clearly OpenAI and Anthropic are doing amazing work. I strongly believe that xAI is going to be a major competitor, particularly potentially at the edge. And you never count Zuckerberg out, so there are some obvious front runners. How do you think about those that are... Like you mentioned Mistral, France has an amazing technology community. China, we talked about, has an amazing technology... How much integration do you have to do, or is it basically, as Appian's posture, "Hey, we're open. The entries into and exits from our system are open, so anybody can play"? Is it the latter, or do you have to do tighter integration with those models?
Jason Adolf
>> No, it's the latter, but that's by design, right? And so I can tell you, last week, we were in an all-day demonstration with a military customer that wants to be able to do edge-based computing. They want to shrink it down, want to run it on a device, want to run it in a vehicle, and why force them into a tech stack? A lot of our customers are operating in environments where they don't have that much choice; they've authorized a model to operate in a top secret environment. And so they've gone through a lot of work to get these tools to operate in these very austere environments, and so we said, "All right, we're going to be compatible with the things you've done to operate in those austere environments." And that could be in the US, it could be in France, it could be in Australia, in New Zealand. And so I think, again, that is our going-in posture with this set of customers, is that frankly, we don't care.
Dave Vellante
>> Edge is an interesting example. See, I would posit that it's highly likely that the CUDA US stack is going to be the dominant cloud and enterprise and neo-cloud stack for training and enterprise inference. I think edge is a completely different animal. I mean, it's very diffuse and dispersed, and-
Jason Adolf
>> It is, but it isn't as much as you think. And so what's been interesting to us in our R&D with edge devices, okay? So, we've got a whole toy store of stuff back at headquarters; we've brought in a lot of military-spec devices. We've actually installed Appian on a purpose-built device that runs on a Jetson Nano, Nvidia's edge module. What we're finding, though, is that the burden on those devices to run an LLM, or to run these services, is not that high, because the AI that's required to operate at the edge isn't the corpus of all information that's ever hit the internet, right? It's typically, I need interaction with a language model that can be shrunk down to the point where it operates on one of these devices, it's very performant, and that edge device is supporting 100, 200 clients out in the field. And so I think what we've seen is, and we don't test with things like DeepSeek, but we've seen even the lighter-weight versions of a Gemini model and a Claude model, or an OpenAI model, perform extraordinarily well for the use cases that are needed in that environment.
Dave Vellante
>> Interesting. I want to play that forward and test that, and stress test that a little bit. So, we were at Mobile World Congress this year, and we put forth a piece of research that John Furrier put out, that I picked up on, and the whole concept was the hyper-converged edge, taking compute storage, networking, and wireless, putting it in cell towers and basically having like an AI factory in the cell tower. Clearly, that's powerful. Appian could run on that. Do you think that scenario that you laid out with the small, efficient language models is perpetual or permanent, or do you think that ultimately the edge is going to have so much like compute power that it's unimaginable the types of things that we can do there?
Jason Adolf
>> So for our military customers, the next holy grail of this will be, I don't need a server side device, I could take the Appian application and run it. We have an application that runs on a mobile device.
Dave Vellante
>> Run on a client, right?
Jason Adolf
>> App store-level device. Today, there's no way to take what we would design in AI and run it on an iPhone 17 Pro. I think what you will see, I think logically, any of us could look at it and say, "Eventually there will be enough compute power on that device that it ships with a model." And so can we then take... We're doing work now with some governments around TAK, so that's tactical awareness kit, right? So the military goes out with an Android device, and so then the question is, I've now shrunk the footprint of an Appian implementation down to, "I'm going to take this out to a field depot in Afghanistan," so can I shrink it down further to run the entire thing on an Android mobile device? And so that's the kind of stuff that we're testing now. And what I would tell you is, if I look at our competitors in government, the game isn't anymore, "Can you build an AI coding agent? Can you do agents?" We have some philosophical differences in our approach to that. The game now is, "Where can I run it? Can I run it in a secure environment? Can I run it on a ship? Can I run it in the field?" Because what's driving that is, I used to have to build it custom, and I don't want to build it custom. I want to build it COTS, but I want my COTS thing to act as if it was custom. And that's where, if you really get into our engineering effort, that's what we're doing.
Dave Vellante
>> That's so well said, Jason. I mean, I remember the days when the government had a massive push for commercial off-the-shelf software, that was a mandate, and then of course, they had to customize on top of that, and now the pendulum is swinging back in a very strong way. But what I'm understanding is your architecture lends itself to a highly distributed environment, the Kubernetes piece that you talked about, so you can run anywhere and give customers optionality. And then the other piece that we're learning here, at this conference, is your game is taking governed action, and that's a really hard thing to do, that East Coast AI.
Jason Adolf
>> Sure. Yeah. And look, the other part of that used to be, when I first started at Appian, we used to say we are a West Coast culture with East Coast business sensibility, and so that was a way to describe what you would see if you came to our office. But look, there's a whole thought around transparency, and so if I look at the adjectives that people care about, or the concepts people care about: transparency, security. We invested heavily in what we call Appian Defense Cloud, which is our IL5 managed service that we've rolled out to a number of our defense customers. We have hardened Appian to be able to run in top secret environments and above. And on the transparency side, what maybe, again, is not as obvious to a layperson looking at the things that we've been demoing, is that our heritage is business process management. By default, we were always a very good auditing product, auditing down to the nanosecond everything that was happening, and so what we've done that was smart was apply that to what the AI is doing. So if you think about what our customers are doing, and I'd say this a little tongue in cheek, it's one level of risk to say, "Well, my AI didn't generate the right credit card offer for you." It's another thing to say, "Well, my AI didn't approve the benefit that you needed for a disability claim, and now you're going to take a year to appeal." And so our customers can't just accept the answer that is given; our customers need to know how the answer was created. And so again, philosophically, that transparency in how we're doing it, what the agents do, and what the model is bringing back to you, I think is equally as important as how fast it works and what it costs.
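The nanosecond-level auditing Adolf describes can be sketched as a wrapper that records every agent action, who invoked it, with what inputs, and with what result, using a high-resolution clock. This is a minimal illustrative sketch, not Appian's implementation; the decorator, log structure, and `approve_claim` function are all hypothetical.

```python
import functools
import time

AUDIT_LOG = []  # in a real system this would be durable, append-only storage

def audited(action_name):
    """Record start/end timestamps (nanosecond resolution), inputs, and output
    for every agent action, so a reviewer can reconstruct how a decision was made."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            entry = {"action": action_name, "args": args, "start_ns": time.time_ns()}
            result = fn(*args, **kwargs)
            entry["end_ns"] = time.time_ns()
            entry["result"] = result
            AUDIT_LOG.append(entry)
            return result
        return inner
    return wrap

@audited("approve_claim")
def approve_claim(claim_id, score):
    # Hypothetical decision step; a real agent would call a governed model here.
    return "approved" if score > 0.8 else "denied"

approve_claim("C-1001", 0.92)
print(AUDIT_LOG[0]["action"], AUDIT_LOG[0]["result"])  # → approve_claim approved
```

The same pattern that audited human workflow steps applies unchanged to agent steps: the trail records not just the answer but the path to it.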
Dave Vellante
>> And coming back to where we started, with sovereign AI, that is fundamental and critical for governments and organizations globally, because they need to know what happened when, where, they have to... Explainable AI is kind of the buzzword, but you're delivering that.
Jason Adolf
>> Right. And so the other thing that more recently became a talking point for us was, if you think about it, I think everybody would tell you, "I can give two agents the same problem, and they'll come up with two different ways to solve the problem." But we also have to deal with bias, and so if I gave... And if you think about it in the context of demographics, we do a lot of grants work, we do a lot of claims work: would two claims with similar demographics be executed the same way by two different agents? Because if I get denied and you get accepted, how does an agency say that it was fair? And so these are the kinds of things that I think we're trying to tackle now, to be able to understand, at that level, how it's working.
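The fairness check Adolf sketches, would two agents decide similar claims the same way, can be expressed as a consistency audit over matched cases. A minimal sketch, assuming hypothetical rule-based agents as stand-ins for real model-backed ones; the function names and case fields are illustrative.

```python
def consistency_audit(cases, agents):
    """Flag any case where the agents disagree on the outcome. The fairness
    question raised: same facts, same demographics, same decision?"""
    flags = []
    for case in cases:
        outcomes = {agent(case) for agent in agents}
        if len(outcomes) > 1:
            flags.append((case, outcomes))
    return flags

# Two hypothetical claims agents that should implement the same policy.
agent_a = lambda case: "approve" if case["income"] < 30000 else "deny"
agent_b = lambda case: "approve" if case["income"] < 30000 else "deny"

cases = [{"income": 25000}, {"income": 45000}]
print(consistency_audit(cases, [agent_a, agent_b]))  # consistent agents → []
```

With real, non-deterministic agents the same harness would run matched-pair cases that differ only in a protected attribute and surface any divergence for human review, which is the evidence an agency needs to say its decisions were fair.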
Dave Vellante
>> Great conversation.
Jason Adolf
>> Yeah.
Dave Vellante
>> Really appreciate your time.
Alison Kosik
>> Enjoyed talking with you.
Jason Adolf
>> Thank you so much.
Alison Kosik
>> Thanks so much. And you've been watching theCUBE, the leader in live technology coverage and enterprise tech analysis, and we'll be right back.