In this interview from Appian World 2026, Mark Talbot, area vice president of AI CS incubation at Appian, joins theCUBE's Dave Vellante and co-host Alison Kosik to discuss how enterprises can operationalize AI by embedding intelligent agents directly into process-driven workflows. Talbot walks through Appian's architecture, explaining how its data fabric functions as more than a catalog — acting as a full application platform that harmonizes data across Salesforce, SAP and DocuSign through semantic search and a connected knowledge graph. He draws a sharp distinction between tasks best suited for structured workflow and those where agents thrive, flagging process flexibility and a low cost of error as the key criteria for successful agentic deployment.
The conversation also explores real-world adoption, including a wealth management firm in Australia where Appian replaced manual IT support ticket triage with an AI agent capable of self-service resolution and ticket deduplication. Talbot emphasizes that Appian's low-code approach allows business users to define agent goals in plain natural language — removing the need for expensive forward-deployed engineering talent. He also details how organizations can improve AI accuracy through iterative feedback loops, treating thumbs-up and thumbs-down signals as fuel for prompt and policy refinement. From governing multi-agent systems with overarching workflow in high-stakes regulated industries to identifying "boring, serious AI" as the highest-value use cases, Talbot provides a grounded roadmap for enterprises navigating the shift from AI experimentation to operational reality.
Mark Talbot, Appian
Dave Vellante and Alison Kosik sit down with Mark Talbot, Area VP of CS AI Incubation at Appian, at Appian World 2026, held at the JW Marriott Orlando, Grande Lakes in Orlando, FL.
>> Welcome back to Appian World 26. We are streaming live from Orlando. I'm Alison Kosik alongside Dave Vellante. And we're about to dig into AI, separating the hype from real business value.
>> Yeah, and digging into some of the architecture of Appian, which I'm really fascinated by. Place is bumping again, Alison.
>> It is bumping. We had the keynote and now everybody's having their conversations. And we're about to dig into our conversation about this with Mark Talbot. He's the area vice president of CS AI incubation with Appian. Welcome to theCUBE.
>> Yeah, thanks for having me. I'm excited to be here.
>> Great. Great to have you. So your position at Appian is all about the data, isn't it? Talk to me about what your position actually does and how you distill that data.
>> Right. So it's not only about the data; it's in customer success. I'm actually working directly with customers to get value out of our latest AI features, such as agents, the data fabric, and the newest capabilities like Appian Composer.
>> So what are agents good at, and what are they not good at?
>> Yeah, I can answer that. Agents are great when the process is very flexible. When the process is well-defined, when you have that back-office process where you can diagram exactly what the process is going to do, that's not a great use case for an agent. Additionally, great use cases for agents are when the cost of error is low. So for example, if you have a well-regulated process such as pharmacovigilance or financial onboarding, those are still great use cases for process, which Appian does well.
>> Okay. So there's architecture in your title, and I want to get into the architecture and geek out a little bit if we can. Let's just start with, how would you describe the architecture? Can you paint a picture of what a block diagram might look like?
What are the salient components of that diagram?
>> So with your AI architecture, the salient components are first the data, the context that you provide to the AI, and that all exists in our data fabric. With our data fabric, you can draw out relations between your support cases and the knowledge base articles that you have for a support application. You also have your tools that are in place. Agents need tools to do work. Your tools are your external web service calls, your existing actions, and your existing processes.
>> Right. Okay. So I wonder if we could pick up on that. Data fabric, my understanding is, allows you to capture not just data, but metadata, application logic, process logic that's trapped inside of applications, and you're not moving that data. You bring it into the data fabric and you harmonize it there.
>> Right. So we have the relationships between the data within Appian. Going back to an example with procurement: procurement data lives in many different systems, and that data can still live in those outside systems. What we do with the data fabric is synchronize that data within Appian so you can work with it to perform your semantic search and to perform your AI-powered decisions with our agent technology.
>> So you've got the secret sauce to be able to harmonize that data. The example I often use is: give me the revenue. Well, how do I know if it's quarterly or annual, fiscal or calendar? Or is it NRR or ARR? Because then we sit around the table arguing about what the data really means. Correct me if I'm wrong, but you've got intelligence inside of the Appian architecture to harmonize that data, the semantics, and provide context. Is that accurate?
>> Yeah, that's absolutely right. And we provide this to our developers in a low-code fashion. You can work with it just like you work with regular pivot tables.
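Talbot's "synchronize, don't move" description of the data fabric can be pictured with a minimal sketch. Everything here is illustrative: the class, method names, and record keys are hypothetical stand-ins, not Appian's actual API. Records stay in their source systems; the fabric keeps a synced copy plus the relationships between records across systems.

```python
# Hypothetical sketch of a data fabric that syncs (rather than moves)
# records and relates them across source systems.
class DataFabric:
    def __init__(self):
        self.records = {}    # synced copies, keyed by (system, record_id)
        self.relations = []  # edges between records across systems

    def sync(self, system: str, record_id: str, payload: dict):
        """Keep a synchronized copy; the source system stays authoritative."""
        self.records[(system, record_id)] = payload

    def relate(self, a, b):
        """Record a relationship between two records, even across systems."""
        self.relations.append((a, b))

    def related_to(self, key):
        """Walk the knowledge-graph edges touching a record."""
        return [b for a, b in self.relations if a == key] + \
               [a for a, b in self.relations if b == key]

fabric = DataFabric()
fabric.sync("salesforce", "opp-1", {"account": "Acme"})
fabric.sync("sap", "po-9", {"vendor": "Acme"})
fabric.relate(("salesforce", "opp-1"), ("sap", "po-9"))
print(fabric.related_to(("salesforce", "opp-1")))  # -> [('sap', 'po-9')]
```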
You can perform semantic searches. For example, when you're searching for data, a lot of data searches look for that lexical match, that exact match. With a semantic match, you can find terms that are similar, like couch versus chair. Those are going to have semantic similarities, and it's able to bring that data back even though they're not an exact match.
>> What I'm trying to get to is, is it more than a catalog? Because we heard from the keynote, you've got Composer, I think it's called. So is it more than a catalog, and can it be an application platform? That's what I'm trying to get to.
>> Right. It can also be the application platform that's central to your data. That's what you're going to build your workflow around with that data fabric. That's where you're going to build your AI agents, even your human-centric tasks.
>> And is there a knowledge graph in there? Are you building a map of the enterprise? People, places, things, processes that I can do?
>> Exactly. I mean, that's exactly what our interface looks like. You do have that knowledge graph that relates your different entities, whether they be in Salesforce, in DocuSign, or in any of the other hundreds of systems such as SAP that we integrate with. You see that knowledge graph and you see how that data is connected.
>> I'm laughing because you are such an understated company. It comes from the top; your CEO is just not one of these hype masters. But this is extremely sophisticated software. There aren't a lot of organizations ... I mean, the one example I'll use is Palantir, with their ontology and million-dollar forward-deployed engineers to actually help implement this stuff. It's very rare to find that type of sophistication in a packaged application. Am I overstating that?
>> Right. And that's completely different from our approach, because we make our technology accessible to low-code developers.
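The lexical-versus-semantic distinction Talbot draws (couch versus chair) can be sketched with cosine similarity over embedding vectors. The vectors below are toy, hand-assigned values standing in for a real embedding model; a data fabric would use learned embeddings, but the matching logic is the same idea.

```python
import math

# Toy, hand-assigned embeddings (hypothetical values, not a real model).
EMBEDDINGS = {
    "couch":   [0.9, 0.8, 0.1],
    "chair":   [0.8, 0.9, 0.2],
    "invoice": [0.1, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def semantic_match(query, corpus, threshold=0.9):
    """Return corpus terms semantically close to the query,
    even when there is no exact lexical match."""
    q = EMBEDDINGS[query]
    return [t for t in corpus if cosine(q, EMBEDDINGS[t]) >= threshold]

# "couch" lexically matches nothing here, but semantically finds "chair".
print(semantic_match("couch", ["chair", "invoice"]))  # -> ['chair']
```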
And it's simple, it's easy to learn, it's easy to visualize. And you're right, you don't need that million-dollar forward-deployed engineer to understand our technology or to leverage our AI or the AI agents that we have in our product.
>> Okay. So let's get into how people are using this. You've kind of taken us through the architecture, thank you. You do have a services organization that helps people adopt it, but they're not like FDEs, which is the hottest title in the world right now. But ultimately, for AI to scale, it's got to be simpler and easy to adopt. So what do you see happening in the field with customers in terms of adoption?
>> Right. So what we're doing with customers is looking at existing applications and performing task audits of those applications. We're determining which tasks are currently human-driven and can be translated into agent-driven tasks. We work with a wealth management firm in Australia, and one of the things we worked with them on is understanding their IT support ticket intake process. One of the things we found out is that when a support engineer first gets a support ticket, they first search existing support tickets to see if they've solved this problem before. Then they search their knowledge base to see if there's a how-to guide to solve this problem. What we did with this customer is move this approach from the support engineer to the AI agent. And it was actually very straightforward to do and didn't really require a forward-deployed engineer with expert techniques in AI or ML. Basically, we gave it a goal, we gave it a set of instructions, and we gave it access to our data fabric. And it was able to provide that first step of triage and provide a solution to the end user. So a couple of things are happening now.
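The triage pattern Talbot describes (a goal, a set of instructions, and tools over the data fabric) can be sketched as follows. This is a minimal illustration under assumed names: `TriageAgent`, `search_tickets`, and `search_kb` are hypothetical, not Appian's agent API, and a real agent would use an LLM rather than keyword lambdas.

```python
from dataclasses import dataclass, field

@dataclass
class TriageAgent:
    """Hypothetical 'goal + instructions + tools' agent setup."""
    goal: str
    instructions: list
    tools: dict = field(default_factory=dict)

    def triage(self, ticket: str) -> str:
        # Step 1: search existing tickets for a prior resolution.
        prior = self.tools["search_tickets"](ticket)
        if prior:
            return f"Resolved before: {prior}"
        # Step 2: fall back to the knowledge base how-to guides.
        guide = self.tools["search_kb"](ticket)
        if guide:
            return f"Suggested guide: {guide}"
        # Step 3: cost of error is low, so escalate to a human.
        return "Escalated to support engineer"

agent = TriageAgent(
    goal="Given a technical problem, resolve it or escalate.",
    instructions=["Search prior tickets first", "Then search the knowledge base"],
    tools={  # toy stand-ins for data fabric lookups
        "search_tickets": lambda t: "TICKET-101" if "vpn" in t.lower() else None,
        "search_kb": lambda t: "KB-7: Reset password" if "password" in t.lower() else None,
    },
)
print(agent.triage("VPN keeps disconnecting"))  # -> Resolved before: TICKET-101
```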
Users are now able to self-service and do that initial research themselves before it even gets to the support engineer.
>> And those goals ... So the AI, the system, can interpret those goals. I might have some kind of, I don't know, I think of a metrics tree. Grow revenue, but maintain margins, and maybe ... So those sorts of policies or guardrails or metrics are infused into the system?
>> Right, exactly. And the key is very natural language. What I've found is that business users themselves are very adept at defining what those goals are. So it no longer requires a developer to come up with those rules; it no longer requires that forward-deployed engineer. Because you're talking in your own business language and you define those goals. If I'm in solution engineering, or if I'm a support technician, I'm saying: you're given a technical problem, your goal is to solve it. You have these tools in place. You have our knowledge base, you have our existing support tickets. That knowledge base doesn't necessarily need to live in Appian, because we have our data fabric and it has that information at its disposal.
>> Your customers are at the point where they're going beyond, say, a single assistant and implementing sophisticated multi-agent interactions, maybe tapping into different applications. What are the considerations of going from that relatively simplistic single-agent assistant to this multi-agent platform?
>> My opinion is that these multi-agent platforms need to be governed by workflow, especially when the cost of error is high. Think about those pharmacovigilance applications that look at adverse reactions. These are life-and-death scenarios that you can't leave just to chance, just to agents. So you need an overarching workflow process that delegates work to the individual agents.
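The governance principle just stated, an overarching workflow delegating to agents with a human kept in the loop when stakes are high, can be pictured in a minimal sketch. The function names and the keyword-based classifier are illustrative stand-ins; a real deployment would route the review step to a human task queue.

```python
def classify_report(report: str) -> str:
    """Stand-in for an agent step that labels a physician report."""
    return "high-risk" if "severe" in report.lower() else "low-risk"

def human_review(report: str, label: str) -> str:
    """Stand-in for the human-in-the-loop review step.
    A real workflow would assign this to a reviewer's task queue."""
    return f"confirmed {label}"

def adverse_reaction_workflow(report: str) -> str:
    """Overarching workflow: the agent proposes, the process decides
    whether a human must confirm before anything is filed."""
    label = classify_report(report)            # agent step
    if label == "high-risk":
        return human_review(report, label)     # mandatory human step
    return f"auto-filed as {label}"            # straight-through for low risk

print(adverse_reaction_workflow("Patient reports severe rash"))  # -> confirmed high-risk
```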
So for example, you might have an individual agent that reads a report from a physician and interprets whether it's high risk or low risk. And then a human still in the loop is part of your overarching process to review that adverse reaction in this particular scenario.
>> How do transaction systems fit into your architecture? You've got analytics systems, historical systems of analytics, and you've got the real-time transaction systems. My North Star vision is to build a digital twin of my enterprise. People, places, things, activities, processes in real time. So I can understand the state of the business in real time, I can make changes, I can have agents help me execute. Is that aligned with your North Star, your customers' North Star, maybe a piece of it? How do you see that playing out?
>> I think it's a piece of it. Again, I see that orchestrated through workflow, through process. You have an agent that completes a task, and then, as part of your process itself, your workflow, you have those defined transactions that you write as a result of the work that your agent completed, or as a result of the work that your people completed as part of that overarching workflow.
>> So how would that work? Appian would help govern and orchestrate the transaction? And then ultimately an Oracle system would make that transaction? What's the glue between the two?
>> I mean, it could be an Oracle system; we have many different database providers that we work with. It could be MariaDB.
>> Yeah, sorry, any database.
>> It could be MySQL, any of that. That's one of the connectors we have as part of our architecture, as part of the data fabric. When we write to the data fabric, it automatically synchronizes with your Oracle, your SQL Server, your MySQL.
>> And you're not moving the data, right?
>> Not necessarily moving the data, no.
>> Well, you don't have to.
>> You don't have to, but you could, right?
If that data lives somewhere else, we could sync the data with that other system. But that data could just be written as a transaction within the Appian architecture.
>> Interesting.
>> Moving to AI use cases: is the highest-value AI use case usually the one that's most complex, not the easiest automation target?
>> For me, it's always the boring, serious AI where the higher-value transactions are. Those boring, monotonous tasks that you have to do as part of your everyday work. For a lawyer, it might be document review against a certain set of checklists. That's a type of work that AI does very well, because you have a well-defined goal, it's written in natural language, it's simple, and you can get value out of it. Those have been some of the early use cases we've seen: use cases where you have a well-defined job aid or job guide for document review, and we're providing that as a policy to the AI agent, and it's able to complete that work.
>> When does the human have to be in the loop? Is it always? When does the human not have to be in the loop? Two-part question, maybe a three-part question. I'd like to get to a place where agents are taking action and I don't need to babysit them. And I'd like them to learn from the reasoning traces when there's an exception or I need to be in the loop. Is that an evolution, or is that antithetical to your philosophy that a human needs to be in the loop?
>> Not necessarily always in the loop. One of the things you can do as part of, again, a workflow platform is collect metrics. For example, you can get that thumbs up, that thumbs down, and comments on how well the AI is doing the overall process as part of the workflow. Once you get a certain confidence in how the AI is performing, then you can decide: is this a good candidate for straight-through processing?
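The decision Talbot describes, promoting a task to straight-through processing once thumbs-up/thumbs-down feedback shows enough confidence, can be sketched as a simple gate. The function names and the thresholds (95% approval over at least 50 samples) are illustrative assumptions, not Appian's rules.

```python
def approval_rate(feedback):
    """Fraction of thumbs-up signals in collected workflow feedback."""
    ups = sum(1 for f in feedback if f == "up")
    return ups / len(feedback) if feedback else 0.0

def ready_for_straight_through(feedback, threshold=0.95, min_samples=50):
    """Require both enough samples and a high approval rate
    before removing the human from the loop."""
    return len(feedback) >= min_samples and approval_rate(feedback) >= threshold

feedback = ["up"] * 97 + ["down"] * 3
print(ready_for_straight_through(feedback))  # -> True (97% over 100 samples)
```

The two-condition gate matters: a high approval rate over a handful of reviews is not evidence, so the sample-size floor keeps the human in the loop until the metric is trustworthy.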
And you can have those metrics as part of one of your application dashboards. Another thing to consider is: what's the cost of being wrong? What's the cost of error? For example, in that IT support application I mentioned, if the AI is wrong, it's not that high of a cost, because you simply ask the AI to create a support ticket and escalate it to a real person. But the value it's providing is high, because it's doing that initial triage and reducing the number of open support tickets. It's also de-duplicating support tickets, and there's high value there. Imagine you have a network outage and everybody's opening a support ticket. Your support team's inundated with all these tickets related to the network outage. But if you have that case de-duplication, it's only one support ticket. That's again an example where the cost of being wrong is low but the value is high.
>> So a lot of processes maybe need to be re-imagined. Many processes aren't codified in microservices or scripts or whatever. How do you approach that? Do you do process mining to identify things that could be done better? Because a lot of this stuff is tribal knowledge. Are there opportunities for your customers to automate things that weren't automatable in the past because they were sort of too fuzzy?
>> Yeah, we've seen that. A lot of it is tribal knowledge. One of the things I've found is that when we've tried to automate these job aids, the first go-around the accuracy isn't what we'd like. The reason is that a lot of that information is tribal knowledge. When we talk to the actual end users performing the document review, we find out that certain steps we're looking at actually aren't documented. So it's an iterative process. You have to involve UAT testing, and you have to get end-user feedback.
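The network-outage de-duplication example above can be sketched by clustering incoming tickets on a normalized issue signature, so one outage yields one case instead of hundreds. The keyword-based `signature` function is a crude, hypothetical stand-in for the semantic matching a data fabric would actually perform.

```python
from collections import defaultdict

def signature(ticket: str) -> str:
    """Crude stand-in for semantic matching: keyword-based signature."""
    text = ticket.lower()
    if "network" in text or "outage" in text or "internet" in text:
        return "network-outage"
    return text  # unique signature -> its own case

def deduplicate(tickets):
    """Group tickets sharing a signature into one case each."""
    cases = defaultdict(list)
    for t in tickets:
        cases[signature(t)].append(t)
    return cases

incoming = [
    "Network is down on floor 3",
    "Internet outage in Sydney office",
    "Cannot reach internal network",
    "Printer jam in HR",
]
cases = deduplicate(incoming)
print(len(cases))  # -> 2: one network-outage case, one printer case
```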
That's all part of the change management process and making sure you're building the right application.
>> And how are you using reinforcement learning to train the AI? We know that if you tell the AI, okay, calculate one plus one, and it comes out with three, you penalize it.
>> Right, right.
>> When it comes out with the correct answer, you reward it, and it gets very good. Incredibly good once it has that reinforcement learning. How are you using that, and what are you finding for results?
>> Yeah, it does get very good. One of the approaches we're taking, again, is that thumbs-up, thumbs-down approach. When you have a human reviewer, that human in the loop, they provide that thumbs up or thumbs down. Additionally, when it's wrong, we ask why it's wrong. Then we use that information to refine the prompt, refine the action, refine the set of goals, or refine the policy guide that we're providing, to make it more accurate.
>> So you're saying we should take, for our own apps, that thumbs up and thumbs down very seriously. And we should take the time, even though it's a pain in the neck ...
>> You should take the time ...
>> ... to actually explain it in gory detail, like you would a prompt when you want a good result from a prompt.
>> Absolutely. That's how you make it better. The other thing, you talked about whether you need to lift and shift or rip and replace, and that's always the debate in architecture. I tend to be a fan of looking at our existing process, especially if we already have this application in Appian: what are the small changes we can make to automate? Perhaps you're doing procurement inside of Appian, and right now it's a manual approval process. As a procurement agent, you're manually looking at the RFIs, manually looking at the proposals, and manually scoring.
What you can do is say: right now we have that human task as part of our workflow, and we're going to replace it with the agent, but provide the same set of instructions that we provide to the procurement person. And it's that same reinforcement, that same iterative process, where you go from good to better to great.
>> Interesting.
>> Over the next two to three years, where do you think agentic AI will create the biggest shift in enterprise work?
>> Yeah. So I think the biggest shift ... There's a lot of hype around agentic AI, and a lot of companies trying to use agents for everything. But I think what we're going to discover is that there's a place for process, there's a place for agents, and there's a place for people. And we're going to get better at defining what those roles are. So people and process for the well-defined tasks where the cost of being wrong is high, in highly regulated industries. But then leverage the agents where the process isn't as well-defined and where you're going to have that human in the loop.
>> Thanks so much for stopping by theCUBE. Great talking with you today.
>> All right, thank you.
>> Thanks, Mark.
>> And you've been watching theCUBE, the leader in high-tech enterprise analysis and live coverage. We'll be right back.