As I was listening to the latest episode of the Die Prozess Philosophen podcast, in which they had a very interesting guest, none other than the godfather of process modelling, August-Wilhelm Scheer, one of the topics they discussed intrigued me greatly: AI Agents (or Agentic AI) and how they might impact the need for process modelling.
This got me thinking as I was driving down the ‘longest’ highway in the Netherlands (the A2, for those who like to know), and in this episode of my Process Extraordinaire newsletter I would like to share my two cents on it.
The baseline situation is that in process models we usually define activities as the things that humans (or sometimes RPA bots) undertake in order to transform input into output and ensure that the next actor in the process can do his or her work. Until now (and for many companies well into the future) all activities (and even the human interaction steps that make up an activity) are known beforehand. This, by the way, is also what enables process simulation and process adherence management (where you check the actual execution of a process against a pre-defined baseline process).
Enter AI agents.
One of the main characteristics of AI agents is that they are partly (if not completely) autonomous. Based on prompted input, they find their own way to the expected / desired outcome.
According to AWS, an AI agent is “a software program that can interact with its environment, collect data, and use the data to perform self-determined tasks to meet predetermined goals. Humans set goals, but an AI agent independently chooses the best actions it needs to perform to achieve those goals.”
So far, so good.
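To make that definition a bit more tangible, here is a minimal toy sketch in Python of the loop AWS describes: a human fixes the goal, and the agent independently chooses its actions until the goal is met. All names here (`Agent`, `Environment`, `choose_action`) are illustrative, not a real agent framework.

```python
class Environment:
    """Toy environment: a single counter the agent can act on."""
    def __init__(self, state=0):
        self.state = state

    def observe(self):
        return self.state

    def apply(self, action):
        self.state = action(self.state)


class Agent:
    """A human sets the goal; the agent picks its own actions to reach it."""
    def __init__(self, target, actions):
        self.target = target      # predetermined goal, set by a human
        self.actions = actions    # the tools the agent may choose from

    def goal_met(self, observation):
        return observation == self.target

    def choose_action(self, observation):
        # Self-determined step: greedily pick the action that lands closest
        # to the target. A real agent would plan or reason here instead.
        return min(self.actions, key=lambda act: abs(act(observation) - self.target))

    def run(self, env, max_steps=20):
        for _ in range(max_steps):
            observation = env.observe()
            if self.goal_met(observation):
                return observation
            env.apply(self.choose_action(observation))
        return env.observe()


env = Environment()
agent = Agent(target=5, actions=[lambda s: s + 1, lambda s: s + 2, lambda s: s - 1])
print(agent.run(env))  # → 5
```

The point of the sketch: nobody scripted the sequence of steps (+2, +2, +1) in advance, the agent chose them itself, which is exactly what breaks the assumption that all activities are known beforehand.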
The question now becomes how this might or will impact the way we think about process modelling. Do we still use the same methods and conventions, or do we need to re-think our processes (as Mirko Kloppenburg tends to say in his podcast intro)?
I’m torn between two schools of thought here. Let me explain.
On the one hand, we can still apply the same logic and define the AI Agent as the responsible role for an activity, one for which we are no longer capable of drafting a work instruction. Because the agent is autonomous, we simply no longer know what it is doing, as long as it gets the job done and neatly provides the right output so that the actor in the next activity in the process can do their job.
So the consequence in this case would be: one fewer work instruction to draft and maintain.
On the other hand, the AI agent might take control of a larger part of the process, and this might mean that a process model becomes more of a black box and that the alignment between activities morphs into the alignment between processes. This might lead to a reduction in the number of process models needed, because larger parts of these processes have now de facto become black boxes.
My guess here is that the application of AI Agents in business processes will first take shape in situations that bear the most resemblance to case management, where the exact sequence of executed activities is subordinate to the outcome of that part of the process. I’m thinking of the travel industry, the healthcare industry and other service-related industries.
My next thought immediately is: how will we embed the documentation of AI agents in our business processes, given the way we document processes today? We might need some new conventions to capture the application of this new technology, or would AI Agents also be able to take over the actual act of documenting processes altogether?
I’m not sure if that is a reassuring thought to end this episode with, but please, do share what you think about this!
Until next time…
Caspar