Wednesday, January 18, 2017

Distribute tasks wisely ... pluggable task assignments jBPM7

User interaction in business processes is one of the most important aspects of making sure the job gets done. But not only that - it should also make sure the job gets done:

  • on time
  • by the proper actors
  • in the least time possible
  • and more...
User tasks in business processes can be assigned to:
  • user(s) - individuals that are known at the time of task creation - one or more
  • group(s) - groups/roles that are known at the time of task creation - one or more groups
  • user(s) or group(s) referenced as process variables
With this, users are already equipped with quite a few options for managing user tasks efficiently. So let's review a simple scenario:

Here is the simplest possible process - a single user task.
Such a task can be assigned (at design time, when the process is created) to:
  • individuals via the ActorId property - it supports a comma-separated list to specify multiple actors
  • groups via the GroupId property - it supports a comma-separated list to specify multiple groups


... use single actor

So if the task is assigned to a single actor, then when the task is created it:
  • will be assigned directly to that actor - the actual owner will be that actor
  • will be moved to the Reserved state
  • no one else will be able to work on that task anymore - as it is in the Reserved state
That seems like a nice approach, but in reality it constrains users too much, because to change the actor you have to change the process definition. And in many cases there is a need for more users to be able to deal with certain tasks instead of just a single person.

... use multiple actors

To solve that, you can use the approach with multiple actors (a set of users can be specified as a comma-separated list). So what will happen then?
  • the task will not be assigned to any individual as actual owner, as there is no way to tell which one should be selected
  • the task will be available to all actors defined on that task in the process definition
  • the task will be moved to the Ready state
  • to be able to work on the task, a user will have to explicitly claim it
So a bit of an improvement, but it still relies heavily on individuals and a manual claim process that can sometimes lead to inefficiency due to delays.

... use groups

Naturally, to get around the problem of individuals being assigned (as they come and go), a better option is to assign tasks to groups. When a task is assigned to a group it:
  • will not be assigned to any individual, as by nature a group contains many members
  • will be available to all actors that belong to the defined group(s) - this is resolved at query time, as the task is assigned to the group, so changes to the group do not have to be synced with tasks
  • will be moved to the Ready state
  • to be able to work on the task, a user will have to explicitly claim it
So this improves the situation a bit, but it still has some issues ... manual claim and potential delays in picking tasks from the group "queue".

... use pluggable task assignment 

For this exact purpose, jBPM 7 comes with pluggable task assignment support to let you (or, to be precise, the system) distribute tasks according to various criteria. The criteria here are what makes the difference, as different business domains will have different ways of assigning tasks. Even different departments within the same organization will differ in that regard.

So what is task assignment here? In general, it is the logic used to automatically find the most suitable candidate to take the task. To carry on with the example, there is a process with a task that is assigned to a single group - HR.
The task assignment strategy will then be invoked when a task is created, and the strategy can find the best actual owner for it. If it does, such a task:
  • will be assigned to the selected actor
  • will be moved to the Reserved state
  • no one else will be able to work on this task anymore
But if the strategy is not able to find any suitable candidate (this should be a rather rare case, but it can still happen), the task will fall back to the default behavior described above.

An assignment strategy can be based on almost anything that is valuable to the business for making a fact-based and efficient decision. That means strategies can be based on:
  • potential owners (as in this example)
  • task data (input variables)
  • task properties (name, description, project, etc.)
  • the time when the task was created
  • external data not related to the task itself

So that gives users all the options to build their own strategies based on their specific needs. But before going into the implementation (next article), let's look at...

... what comes out of the box

jBPM 7 comes with two assignment strategies out of the box:
  • Potential owner busyness strategy - default
  • Business rules strategy
Potential owner busyness strategy
This strategy simply makes sure that the least loaded actors from the potential owner list will be selected. The strategy works on both types of potential owners - users and groups - but to be able to effectively find the best match it needs to resolve groups to users. Resolution is done by the UserInfo implementation configured in the environment - please make sure you have one properly configured, otherwise the strategy will not work in the most efficient way.

Name of the strategy to be used to activate:
PotentialOwnerBusyness




Business rules strategy
This strategy promotes the use of business rules as the way of selecting actual owners. The strategy does not come with any predefined rules but instead expects to be given the KJAR coordinates of the project to be used to perform the assignment. The most important factor here is that any rule can be used, and it supports dynamic updates of the rules as well, by making use of the KIE Scanner that can incrementally update the knowledge base when a new version of the KJAR is discovered.

Name of the strategy to be used to activate:
BusinessRule

Configuration parameters supported:
  • org.jbpm.task.assignment.rules.releaseId
    • required parameter that points to the GAV of the KJAR
  • org.jbpm.task.assignment.rules.scan
    • optional - poll interval for the scanner in case it should be enabled - same as the KIE Scanner expects it - in milliseconds
  • org.jbpm.task.assignment.rules.query
    • optional - a Drools query to be used to retrieve results - if not given, all Assignment objects are taken from working memory and the first one is selected (if there are any)
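To make this concrete, here is a minimal sketch of the system properties that could activate this strategy - the GAV and query name below are made-up examples, so point them at your own rules KJAR (the enable/strategy properties are described in the "how to use it" section below):

-Dorg.jbpm.task.assignment.enabled=true
-Dorg.jbpm.task.assignment.strategy=BusinessRule
-Dorg.jbpm.task.assignment.rules.releaseId=org.example:assignment-rules:1.0.0
-Dorg.jbpm.task.assignment.rules.scan=10000
-Dorg.jbpm.task.assignment.rules.query=assignments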


... not only on task creation

Task assignment is invoked not only on task creation (though that will be the most common case) but also when:
  • a task is released - here the actual owner who releases the task is excluded from the assignment
  • a task is nominated
  • a task is reassigned (deadlines)
  • a task is forwarded
With that, it should provide quite capable self-assignment behavior when the strategy is tailored to given needs.

... how to use it

Task assignment is disabled by default and can be easily enabled by specifying a system property:
-Dorg.jbpm.task.assignment.enabled=true

Then selecting the strategy is done with another system property:
-Dorg.jbpm.task.assignment.strategy=NAME OF THE STRATEGY

If org.jbpm.task.assignment.strategy is not given, the PotentialOwnerBusyness strategy is used by default.

To be able to properly resolve group members, users need to select a UserInfo implementation via system property:

-Dorg.jbpm.ht.userinfo=db

This one selects a database as the source of group members and thus has to be configured additionally. KIE Server comes with an example configuration file in
kie-server.war/WEB-INF/classes/jbpm.user.info.properties

where the database queries used to find the users of a given group should be specified.

Optionally, you can create a userinfo.properties file in the same directory and specify the group-to-users mapping in the following format:

#groups setup
HR=hr@domain.com:en-UK:HR:[maciek,engUser,john]
PM=pm@domain.com:en-UK:PM:[maciek]

This is only for test purposes. For a real environment, use either the database or LDAP based UserInfo implementation.

That concludes the introduction of task assignment strategies that are completely pluggable. The next article will illustrate how to implement a custom strategy.

Tuesday, January 10, 2017

Order IT hardware - jBPM 7 case application

In the previous article I talked about traditional vs. modern BPM, and now I'd like to show what that means - modern BPM in action. For this I used an upcoming feature of jBPM 7 that provides case management capabilities, which were already introduced in the Case management series that can be found here.

So this is another iteration around the Order IT hardware case, which allows employees to place requests for new IT hardware. There are three roles involved in this case:

  • owner - the employee who placed the order
  • manager - direct manager of the employee - the owner
  • supplier - available suppliers in the system
It's a quite simple case definition that looks like this:


As presented above, this process does not look much like a regular process - it's a case definition, so it's a completely dynamic process (so-called ad hoc). This means new activities can be added to it at any time, or different fragments can be triggered as many times as needed.

What is worth noticing here are the Milestone nodes:
  • Order placed
  • Order shipped
  • Delivered to customer
  • Hardware spec ready
  • Manager decision
Milestones are completed based on conditions; in this case all conditions evaluate the given case instance data - the case file. So as soon as the data is present, the milestone is achieved. Milestones can be triggered manually by signal or can be auto-started when the case instance starts.
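For illustration, a milestone completion condition is expressed over the case file data. A sketch of what a condition for the "Order placed" milestone could look like (the data key "ordered" is an assumed name here):

org.kie.api.runtime.process.CaseData(data.get("ordered") == true)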

Here you can watch this application in action


And now let's dive into the details of how this application was composed...

So the application is built with the following components:

  • WildFly Swarm as runtime environment
  • KIE Server as backend 
  • PatternFly with AngularJS as front end
This application is fully featured and runnable, but should be seen as a showcase/PoC that aims at showing the intention behind modern BPM and case applications. No more centralized deployments to serve all, but instead tailored apps that do one thing and do it right.


So once you log on you will see the home screen - and the purpose of this system - ordering new hardware.
Here you can see all available suppliers that can deliver IT hardware:

  • Apple
  • Lenovo
  • Dell
  • Others (for anything that does not match above)
Here the suppliers are considered groups in the BPM world - meaning tasks assigned to the selected supplier will match the selection. The selected supplier (group) will then be assigned to the "Prepare hardware spec" task in the case definition.

Once you fill in the form and place an order, you'll be given an order receipt.



At any given time you can take a look at the orders:

  • My orders - those that you (as the logged-in user) placed
  • All orders - lists all orders currently open
From the list you can go into the details of a particular order to see its status and more.


The order details page is built from three parts:

  • on the left-hand side you can see the progress of your order - matching all milestones in the case with their status - currently no progress at all :(
  • the central part is for case details:
    • the hardware specification document that the supplier is expected to deliver
    • a comments section to discuss and comment on the order
    • My tasks - any tasks that are assigned to you (as the logged-in user) in the scope of this order
  • the right-hand side shows the people and groups involved in your order - that gives you a quick link in case you'd like to get in touch

So the first thing to be done here is up to the selected supplier - (s)he should provide a document with the hardware specification for the placed order.

Here is the list of tasks assigned to the supplier (as the logged-in user) that (s)he can take and directly work on by providing the hardware specification document.


Once the document is uploaded, the owner of the order can take a look at it via the order details page.

Now you can observe some progress on the order - the hardware spec was delivered and is available to download in the central section. On the right side you can see the Hardware specification milestone is checked (and green), so it was completed successfully.

Next it's up to the manager to look at the order and approve or reject it.



In case the manager decides to reject it, the decision and reason will be available on the order details page.



What is important to note here is that, since the manager rejected the order, there is a new option available in the order to request approval again. This is only available when the order was rejected and can be used to change the manager's decision. It will in turn create a dynamic task for the manager (as it does not exist in the case definition) and thus allow the manager to change his/her decision.

The entire progress of the order is always available on the order details page, where all order data can be easily found.

Once the manager approves the order, it is again handed over to the supplier to place the physical order for shipment.
Then on the order page you'll find additional actions to mark when the order was shipped and, later, when it was delivered - the latter usually done by the order owner.






Last but not least is the customer satisfaction survey that is assigned to the owner for evaluation.



The owner of the order can then provide his/her feedback via the survey, and it will be kept in the order data (case file).


The order is not closed until it's explicitly closed from the order details page... usually by the owner when (s)he feels it's completed; until then, more activities can still be added to the order.

Conclusion

The idea of this article is to show how you can leverage modern BPM to quickly build business systems that bring value and still take advantage of BPM, just in a slightly different way than the traditional one. This application was built in less than 4 days ... so any reasonably sized application to demo capabilities should be doable in less than a week. That's the true power of modern BPM!


Feel like trying it yourself? Nothing easier - just do it. Follow the instructions and off you go!!!

Share your feedback


Traditional vs Modern BPM - why should I care?

Traditional BPM vs. modern ... what is this about?

In this article I'd like to touch a bit on BPM in general and how it evolves, mainly to ask the question - should I still care about BPM? Hmmm ... let's look into it then...

Proper business context put on top of a generic process engine (or case management) is a complete game changer. Users don't see it as a huge "server" installation but instead see what they usually work with - their domain. By domain I mean naming conventions, information entities etc. instead of generic BPM terms such as:

  • process
  • process instance
  • process variables
Instead of trying to make business people (from different domains, sometimes completely unrelated) unify on terminology and become process engine/BPM oriented, modern BPM solutions should aim at making themselves part of the ecosystem rather than an outsider. This makes a lot of sense when you look at it from the end user's point of view. In many cases end users would like:
  • to feel comfortable in the system that supports their daily work
  • to understand the terminology used in the system - non-IT language preferred
  • to be able to extend its features and capabilities easily
  • to be able to federate it with other systems - an aggregated UI approach
These are just a few points, and I am sure many business users could come up with quite a long list in this area.

Nevertheless, what is the point here? The main one is to not look at BPM as a traditional setup anymore, but to use it to innovate your domain by utilizing the knowledge that is present in each and every coworker in your company. The asset that modern BPM should promote is the collectively gathered knowledge of your domain and the way you work to make a profit.

I used the term 'traditional BPM'; what I mean by that is a rather big and complex installation of BPM platforms of any kind - centralized servers that are meant to do all the heavy lifting for the entire company without much effort ... at least that's what the slides say, right? :) Traditional BPM that aimed at solving all problems did not really pay off as expected, mainly due to the complexity it brought. It had a few major drawbacks identified throughout the years:
  • difficult to learn - usually big product suites
  • difficult to maintain - complexity grows with the number of deployments on the platform, where upgrades or maintenance activities become harder and harder
  • difficult to scale - usually because of the chosen centralized architecture (which, btw, was meant to solve the problems with maintainability)
  • generated components had their limitations - the first thing end users ran into was that the UI components were either tightly coupled to the product or not powerful enough
Traditional BPM

Due to those drawbacks, BPM initiatives in organizations were usually expensive and time-consuming activities. Moreover, many of them failed to deliver because of the gap between expectations/promises and the actual capabilities of the product/solution.
BPM projects often made a promise to bridge the gap between IT and business, but in the end (one way or another) IT took control and drifted away from business, making the delivered solution not so business-focused anymore. Some of this was caused by the limitations mentioned above, but some was because the chosen architecture was simply not suitable for the needs of the business domain.

I keep saying "business domain" or "business context" and I really mean it - this is the most important aspect of the work when dealing with business (oops ... I did it again) knowledge. The key to success is for the knowledge to be used in the IT solution and not vice versa (the IT solution altering the way business is done to fit the product/technology).

So if the traditional BPM does not meet its promises, should I still care about BPM at all?

The short answer is yes, though look for alternative ways to use BPM - what I call modern BPM. The slightly longer answer is: traditional BPM did have its success stories as well, so it does provide solid ground for the next generation of BPM solutions. It did give us proper and stable specifications:
  • BPMN2
  • DMN
  • CMMN
to name just a few. So conceptually it is still valid and useful; what is more important is how it is actually realized. Modern BPM is about scoping your solutions to the domain - in other words, proper partitioning of your domain will help you solve or overcome some of the issues exposed by traditional BPM.

The first and foremost principle of modern BPM is

avoid centralization 

Don't attempt to make a huge "farm-like" installation that should deal with all the processes within your organization (regardless of its size) - sooner or later it will outgrow you ... Keep it to the minimum needed to cover a given domain or part of the domain. If your domain is Human Resources, think about partitioning it into smaller pieces:
  • employee catalogue 
  • payroll 
  • absence 
  • contract management
  • benefits and development plan
  • etc
With that separation you can easily:
  • evolve in isolation
  • maintain separately - upgrades, deployments etc
  • scale individual parts
  • federate systems to build portal like entry points for end users
  • use different technology stack/products for individual parts

always put things in business context

Keep in mind why things are done the way they are - that's the business context. If you understand your domain, make sure it is captured as knowledge and then used within the IT solutions - this is where BPM practices come in handy. That's the whole point of having BPM in your toolbox - once the business knowledge (business processes, business rules, information entities) is collected, it can be directly used for execution.

make tailored UI

Make use of any tools/frameworks/etc. you have available to build a state-of-the-art tailored UI for the domain. Don't expect generic platforms to generate a complete and comprehensive application for you - the reason is that the platform most likely does not know the domain, so what it provides might be limited. Though don't throw that idea away outright - it might be a good start to extend to fit your needs. It all depends on your domain and business context ... I know, again :)

Modern BPM


To summarize, modern BPM is about using the tool in a more modern way, but it's still the same tool - a process/rule engine. The tool needs to be capable of it, though - meaning it should be suitable for lightweight deployment architectures that can easily scale and evolve, but still provide value to the business in a matter of days instead of months or years of IT projects.

The next article will give an example of such a system to show what can be done in less than a week... stay tuned!

Wednesday, December 7, 2016

KIE Server router - even more flexibility

Yet another article in the KIE Server series. This time it tackles the next steps once you have already familiarized yourself with KIE Server and its capabilities.

KIE Server promotes an architecture where many KIE Server instances are responsible for running individual projects (kjars). These in turn might be completely independent domains, or the other way around - related to each other but separated onto different runtimes to avoid negative impact on each other.


At this point, the client application needs to be aware of all the servers to properly interact with them. In detail, the client application needs to know:
  • location (url) of HR KIE Server
  • location (url) of IT KIE Server 1 and IT KIE Server 2
  • containers deployed to HR KIE Server
  • containers deployed to IT KIE Server (just one of them as they are considered to be homogeneous)
While the available containers are not so difficult to know about and can be retrieved from a running server, knowing all the locations is trickier - especially in dynamic (cloud) environments where servers can come and go based on various conditions.

To deal with these problems, KIE Server introduces a new component - the KIE Server Router. The router is responsible for bridging all KIE Servers grouped under the same router to provide a unified view of all servers. The unified view consists of:
  • finding the right server to deal with requests
  • aggregating responses from different servers
  • providing efficient load balancing
  • dealing with a changing environment - like added/removed server instances

Then the only thing the client knows is the location of the router. The router exposes most of the capabilities of KIE Server over HTTP. It comes with two main responsibilities:
  • proxy to the actual KIE Server instance based on contextual information - container id or alias
  • aggregator of data - to collect information from all distinct server instances in single client request


There are two types of requests the KIE Server Router supports from the client perspective:
  • Modification requests - the POST, PUT and DELETE HTTP methods are all considered as such. The main requirement for them to be properly proxied is that the URL includes the container id (or alias)
  • Retrieval requests - the GET HTTP method is seen as such as well. Though when these do include a container id, they will be handled the same way as modification requests.
There is an additional type of request - administration requests - that the KIE Server Router supports, and these exist strictly to allow the router to function properly within a changing environment. They:
  • register new servers and containers when a server or container starts on any of the KIE Server instances
  • unregister existing servers and containers when a server or container stops on any KIE Server instance
  • list the available configuration of the router - what servers and containers it is aware of
The router itself has very limited information; the most important part is being able to route to the correct server instance based on container id. So the assumption is that there will be only one set of servers hosting a given container id/alias. This does not mean a single server though - there can be as many servers as needed, and they can be dynamically added and removed. The proxy will load balance across all known servers for a given container.

Other than the available containers and servers, the KIE Server Router does not keep any information. This might not cover all possible scenarios, but it does cover quite a few of them.


The KIE Server Router comes in two pieces:
  • a proxy that acts as the server
  • a client that is included in KIE Server to integrate with the proxy
The router client is responsible for hooking into the KIE Server life cycle and sending notifications to the KIE Server Router when the configuration changes:
  • when a container is started (successfully) it will register it with the router
  • when a container is stopped (successfully) it will unregister it from the router
  • when an entire server instance is stopped, it will unregister all containers (that are in started state) from the router

The KIE Server Router client is packaged in KIE Server itself but is deactivated by default. It can be activated by setting the router URL via system property:
org.kie.server.router
which can be one or more valid HTTP URLs pointing to one or more routers this server should be registered with.
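For example (the host and port are illustrative; 9000 is the router's default port):

-Dorg.kie.server.router=http://localhost:9000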

The KIE Server Router exposes an API that is completely compatible with the KIE Server Client interface, so you can use the Java client to talk to the router just as you would when talking to any KIE Server instance (see the sketch after this list). Though it has some limitations:
  • the router cannot be used to deploy new containers - this is because it will not know the given container id yet and thus won't be able to decide which server it should be deployed to
  • the router cannot deal with modification requests to KIE Server endpoints that are not based on container id:
    • Jobs
    • Documents
  • the router will return a hard-coded response when KIE Server info is requested
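Here is a minimal sketch of that - the standard KIE Server Java client pointed at the router's URL instead of an individual server (URL, credentials, container and process ids are made up for illustration):

import org.kie.server.api.marshalling.MarshallingFormat;
import org.kie.server.client.KieServicesClient;
import org.kie.server.client.KieServicesConfiguration;
import org.kie.server.client.KieServicesFactory;
import org.kie.server.client.ProcessServicesClient;

public class RouterClientExample {

    public static void main(String[] args) {
        // point the client at the router instead of an individual KIE Server
        KieServicesConfiguration config = KieServicesFactory.newRestConfiguration(
                "http://localhost:9000", "user", "password");
        config.setMarshallingFormat(MarshallingFormat.JSON);

        KieServicesClient client = KieServicesFactory.newKieServicesClient(config);
        ProcessServicesClient processClient = client.getServicesClient(ProcessServicesClient.class);

        // the router proxies this call to one of the servers hosting the "hr" container
        Long processInstanceId = processClient.startProcess("hr", "hiring");
        System.out.println("Started process instance " + processInstanceId);
    }
}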
See basic KIE Server Router capabilities in action in the following screencast:


Response aggregators

Retrieval requests are responsible for collecting data from various servers, but they must return all the data aggregated into a single, well-structured response. This is where response aggregators come into the picture. There is a dedicated response aggregator per data format:
  • JSON
  • JAXB
  • Xstream 
The XML-based aggregators (both JAXB and XStream) use Java SE XML parsers with some hints on which elements are the subject of aggregation, while JSON uses the org.json library (which is the smallest one) to aggregate JSON responses.

All aggregated responses are compatible with the data model returned by KIE Server and thus can be consumed by the KIE Server Client without any issues.

Aggregators support both sorting and pagination of the aggregated results. The aggregation, sorting and paging are done on the router side, though the initial sorting is done on the actual KIE Server instances as well, to make sure it is properly respected on the source data.

Paging, on the other hand, is a bit more tricky, as the router needs to ask the KIE Servers to always return everything from page 0 up to the requested page, to properly take all KIE Servers into consideration before returning the requested page.

See paging and sorting in action in the following screencast.



That concludes the quick tour of the KIE Server Router, which should provide more flexibility in dealing with more advanced KIE Server environments.

Comments, questions, ideas as usual - welcome

Monday, November 21, 2016

Pluggable container locator and policy support in KIE Server

In this article, container locator support was introduced - commonly known as aliases. At that time it used, by default, the latest available version of the project configured with the same alias. This idea was really well received and has thus been further enhanced.

Pluggable container locator


First of all, the latest available container is not always the way to go. There might be a need for time-bound container selection for a given alias, for example:

  • there are two containers for a given alias
  • even though there is a new version already deployed, it should not be used until a predefined date

So users might implement their own container locator and register it by bundling that implementation into a jar file placed on the KIE Server class path. As usual, the discovery mechanism is based on ServiceLoader, so the jar must include:
  • an implementation of the ContainerLocator interface
  • a file named org.kie.server.services.api.ContainerLocator placed in the META-INF/services directory
  • the fully qualified class name of the ContainerLocator implementation in the META-INF/services/org.kie.server.services.api.ContainerLocator file
Since there might be multiple implementations present on the class path, the container locator to be used needs to be given via system property:
  • org.kie.server.container.locator, where the value should be the class name of the ContainerLocator implementation - the simple name, not the FQCN
That will then be used instead of the default latest container locator.
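As a sketch of what such an implementation could look like (the locateContainer signature is assumed to match the default latest locator; the cutoff date and container id are hypothetical):

package org.acme.locator;

import java.time.LocalDate;
import java.util.List;

import org.kie.server.services.api.ContainerLocator;
import org.kie.server.services.api.KieContainerInstance;

// Time-bound locator: keep using a pinned container until a predefined date,
// afterwards simply use the last registered container for the alias.
public class TimeBoundContainerLocator implements ContainerLocator {

    private static final LocalDate CUTOFF = LocalDate.of(2017, 2, 1); // hypothetical date
    private static final String PINNED_CONTAINER = "hr-project-1.0";  // hypothetical container id

    @Override
    public String locateContainer(String alias, List<? extends KieContainerInstance> containers) {
        if (containers.isEmpty()) {
            return null; // nothing registered for this alias
        }
        if (LocalDate.now().isBefore(CUTOFF)) {
            for (KieContainerInstance instance : containers) {
                if (PINNED_CONTAINER.equals(instance.getContainerId())) {
                    return instance.getContainerId();
                }
            }
        }
        // fall back to the last registered container
        return containers.get(containers.size() - 1).getContainerId();
    }
}

The META-INF/services/org.kie.server.services.api.ContainerLocator file would then contain org.acme.locator.TimeBoundContainerLocator, and the locator would be activated with -Dorg.kie.server.container.locator=TimeBoundContainerLocator.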

So far so good, but what should happen with containers that are now idle or should not be used anymore? Since the container locator will make sure that the selected (by default latest) container is used in most cases, there might be containers that no longer need to be on the runtime. This is especially important in environments where new versions of containers are frequently deployed, which might lead to increased memory use. Thus efficient cleanup of unused containers is a must.

Pluggable policy support

For this, policy support was added - but not only for this, as policies are a general-purpose tool within KIE Server. So what is it?

A policy is a set of rules that is applied by KIE Server periodically. Each policy can be registered to be applied at a different time. Policies are discovered when KIE Server starts and are registered, but they are not started by default.
The reason for this is that the discovery mechanism (ServiceLoader) is based on class path scanning and is thus always performed, regardless of whether the policies should be used or not. So there is another step required to activate a policy.

Policy activation is done via system property when booting KIE Server:
  • org.kie.server.policy.activate - where the value is a comma-separated list of policies (their names) to be activated
When the policy manager activates a given policy it will respect its life cycle:
  • it will invoke the start method of the policy
  • it will retrieve the interval from the policy (invoke the getInterval method)
  • it will schedule periodic execution of that policy based on the given interval - if the interval is less than 1 it will ignore that policy
NOTE: scheduling is based on the interval for both the first execution and the repeated executions - meaning the first execution will take place after one interval has elapsed. The interval must be given in milliseconds.

Similarly, when KIE Server stops it will call the stop method of every activated policy to properly shut it down.
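To give an idea of the shape of a custom policy, here is a minimal sketch built around the life cycle described above (start, stop, getInterval). The interface name org.kie.server.services.api.Policy and the exact signature of the apply callback are assumptions - verify both against the KIE Server services API of your version:

package org.acme.policy;

import java.util.concurrent.TimeUnit;

import org.kie.server.services.api.KieServerRegistry; // assumption: registry passed to apply
import org.kie.server.services.api.Policy;            // assumption: SPI interface name

public class LogHeartbeatPolicy implements Policy {

    public String getName() {
        // the name used in org.kie.server.policy.activate
        return "LogHeartbeatPolicy";
    }

    public void start() {
        // initialize any resources the policy needs
    }

    public long getInterval() {
        // applied once an hour; must be in milliseconds, first run after one interval
        return TimeUnit.HOURS.toMillis(1);
    }

    public void apply(KieServerRegistry registry) {
        // periodic work goes here - cleanup, reconfiguration, etc.
        System.out.println("Applying policy " + getName());
    }

    public void stop() {
        // clean up on KIE Server shutdown
    }
}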

Policy support can be used for various use cases; one that comes out of the box complements the container locator (with its default latest-only behavior). There is a policy available in KIE Server that will undeploy containers other than the latest. This policy is applied once a day by default, but it can be reconfigured via system properties. KeepLatestContainerOnlyPolicy will attempt to dispose of containers that have lower versions, though the attempt might fail. The reasons for failure might vary, but the most common will be active process instances for the container being disposed. In that case the container is left started, and the next day (or after another reconfigured period of time) the attempt will be retried.

NOTE: KeepLatestContainerOnlyPolicy is aware of the controller, so it will notify the controller that the policy was applied and stop the container on the controller side - but only stop it, not remove it. The same as any policy, this one must be activated via system property as well.

This opens the door to a tremendous number of policy implementations, starting with cleanup, through blue-green deployments, and finishing at reconfiguring runtimes - all performed periodically and automatically by KIE Server itself.

As always, comments and further ideas are more than welcome.

Tuesday, November 8, 2016

Administration interfaces in jBPM 7

In many cases, when working with business processes, users end up in situations that were not foreseen before, e.g. a task was assigned to a user who left the company, a timer was scheduled with the wrong expiration time, and so on.

jBPM from its early days had the capabilities to deal with these, though it required substantial knowledge of how to use jBPM's low-level APIs. Those days are now over: jBPM version 7 comes with administration APIs that cover:

  • process instance operations
  • user task operations
  • process instance migration

These administration interfaces are supported in jBPM services and in KIE Server, so users have the full power to perform quite advanced operations when utilizing jBPM as a process engine, regardless of whether it is embedded (jBPM services API) or used as a service (KIE Server).

Let's quickly look at what sort of capabilities each of the services provides.

Process instance Administration


The process instance administration service provides operations around the process engine and individual process instances. Following is the complete list of supported operations with a short description of each:
  • get process nodes - by process instance id - returns all nodes (including embedded subprocesses) that exist in the given process instance. Even though the nodes come from the process definition, it's important to get them via the process instance to make sure that a given node exists and has a valid node id, so it can be used successfully with other admin operations
  • cancel node instance - by process instance id and node instance id - does exactly what the name suggests - cancels the given node instance within the process instance
  • retrigger node instance - by process instance id and node instance id - retriggers by first canceling the active node instance and creating a new instance of the same type - it sort of recreates the node instance
  • update timer - by process instance id and timer id - updates the expiration of an active timer. It updates the timer taking into consideration the time elapsed since the timer was scheduled. For example: in case a timer was initially created with a delay of 1 hour and after 30 minutes it's decided to update it to 2 hours, it will then expire in 1.5 hours from the time it was updated. It allows updating:
    • delay - duration after which the timer expires
    • period - interval between timer expirations - applicable only for cycle timers
    • repeat limit - limits the expirations to a given number - applicable only for cycle timers
  • update timer relative to current time - by process instance id and timer id - similar to the regular timer update, but the update is relative to the current time. For example: in case a timer was initially created with a delay of 1 hour and after 30 minutes it's decided to update it to 2 hours, it will then expire in 2 hours from the time it was updated
  • list timer instances - by process instance id - returns all active timers found for the given process instance
  • trigger node - by process instance id and node id - allows triggering (instantiating) any node in the process instance at any time

The complete ProcessInstanceAdminService can be found here.
The KIE Server client version of it can be found here.
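As a quick taste of the KIE Server client side, here is a sketch of updating an active timer (container, instance and timer ids are illustrative; method names follow the operation list above, so verify them against your client version):

import org.kie.server.client.KieServicesClient;
import org.kie.server.client.admin.ProcessAdminServicesClient;

public class ProcessAdminExample {

    public static void updateTimer(KieServicesClient client) {
        ProcessAdminServicesClient adminClient = client.getServicesClient(ProcessAdminServicesClient.class);

        // update timer 5 of process instance 1: new delay of 2 hours (7200),
        // period and repeat limit left at 0 as this is a one-shot timer;
        // check the API docs for the expected time unit of the delay
        adminClient.updateTimer("my-container", 1L, 5L, 7200, 0, 0);
    }
}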


User task administration


User task administration mainly provides useful methods to manipulate task assignments (users and groups), data handling, and automatic (time-based) notifications and reassignments. Following is the complete list of operations supported by the user task administration service:
  • add/remove potential owners - by task id - supports both users and groups, with an option to remove existing assignments
  • add/remove excluded owners - by task id - supports both users and groups, with an option to remove existing assignments
  • add/remove business administrators - by task id - supports both users and groups, with an option to remove existing assignments
  • add task inputs - by task id - modify task input content after the task has been created
  • remove task inputs - by task id - completely remove task input variable(s)
  • remove task output - by task id - completely remove task output variable(s)
  • schedule new reassignment to given users/groups after a given time elapses - by task id - schedules automatic reassignment based on a time expression and the state of the task:
    • reassign if not started (meaning the task was not moved to the InProgress state)
    • reassign if not completed (meaning the task was not moved to the Completed state)
  • schedule new email notification to given users/groups after a given time elapses - by task id - schedules automatic notification based on a time expression and the state of the task:
    • notify if not started (meaning the task was not moved to the InProgress state)
    • notify if not completed (meaning the task was not moved to the Completed state)
  • list scheduled task notifications - by task id - returns all active task notifications
  • list scheduled task reassignments - by task id - returns all active task reassignments
  • cancel task notification - by task id and notification id - cancels (and unschedules) a task notification
  • cancel task reassignment - by task id and reassignment id - cancels (and unschedules) a task reassignment
NOTE: all user task admin operations must be performed as a business administrator of the given task - that means every single call to the user task admin service will be checked in terms of authorization, and only business administrators of the given task will be allowed to perform the operation.

The complete UserTaskAdminService can be found here.
The KIE Server client version of it can be found here.
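For a flavor of the client side, here is a sketch that adds a potential owner and schedules a reassignment (container id, task id, user names, and the "2h" time expression are illustrative; verify method names against your client version):

import java.util.Arrays;

import org.kie.server.api.model.admin.OrgEntities;
import org.kie.server.client.KieServicesClient;
import org.kie.server.client.admin.UserTaskAdminServicesClient;

public class UserTaskAdminExample {

    public static void manageTask(KieServicesClient client) {
        UserTaskAdminServicesClient taskAdmin = client.getServicesClient(UserTaskAdminServicesClient.class);

        // add "john" as potential owner of task 1, keeping existing assignments
        OrgEntities owners = OrgEntities.builder().users(Arrays.asList("john")).build();
        taskAdmin.addPotentialOwners("my-container", 1L, false, owners);

        // reassign to "mary" if the task is not started within 2 hours
        OrgEntities fallback = OrgEntities.builder().users(Arrays.asList("mary")).build();
        taskAdmin.reassignWhenNotStarted("my-container", 1L, "2h", fallback);
    }
}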


Process instance migration


ProcessInstanceMigrationService provides an administrative utility to move given process instance(s) from one deployment to another, or from one process definition to another. Its main responsibility is to allow a basic upgrade of the process definition behind a given process instance. That might include mapping currently active nodes to other nodes in the new definition.

Migration does not deal with process or task variables; they are not affected by migration. Essentially, process instance migration means a change of the underlying process definition that the process engine uses to move on with the process instance.

Even though process instance migration is available, it's recommended to let active process instances finish and then start new instances with the new version whenever possible. In case that approach can't be used, migration of active process instances needs to be carefully planned before its execution, as it might lead to unexpected issues. The most important things to take into account are:
  • is new process definition backward compatible?
  • are there any data changes (variables that could affect process instance decisions after migration)?
  • is there need for node mapping?
Answers to these questions might save a lot of headache and production problems after migration. It is best to always stick with backward compatible processes - like extending the process definition rather than removing nodes. Though that's not always possible, and in some cases there is a need to remove certain nodes from the process definition. In that situation, the migration needs to be instructed how to map nodes that were removed in the new definition, in case an active process instance is currently in such a node.

The complete ProcessInstanceMigrationService can be found here.
The KIE Server version of it can be found here.
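And a minimal sketch of the corresponding KIE Server client call (ids and names are illustrative; verify the method signature against your client version):

import org.kie.server.api.model.admin.MigrationReportInstance;
import org.kie.server.client.KieServicesClient;
import org.kie.server.client.admin.ProcessAdminServicesClient;

public class MigrationExample {

    public static void migrate(KieServicesClient client) {
        ProcessAdminServicesClient adminClient = client.getServicesClient(ProcessAdminServicesClient.class);

        // move process instance 1 to a new container and process definition
        MigrationReportInstance report = adminClient.migrateProcessInstance(
                "old-container", 1L, "new-container", "new-process-id");
        System.out.println("Migration successful: " + report.isSuccessful());
    }
}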

With this, I'd like to emphasize that administrators of jBPM should be well equipped with enough tools for the most common operations they might face. Obviously that won't cover all possible cases, so we are more than interested in user feedback on what else might be needed as an admin function. So share it!

Tuesday, October 25, 2016

Case management - jBPM v7 - Part 3 - dynamic activities

It's time for the next article in the "Case Management" series; this time let's look at dynamic activities that can be added to a case at runtime. Dynamic means the process definition behind the case has no such node/activity defined, and thus it cannot simply be signaled as was done for some of the activities in the previous articles (Part 1 and Part 2).

So what can be added to a case as dynamic activity?

  • user task
  • service task - which is pretty much any type of service task that is implemented as work item 
  • sub process - reusable

User and service tasks are quite simple and easy to understand: they are just added to the case instance and immediately executed. Depending on the nature of the task, it might start and wait for completion (user task) or it might finish directly after execution (service tasks). Although most service tasks (as defined in the BPMN2 spec - Service Task) will be invoked synchronously, they can be configured to run in the background or even wait for an external signal to complete - it all depends on the implementation of the work item handler.
A subprocess is slightly different in the expectations the process engine will have - the process definition that is going to be created as a dynamic process must exist in the kjar. That is to make sure the process engine can find that process by its id in order to execute it. There are no restrictions on what the subprocess does; it can be synchronous without wait states, or it can include user tasks or other subprocesses. Moreover, such a created subprocess will have its correlation key set with the first property being the case id of the case where the dynamic task was created. So from the case management point of view it will belong to that case and thus see all case data (from the case file - see more details about the case file in Part 2).

Create dynamic user task

To create a dynamic user task, there are a few things that must be given:
  • task name
  • task description (optional, though recommended)
  • actors - a comma-separated list of actors to assign the task to; can refer to case roles for dynamic resolution
  • groups - same as for actors but referring to groups; again, case roles can be used
  • input data - task inputs to be made available to task actors
A dynamic user task can be created via the following endpoint:

Endpoint::
http://host:port/kie-server/services/rest/server/containers/itorders/cases/instances/IT-0000000001/tasks

where 
  • itorders is the container id
  • IT-0000000001 is the case id
Method::
POST

Body::
{
 "name" : "RequestManagerApproval",
 "data" : {
  "reason" : "Fixed hardware spec",
  "caseFile_hwSpec" : "#{caseFile_hwSpec}"
  }, 
 "description" : "Ask for manager approval again",
 "actors" : "manager",
 "groups" : "" 
}

This will then create a new user task associated with case IT-0000000001, and the task will be assigned to the person who was assigned to the case role named manager. This task will then have two input variables:
  • reason
  • caseFile_hwSpec - defined as an expression to allow runtime capturing of process/case data
There might be a form defined to provide a user-friendly UI for the task; it will be looked up by task name - in this case RequestManagerApproval (and the form file name should be RequestManagerApproval-taskform.form in the kjar).
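The same can be done from Java with the KIE Server client - a sketch mirroring the REST payload above (verify the method signature against your client version):

import java.util.HashMap;
import java.util.Map;

import org.kie.server.client.CaseServicesClient;
import org.kie.server.client.KieServicesClient;

public class DynamicUserTaskExample {

    public static void addApprovalTask(KieServicesClient client) {
        CaseServicesClient caseClient = client.getServicesClient(CaseServicesClient.class);

        Map<String, Object> data = new HashMap<>();
        data.put("reason", "Fixed hardware spec");

        // name, description, actors (case role) and groups, as in the REST body above
        caseClient.addDynamicUserTask("itorders", "IT-0000000001",
                "RequestManagerApproval", "Ask for manager approval again", "manager", "", data);
    }
}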

Create dynamic service task

Service tasks are slightly less complex from the general point of view, though they might need more data to properly perform the execution. Service tasks require the following things to be given:
  • name - name of the activity
  • nodeType - type of the node, which will then be used to find the work item handler
  • data - map of data needed to properly deal with the execution
A service task can be created with the same endpoint as a user task; the difference is in the body payload.
Endpoint::
http://host:port/kie-server/services/rest/server/containers/itorders/cases/instances/IT-0000000001/tasks

where 
  • itorders is the container id
  • IT-0000000001 is the case id
Method::
POST

Body::
{
 "name" : "InvokeService",
 "data" : {
  "Parameter" : "Fixed hardware spec",
  "Interface" : "org.jbpm.demo.itorders.services.ITOrderService",
  "Operation" : "printMessage",
  "ParameterType" : "java.lang.String"
  }, 
 "nodeType" : "Service Task"
}

In this example, a Java-based service is executed. It consists of a public class org.jbpm.demo.itorders.services.ITOrderService with a public printMessage method that takes a single argument of type String. Upon execution, the Parameter value is passed to the method.

The number, names, and types of data given to create a service task completely depend on the implementation of the service task's handler. In this example org.jbpm.process.workitem.bpmn2.ServiceTaskHandler was used.

NOTE: For any custom service tasks, make sure the handler is registered in the deployment descriptor in the Work Item Handlers section, where the name is the same as the nodeType used when creating a dynamic service task.
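For reference, a work item handler registration in the deployment descriptor (kie-deployment-descriptor.xml) could look like this sketch - note the name element matching the nodeType used above:

<work-item-handlers>
    <work-item-handler>
        <resolver>mvel</resolver>
        <identifier>new org.jbpm.process.workitem.bpmn2.ServiceTaskHandler(ksession, classLoader)</identifier>
        <parameters/>
        <name>Service Task</name>
    </work-item-handler>
</work-item-handlers>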

Create dynamic subprocess

A dynamic subprocess expects only optional data to be provided; there are no special parameters as there are for tasks, so it's quite straightforward to create.

Endpoint::
http://host:port/kie-server/services/rest/server/containers/itorders/cases/instances/IT-0000000001/processes/itorders-data.place-order

where 
  • itorders is the container id
  • IT-0000000001 is the case id
  • itorders-data.place-order is the process id of the process to be created
Method::
POST

Body::
{
 "any-name" : "any-value"
}

Mapping of output data

Typically, when dealing with regular tasks or subprocesses, users map output data by defining data output associations that instruct the engine which output of the source (task or subprocess instance) should be mapped to which process instance variable. Since dynamic tasks have no data output definition, there is only one way to put output from a task/subprocess into the process instance - by name. This means the name of the returned output of a task must match a process variable by name to be mapped. Otherwise the engine will ignore that output. Why is that? It's to safeguard the case/process instance from having unrelated variables put into it, so that only expected information is propagated back to the case/process instance.

Look at this in action

As usual, there are screencasts to illustrate this in action. First comes the authoring part, which shows:
  • creation of an additional form to visualize the dynamic task for requesting manager approval
  • a simple Java service to be invoked by the dynamic service task
  • declaration of the service task handler in the deployment descriptor


Next, it is shown how it actually works in the runtime environment (KIE Server).




The complete project that can be imported and executed can be found on GitHub.

So that concludes part 3 of case management in jBPM 7. Comments and ideas are more than welcome. And that's still not all that is coming :)