Eric D. Schabell: July 2009

Friday, July 31, 2009

A JBoss BRMS workshop report

I spent last week in Atlanta, GA at the Red Hat offices attending a JBoss Business Rules Management System (BRMS) workshop. It was put together to give us a deeper understanding of the latest rules platform offering from Red Hat and was aimed at a very technical audience.

I wanted to provide a bit of insight into the workshop to give you an idea of what to expect should you contemplate looking deeper into the JBoss BRMS.

Before I dig into the details and provide my impressions, I wanted to include some side notes on the trip to the USA. As you can see, the breakfast can be just awesome. I love pancakes and biscuits & gravy!

It would also not be America if you did not include something about the cars (the picture is of a 1930s hot rod) and a baseball game (picture taken in the Mariners' stadium).

The workshop was held in the Red Hat JBoss offices and included participants from all over the globe: Japan, China, India, Argentina, France, the Netherlands, and the USA. The location was great, the instructors were very good, and the kitchen was well stocked with coffee, soda, and snacks (see pictures). It was a three-day show covering the following topics:

Day 1:
  • Introduction to JBoss BRMS
  • Rules development process
  • The Drools rule language
  • JBoss Developer Studio (JBDS) - Drools IDE
  • RETE

This day started with a look at rule engines in general and what the BRMS actually includes, demonstrating the components in action with a demo. The look at a rules development process provided a framework for making your rules projects successful. Covering the Drools rule language took us through the basics, conditions, and consequences, with a closer look at the API for those programming interactions with the BRMS. We installed JBoss Developer Studio with the Drools tooling (free if you use JBoss Developer Studio) and looked at its various components, including the debugging facilities. The day finished with a closer look at how the rule engine really works; forward chaining and the RETE network were discussed in great detail, as we were of course technical geeks wanting to know exactly how it all works. There were labs for all these topics, allowing us to dig into the details on our own laptops.
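
To give a flavor of the API portion of the labs, here is a minimal sketch of loading a rule file and firing rules with the Drools 5 knowledge API. Treat it as illustrative only: the rules/sample.drl path and the Customer fact class are invented for the example.

import org.drools.KnowledgeBase;
import org.drools.KnowledgeBaseFactory;
import org.drools.builder.KnowledgeBuilder;
import org.drools.builder.KnowledgeBuilderFactory;
import org.drools.builder.ResourceType;
import org.drools.io.ResourceFactory;
import org.drools.runtime.StatefulKnowledgeSession;

// Parse and compile the rule file from the classpath.
KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
kbuilder.add(ResourceFactory.newClassPathResource("rules/sample.drl"), ResourceType.DRL);
if (kbuilder.hasErrors()) {
  throw new IllegalStateException(kbuilder.getErrors().toString());
}

// Assemble the knowledge base and open a stateful session.
KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase();
kbase.addKnowledgePackages(kbuilder.getKnowledgePackages());
StatefulKnowledgeSession ksession = kbase.newStatefulKnowledgeSession();
try {
  ksession.insert(new Customer("Eric", 37)); // hypothetical fact class
  ksession.fireAllRules();
} finally {
  ksession.dispose(); // always release session resources
}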

Day 2:
  • Decision tables
  • Business Rules Manager: administration and guided rule editor
  • Business Rules Manager: QA and deployment
  • JBDS BRM integration
  • Domain specific languages
This day started with a look at decision tables, as used by a business person to manage their rule base. We imported them and took a closer look at manipulating them through the API. The next topic was the Business Rules Manager (BRM), which is the product branding of the community web interface project better known as Guvnor. We looked at administration, assets, packages, rules, QA, and testing using this interface. My personal opinion is that the testing facilities are one of the stronger points of this product. The next topic was the integration of the development environment with the BRM interface. We looked at creating a rules repository, exploring it, and then adding assets from the developer's IDE to be manipulated and used in the BRM interface. Very good stuff! The day closed out with a detailed look at domain specific languages (DSL): what a mapping file is, rule files using a DSL, the programming API when using a DSL, and using the DSL in the BRM interface. Again, there were labs for all these topics, allowing us to dig into the details on our own laptops. On a side note, Burr Sutter (Product Manager) joined us for most of the workshop (see picture) to add comments about product direction and to jump in on any questions we had with regard to the various products.
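
On the decision table front, the nice part is that the same knowledge builder API consumes the spreadsheet directly; only the resource type changes. A minimal sketch, assuming a hypothetical rules/pricing.xls on the classpath:

import org.drools.builder.DecisionTableConfiguration;
import org.drools.builder.DecisionTableInputType;
import org.drools.builder.KnowledgeBuilder;
import org.drools.builder.KnowledgeBuilderFactory;
import org.drools.builder.ResourceType;
import org.drools.io.ResourceFactory;

// Tell the builder the resource is an XLS decision table.
DecisionTableConfiguration dtconf = KnowledgeBuilderFactory.newDecisionTableConfiguration();
dtconf.setInputType(DecisionTableInputType.XLS);

// Compile the spreadsheet just like any other rule resource.
KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
kbuilder.add(ResourceFactory.newClassPathResource("rules/pricing.xls"),
    ResourceType.DTABLE, dtconf);
if (kbuilder.hasErrors()) {
  throw new IllegalStateException(kbuilder.getErrors().toString());
}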

Day 3:
  • Advanced rule authoring
  • Execution control
  • Rule flow
  • Complex event processing (CEP)
  • Performance considerations
The final day started by digging into advanced rule authoring concepts like conditional elements, field constraints, nested accessors, property change listeners, and the dynamic rules API. We discussed execution control topics including salience, control facts, groups, duration, no-loop, and truth maintenance. We took a closer look at rule flow and how you can create one, add constraints, deploy it, and run it. This was a lot of fun for me personally, as I come from a jBPM background and know that product very well. After lunch we discussed the more advanced topic of complex event processing (CEP). This included event declaration, semantics, streams support, the event cloud, the session clock, temporal reasoning, sliding window support, rule based partitioning, and memory management. Performance considerations was a topic covering the best practices for using the JBoss BRMS: the various ways to prevent larger projects from performing poorly due to bad practices or bad rule writing. Again, there were labs for all these topics, allowing us to dig into the details on our own laptops (see picture).
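
Most of the CEP setup we touched happens when assembling the knowledge base and session, so here is a rough sketch of switching Drools Fusion into stream mode with a pseudo clock for testing temporal rules. The StockStream entry point and the StockTick event class are invented for the example:

import java.util.concurrent.TimeUnit;
import org.drools.KnowledgeBase;
import org.drools.KnowledgeBaseConfiguration;
import org.drools.KnowledgeBaseFactory;
import org.drools.conf.EventProcessingOption;
import org.drools.runtime.KnowledgeSessionConfiguration;
import org.drools.runtime.StatefulKnowledgeSession;
import org.drools.runtime.conf.ClockTypeOption;
import org.drools.runtime.rule.WorkingMemoryEntryPoint;
import org.drools.time.SessionPseudoClock;

// Use stream mode for event processing instead of the default cloud mode.
KnowledgeBaseConfiguration kconf = KnowledgeBaseFactory.newKnowledgeBaseConfiguration();
kconf.setOption(EventProcessingOption.STREAM);
KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase(kconf);
// ... add compiled knowledge packages here, as in the earlier sketch ...

// A pseudo clock lets a unit test control time for the temporal operators.
KnowledgeSessionConfiguration sconf = KnowledgeBaseFactory.newKnowledgeSessionConfiguration();
sconf.setOption(ClockTypeOption.get("pseudo"));
StatefulKnowledgeSession ksession = kbase.newStatefulKnowledgeSession(sconf, null);

// Insert events through a named entry point and advance the clock by hand.
WorkingMemoryEntryPoint stream = ksession.getWorkingMemoryEntryPoint("StockStream");
stream.insert(new StockTick("RHT", 21.50)); // hypothetical event class
SessionPseudoClock clock = (SessionPseudoClock) ksession.getSessionClock();
clock.advanceTime(10, TimeUnit.SECONDS);
ksession.fireAllRules();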

I will be taking all of this back to the Europe region and hopefully will be spreading the word soon at a location near you!

Monday, July 13, 2009

Creating a jBPM process repository for unit testing with par files

My latest project involves general infrastructure improvements based on my experiences with jBPM. There have been many discussions about the different aspects of a generic process structure that would enable launching new products in a more efficient, timely, and cost-effective manner.

One of the more interesting elements in the general planning is to provide the development organization with a process repository. It will enable developers to leverage the existing Maven infrastructure to provide projects with existing Process Archives, better known to jBPM developers as PAR files.

These files are similar to JAR files and contain the process definition along with any supporting Java classes needed to run it. The interesting part is that you want to define simple process components (granularity is important) so that they can be leveraged through reuse. The idea is that a new project can simply shop for existing process components and import these existing PAR files into the project, where they plug in at process-state nodes (these being the jBPM sub-flows).

This requires that a simple process is the initial building block and is set up with proper unit testing. Take, for example, a simple process (left empty here for posting purposes) that is used to transform input into some XML format via a single web service call. This call takes place in a state node, so below you can see the process definition of the SuperProcess, which calls the self-contained SubProcess by fishing it out of the PAR repository included in the project's dependencies.

Here is the SuperProcess definition:

<process-definition name="superprocess">

  <start-state name="start">
     <transition to="XML transform" name="to_process_state" />
  </start-state>

  <process-state name="XML transform">
    <sub-process name="xmltransform" />
    <transition to="end" name="to_end" />
  </process-state>

  <end-state name="end" />

</process-definition>

The SubProcess definition:

<process-definition name="xmltransform">

  <start-state name="start">
    <transition to="end" name="to_end" />
  </start-state>

  <end-state name="end" />

</process-definition>

The unit test for the SubProcess (trivial in this example):

@Test
public void testXmltransformProcess() throws Exception {

  // Extract the process definition from the processdefinition.xml file.
  ProcessDefinition subProcessDefinition = 
    ProcessDefinition.parseXmlResource("xmltransform/processdefinition.xml");

  // Test it.
  assertNotNull("Definition should not be null", subProcessDefinition);

  // Set up the subprocess instance.
  ProcessInstance subProcessInstance = new ProcessInstance(subProcessDefinition);

  assertEquals(
    "Instance is in start state", 
    "start",
    subProcessInstance.getRootToken().getNode().getName());

  // Move the process instance out of the start state.
  subProcessInstance.signal();
  
  assertEquals(
    "Instance is in end state", 
    "end",
    subProcessInstance.getRootToken().getNode().getName());

  assertTrue("Instance has ended", subProcessInstance.hasEnded());

}

Now the unit test for the SuperProcess:

@Test
public void testSuperProcess() throws Exception {

  // Extract a process definition from the par file.
  ProcessDefinition subProcessDefinition = 
    ProcessDefinition.parseParResource("xmltransform.par");

  ProcessDefinition superProcessDefinition = 
    ProcessDefinition.parseXmlResource("superprocess/processdefinition.xml");

  // Test it.
  assertNotNull("Definition should not be null", subProcessDefinition);
  assertNotNull("Definition should not be null", superProcessDefinition);

  // Setup super and subprocesses.
  ProcessState processState = (ProcessState) superProcessDefinition.getNode("XML transform");
  processState.setSubProcessDefinition(subProcessDefinition);
        
  ProcessInstance superProcessInstance = new ProcessInstance(superProcessDefinition);
  ProcessInstance subProcessInstance   = new ProcessInstance(subProcessDefinition);
   
  // Get our process tokens.
  Token superToken = superProcessInstance.getRootToken();
  Token subToken   = subProcessInstance.getRootToken();

  assertEquals(
    "Instance is in start state",
    superProcessDefinition.getStartState(), 
    superToken.getNode());

  // Move the process instance from its start state to the first state.
  superToken.signal();

  // Subprocess in start?
  assertEquals(
    "SubProcess instance is in start state", 
    subProcessDefinition.getNode("start"), 
    subToken.getNode());
  
  // Move subprocess out of start.
  subToken.signal();
  
  // Subprocess ended?
  assertEquals(
    "SubProcess instance is in end state", 
    subProcessDefinition.getNode("end"), 
    subToken.getNode()); 
  assertTrue("SubProcess instance has ended", subProcessInstance.hasEnded());

  // Superprocess ended?
  assertEquals(
    "SuperProcess instance is in end state", 
    superProcessDefinition.getNode("end"), 
    superToken.getNode()); 
    assertTrue("SuperProcess instance has ended", superProcessInstance.hasEnded());

Of course these are really simple examples. We are looking into unit testing much more complex processes that include task-nodes, state-nodes, and processes with several layers of nested process-states. Does anyone have experience with these kinds of unit tests?
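
As a starting point for the task-node case, this is the kind of test I have in mind. It is only a sketch; the inline definition and the 'approve' task are invented for illustration:

@Test
public void testTaskNodeProcess() throws Exception {

  // An inline definition keeps the sketch self contained.
  ProcessDefinition definition = ProcessDefinition.parseXmlString(
    "<process-definition name='review'>" +
    "  <start-state name='start'>" +
    "    <transition to='review task' name='to_task' />" +
    "  </start-state>" +
    "  <task-node name='review task'>" +
    "    <task name='approve' />" +
    "    <transition to='end' name='to_end' />" +
    "  </task-node>" +
    "  <end-state name='end' />" +
    "</process-definition>");

  ProcessInstance instance = new ProcessInstance(definition);

  // Entering the task-node creates the task instance.
  instance.signal();

  TaskInstance task = (TaskInstance)
    instance.getTaskMgmtInstance().getTaskInstances().iterator().next();

  // Ending the task moves the token over the default transition.
  task.end();

  assertTrue("Instance has ended", instance.hasEnded());
}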

Note: these are jBPM v3.x processes.

Thursday, July 9, 2009

Bedrijven hebben open source in eigen handen (Companies have open source in their own hands)

My opinion article (in Dutch) on how companies could profit more from open source projects by thinking outside of the box to solve their issues.

Bedrijven hebben open source in eigen handen

Check out the comments. It remains hard for some people to accept that there are developers in the open source world who can and will deliver good quality work. ;-)

Wednesday, July 8, 2009

Quick start setup of postgresql on Fedora

Today I needed to have PostgreSQL running on a new Fedora installation so I could set up a demo for JBoss Operations Network (JBoss ON).

This is what I did:

# First install server as root.
#
$ yum install postgresql-server

# init the database (note SELinux might complain
# if not in permissive mode) which is found in
# /var/lib/pgsql/data/*.
#
$ service postgresql initdb

# start the database service.
#
$ service postgresql start

# to setup users and databases we need to use
# the default 'postgres' user account, so login
# to this session.
#
$ su -l postgres

# create a new user for the jboss demo user 
# and database in my case.
#
$ createuser --no-superuser --no-createdb --no-createrole jbossdemo

$ createdb --owner jbossdemo jbossdemo

# leave the postgres account and return to root.
#
$ exit

# allow local connections to this account by adding the
# following lines to the pg_hba.conf:
#
#  TYPE  DATABASE  USER      CIDR-ADDRESS   METHOD
#  local jbossdemo jbossdemo                trust
#  host  jbossdemo jbossdemo 127.0.0.1/32   trust
#  host  postgres  jbossdemo 127.0.0.1/32   trust
#
$ vim /var/lib/pgsql/data/pg_hba.conf

# reload service to acquire changes.
#
$ service postgresql reload

# now install the admin console for normal db
# administration. Search the internet for repos
# should your install not have this package.
#
$ yum install pgadmin3

Now I can install JBoss ON and use the jbossdemo account and database for my installation demo.
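
For a quick sanity check that the trust entries really let the demo account in before running the JBoss ON installer, a simple JDBC connect along these lines should succeed (assuming the PostgreSQL JDBC driver jar is on the classpath):

import java.sql.Connection;
import java.sql.DriverManager;

public class PgCheck {
  public static void main(String[] args) throws Exception {
    // Older drivers need an explicit registration.
    Class.forName("org.postgresql.Driver");

    // Trust authentication, so the empty password is accepted.
    Connection conn = DriverManager.getConnection(
        "jdbc:postgresql://127.0.0.1:5432/jbossdemo", "jbossdemo", "");
    System.out.println("connected: " + !conn.isClosed());
    conn.close();
  }
}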

# some handy administration tips for your installation.
#
$ yum install postgresql-contrib

# install server instrumentation functions as 'postgres' user.
#
$ su -l postgres

$ psql --file /usr/share/pgsql/contrib/adminpack.sql

# schedule maintenance in daily cron job at
# /etc/cron.daily/pgsqlmaint. Put this in the 
# file:
#
#  #!/bin/sh
#  su -c 'vacuumdb --all --full --analyze' postgres
#  su -c 'reindexdb --all' postgres
#
$ vim /etc/cron.daily/pgsqlmaint

# back to root.
#
$ exit

# make it executable.
#
$ chmod a+x /etc/cron.daily/pgsqlmaint

# you are often going to need more prepared transactions
# than the default allows, so adjust this as follows.
$ su -l postgres

# change this line to look as follows:
#
#  max_prepared_transactions = 100
#
$ vim /var/lib/pgsql/data/postgresql.conf

# back to root.
#
$ exit

# restart the service, since max_prepared_transactions
# is only picked up on a full server restart.
#
$ service postgresql restart

Friday, July 3, 2009

Help push the open sourcing of a jBPM Migration Tool

Just back from vacation, I saw this go by on the mailing lists of the jBPM project: a possible solution to a problem many of us have encountered, migrating existing process definitions (and their running processes) over to a new version of the process definition. Check out the overview at Caleb Powell's blog.

It appears that this might be getting pushed into the jBPM project as a separate tool for the 3.x versions, so drop over to the blog and post your support for them to contribute this tooling. Really great stuff!

===================== UPDATE ======================
This looks like it will happen, see the jBPM forum thread.