(Article guest authored together with John Hurlocker, Senior Middleware Consultant at Red Hat in North America)
In this week's tips & tricks we will be slowing down and taking a closer look at possible Red Hat JBoss BRMS deployment architectures.
When we talk about deployment architectures, we are referring to the options you have for deploying a rules and/or events project in your enterprise.
This is the actual runtime architecture that you need to plan for at the start of your design phases, as it determines the best way to deploy your upcoming application for your enterprise and infrastructure. It will also most likely affect how you design the application itself, so being aware of your options should help make your projects a success.
This will be a multi-part series that will introduce the deployment architectures in phases, starting this week with the first two architectures.
The possibilities
A rule administrator or architect works with the application team(s) to design the runtime architecture for rules, and depending on the organization's needs the architecture could be any one of the following designs or a hybrid of them. In this series we will present four different deployment architectures and discuss one design time architecture, providing the pros and cons for each so that you can evaluate them for your own needs.
The basic components in these architectures shown in the accompanying illustrations are:
- JBoss BRMS server
- Rules developer / Business analyst
- Version control (GIT)
- Deployment servers (JBoss EAP)
- Clients using your application
Illustration 1: Rules in application
Rules deployed in application
The first architecture is the most basic and static in nature of all the options you have for deploying rules and events in your enterprise architecture. A deployable rule package (e.g. JAR) is included in your application's deployable artifact (e.g. EAR, WAR).
In this architecture the JBoss BRMS server acts as a repository to hold your rules and a design time tool. Illustration 1 shows how the JBoss BRMS server is and remains completely disconnected from the deployment or runtime environment.
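As a minimal sketch of what this looks like in application code, the snippet below loads the bundled rule JAR from the application's own classpath via the KIE API; the session name "ksession-rules" is a hypothetical name that would be defined in the rule project's kmodule.xml.

```java
import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;

public class EmbeddedRulesExample {
    public static void main(String[] args) {
        // Build a KIE container from the application's classpath,
        // i.e. the rule JAR packaged inside the EAR/WAR.
        KieServices kieServices = KieServices.Factory.get();
        KieContainer kieContainer = kieServices.getKieClasspathContainer();

        // "ksession-rules" is a hypothetical session name from kmodule.xml.
        KieSession kieSession = kieContainer.newKieSession("ksession-rules");
        try {
            // kieSession.insert(someFact); // insert your domain facts here
            kieSession.fireAllRules();
        } finally {
            kieSession.dispose();
        }
    }
}
```

Because the rules live inside the deployed artifact, updating them means rebuilding and redeploying the application, which is exactly the trade-off captured in the pros and cons below.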
Pros
- Typically better performance than using a rule execution server since the rule execution occurs within the same JVM as your application
Cons
- No ability to push rule updates to production applications
- requires a complete rebuild of the application
- requires a complete re-testing of the application (Dev - QA - PROD)
Illustration 2: KieScanner deployment
Rules scanned from application
A second architecture, a slight modification of the previous one, adds a scanner to your application that monitors for new rule and event updates, pulling them in as they are deployed into your enterprise architecture. The JBoss BRMS API contains a KieScanner that monitors the rules repository for new rule package versions. Once a new version is available it will be picked up by the KieScanner and loaded into your application, as shown in illustration 2.
The Cool Store demo project provides an example that demonstrates the usage of the JBoss BRMS KieScanner, with an example implementation showing how to scan your rule repository for the latest freshly built package.
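As a rough sketch of the KieScanner setup, the code below creates a container from a release ID and attaches a scanner that polls the repository for newer versions of the rule package; the GAV coordinates and the polling interval are hypothetical values used only for illustration.

```java
import org.kie.api.KieServices;
import org.kie.api.builder.KieScanner;
import org.kie.api.builder.ReleaseId;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;

public class KieScannerExample {
    public static void main(String[] args) {
        KieServices kieServices = KieServices.Factory.get();

        // Hypothetical GAV coordinates of the rule package in the repository.
        // Using "LATEST" lets the scanner pick up each newly built version.
        ReleaseId releaseId = kieServices.newReleaseId("com.example", "rules", "LATEST");
        KieContainer kieContainer = kieServices.newKieContainer(releaseId);

        // Poll the repository every 10 seconds and update the container
        // in place whenever a new rule package version is found.
        KieScanner kieScanner = kieServices.newKieScanner(kieContainer);
        kieScanner.start(10_000L);

        // Sessions created from the container after an update use the new rules.
        KieSession kieSession = kieContainer.newKieSession();
        kieSession.fireAllRules();
        kieSession.dispose();
    }
}
```

The key design point is that the scanner updates the KieContainer without an application redeploy, which is what makes the real-time rule updates listed in the pros below possible.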
Pros
- No need to restart your application servers
- in some organizations the deployment process for applications can be very lengthy
- this allows you to push rule updates to your application(s) in real time
Cons
- Need to create a deployment process for testing the rule updates with the application(s)
- risk of pushing incorrect logic into application(s) if the above process doesn't test thoroughly