Setting up a JBPM 6.4 (latest) process environment

I had a great vacation, during which I took the opportunity to learn JBPM and how to set it up in WildFly. Below are the steps to set up a fully functional process designer, where we can develop a process with a dynamic UI, deploy it as a RESTful service, call the application from an Angular sample, and invoke a human workflow. Note that JBoss may fail to start when you are behind a company firewall.

  • Install Java 7 or Java 8 on your machine, then download and unzip wildfly-8.2.0.Final.zip into a folder
  • Go to the <WILDFLYHOME>/bin directory, run add-user.bat, and create the following users:
    1. Management user: admin/pass, with no role
    2. Application user: kieserver/kieserver1!, with kie-server as the role
    3. Application user: user4/pass, with admin as the role
  • Go to <WILDFLYHOME>/standalone/configuration, rename standalone.xml to standalone1.xml, rename standalone-full.xml to standalone.xml, and replace the code below under the security-domains block in that xml file.


<security-domain name="other" cache-type="default">
    <authentication>
        <login-module code="Remoting" flag="optional">
            <module-option name="password-stacking" value="useFirstPass"/>
        </login-module>
        <login-module code="RealmUsersRoles" flag="required">
            <module-option name="usersProperties" value="${jboss.server.config.dir}/application-users.properties"/>
            <module-option name="rolesProperties" value="${jboss.server.config.dir}/application-roles.properties"/>
            <module-option name="realm" value="ApplicationRealm"/>
            <module-option name="password-stacking" value="useFirstPass"/>
        </login-module>
    </authentication>
</security-domain>
<security-domain name="jboss-web-policy" cache-type="default">
    <authorization>
        <policy-module code="Delegating" flag="required"/>
    </authorization>
</security-domain>
<security-domain name="jboss-ejb-policy" cache-type="default">
    <authorization>
        <policy-module code="Delegating" flag="required"/>
    </authorization>
</security-domain>
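The user-creation and config-swap steps above can also be scripted. Below is a hedged sketch, assuming WildFly's non-interactive add-user flags (-u, -p, -a for the application realm, -g for groups) and a standard wildfly-8.2.0.Final layout; on Windows use add-user.bat with the same flags:

```shell
# Sketch only: run from <WILDFLYHOME>/bin against your own install.
./add-user.sh -u admin -p pass                               # management user, no role
./add-user.sh -a -u kieserver -p 'kieserver1!' -g kie-server # application user
./add-user.sh -a -u user4 -p pass -g admin                   # application user

# Swap in the full profile so standalone.xml carries the full subsystem set:
cd ../standalone/configuration
mv standalone.xml standalone1.xml
mv standalone-full.xml standalone.xml
```

After the swap, edit the security-domains block of the new standalone.xml as shown earlier.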

 

Let me know your feedback. Also try the various sample processes, such as HR and Evaluation, in the KIE Workbench.

Writing Medium Complex Web Application using Spring Boot – Part 2

For folks in a hurry, get the latest code from GitHub and run the application as per the instructions in the README file.

This blog explains how we set up our development environment for building this application.

Setting up Postgres Database with schema:

The application schema file is present under the sqlscripts folder as schema.sql. Install the Postgres database and start it using the command below:

# Create a database with user "postgres" and password "password",
# then run this command
<POSTGRES_HOME>/bin/postgres -D ..\data

Open the pgAdmin Windows application, create a database called “pg”, then open the above script and run it.
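If you prefer the command line to pgAdmin, the same setup can be sketched with the stock Postgres client tools (assuming createdb and psql are on the PATH and the "postgres"/"password" credentials from above):

```shell
# Create the "pg" database and load the application schema into it.
createdb -U postgres pg
psql -U postgres -d pg -f sqlscripts/schema.sql
```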

First let us look at the POM file.

This POM file is configured to reverse-engineer the database tables into JPA components. We are using “org.apache.openjpa.jdbc.meta.ReverseMappingTool” as below:

             <plugin>
                <groupId>org.codehaus.mojo</groupId>
                <artifactId>exec-maven-plugin</artifactId>
                <version>1.2.1</version>
                <configuration>
                    <mainClass>org.apache.openjpa.jdbc.meta.ReverseMappingTool</mainClass>
                    <arguments>
                        <argument>-directory</argument>
                        <argument>target/generated-sources/java</argument> <!-- or target/generated/model -->
                        <argument>-accessType</argument>
                        <argument>fields</argument>
                        <argument>-useGenericCollections</argument>
                        <argument>true</argument>
                        <argument>-package</argument>
                        <argument>org.hcl.test.model</argument>
                        <argument>-innerIdentityClasses</argument>
                        <argument>false</argument>
                        <argument>-useBuiltinIdentityClass</argument>
                        <argument>false</argument>
                        <argument>-primaryKeyOnJoin</argument>
                        <argument>false</argument>
                        <argument>-annotations</argument>
                        <argument>true</argument>
                        <argument>-p</argument>
                        <argument>src/main/resources/META-INF/reverse-persistence.xml</argument>
                        <argument>-nullableAsObject</argument>
                        <argument>false</argument>
                    </arguments>
                    <includePluginDependencies>true</includePluginDependencies>
                </configuration>
                <dependencies>
                    <dependency>
                        <groupId>javax.validation</groupId>
                        <artifactId>validation-api</artifactId>
                        <version>1.1.0.Final</version>
                    </dependency>
                    <dependency>
                        <groupId>org.apache.openjpa</groupId>
                        <artifactId>openjpa-all</artifactId>
                        <version>2.4.0</version>
                    </dependency>
                    <dependency>
                        <groupId>org.postgresql</groupId>
                        <artifactId>postgresql</artifactId>
                        <version>9.3-1100-jdbc4</version>
                    </dependency>
                </dependencies>
            </plugin>

The reverse-persistence.xml file looks like this:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<persistence xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" version="2.0" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd">
    <persistence-unit name="openjpa">
        <provider>org.apache.openjpa.persistence.PersistenceProviderImpl</provider>        
            <properties>
            <property name="openjpa.ConnectionUserName" value="postgres"/>
            <property name="openjpa.ConnectionPassword" value="password"/>
            <property name="openjpa.ConnectionURL" value="jdbc:postgresql://localhost:5432/pg"/>  
            <property name="openjpa.ConnectionDriverName"  value="org.postgresql.Driver"/>
            <property name="openjpa.jdbc.DBDictionary" value="postgres" />
            <property name="openjpa.Log" value="SQL=TRACE"/>
        </properties>
    </persistence-unit>
</persistence>

Once we have configured the above, running the command below will generate all the JPA POJOs:

mvn clean exec:java test

There is also a way to create the schema dynamically without the need for Postgres; this can be an exercise for you.

Writing Medium Complex Web Application using Spring Boot – Part 1

Introduction

The Pivotal team has done a great job of coming up with the Spring Boot microservice application stack. In the next few blogs, I will demonstrate the capabilities of Spring Boot by writing a medium-complex application (not a todo application) using an industry-leading polyglot stack: Angular.js, Spring JPA, Spring Boot, and a Postgres database in the backend. I will also demonstrate how to run this application on Heroku.

Design consideration

  • With respect to Spring Boot
    • Takes away a lot of the XML headaches of Spring MVC and follows strong convention over configuration
    • Works like Ruby, with lots of starter JAR dependencies such as spring-boot-starter-web or spring-boot-starter-data-jpa, which help in quickly setting up all the dependencies. There are a whole bunch of them
    • Encourages the concept of microservices, where we can build low-footprint, self-contained services that we can deploy in a PaaS like Heroku
  • In addition to these goodies, this sample application also imposes a few constraints of its own,
    • Minimum boilerplate coding, a minimum amount of handcrafted POJOs, and almost no XML configuration
    • The design is based on Test-Driven Development (TDD) on both the Java and Javascript sides

The Architecture

[Architecture diagram]

The Application

What does it do?

This is a sample application that lets a user register and purchase books online. It has roughly 10 screens and two actors, the user and the admin. It will have the capabilities below,

  • The user can do the following actions
    • Register
    • Log in to the system
    • View the books and add them to the shopping cart
    • Check out the books
  • Currently the admin is hardcoded in the system, and the admin can do the following,
    • Approve the order to be shipped

Technology Stack

Frontend:

Angular.js, Bootstrap

Serverside:

Spring Boot: spring-boot-starter-web, spring-boot-starter-data-jpa, spring-boot-starter-test

Database:

Postgres for the application and H2 for JUnit testing.

Modules:

The blog is divided into multiple sections:

  • Setting up the environment for development
  • JUnit testing of the DAO layer
  • JUnit testing of the Service layer
  • JUnit testing of the Controller
  • JUnit testing of Javascript and integration with the build system

High Performance Scientific Computing on the Cloud?

What is this blog about?

Scientific computing has been dominated and owned by elite scientists and engineers from premium universities and large engineering and aeronautical companies, leaving little scope for students and enthusiasts to understand and learn it.

With the advent of Amazon, Rackspace, and Penguin as cloud leaders, has this changed? Let us explore in this blog. This blog targets geeks like me who want to set up an open-source solver on the HPC cloud.

Background on what I have been doing:

For the past year, I have been working with one of the world's leading aeronautical companies, automating their aircraft analysis and design workflow processes. There were a lot of security restrictions on accessing any of their solvers or understanding how their high-performance computing servers work. This is very close to my background in mechanical engineering, and since I had done a project in CFD and finite element analysis using Ansys a while ago, I wanted to understand what is going on. Being a geek, I was curious about the nuts and bolts of high-performance scientific computing. Below is what I have understood so far.

Nuts and Bolts of High Performance Scientific Computing

Solvers:

There are a lot of open-source solvers, and there is a good discussion on why there isn’t an open-source CFD solution for everyone. One of the leading ones is the Stanford University Unstructured (SU2) solver. SU2 is the backbone of some cloud-based tools like SimScale. Interestingly, the source code is on GitHub, so let us dive into it. It will take hardly a couple of hours to set this solver up on Ubuntu Linux, run a simple airfoil mesh generation, and view the airfoil in an open-source plot viewer, ParaView.

For a quick start, you need g++ and the make utility. Git clone the SU2_EDU source code and run SU2_EDU first; the instructions are provided in the link itself. Before running, go to the bin directory and make a few tweaks to the configuration file ConfigFile_RANS.cfg, as below.

# Replace OUTPUT_FORMAT= TECPLOT with OUTPUT_FORMAT= PARAVIEW
OUTPUT_FORMAT= PARAVIEW

# Run the command ./SU2_EDU
# Select option 1
# Select the file airfoil_rae2822_lednicer.dat
# It will run for 2000 iterations and create the flow.vtk and solver_flow.vtk files.

Once your run is complete, open the ParaView GUI and load the flow.vtk and solver_flow.vtk files; you can see the airfoil mesh and volume grids.

Cloud based HPC:

A quick Google search will show there are two leaders in HPC on the cloud: Rackspace and Penguin Computing. Penguin has a free plan to run a 5-minute HPC job. You just need to register and run a job to experience how to run a solver on HPC.

Integrating Solver with HPC:

If you want to run the solver on a cloud-based HPC server, the key is to rebuild SU2 with OpenMPI capabilities, as below:

./configure --with-MPI=mpicxx --with-Metis-lib=/usr/local/metis-4.0.3 --with-Metis-include=/usr/local/metis-4.0.3/Lib

Once you build it, you can upload the binaries to the cloud and run the solver on as many CPUs as you want; the response time will be considerably good. One way to check the details of the job is:

qstat -w <the PBS id returned>
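On a PBS-style cluster like Penguin's, the MPI build above is typically wrapped in a submission script. Below is a hedged sketch; the job name, node counts, walltime, and the SU2_CFD binary/config names are placeholders to adapt to your own account and build:

```shell
#PBS -N su2_airfoil          # job name
#PBS -l nodes=2:ppn=8        # 2 nodes x 8 cores each
#PBS -l walltime=00:05:00    # fits a free 5-minute plan
cd "$PBS_O_WORKDIR"
# Run the MPI-enabled solver binary across all 16 cores:
mpirun -np 16 ./SU2_CFD ConfigFile_RANS.cfg
```

Submit the script with qsub, which returns the PBS id you then pass to qstat as shown above.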

Final Verdict

It is absolutely possible to set up a decent open-source solver on the HPC cloud and run a few solutions. I hope this blog was helpful.

Physics Engine and HTML5 Canvas

Intent of this blog space: The intent of this blog space is to build a testbed for trying out various concepts of physics engines, modelling computer graphical elements, and visually testing them from any device using a physics engine and the HTML5 Canvas. For the people in a hurry,

  • Click here to see the progress of the testbed
  • Get the latest code from my GitHub and follow the instructions on how to set up the application and run it locally.

In this blog I will discuss what a physics engine is and the various frameworks that support one. I will also discuss the technologies that pair a physics engine with the HTML5 Canvas.

What is a physics engine? A physics engine is a computer simulation tool used extensively in game development, education, and various other applications. A quick YouTube search will yield various links showing applications of Box2D. Such a tool applies physical laws like gravity, friction, and force to objects displayed on the screen. There are a lot of different tools and languages that support physics engines. The Box2D API is one of the standard frameworks, supported in various programming languages such as C++, Java, Javascript, Clojure, and Python. All of these also ship a testbed for writing applications against the physics engine and testing them in a GUI. There is also an IDE built around a physics engine, called iforce2d RUBE.

What is HTML5 Canvas? HTML5 Canvas is becoming the de facto standard for displaying graphics on a web page. Its closest competitors are Flash and WebGL.

In the next few blogs I will discuss the architecture of how I built the application, so stay tuned.