
Setting up a JBPM 6.4 (latest) process environment

I had a great vacation, during which I took the opportunity to learn JBPM and how to set it up in WildFly. Below are the steps to set up a fully functional process designer: we will develop a process with a dynamic UI, expose it as a RESTful service, call it from a sample Angular application, and invoke a human workflow. Note that JBoss may fail to start when you are behind a company firewall.

  • Have Java 7 or Java 8 on your machine, then download and unzip wildfly-8.2.0.Final.zip into a folder
  • Go to the <WILDFLYHOME>/bin directory, run add-user.bat, and create the following users:
    1. Management user: admin/pass, with no role
    2. Application user: kieserver/kieserver1!, with kie-server as the role
    3. Application user: user4/pass, with admin as the role
  • Go to <WILDFLYHOME>/standalone/configuration, rename standalone.xml to standalone1.xml, rename standalone-full.xml to standalone.xml, and replace the code below under the security-domains block in the XML file.


<security-domain name="other" cache-type="default">
  <authentication>
    <login-module code="Remoting" flag="optional">
      <module-option name="password-stacking" value="useFirstPass"/>
    </login-module>
    <login-module code="RealmUsersRoles" flag="required">
      <module-option name="usersProperties" value="${jboss.server.config.dir}/application-users.properties"/>
      <module-option name="rolesProperties" value="${jboss.server.config.dir}/application-roles.properties"/>
      <module-option name="realm" value="ApplicationRealm"/>
      <module-option name="password-stacking" value="useFirstPass"/>
    </login-module>
  </authentication>
</security-domain>
<security-domain name="jboss-web-policy" cache-type="default">
  <authorization>
    <policy-module code="Delegating" flag="required"/>
  </authorization>
</security-domain>
<security-domain name="jboss-ejb-policy" cache-type="default">
  <authorization>
    <policy-module code="Delegating" flag="required"/>
  </authorization>
</security-domain>
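The user-creation step from earlier can be sketched as a shell session. This is a sketch assuming a Linux/macOS machine, where the script is add-user.sh (on Windows, add-user.bat takes the same options); check `add-user.sh --help` for your WildFly version before relying on these flags:

```shell
cd $WILDFLY_HOME/bin

# Management user (ManagementRealm is the default realm; no role needed)
./add-user.sh -u admin -p pass

# Application users: -a targets the ApplicationRealm, -g assigns the group/role
./add-user.sh -a -u kieserver -p 'kieserver1!' -g kie-server
./add-user.sh -a -u user4 -p pass -g admin
```

The groups written with -g end up in application-roles.properties, which is exactly the file the RealmUsersRoles login module above reads.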


Let me know your feedback. Also try the various sample processes, like HR and Evaluation, in the KIE Workbench.

Writing Medium Complex Web Application using Spring Boot – Part 2

For folks in a hurry, get the latest code from GitHub and run the application as per the instructions in the README file.

This blog explains how we set up our development environment for building this application.

Setting up Postgres Database with schema:

The application schema file is present under a folder called sqlscripts (sqlscripts/schema.sql). Install the Postgres database and start it using the command below:

# Create a database with user as "postgres" and password as "password",
# then run this command
<POSTGRES_HOME>/bin/postgres -D ..\data

Open the pgAdmin Windows application, create a database called “pg”, then open the above script and run it.

First, let us look at the POM file.

This POM file is configured to reverse-engineer tables into JPA components. We are using org.apache.openjpa.jdbc.meta.ReverseMappingTool as below:

             <plugin>
                <groupId>org.codehaus.mojo</groupId>
                <artifactId>exec-maven-plugin</artifactId>
                <version>1.2.1</version>
                <configuration>
                    <mainClass>org.apache.openjpa.jdbc.meta.ReverseMappingTool</mainClass>
                    <arguments>
                        <argument>-directory</argument>
                        <argument>target/generated-sources/java</argument> <!-- or target/generated/model -->
                        <argument>-accessType</argument>
                        <argument>fields</argument>
                        <argument>-useGenericCollections</argument>
                        <argument>true</argument>
                        <argument>-package</argument>
                        <argument>org.hcl.test.model</argument>
                        <argument>-innerIdentityClasses</argument>
                        <argument>false</argument>
                        <argument>-useBuiltinIdentityClass</argument>
                        <argument>false</argument>
                        <argument>-primaryKeyOnJoin</argument>
                        <argument>false</argument>
                        <argument>-annotations</argument>
                        <argument>true</argument>
                        <argument>-p</argument>
                        <argument>src/main/resources/META-INF/reverse-persistence.xml</argument>
                        <argument>-nullableAsObject</argument>
                        <argument>false</argument>
                    </arguments>
                    <includePluginDependencies>true</includePluginDependencies>
                </configuration>
                <dependencies>
                    <dependency>
                        <groupId>javax.validation</groupId>
                        <artifactId>validation-api</artifactId>
                        <version>1.1.0.Final</version>
                    </dependency>
                    <dependency>
                        <groupId>org.apache.openjpa</groupId>
                        <artifactId>openjpa-all</artifactId>
                        <version>2.4.0</version>
                    </dependency>
                    <dependency>
                        <groupId>org.postgresql</groupId>
                        <artifactId>postgresql</artifactId>
                        <version>9.3-1100-jdbc4</version>
                    </dependency>
                </dependencies>
            </plugin>

The reverse-persistence.xml file looks like this,

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<persistence xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" version="2.0" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd">
    <persistence-unit name="openjpa">
        <provider>org.apache.openjpa.persistence.PersistenceProviderImpl</provider>        
            <properties>
            <property name="openjpa.ConnectionUserName" value="postgres"/>
            <property name="openjpa.ConnectionPassword" value="password"/>
            <property name="openjpa.ConnectionURL" value="jdbc:postgresql://localhost:5432/pg"/>  
            <property name="openjpa.ConnectionDriverName"  value="org.postgresql.Driver"/>
            <property name="openjpa.jdbc.DBDictionary" value="postgres" />
            <property name="openjpa.Log" value="SQL=TRACE"/>
        </properties>
    </persistence-unit>
</persistence>

Once the above steps are configured, running the command below will generate all the JPA POJOs:

mvn clean exec:java test

There is also a way to create a dynamic schema without the need for Postgres; this is left as an exercise for you.

Writing Medium Complex Web Application using Spring Boot – Part 1

Introduction

The Pivotal team has done a great job of coming up with the Spring Boot microservice application stack. In the next few blogs, I will demonstrate the capabilities of Spring Boot by writing a medium-complex application (not a Todo application) using an industry-leading polyglot stack: Angular.js, Spring JPA, Spring Boot, and a Postgres database on the backend. I will also demonstrate how to run this application on Heroku.

Design consideration

  • With respect to Spring Boot
    • Takes away a lot of the XML headaches of Spring MVC and follows strong convention over configuration
    • Works like Ruby on Rails, with many starter dependencies such as spring-boot-starter-web and spring-boot-starter-data-jpa, which help in quickly setting up all the dependencies. There are a whole bunch of them.
    • Encourages the concept of microservices, where we can build low-footprint, self-contained services that we can deploy in a PaaS like Heroku
  • In addition to these goodies, this sample application also imposes a few constraints of its own:
    • Minimal boilerplate code, a minimal number of handcrafted POJOs, and almost no XML configuration
    • A design based on test-driven development (TDD), on both the Java and JavaScript sides

The Architecture


The Application

What does it do?

This is a sample application that lets a user register and purchase books online. It has roughly 10 screens and 2 actors: the user and the admin. It has the following capabilities:

  • The user can do the following actions:
    • Register
    • Log in to the system
    • View the books and add them to the shopping cart
    • Check out the books
  • Currently the admin is hardcoded in the system and can do the following:
    • Approve the order to be shipped

Technology Stack

Frontend:

Angular.js, Bootstrap

Serverside:

Spring Boot: spring-boot-starter-web, spring-boot-starter-data-jpa, spring-boot-starter-test

Database:

Postgres for the application and H2 for Junit testing.

Modules:

The blog series is divided into multiple sections:

  • Setup the environment for development
  • JUnit testing of DAO layer
  • JUnit testing of Service layer
  • JUnit testing of Controller
  • JUnit testing of Javascript and integrating with Build system

An insight into Big Data

OLTP: Refers to Online Transaction Processing

It is a class of systems that help manage transaction-oriented applications (insert, update, delete, and get). Most OLTP applications are fast because the database is designed using 3NF (third normal form). OLTP systems are typically vertically scalable.

OLAP:  Refers to Online Analytical Processing

OLAP is used for business analytics and data-warehousing style workloads. Data processing is comparatively slow because it lets users analyze multidimensional data interactively from multiple data sources. The database schema used in OLAP applications is the STAR schema (facts and dimensions; normalization is given a pass and redundancy is the order of the day). OLAP systems scale horizontally.
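To make the facts-and-dimensions idea concrete, here is a minimal sketch in Java; the product dimension and sales fact data are invented for illustration. A typical OLAP query joins fact rows to a dimension table and aggregates:

```java
import java.util.HashMap;
import java.util.Map;

public class StarSchemaDemo {
    // Join each fact row to the dimension, then aggregate:
    // total sales per product category.
    static Map<String, Integer> totalsByCategory(Map<Integer, String> productDim,
                                                 int[][] salesFacts) {
        Map<String, Integer> totals = new HashMap<>();
        for (int[] fact : salesFacts) {
            String category = productDim.get(fact[0]); // dimension lookup
            totals.merge(category, fact[1], Integer::sum);
        }
        return totals;
    }

    public static void main(String[] args) {
        // Dimension table: productId -> category
        Map<Integer, String> productDim = new HashMap<>();
        productDim.put(1, "books");
        productDim.put(2, "music");

        // Fact table rows: {productId, salesAmount}
        int[][] salesFacts = { {1, 100}, {2, 40}, {1, 60} };

        System.out.println(totalsByCategory(productDim, salesFacts));
    }
}
```

The fact table stays flat and redundant; all descriptive attributes live in the small dimension table, which is exactly the STAR trade-off described above.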

OLTP vs. OLAP: There are some major differences between OLTP and OLAP.

Data Sharding:

The traditional database architecture implements vertical scaling: the table is split into a number of columns, which are kept separately in physical or logical groupings (a tree structure). This leads to performance trouble as the data grows, because memory, CPU, and disk space must be increased every time a performance problem is hit.

To eliminate this problem, Data Sharding (the shared-nothing concept) evolved: databases are scaled horizontally instead of vertically using a master/slave architecture, by breaking the database into shards and spreading those across a number of servers.

The Data Sharding concept is discussed in detail in this link.
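The routing idea behind sharding can be sketched in a few lines of Java. The ShardRouter class below is invented for illustration (production systems add replication, rebalancing, and consistent hashing); it simply maps each key deterministically to one of N shards:

```java
import java.util.List;

public class ShardRouter {
    private final List<String> shards;

    public ShardRouter(List<String> shards) {
        this.shards = shards;
    }

    // Hash the key into a shard index; floorMod keeps the index
    // non-negative even when hashCode() is negative. The same key
    // always routes to the same shard, spreading data horizontally.
    public String shardFor(String key) {
        return shards.get(Math.floorMod(key.hashCode(), shards.size()));
    }
}
```

With this scheme, adding capacity means adding servers (horizontal scaling) rather than growing a single machine.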

MPP (Massively Parallel Processing): Refers to the use of a large number of processors (or separate computers) to perform a set of coordinated computations in parallel.

MPP is also known as cluster computing, or the shared-nothing architecture discussed above.

Examples of MPP systems are Teradata and Greenplum.

Vertical and Horizontal Scaling:

Horizontal scaling means that you scale by adding more machines to your pool of resources, whereas vertical scaling means that you scale by adding more power (CPU, RAM) to an existing machine.

In the database world, horizontal scaling is often based on partitioning of the data, i.e., each node contains only part of the data; in vertical scaling, the data resides on a single node and scaling is done across multiple cores, i.e., by spreading the load between the CPU and RAM resources of that machine.

With horizontal scaling, it is often easier to scale dynamically by adding more machines to the existing pool; vertical scaling is limited to the capacity of a single machine.

Below are the differences between vertical and horizontal scaling.

CAP Theorem: The description for the CAP Theorem is discussed in this link.

Some insight on CAP: http://www.slideshare.net/ssachin7/scalability-design-principles-internal-session

Greenplum: Greenplum Database is a massively parallel processing (MPP) database server based on the open-source PostgreSQL technology. MPP (also known as shared-nothing architecture) refers to systems with two or more processors that cooperate to carry out an operation, each processor with its own memory, operating system, and disks.

The high-level overview of GreenPlum is discussed in this link.

Some Interesting Links:

http://www.youtube.com/watch?v=ph4bFhzqBKU,

http://www.youtube.com/watch?v=-7CpIrGUQjo

HBase: HBase is the Hadoop database: a distributed, scalable big-data store. It provides real-time read and write access to large datasets and uses a cluster (master/slave) architecture to store and retrieve data.

  • Hadoop is open-source software developed by Apache to store large data sets.
    • MapReduce is used for distributed processing of large data sets on clusters (master/slave). MapReduce takes care of scheduling tasks, monitoring them, and re-executing any failed tasks. Its primary objective is to split the input data set into independent chunks and send them to the cluster. MapReduce sorts the outputs of the maps, which are then input to the reduce tasks. Typically, both the input and the output of the job are stored in a file system.
    • The primary objective of the Hadoop Distributed File System (HDFS) is to store data reliably, even in the presence of failures. HDFS uses a master/slave architecture in which one device (the master) controls one or more other devices (the slaves). An HDFS cluster consists of one or more slaves, which actually hold the file system data, and a master server, which manages the file system namespace and regulates access to files.
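The map/shuffle/reduce flow described above can be sketched in miniature with plain Java streams. This runs on a single JVM; Hadoop distributes the same steps across a cluster:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class WordCount {
    // Map phase: split each line into words (the (word, 1) pairs);
    // shuffle + reduce phase: group identical words and sum their counts.
    static Map<String, Long> count(List<String> lines) {
        return lines.stream()
                .flatMap(line -> Arrays.stream(line.toLowerCase().split("\\s+")))
                .filter(w -> !w.isEmpty())
                .collect(Collectors.groupingBy(w -> w, Collectors.counting()));
    }

    public static void main(String[] args) {
        System.out.println(count(Arrays.asList("to be or not to be")));
    }
}
```

In Hadoop, the `flatMap` step runs as many parallel map tasks over HDFS blocks, and the `groupingBy`/`counting` step is performed by the reduce tasks after the framework sorts and shuffles the map output.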

The HBase documentation is provided in this link.

Some Interesting Links:

http://www.youtube.com/watch?v=UkbULonrP2o

http://www.youtube.com/watch?v=1Qx3uvmjIyU

http://www-01.ibm.com/software/data/infosphere/hadoop/hbase/

1. Does Greenplum achieve MPP? Does Greenplum use the Hadoop file system?

A) Greenplum vs. MPP: Greenplum follows the MPP (Massively Parallel Processing) architecture. The architecture is discussed with a use case here.

B) Greenplum vs. Hadoop: Yes, Greenplum can use Hadoop internally. The use case is discussed in detail in this link.

2. What are the different storage methodologies? Compare them.

Data storage methodologies: there are three different methodologies:

  • Row-based storage
  • Column-based storage
    • The difference between row- and column-based databases is described in this link.
  • NoSQL: described in detail in this link. It has three different flavors:
    • Key-value store
    • Document store
    • Column store

Some key points on different storage methodologies are,

  • Row-based
    • Description: data is structured and stored in rows.
    • Common use case: transaction processing and interactive transactional applications.
    • Strength: robust, proven technology to capture intermediate transactions.
    • Weakness: scalability and query-processing time for huge data.
    • Key players: Sybase, Oracle, MySQL, DB2.
  • Column-based
    • Description: data is vertically partitioned and stored in columns.
    • Common use case: historical data analysis, data warehousing, and business intelligence.
    • Strength: faster queries (especially ad-hoc queries) on large data.
    • Weakness: not suitable for transactions; import/export speed and heavy computing-resource utilization.
    • Size of DB: several GB to 50 TB.
    • Key players: InfoBright, Aster Data, Vertica, Sybase IQ, ParAccel.
  • NoSQL: key-value store
    • Description: data stored in memory with some persistent backup.
    • Common use case: caches for storing frequently requested data in applications.
    • Strength: scalable, fast retrieval of data, supports unstructured and partially structured data.
    • Weakness: all data should fit in memory; does not support complex queries.
    • Size of DB: several GBs to several TBs.
    • Key players: Amazon S3, Memcached, Redis, Voldemort.
  • NoSQL: document store
    • Description: persistent storage of unstructured or semi-structured data along with some SQL-like querying functionality.
    • Common use case: web applications or any application that needs better performance and scalability without defining columns in an RDBMS.
    • Strength: persistent store with scalability and better query support than a key-value store.
    • Weakness: lack of sophisticated query capabilities.
    • Size of DB: several TBs to PBs.
    • Key players: MongoDB, CouchDB, SimpleDB.
  • NoSQL: column store
    • Description: very large data store that supports Map-Reduce.
    • Common use case: real-time data logging in finance and web analytics.
    • Strength: very high throughput for big data, strong partitioning support, random read-write access.
    • Weakness: complex queries, availability of APIs, response time.
    • Size of DB: several TBs to PBs.
    • Key players: HBase, BigTable, Cassandra.
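As an illustration of the key-value row above: the core trick behind Memcached-style caches, where all data must fit in memory, is evicting the least-recently-used entry when capacity is exceeded. In Java that fits in a few lines using an access-ordered LinkedHashMap:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A tiny in-memory LRU key-value cache: an access-ordered LinkedHashMap
// evicts the least-recently-used entry once capacity is exceeded.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruCache(int capacity) {
        super(16, 0.75f, true); // true = iterate in access order
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // drop the least-recently-used entry
    }
}
```

Real stores such as Redis or Memcached add networking, persistence, replication, and richer data types on top of this core idea.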

Integrating Kickstrap with Spring MVC application

Get the latest code from GitHub.

In continuation of my earlier blogs on Introduction to Spring MVC, in this blog I will talk about Kickstrap and Less, and how Kickstrap can be integrated with our Spring MVC-based bookstore application.

If you go to the Kickstrap website, they describe it as Twitter Bootstrap on steroids. It is heavily based on Less, which is a dynamic stylesheet language. You can download Kickstrap from here.

One advantage of using Kickstrap is that themes can be swapped with a two-line change, as shown later in this post.

As a first step, we need to copy Kickstrap into the application folder as below:

kickstrap bookstore-app

The Tiles definition in tiles.xml in our Spring MVC application is as below:

<tiles-definitions>
  <definition name="template" template="/WEB-INF/templates/template.jsp">
    <put-attribute name="header" value="/WEB-INF/templates/header.jsp"/>
    <put-attribute name="footer" value="/WEB-INF/templates/footer.jsp"/>
  </definition>
  ...
</tiles-definitions>

In template.jsp you need to add kickstrap.less and less-1.3.0.min.js as below:

<link rel="stylesheet/less" type="text/css" href="/bookstore-example-with-mvc/resources/css/kickstrap.less">
<script src="/bookstore-example-with-mvc/resources/css/Kickstrap/js/less-1.3.0.min.js"></script>

If you look at template.jsp, we use some of Kickstrap's CSS classes, such as “container” and “row-fluid”; row-fluid helps with responsive web design.

Changing to different theme

In Kickstrap, changing to a different theme is easy. Open theme.less; currently we are using the cerulean theme, as below:

@import "Kickstrap/themes/cerulean/variables.less";
@import "Kickstrap/themes/cerulean/bootswatch.less";

Now run the command below, then open a browser and go to http://localhost:8080/bookstore-example-with-mvc:

mvn clean tomcat7:run

The theme looks as below,

cerulean-theme

Below are the available themes when you drill down into the Kickstrap folder:

themes-bookstore-app

If you want to change it to, let us say, the cyborg theme, you need to change kickstrap.less as below:

@import "Kickstrap/themes/cyborg/variables.less";
@import "Kickstrap/themes/cyborg/bootswatch.less";

Now run the command below, open the browser, and go to http://localhost:8080/bookstore-example-with-mvc; you will see the theme change:

mvn clean tomcat7:run

The theme is changed and looks as below,

cyborg-theme

I hope this blog helped you.

Reference:

Pro Spring MVC: With Web Flow by Marten Deinum and Koen Serneels

JUnit testing of Spring MVC application – Introduction

For people in a hurry, here is the GitHub code and the steps to run the sample.

Spring MVC is one of the leading Java web application frameworks, along with Struts. In the next few blogs, I will take a use case and demonstrate how to develop a good-quality web application in the enterprise Java world. I will also demonstrate the latest capabilities of Spring, like annotation-based configuration and its advantages over XML-based configuration. Prerequisites for this application are Java 7.x, Tomcat 7.x, Maven 3.x, and the STS IDE.

The use case I will be talking about is a bookstore application, in which a user registers and purchases books. There are also administration tasks, like creating the book catalog.

Spring MVC application architecture

Source Java9s: Spring MVC Architecture

For this sample application we extensively use the latest Spring MVC annotation capabilities. The major advantage is that we do not depend on any XML configuration, including web.xml. I personally like XML configuration with namespaces because it is readable. The advantage of annotations is that, since everything is Java, any typo is caught at compile time if we use the STS IDE, and we can write good-quality code and even measure code coverage. We also used Kickstrap as the CSS engine for this application. Here is a blog on integrating Kickstrap with a Spring MVC application.

What is code coverage?

Code coverage is a measure of how well you have unit tested your code. If you use a good IDE like STS, you can install eCobertura from its update site. Once eCobertura is set up, you can start developing your application. When you run your JUnit tests in “Cover as” mode, it shows the code coverage of your JUnit tests as below:

eCobertura STS-IDE view
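To connect coverage to code: a unit test is just code that exercises a unit and asserts on the result, and the lines the test executes are what a coverage tool marks as covered. With JUnit you would put the check in an @Test method; the CartService below is a made-up unit, sketched with plain assertions for illustration:

```java
import java.util.ArrayList;
import java.util.List;

public class CartServiceTest {
    // A hypothetical unit under test: a cart that totals book prices.
    static class CartService {
        private final List<Double> prices = new ArrayList<>();

        void add(double price) {
            prices.add(price);
        }

        double total() {
            double sum = 0.0;
            for (double p : prices) {
                sum += p;
            }
            return sum;
        }
    }

    public static void main(String[] args) {
        CartService cart = new CartService();
        cart.add(10.0);
        cart.add(5.5);
        // Every line of add() and total() executed here counts as covered.
        assert cart.total() == 15.5;
        System.out.println("test passed");
    }
}
```

Run with `java -ea` so assertions are enabled; eCobertura then reports which lines of the unit under test the run actually touched.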

In the subsequent blogs I will use test-driven development (TDD) to build each layer, gradually increasing the code coverage and writing better-quality code.

Reference:

Pro Spring MVC: With Web Flow by Marten Deinum and Koen Serneels