Category Archives: Scala

Integrating Play 2.1 with Slick 1.0.0 Database query DSL

For people in a hurry, here is the code and the steps to set it up.

Refer to this blog on Play 2.0: Building Web Application using Scala for details on Play and Scala.

In the next few blogs I will be building an enterprise-class web application using Play and Scala, covering topics like database modeling and security. Scala has now officially adopted Slick as its database query DSL. But when you create a new Play application and add the Slick library as described in this blog, it does not work out of the box: the tests fail with a rather strange "No suitable driver found" error.

I also found slick-play2-example, but that sample expects us to build a DAL layer, and we cannot work with Slick directly in our test code.

After some further research I came across this plugin. I integrated it with my Play application and it worked like a charm.
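
For reference, once the plugin is added as a dependency (check its README for the exact artifact name and version, since these track the Play and Slick releases), the rest is ordinary Play configuration. A minimal sketch of conf/application.conf using an in-memory H2 database:

# conf/application.conf -- a standard Play datasource that the plugin picks up
db.default.driver=org.h2.Driver
db.default.url="jdbc:h2:mem:play"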

We will define a Coffee class and a Supplier class. Each Coffee references one Supplier through a foreign key, and a Supplier can supply many Coffees. The class design is as below,

case class Coffee(name: String, supID: Int, price: Double, sales: Int, total: Int)

object Coffees extends Table[Coffee]("COFFEES") {
  def name = column[String]("COF_NAME", O.PrimaryKey)
  def supID = column[Int]("SUP_ID")
  def price = column[Double]("PRICE")
  def sales = column[Int]("SALES")
  def total = column[Int]("TOTAL")
  def * = name ~ supID ~ price ~ sales ~ total <> (Coffee.apply _, Coffee.unapply _)
  // A reified foreign key relation that can be navigated to create a join
  def supplier = foreignKey("SUP_FK", supID, Suppliers)(_.id)
}

case class Supplier(id: Int, name: String, street: String, city: String, state: String, zip: String)

// Definition of the SUPPLIERS table
object Suppliers extends Table[Supplier]("SUPPLIERS") {
  def id = column[Int]("SUP_ID", O.PrimaryKey) // This is the primary key column
  def name = column[String]("SUP_NAME")
  def street = column[String]("STREET")
  def city = column[String]("CITY")
  def state = column[String]("STATE")
  def zip = column[String]("ZIP")
  // Every table needs a * projection with the same type as the table's type parameter
  def * = id ~ name ~ street ~ city ~ state ~ zip <> (Supplier.apply _, Supplier.unapply _)
}
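
Before the tests can insert any rows, the tables need to exist. In Slick 1.0 the schema can be created straight from the table objects; a minimal sketch (assuming the plugin's default datasource is configured):

// Create both tables from their combined DDL before populating data
DB.withSession { implicit session =>
  (Suppliers.ddl ++ Coffees.ddl).create
}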

Below is the test that exercises various capabilities of Slick,

DB.withSession { implicit session =>

  // Populate sample data
  val testSuppliers = Seq(
    Supplier(101, "Acme, Inc.",      "99 Market Street", "Groundsville", "CA", "95199"),
    Supplier( 49, "Superior Coffee", "1 Party Place",    "Mendocino",    "CA", "95460"),
    Supplier(150, "The High Ground", "100 Coffee Lane",  "Meadows",      "CA", "93966")
  )
  Suppliers.insertAll(testSuppliers: _*)

  val testCoffees = Seq(
    Coffee("Colombian",          101, 7.99, 0, 0),
    Coffee("French_Roast",        49, 8.99, 0, 0),
    Coffee("Espresso",           150, 9.99, 0, 0),
    Coffee("Colombian_Decaf",    101, 8.99, 0, 0),
    Coffee("French_Roast_Decaf",  49, 9.99, 0, 0)
  )
  Coffees.insertAll(testCoffees: _*)

  // Assert that the coffee rows equal the test list of coffees
  Query(Coffees).list must equalTo(testCoffees)

  // List all coffees under $10
  val q1 = for { c <- Coffees if c.price < 10.0 } yield c.name

  q1 foreach println
  println("**************")

  // Return the supplier of every coffee under $9.0
  val q2 = for {
    c <- Coffees if c.price < 9.0
    s <- c.supplier
  } yield (c.name, s.name)

  q2 foreach println
  println("**************")

  // Return coffees and suppliers using zip
  val q3 = for {
    (c, s) <- Coffees zip Suppliers
  } yield (c.name, s.name)

  q3 foreach println
  println("**************")

  // Union
  val q4 = Query(Coffees).filter(_.price < 8.0)
  val q5 = Query(Coffees).filter(_.price > 9.0)
  val unionQuery = q4 union q5
  unionQuery foreach println
  println("**************")

  // Union, second approach (keeps duplicates)
  val unionAllQuery = q4 unionAll q5
  unionAllQuery foreach println
  println("**************")

  // Group by
  val r = (for {
    c <- Coffees
    s <- c.supplier
  } yield (c, s)).groupBy(_._1.supID)

  // Aggregation
  val r1 = r.map { case (supID, css) =>
    (supID, css.length, css.map(_._1.price).avg)
  }

  r1 foreach println
}
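
One gotcha worth calling out: DB.withSession resolves the datasource from the running application, which is typically where the "No suitable driver found" error in tests comes from. In a spec, the block above is therefore wrapped in Play's test helpers; a sketch:

// Run the Slick code inside a fake Play application so the plugin can see the datasource
"Slick integration" should {
  "insert and query coffees and suppliers" in {
    running(FakeApplication()) {
      DB.withSession { implicit session =>
        // ... inserts and queries as shown above ...
      }
    }
  }
}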

In my next blog, I will take a real-world example and implement it using Slick. I hope this blog helped.


ScalaTest a MapReduce using Akka

For people in a hurry, here is the MapReduce code with ScalaTest and Akka, and the steps to run it.

I was trying to learn Scala and wanted to kill several birds with one stone. Let me tell you, I am not disappointed; I now feel comfortable working with Scala. If you are coming from the Java world, Scala is comparatively more complex, but once you get past the initial hurdle you will like it. In one go I wanted to learn Scala itself, Akka, SBT, and ScalaTest.

One use case I wanted to try out was a simple word count MapReduce, the hello world of MapReduce. MapReduce is a functional programming technique popularly associated with Hadoop-style parallel computing. There is a good MapReduce example using Java and Akka, another decent MapReduce example, and a couple more besides. The diagram below, from that source, describes the flow,

Word Count MapReduce with Akka and Java by Munish Gupta

In this example, I take advantage of Akka's actors to break the work into chunks, process them in parallel, and aggregate the final results.

To start with, SBT is a build tool similar to Maven and is used extensively for Scala development. Refer to project/plugins.sbt, which provides the Eclipse integration. Once you get the code from GitHub, run the command below; you will notice that two files, .project and .classpath, get generated.


sbt eclipse
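
That integration comes from the sbteclipse plugin, so project/plugins.sbt will contain a line along these lines (the version shown is indicative):

// project/plugins.sbt -- sbteclipse generates the .project and .classpath files
addSbtPlugin("com.typesafe.sbteclipse" % "sbteclipse-plugin" % "2.1.0")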

Now import the project into Eclipse via Import => "Existing Projects into Workspace". Once the project is imported, we can take advantage of IntelliSense-style code completion and other IDE features, and develop the application far more comfortably than writing Scala code in a plain text editor.

As always, I start by writing a test. The lowest-level test is the aggregation test, which takes maps of words with their occurrence counts and aggregates them. I used WordSpec for this ScalaTest, as below,

"Aggregrate actor" must {
"send back Map message" in {
// create the aggregate Actor
val aggregateActor = system.actorOf(Props[AggregateActor]);
var map: Map[String, Int] = Map[String, Int]("ak" -> 1, "kp" -> 2)
aggregateActor ! map
var map1: Map[String, Int] = Map[String, Int]("ak" -> 1, "kp" -> 2)
aggregateActor ! map1
Thread.sleep(1000)
var output = Map("kp" -> 4, "ak" -> 2)
aggregateActor ! "DISPLAY_LIST"
expectMsg(output)
}
}
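
The test above only exercises the messages; the actor being tested keeps a running total per word and replies with it when asked. The real implementation is in the repo, but a minimal sketch of such an aggregator could look like this:

import akka.actor.Actor
import scala.collection.mutable

class AggregateActor extends Actor {
  // running word counts, merged from every Map message received
  private val totals = mutable.Map[String, Int]().withDefaultValue(0)

  def receive = {
    case counts: Map[String, Int] =>
      counts.foreach { case (word, n) => totals(word) += n }
    case "DISPLAY_LIST" =>
      // reply with an immutable snapshot of the aggregated counts
      sender ! totals.toMap
  }
}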

Next I write a reduce unit test, which takes a list of Result objects, reduces it into a Map, and publishes that map to the aggregate actor for further aggregation.
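
Result is just a small value object pairing a word with a count; going by how it is used in ReduceActor further down, it is essentially:

case class Result(word: String, noOfInstances: Int)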

"Reduce actor" must {
"send back Map message" in {
// create the aggregate Actor
val aggregateActor = system.actorOf(Props[AggregateActor]);
// create the list of reduce Actors
val reduceRouter = system.actorOf(Props(new ReduceActor(aggregateActor)))
val list: List[Result] = List[Result](new Result("kp", 1), new Result("ak", 2))
reduceRouter ! list
val list1: List[Result] = List[Result](new Result("ak", 1), new Result("kp", 2))
reduceRouter ! list1
Thread.sleep(1000)
var output = Map("kp" -> 3, "ak" -> 3)
aggregateActor ! "DISPLAY_LIST"
expectMsg(output)
}
}

Next, write a map unit test that takes a line and creates Result objects. If you look carefully, the map and reduce actors are created with Akka's RoundRobinRouter, so lines are processed by multiple actor instances in a round-robin fashion.

"Map actor" must {
"send back Map message" in {
// create the aggregate Actor
val aggregateActor = system.actorOf(Props[AggregateActor]);
// create the list of reduce Actors
val reduceRouter = system.actorOf(Props(new ReduceActor(aggregateActor)).withRouter(RoundRobinRouter(nrOfInstances = 2)))
// create the list of map Actors
val mapRouter = system.actorOf(Props(new MapActor(reduceRouter)).withRouter(RoundRobinRouter(nrOfInstances = 2)))
var line = "Aditya Krishna Kartik Manjula"
mapRouter ! line
Thread.sleep(1000)
var output = Map("Kartik" -> 1, "Krishna" -> 1, "Aditya" -> 1, "Manjula" -> 1)
aggregateActor ! "DISPLAY_LIST"
expectMsg(output)
}
}

We then write a list-controller test, which exercises the pipeline end to end, integrating the map, reduce, and aggregate actors, and asserts on the final values,

"List Reader Controller actor" must {
"send back Map message" in {
// create the aggregate Actor
 val aggregateActor = system.actorOf(Props[AggregateActor]);
// create the list of reduce Actors
 val reduceRouter = system.actorOf(Props(new ReduceActor(aggregateActor)).withRouter(RoundRobinRouter(nrOfInstances = 2)))
// create the list of map Actors
 val mapRouter = system.actorOf(Props(new MapActor(reduceRouter)).withRouter(RoundRobinRouter(nrOfInstances = 2)))
val controller = system.actorOf(Props(new ControllerActor(aggregateActor, mapRouter)))
val lineReadActor = system.actorOf(Props[LineReadActor])
var list = List[String]("Aditya Krishna Kartik Manjula", "Manjula Anand Aditya Kartik", "Anand Vani Phani Aditya", "Kartik Krishna Manjula Aditya", "Vani Phani Anand Manjula")
lineReadActor.tell(list, controller)
Thread.sleep(1000)
var output = Map("Anand" -> 3, "Kartik" -> 3, "EOF" -> 1, "Krishna" -> 2, "Vani" -> 2, "Phani" -> 2, "Aditya" -> 4, "Manjula" -> 4)
 aggregateActor ! "DISPLAY_LIST"
 expectMsg(output)
 }
 }

Finally, we write a file reader actor and pass a large file to it to run the MapReduce end to end.

Now, looking at the actual code in MapActor.scala, you will see the yield keyword. yield builds a new collection from an existing one inside a for comprehension. The syntax is as below,

def evaluateExpression(line: String): List[Result] = {
  for (word <- line.split(" ").toList if !STOP_WORDS.contains(word.toLowerCase))
    yield new Result(word, 1)
}
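
For completeness, the map actor itself is a thin wrapper around this method: it receives a line, turns it into Result(word, 1) pairs, and hands the list to the reduce router. A sketch, inferred from how the tests wire things up (the stop-word list is an assumption; the real one is in the repo):

import akka.actor.{Actor, ActorRef}

class MapActor(reduceRouter: ActorRef) extends Actor {
  // assumption: placeholder stop words; the real list lives in MapActor.scala
  private val STOP_WORDS = List("a", "an", "the", "is", "are", "of")

  def receive = {
    case line: String =>
      // map step: split the line into Result(word, 1) pairs and forward them for reduction
      reduceRouter ! evaluateExpression(line)
  }

  // evaluateExpression as shown above
  def evaluateExpression(line: String): List[Result] =
    for (word <- line.split(" ").toList if !STOP_WORDS.contains(word.toLowerCase))
      yield new Result(word, 1)
}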

Refer to ReduceActor.scala and you will find code of the form "result => …": a lambda expression. Lambda expressions are fundamental to functional programming languages like Scala,

// uses scala.collection.mutable.{Map, HashMap}
def reduce(list: List[Result]): Map[String, Int] = {
  var results: Map[String, Int] = new HashMap[String, Int]

  list.foreach(result => {
    if (results.contains(result.word)) {
      results(result.word) += result.noOfInstances
    } else {
      results(result.word) = result.noOfInstances
    }
  })
  results
}
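
And the reduce actor is the corresponding wrapper around reduce: each incoming list is reduced to partial counts and forwarded to the aggregator. A sketch, with reduce rewritten functionally using groupBy (equivalent to the loop above, not the repo's exact code):

import akka.actor.{Actor, ActorRef}

class ReduceActor(aggregateActor: ActorRef) extends Actor {
  def receive = {
    case list: List[Result] =>
      // reduce step: collapse the Result list into partial word counts, then forward them
      aggregateActor ! reduce(list)
  }

  // same result as the loop-based reduce shown above
  def reduce(list: List[Result]): Map[String, Int] =
    list.groupBy(_.word).map { case (word, rs) => word -> rs.map(_.noOfInstances).sum }
}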

Conclusion

To summarise, in this example I have used and learnt,

  • Scala programming aspects like collections, the yield keyword, and lambda expressions
  • Akka's RoundRobinRouter for threading and concurrency
  • SBT for integration with Eclipse
  • ScalaTest for TDD

I hope this blog helped.

Play 2.0: Building Web Application using Scala

For people in a hurry, here is the code and the steps to run a few demo samples.

Disclaimer: I am still learning Scala and Play 2.0, so please point out anything that is incorrect.

A few years back I tried Scala, and it was difficult to learn given the lack of a good IDE and of an integrated stack for building applications, primarily web applications. With the advent of Play 2.0, the Scala IDE for Eclipse, and Play 2.0's Eclipse integration, things have improved. Lift is another framework for Scala web application development.

If you search the net you will find quite a few articles comparing Java with Scala, like From Scala back to Java and Scala or Java? Exploring myths and facts. There are also articles comparing Scala to other languages such as Ruby, like Scala, is it comparable to Ruby?. The debate is primarily about whether Scala is ready for enterprise development.

In this blog I will make an external REST call using an Akka actor and display the results using Scala. While building this sample application, I will consider a few factors and how Play 2.0 and the surrounding tools help with them.

Scala IDE support

Refer to the blog Setup and use Play framework 2.0 in Scala IDE 2.0. It has a good write-up on how to create a Scala project in Play 2.0, how to Eclipsify it, and how to import the project into Eclipse and start development. The IDE has good IntelliSense support. When I ran the application from the STS IDE, I had to add target/scala-2.9.1/src_managed/main to the build path.

Database access support

Refer to the blog Tutorial: Play Framework 2 with Scala, Anorm, JSON, CoffeeScript, jQuery & Heroku. It has a good write-up on how to write a simple database CRUD application in Scala, and it also demonstrates how to return JSON to the browser. Play 2.0 is bundled with Anorm, a lightweight object mapper. It is not an ORM tool; it is just a simple data retriever and row-to-object mapper. The application linked above demonstrates its usage; refer to the models.Bar class for more details. Below is a simple code snippet,

def findAll(): Seq[Bar] = {
  DB.withConnection { implicit connection =>
    SQL("select * from bar").as(Bar.simple *)
  }
}
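
The Bar.simple used there is an Anorm row parser. Assuming Bar has just an id and a name (the actual fields are in models.Bar in the project), it would look roughly like this:

import anorm._
import anorm.SqlParser._

case class Bar(id: Long, name: String)

object Bar {
  // maps a row of "select * from bar" onto a Bar instance
  val simple = {
    get[Long]("id") ~ get[String]("name") map {
      case id ~ name => Bar(id, name)
    }
  }
}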

Akka/Actor: taking advantage of asynchronous model

As mentioned in my earlier blog, Harnessing New Java Web Development stack: Play 2.0, Akka, Comet, Play 2.0 uses Netty, an asynchronous event-driven network application framework, for high-performance web applications. In the snippet below, I demonstrate how to retrieve a remote web service payload in a non-blocking way. I have not benchmarked this yet.

def listProducts() = Action {
  val system = ActorSystem("MySystem")
  val myActor = system.actorOf(Props[ProductJsonActor], name = "myactor")

  Async {
    implicit val timeout = Timeout(20.seconds)

    (myActor ? "hello").mapTo[String].asPromise.map { response =>
      Ok(response).as("application/json")
    }
  }
}

def receive = {
  case _ => {
    val response = WS.url("https://www.googleapis.com/shopping/search/v1/public/products").get()
    val body = response.await(5000).get.body
    log.debug("body****" + body)
    sender ! body
  }
}

Language capabilities: OOP vs Functional Programming

There are good blogs comparing functional programming with OOP languages like Java. Functional programming is not that easy to grasp in the beginning, but once you get used to it, you can write good, compact code.
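
As a tiny illustration of that compactness, compare summing the squares of the even numbers imperatively and functionally (a made-up example, not from the sample application):

val numbers = List(1, 2, 3, 4, 5, 6)

// imperative: mutate an accumulator in a loop
var total = 0
for (n <- numbers) {
  if (n % 2 == 0) total += n * n
}

// functional: the same result as a single expression
val total2 = numbers.filter(_ % 2 == 0).map(n => n * n).sum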

Conclusion

Language-wise, Scala has strong capabilities. Java 8 has many features similar to Scala's, so if you get on the Scala bandwagon you will not be left out when it comes to taking advantage of Java's new capabilities. Here is a good comparison of Java 8 vs Scala. With Play 2.0 I have demonstrated how to build a decent web application. I hope this blog helped you.