Behavior Driven Development (BDD) of Postfix calculator

What is a postfix expression?

An expression in which the operator is placed after its operands is called a postfix expression. For example, the expression (also called an infix expression) 2 + 3 in postfix is 2 3 +, and the expression 2 + 3 * 4 in postfix is 2 3 4 * +.

In this article we will look at how to develop a postfix expression evaluator using the BDD approach. Our evaluator will handle addition, subtraction, multiplication and division of floating-point and integer numbers.

BDD in action

Let us create a Maven application for our BDD experiment. I am using Eclipse as my IDE. Once you have your project created, add the following dependencies to the pom.xml:

<dependency>
  <groupId>info.cukes</groupId>
  <artifactId>cucumber-java</artifactId>
  <version>1.2.5</version>
  <scope>test</scope>
</dependency>
<!-- https://mvnrepository.com/artifact/info.cukes/cucumber-junit -->
<dependency>
  <groupId>info.cukes</groupId>
  <artifactId>cucumber-junit</artifactId>
  <version>1.2.5</version>
</dependency>

In BDD we first write acceptance tests. These are tests which exercise the complete module/application, thereby declaring that it satisfies the user requirements. They can be captured while creating the acceptance criteria for a user story (a concept widely used in Scrum-based agile software development). These tests tend to be focused on user requirements and are often written in collaboration with the customer, a customer representative, or the product owner.

At the stakeholders' end

As the people involved in writing acceptance tests tend to be non-technical, there is a Domain Specific Language (DSL) available which helps in capturing them. This DSL is called Gherkin. A tool called Cucumber helps in generating tests from it and then executing them. This generation and execution of tests happens behind the scenes and is carried out by the developer.

Let us see what a sample Gherkin file (it ends with a .feature extension) looks like:

#comment
# there can be only 1 feature in a .feature file
Feature: Feature name
    feature description

# there can be multiple scenarios, which means multiple possibilities the feature can be used 
Scenario: Scenario 1
Given some input "abc"
And another input "xyz"
When user performs some action
Then the result should be "pqr"

# similarly we can have multiple scenarios

The above feature file is pretty clear and mostly plain English. (This is the beauty of DSLs.) There are a few keywords in it, like Feature, Scenario, Given, And, When, Then and a few more. And there are some restrictions: there can be only one Feature per file, a step at the beginning of a line must start with And and not and, though and can occur within a sentence as a normal and, and so on. (I know, too many ands.) But such restrictions are what the DSL imposes.

Generating such feature files is quite simple, and we can easily collaborate with stakeholders to capture their requirements and specifications through examples, written in the form of Scenarios.

At the developer's end

The developer can take this feature file and generate Java code from it. This Java code is nothing but a collection of methods backing each Given, When and Then clause written in the feature file. There are different ways to generate it, and I will show you one such way in this article.

Dive into code

Let us now dive into the code.

Create a feature file postfix-evaluator.feature in the location src/test/resources

Feature: Testing Post Fix evaluator
	We would use this to test our post fix evaluator
	
Scenario: Testing the evaluator with sum only
Given User enters "2 3 5 + +"
When User asks for result
Then result should be "10"

Scenario Outline: Testing the evaluator with complex expressions
Given User enters <expression>
When User asks for result
Then result should be <result>

Examples:
    | expression | result |
    | "3 4 5 + -" | "-6" |
    | "5 1 2 + 4 * + 3 -" | "14"  |
    | "5 2 3 ^ + 5 8 + " | "13"  |
    | "2 1 12 3 / - +" | "-1" |
    | "6 3 - 2 ^ 11 - " | "-2" | 

You know Scenario, but what is this Scenario Outline? It is a parameterized version of Scenario, which means that the Given, When and Then steps specified for the Scenario Outline are executed for each of the test inputs provided in the Examples section.

But how do we link the Examples and Given, When, Then?

The first row of the Examples section indicates the names of the variables to which the values are assigned. The same variable names can then be used in the parameterized Given, When and Then clauses, where you reference them by enclosing the name in angle brackets, e.g. <expression>.

Now to generate the Java code for this feature file, let us create a JUnit test runner TestPostFixEvaluator.java in src/test/java/bdd

package bdd;

import org.junit.runner.RunWith;

import cucumber.api.CucumberOptions;
import cucumber.api.junit.Cucumber;

@RunWith(Cucumber.class)
@CucumberOptions(features = "src/test/resources/")
public class TestPostFixEvaluator {

}

Run the above test and you will see a message like:

6 Scenarios (6 undefined)
18 Steps (18 undefined)
0m0.000s


You can implement missing steps with the snippets below:

@Given("^User enters \"([^\"]*)\"$")
public void user_enters(String arg1) throws Throwable {
    // Write code here that turns the phrase above into concrete actions
    throw new PendingException();
}

@When("^User asks for result$")
public void user_asks_for_result() throws Throwable {
    // Write code here that turns the phrase above into concrete actions
    throw new PendingException();
}

@Then("^result should be \"([^\"]*)\"$")
public void result_should_be(String arg1) throws Throwable {
    // Write code here that turns the phrase above into concrete actions
    throw new PendingException();
}

The above message contains the missing steps. So let us copy them into a class PostFixEvaluatorSteps in the package bdd under src/test/java, and also add the code to test our postfix evaluator as shown below:

package bdd;

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;
import cucumber.api.java.en.Given;
import cucumber.api.java.en.Then;
import cucumber.api.java.en.When;
import evaluator.PostFixEvaluator;

public class PostFixEvaluatorSteps {
  PostFixEvaluator evaluator;
  Double computedResult;
  
  @Given("^User enters \"([^\"]*)\"$")
  public void user_enters(String expression) throws Throwable {
    evaluator = new PostFixEvaluator(expression);
  }

  @When("^User asks for result$")
  public void user_asks_for_result() throws Throwable {
    computedResult = evaluator.evaluate();
  }

  @Then("^result should be (\\d+)$")
  public void result_should_be(Double result) throws Throwable {
    assertEquals(result, computedResult);
  }
  
  @Then("^result should be \"([^\"]*)\"$")
  public void result_should_be(String result) throws Throwable {
    assertTrue(Double.parseDouble(result) == computedResult);
  }
 
}

Running the above will get us all sorts of complaints from the Java compiler. So let us create a PostFixEvaluator class in the package evaluator under src/main/java, with the class definition shown below:

package evaluator;

import java.util.Stack;

public class PostFixEvaluator {
  
  public final String expression;
  
  public PostFixEvaluator(String expression) {
    this.expression = expression;
  }
    
  public Double evaluate(){ return 0d; }
}

And if you run the test TestPostFixEvaluator, you will see that all the scenarios are failing, as shown below:
[Image: test results showing all scenarios failing]

To implement a postfix evaluator we make use of a Stack. The way it works is:
1. If you encounter an operand, push it onto the stack.
2. If you encounter a binary operator, pop two elements, apply the operator, and push the result onto the stack.
3. If you encounter a unary operator, pop one element, apply the operator, and push the result onto the stack.
4. If you have come to the end of the expression, pop the stack to get the result.

There are various error conditions which we can handle:
1. If at the end of the expression the stack is empty, then we don't have any result.
2. If at the end of the expression the stack has more than one element, then we didn't have enough operators, so the expression is invalid.
3. If on encountering an operator we don't have enough operands in the stack, then the expression is invalid.
and so on.
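The third condition, for example, can be guarded with a simple size check before popping. A minimal sketch (the helper name popOperand is mine, not part of the evaluator shown in this article):

```java
import java.util.Stack;

public class SafePop {
  // Hypothetical guard: fail fast when an operator does not have enough operands.
  static Double popOperand(Stack<Double> stack) {
    if (stack.isEmpty()) {
      throw new RuntimeException("Invalid expression: operator is missing an operand");
    }
    return stack.pop();
  }

  public static void main(String[] args) {
    Stack<Double> stack = new Stack<>();
    stack.push(2d);
    System.out.println(popOperand(stack)); // prints 2.0
    try {
      popOperand(stack); // stack is now empty, so the expression is invalid
    } catch (RuntimeException e) {
      System.out.println(e.getMessage());
    }
  }
}
```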

We can update the PostFixEvaluator class with the code below. (I haven't considered error scenarios; we can easily add some negative tests and then write code to pass them. Refactoring becomes easy, as we already have tests for our feature.)

package evaluator;

import java.util.Stack;

public class PostFixEvaluator {
  
  public final String expression;
  
  public PostFixEvaluator(String expression) {
    this.expression = expression;
  }
  
  Stack<Double> pfStack = new Stack<Double>();
  
  public Double evaluate(){
    String [] exprArray = expression.split("\\s+");
    for ( String elem : exprArray){
      if ( isOperator(elem)){
          Double operand2 = pfStack.pop();
          Double operand1 = pfStack.pop();
          switch (elem) {
          case "*":
            pfStack.push(operand1 * operand2);
            break;
          case "+":
            pfStack.push(operand1 + operand2);
            break;
          case "-":
            pfStack.push(operand1 - operand2);
            break;
          case "/":
            pfStack.push(operand1 / operand2);
            break;
          case "^":
            pfStack.push(Math.pow(operand1, operand2));
            break;
          default:
            throw new RuntimeException("Unsupported operator");
        }
      }else{
        pfStack.push(Double.parseDouble(elem));
      }
    }
    
    if ( pfStack.isEmpty()){
      throw new RuntimeException("Stack is empty, no result found");
    }
    return pfStack.pop();
  }
  
  private boolean isOperator(String element){
    switch (element) {
    case "+":
    case "-":
    case "/":
    case "*":
    case "^":
      return true;
    default:
      return false;
    }
  }
  
}

It is a pretty naive design; one can refactor it, for example to use the Strategy design pattern. This is all possible because we have acceptance tests: if we make an error during the refactoring, the acceptance tests will flag it.
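As an illustration of the Strategy idea, the switch over operators could become a map from operator token to a strategy object. A rough sketch under that assumption (class and method names are mine, assuming Java 8):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Stack;
import java.util.function.DoubleBinaryOperator;

public class StrategyEvaluator {
  // Each operator becomes a strategy; supporting a new one is a single map entry.
  private static final Map<String, DoubleBinaryOperator> OPERATORS = new HashMap<>();
  static {
    OPERATORS.put("+", (a, b) -> a + b);
    OPERATORS.put("-", (a, b) -> a - b);
    OPERATORS.put("*", (a, b) -> a * b);
    OPERATORS.put("/", (a, b) -> a / b);
    OPERATORS.put("^", Math::pow);
  }

  public static Double evaluate(String expression) {
    Stack<Double> stack = new Stack<>();
    for (String elem : expression.trim().split("\\s+")) {
      DoubleBinaryOperator op = OPERATORS.get(elem);
      if (op != null) {
        Double operand2 = stack.pop();
        Double operand1 = stack.pop();
        stack.push(op.applyAsDouble(operand1, operand2));
      } else {
        stack.push(Double.parseDouble(elem));
      }
    }
    return stack.pop();
  }

  public static void main(String[] args) {
    System.out.println(evaluate("5 1 2 + 4 * + 3 -")); // prints 14.0
  }
}
```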

Let us run the test TestPostFixEvaluator now and see that all the scenarios execute successfully:
[Image: test results showing all scenarios passing]

This was a simple introduction to BDD. In the next article I will show how we can apply TDD exclusively, and then follow up with an article on mixing BDD and TDD.

The source for this is available on Github: https://github.com/sanaulla123/bdd-tdd-demo

Looking forward to hearing your feedback!


Gotcha: Migrating from Spring Security 3.2.x to Spring Security 4.x

Here is a simple gotcha to keep in mind while migrating to newer Spring Security version 4.x from Spring Security 3.2.x

What’s the problem?

The Spring Security configuration expressions hasRole('role_name') and hasAuthority('role_name') are no longer the same.

The catch is: hasAuthority checks for the exact role name passed to the expression, without prepending 'ROLE_' (the default role prefix), whereas hasRole checks for the role name after prepending 'ROLE_' to it.

Below is a snapshot of the class definition of SecurityExpressionRoot for both versions of Spring Security, which defines the methods hasRole and hasAuthority.

[Image: SecurityExpressionRoot class definitions in Spring Security 3.2.x and 4.x]

In Spring Security 3.2.x, hasAuthority and hasRole both check for the presence of the given role name in getAuthoritySet(). [getAuthoritySet() retrieves the GrantedAuthority list for the user.]

In Spring Security 4.x, hasAuthority invokes the API hasAnyAuthorityName passing a null prefix, whereas hasRole invokes hasAnyAuthorityName passing the default prefix, which is 'ROLE_' (the same is highlighted in the image above).

There is another interesting API in Spring Security 4.x (again highlighted in the image above) called getRoleWithDefaultPrefix(), shown in the image below:

[Image: getRoleWithDefaultPrefix() implementation]

It is interesting to see above how the role name gets prefixed with the role prefix.

What is the fix?

  1. Append the 'ROLE_' prefix to all your role names. OR
  2. Use hasAuthority as the replacement for the hasRole expression, without changing the role names. OR
  3. Override the defaultRolePrefix with null or an empty string so that the same hasRole expression works with the same role names. [I still need to figure out how to do this; it should be possible because the setter for the property is public.]
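To make the difference concrete, here is a small plain-Java model of the 4.x semantics described above (this is not Spring Security source code, just an illustration; names are mine):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class RolePrefixDemo {
  static final String DEFAULT_PREFIX = "ROLE_";

  // Models Spring Security 4.x hasAuthority: checks the name exactly as given.
  static boolean hasAuthority(Set<String> granted, String authority) {
    return granted.contains(authority);
  }

  // Models Spring Security 4.x hasRole: prepends "ROLE_" unless already present.
  static boolean hasRole(Set<String> granted, String role) {
    String withPrefix = role.startsWith(DEFAULT_PREFIX) ? role : DEFAULT_PREFIX + role;
    return granted.contains(withPrefix);
  }

  public static void main(String[] args) {
    Set<String> granted = new HashSet<>(Arrays.asList("ADMIN"));
    System.out.println(hasAuthority(granted, "ADMIN")); // true
    System.out.println(hasRole(granted, "ADMIN"));      // false: it looks for ROLE_ADMIN
  }
}
```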

New @RequestParam annotations in Spring Boot 1.4 (Spring Framework 4.3)

Earlier, in Spring/Spring Boot, to map a GET, POST, DELETE or any other HTTP method to a request handler we would write something like the below:

@RestController
@RequestMapping("/api/books")
public class BookAPIController {
  @RequestMapping
  public ResponseEntity<?> getBooks(){
  
  }
  
  @RequestMapping("/{book_id}")
  public ResponseEntity<?> getBook(@PathVariable("book_id") String bookId){
  
  }
  
  @RequestMapping(method = RequestMethod.POST)
  public ResponseEntity<?> addNewBook(@RequestBody Map<String, Object> requestBody){
	
  }
  
  @RequestMapping(method = RequestMethod.POST, value="/{book_id}")
  public ResponseEntity<?> editBook(@PathVariable("book_id") String bookId){
	
  }
  
  @RequestMapping(method = RequestMethod.DELETE, value="/{book_id}")
  public ResponseEntity<?> deleteBook(@PathVariable("book_id") String bookId){
	
  }
}

But with Spring Framework 4.3, and Spring Boot 1.4 (which now uses Spring Framework 4.3), we have new annotations to map HTTP methods to request handlers for GET, POST, PUT, PATCH and DELETE. These new annotations are @GetMapping, @PostMapping, @PutMapping, @PatchMapping and @DeleteMapping. So the above code now looks like:

@RestController
@RequestMapping("/api/books")
public class BookAPIController {
  @GetMapping
  public ResponseEntity<?> getBooks(){}
  
  @GetMapping("/{book_id}")
  public ResponseEntity<?> getBook(
    @PathVariable("book_id") String bookId
  ){}
  
  @PostMapping
  public ResponseEntity<?> addNewBook(
    @RequestBody Map<String, Object> requestBody
  ){}
  
  @PostMapping("/{book_id}")
  public ResponseEntity<?> editBook(
    @PathVariable("book_id") String bookId
  ){}
  
  @DeleteMapping("/{book_id}")
  public ResponseEntity<?> deleteBook(
   @PathVariable("book_id") String bookId
  ){}
}

These new annotations improve code readability and also reduce the annotation text to some extent.

Java Gotcha: Parse string using SimpleDateFormat with months greater than 12

The other day I was trying to parse a date string into a date object using SimpleDateFormat, to check the validity of the date string. I had the SimpleDateFormat defined as SimpleDateFormat expiryDateFormat = new SimpleDateFormat("dd/MM/yyyy");. The date string I was trying to parse was 10/26/2016, which is clearly invalid with respect to the pattern defined in the SimpleDateFormat. Yet I found that the parsing went through, and it then failed while inserting into the DB due to the invalid date. I was utterly confused, so I wrote a small program to isolate the issue with SimpleDateFormat:

import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

public class Solution {

  public static void main(String[] args) {

    SimpleDateFormat expiryDateFormat = 
        new SimpleDateFormat("dd/MM/yyyy");
    try {
      Date date =expiryDateFormat.parse("10/26/2016");
      System.out.println(date);
    } catch (ParseException e) {
      // TODO Auto-generated catch block
      e.printStackTrace();
    }

  }
}

The above code gave the output Sat Feb 10 00:00:00 AST 2018 (lenient parsing rolled month 26 over: month 26 of 2016 is February 2018). I was surprised at this behavior. Then I uncovered a little-known method, setLenient(), with which we can make SimpleDateFormat parsing strict, so it reports errors on the slightest mismatch instead of trying to interpret the input by adjusting the value. The updated code looks like:

import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

public class Solution {

  public static void main(String[] args) {

    SimpleDateFormat expiryDateFormat = 
        new SimpleDateFormat("dd/MM/yyyy");
    expiryDateFormat.setLenient(false);
    try {
      Date date =expiryDateFormat.parse("10/26/2016");
      System.out.println(date);
    } catch (ParseException e) {
      // TODO Auto-generated catch block
      e.printStackTrace();
    }

  }
}

which indeed throws an exception as expected:

java.text.ParseException: Unparseable date: "10/26/2016"
	at java.text.DateFormat.parse(DateFormat.java:366)
	at Solution.main(Solution.java:12)
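As an aside, if you are on Java 8 or later, the java.time API rejects such input by default; a quick sketch:

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeParseException;

public class StrictParseDemo {
  public static void main(String[] args) {
    DateTimeFormatter formatter = DateTimeFormatter.ofPattern("dd/MM/uuuu");
    System.out.println(LocalDate.parse("10/06/2016", formatter)); // prints 2016-06-10
    try {
      // Month 26 is rejected outright; no lenient rollover as in SimpleDateFormat.
      LocalDate.parse("10/26/2016", formatter);
    } catch (DateTimeParseException e) {
      System.out.println("Invalid: " + e.getMessage());
    }
  }
}
```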

Getting rid of Getters and Setters in your POJO

We have all read in Java books about encapsulating the fields of a Java class, and whenever you code you are asked to take special care to encapsulate fields and provide explicit getters and setters. These are very strict instructions. Let's step back a bit and find the reason behind encapsulating fields. It is all done to have control over the access and modification of the fields: one might want to allow the user of the class to access data from only a few fields, or to control how the data in the fields is updated, and so on. And on other occasions, frameworks need these getters and setters to populate your POJOs (Plain Old Java Objects).

Now, the pain involved in adding these getters and setters is considerable, and it has been reduced by the IDEs, which can generate them for you. But this generated code makes your class definition very verbose and hides the actual business logic, if any, inside the class definition. There are a lot of ways to avoid defining getters and setters explicitly, and I have even blogged about using Project Lombok to declare them via annotations. I have come across another approach, one which doesn't auto-generate the code or use annotations. I am sure I have read about this approach somewhere but am unable to recall where; it is something that has been used before, and I am trying to create awareness of it among my readers via this blog post.

Let me first define the class with the getters and setters, and then show how to get rid of them:

class TaskWithGettersSetters {
  public TaskWithGettersSetters(String title, String notes,
      LocalDateTime deadline, String assignedTo) {
    this.title = title;
    this.notes = notes;
    this.addedOn = LocalDateTime.now();
    this.deadline = deadline;
    this.assignedTo = assignedTo;
  }

  public TaskWithGettersSetters() {
  }

  private String        title;
  private String        notes;
  private LocalDateTime addedOn;
  private LocalDateTime deadline;
  private String        assignedTo;

  public String getTitle() {
    return title;
  }

  public void setTitle(String title) {
    this.title = title;
  }

  public String getNotes() {
    return notes;
  }

  public void setNotes(String notes) {
    this.notes = notes;
  }

  public LocalDateTime getAddedOn() {
    return addedOn;
  }

  public void setAddedOn(LocalDateTime addedOn) {
    this.addedOn = addedOn;
  }

  public LocalDateTime getDeadline() {
    return deadline;
  }

  public void setDeadline(LocalDateTime deadline) {
    this.deadline = deadline;
  }

  public String getAssignedTo() {
    return assignedTo;
  }

  public void setAssignedTo(String assignedTo) {
    this.assignedTo = assignedTo;
  }

}

There is nothing to explain in the above code; it is pretty clear, with private fields and public getters and setters. The class definition is about 60 lines. Let's see how we can define the class without providing getters and setters:

class Task {

  public Task(String title, String notes, LocalDateTime deadline,
      String assignedTo) {
    this.title = title;
    this.notes = notes;
    this.addedOn = LocalDateTime.now();
    this.deadline = deadline;
    this.assignedTo = assignedTo;
  }

  public final String        title;
  public final String        notes;
  public final LocalDateTime addedOn;
  public final LocalDateTime deadline;
  public final String        assignedTo;

}

The above is what I call a class definition on a diet 🙂 It is less verbose, at just 18 lines. You might be scared by the public modifiers on the fields, and confused by the final modifiers. Let me explain the ideology behind this approach:

  1. As the fields are final, they cannot be modified after initialization, so we need not worry about the data in a field being changed. And we have to provide a constructor which initializes these fields, otherwise the compiler will complain about the final fields never being assigned.
  2. The data in the fields is accessed by using the fields directly, not via getter methods.
  3. This approach enforces immutability of objects, i.e. if we have to update a field we have to create a new object with the updated value of that field.
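Point 3 above can be made convenient with a copy-style method. A small self-contained sketch (the withDeadline method is hypothetical, not part of the Task class in this article, and the Task here is trimmed down to two fields):

```java
import java.time.LocalDateTime;

public class WitherDemo {
  // Trimmed-down immutable Task with a copy-style "wither" method.
  static final class Task {
    public final String title;
    public final LocalDateTime deadline;

    Task(String title, LocalDateTime deadline) {
      this.title = title;
      this.deadline = deadline;
    }

    // "Updating" means building a new object with the changed value.
    Task withDeadline(LocalDateTime newDeadline) {
      return new Task(title, newDeadline);
    }
  }

  public static void main(String[] args) {
    Task original = new Task("Task 1", LocalDateTime.of(2016, 10, 1, 0, 0));
    Task extended = original.withDeadline(original.deadline.plusDays(5));
    System.out.println(original.deadline); // unchanged
    System.out.println(extended.deadline); // five days later
  }
}
```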

Now, having immutable objects provides lots of advantages, a few of them being:

  • Writing concurrent code becomes quite easy, because we need not worry about taking locks on the object: thanks to the use of final, we can only read the object's data and never modify it.
  • Immutable objects lead to a lot of short-lived objects, which helps in reducing the GC overhead involved in managing long-lived objects and objects with lots of live references.

We can even provide a factory method for creating instances of Task. Let's see the above class in action:

import java.time.LocalDateTime;

public class GettingRidOfGettersSettersDemo {
  public static void main(String[] args) {
    //One can make use of Factory method to initialize the data
    Task task1 = new Task("Task 1", "some notes", LocalDateTime.now().plusDays(5), "sana");
    //Very clean approach to access the field data - no getYYY() noise 
    System.out.println(task1.title + " assigned to " + task1.assignedTo);
    Task task2  = new Task("Task 2", "some notes", LocalDateTime.now().plusDays(6), "raj");
    System.out.println(task2.title + " assigned to " + task2.assignedTo);
  }
}

Update:
Thanks a lot for the comments and your thoughts, both here and on DZone. I spent some time identifying how one can work without getters and setters in the scenarios mentioned where they seem unavoidable. One such scenario is marshalling and unmarshalling of JSON; another is where we have a List of values as a property and need to give read-only access to the users of the object. Below are examples of using POJOs without getters and setters in JSON marshalling and unmarshalling, using the GSON and Jackson JSON libraries.

The below is the code for using GSON JSON Library:

import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;

import com.google.gson.Gson;

public class GsonParserDemo {

  public static void main(String[] args) {
    HashMap<String, Object> jsonData = new HashMap<String, Object>();
    jsonData.put("name", "sanaulla");
    jsonData.put("place", "bangalore");
    jsonData.put("interests", Arrays.asList("blogging", "coding"));
    Gson gson = new Gson();

    String jsonString = gson.toJson(jsonData);
    System.out.println("From Map: " + jsonString);
    

    Person person = gson.fromJson(jsonString, Person.class);
    
    System.out.println("From Person.class: " + gson.toJson(person));
  }

  static class Person {
    public final String name;
    public final String place;
    private final List<String> interests;

    public Person(String name, String place, List<String> interests) {
      this.name = name;
      this.place = place;
      this.interests = interests;
    }
 
    public List<String> interests(){
      return Collections.unmodifiableList(interests);
    }
  }
}

The output of above code is:

From Map: {"name":"sanaulla","place":"bangalore","interests":["blogging","coding"]}
From Person.class: {"name":"sanaulla","place":"bangalore","interests":["blogging","coding"]}

To note: GSON uses neither the constructor nor getters and setters to map JSON to the Java class.

The below is the code for using Jackson JSON Library:

import java.io.IOException;
import java.util.HashMap;

import com.fasterxml.jackson.annotation.JsonCreator;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.core.JsonGenerationException;
import com.fasterxml.jackson.databind.JsonMappingException;
import com.fasterxml.jackson.databind.ObjectMapper;

public class JacksonParserDemo {
  public static void main(String[] args) throws JsonGenerationException,
      JsonMappingException, IOException {
    HashMap<String, String> jsonData = new HashMap<String, String>();
    jsonData.put("name", "sanaulla");
    jsonData.put("place", "bangalore");

    ObjectMapper objectMapper = new ObjectMapper();

    String jsonString = objectMapper.writeValueAsString(jsonData);
    System.out.println("Json from map : " + jsonString);

    Person person = objectMapper.readValue(jsonString, Person.class);
    System.out.println("Json from Person : "
        + objectMapper.writeValueAsString(person));
  }

}
class Person {
  
  public final String name;
  
  public final String place;

  @JsonCreator
  public Person(@JsonProperty("name") String name,
      @JsonProperty("place") String place) {
    this.name = name;
    this.place = place;
  }

}

The output of the above code is:

Json from map : {"name":"sanaulla","place":"bangalore"}
Json from Person : {"name":"sanaulla","place":"bangalore"}

I am still investigating some concerns raised about Object Relational Mappers and Joda-Time.

Using Google Guava Cache for local caching

A lot of the time we have to fetch data from a database or another web service, or load it from the file system. In cases involving a network call, there are inherent network latencies and network bandwidth limitations. One approach to overcoming these is to have a cache local to the application.

If your application spans multiple nodes, then the cache is local to each node, causing inherent data inconsistency. This data inconsistency can be traded off for better throughput and lower latencies. But if the data inconsistency makes a significant difference, one can reduce the TTL (time to live) of the cached objects, thereby reducing the duration for which the inconsistency can occur.

Among the various approaches to implementing a local cache, one which I have used in a high-load environment is the Guava cache. We used a Guava cache to serve 80,000+ requests per second, with 90th-percentile latencies of ~5 ms. This helped us scale within our limited network bandwidth.

In this post I will show how one can add a layer of Guava cache to avoid frequent network calls. For this I have picked a very simple example: fetching the details of a book, given its ISBN, using the Google Books API.

A sample request for fetching book details using ISBN13 string is:
https://www.googleapis.com/books/v1/volumes?q=isbn:9781449370770&key=API_KEY

The part of the response which is useful to us looks like:
[Image: sample JSON response from the Google Books API]

A very detailed explanation of the features of the Guava cache can be found here. In this example I will be using a LoadingCache. The LoadingCache takes in a block of code which it uses to load the data into the cache for a missing key. So when you do a get on the cache with a nonexistent key, the LoadingCache fetches the data using the CacheLoader, sets it in the cache, and returns it to the caller.
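A LoadingCache wired up this way might look like the sketch below. The sizing and TTL values are illustrative, and the loader body is a stand-in for an actual Google Books call:

```java
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

import java.util.concurrent.TimeUnit;

public class BookCacheSketch {
  // On a miss, the CacheLoader fetches the value, stores it, and returns it.
  static final LoadingCache<String, String> cache = CacheBuilder.newBuilder()
      .maximumSize(1000)                       // bound memory use
      .expireAfterWrite(10, TimeUnit.MINUTES)  // TTL trades freshness for fewer network calls
      .build(new CacheLoader<String, String>() {
        @Override
        public String load(String isbn) {
          // Stand-in for the real REST call to the Google Books API.
          return "details-for-" + isbn;
        }
      });

  public static void main(String[] args) {
    // getUnchecked avoids the checked ExecutionException thrown by get().
    System.out.println(cache.getUnchecked("9781449370770")); // first call loads
    System.out.println(cache.getUnchecked("9781449370770")); // second call is served from cache
  }
}
```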

Let's now look at the model classes we need for representing the book details:

  • Book class
  • Author class

The Book class is defined as:

//Book.java
package info.sanaulla.model;

import java.util.ArrayList;
import java.util.Date;
import java.util.List;

public class Book {
  private String isbn13;
  private List<Author> authors;
  private String publisher;
  private String title;
  private String summary;
  private Integer pageCount;
  private String publishedDate;

  public String getIsbn13() {
    return isbn13;
  }

  public void setIsbn13(String isbn13) {
    this.isbn13 = isbn13;
  }

  public List<Author> getAuthors() {
    return authors;
  }

  public void setAuthors(List<Author> authors) {
    this.authors = authors;
  }

  public String getPublisher() {
    return publisher;
  }

  public void setPublisher(String publisher) {
    this.publisher = publisher;
  }

  public String getTitle() {
    return title;
  }

  public void setTitle(String title) {
    this.title = title;
  }

  public String getSummary() {
    return summary;
  }

  public void setSummary(String summary) {
    this.summary = summary;
  }

  public void addAuthor(Author author){
    if ( authors == null ){
      authors = new ArrayList<Author>();
    }
    authors.add(author);
  }

  public Integer getPageCount() {
    return pageCount;
  }

  public void setPageCount(Integer pageCount) {
    this.pageCount = pageCount;
  }

  public String getPublishedDate() {
    return publishedDate;
  }

  public void setPublishedDate(String publishedDate) {
    this.publishedDate = publishedDate;
  }
}

And the Author class is defined as:

//Author.java
package info.sanaulla.model;

public class Author {

  private String name;

  public String getName() {
    return name;
  }

  public void setName(String name) {
    this.name = name;
  }
}

Let's now define a service which fetches the data from the Google Books REST API; call it BookService. This service does the following:

  1. Fetch the HTTP response from the REST API.
  2. Use Jackson's ObjectMapper to parse the JSON into a Map.
  3. Fetch the relevant information from the Map obtained in step 2.

I have extracted a few operations from the BookService into a Util class, namely:

  1. Reading the application.properties file, which contains the Google Books API key. (I haven't committed this file to the git repository, but one can add it to their src/main/resources folder, name it application.properties, and the Util API will read it.)
  2. Making an HTTP request to the REST API and returning the JSON response.

The below is how the Util class is defined:

//Util.java
 
package info.sanaulla;

import com.fasterxml.jackson.databind.ObjectMapper;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.ProtocolException;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;

public class Util {

  private static ObjectMapper objectMapper = new ObjectMapper();
  private static Properties properties = null;

  public static ObjectMapper getObjectMapper(){
    return objectMapper;
  }

  public static Properties getProperties() throws IOException {
    if ( properties != null){
        return  properties;
    }
    properties = new Properties();
    InputStream inputStream = Util.class.getClassLoader().getResourceAsStream("application.properties");
    properties.load(inputStream);
    return properties;
  }

  public static String getHttpResponse(String urlStr) throws IOException {
    URL url = new URL(urlStr);
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("GET");
    conn.setRequestProperty("Accept", "application/json");
    conn.setConnectTimeout(5000);
    //conn.setReadTimeout(20000);

    if (conn.getResponseCode() != 200) {
      throw new RuntimeException("Failed : HTTP error code : "
              + conn.getResponseCode());
    }

    BufferedReader br = new BufferedReader(new InputStreamReader(
          (conn.getInputStream())));

    StringBuilder outputBuilder = new StringBuilder();
    String output;
    while ((output = br.readLine()) != null) {
      outputBuilder.append(output);
    }
    conn.disconnect();
    return outputBuilder.toString();
  }
}

And so our service class looks like:

//BookService.java
package info.sanaulla.service;

import com.fasterxml.jackson.databind.ObjectMapper;
import com.google.common.base.Optional;
import com.google.common.base.Strings;

import info.sanaulla.Constants;
import info.sanaulla.Util;
import info.sanaulla.model.Author;
import info.sanaulla.model.Book;

import java.io.IOException;
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.Properties;

public class BookService {

  public static Optional<Book> getBookDetailsFromGoogleBooks(String isbn13) throws IOException{
    Properties properties = Util.getProperties();
    String key = properties.getProperty(Constants.GOOGLE_API_KEY);
    String url = "https://www.googleapis.com/books/v1/volumes?q=isbn:" + isbn13;
    if (!Strings.isNullOrEmpty(key)) {
      url += "&key=" + key; // pass the API key along when one is configured
    }
    String response = Util.getHttpResponse(url);
    Map bookMap = Util.getObjectMapper().readValue(response,Map.class);
    Object bookDataListObj = bookMap.get("items");
    Book book = null;
    if (!(bookDataListObj instanceof List)) {
      // instanceof is false for null, so a separate null check is unnecessary
      return Optional.absent();
    }

    List bookDataList = (List)bookDataListObj;
    if (bookDataList.isEmpty()) {
      return Optional.absent();
    }

    Map bookData = (Map) bookDataList.get(0);
    Map volumeInfo = (Map)bookData.get("volumeInfo");
    book = new Book();
    book.setTitle(getFromJsonResponse(volumeInfo,"title",""));
    book.setPublisher(getFromJsonResponse(volumeInfo,"publisher",""));
    List authorDataList = (List) volumeInfo.get("authors");
    if (authorDataList != null) { // "authors" can be missing in the API response
      for (Object authorDataObj : authorDataList) {
        Author author = new Author();
        author.setName(authorDataObj.toString());
        book.addAuthor(author);
      }
    }
    book.setIsbn13(isbn13);
    book.setSummary(getFromJsonResponse(volumeInfo,"description",""));
    book.setPageCount(Integer.parseInt(getFromJsonResponse(volumeInfo, "pageCount", "0")));
    book.setPublishedDate(getFromJsonResponse(volumeInfo,"publishedDate",""));

    return Optional.fromNullable(book);
  }

  private static String getFromJsonResponse(Map jsonData, String key, String defaultValue){
    return Optional.fromNullable(jsonData.get(key)).or(defaultValue).toString();
  }
}
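The getFromJsonResponse helper above uses Guava's Optional to substitute a default value when a key is missing from the parsed JSON. The same defaulting pattern can be shown with the JDK's own java.util.Optional (where Guava's fromNullable(x).or(d) corresponds to ofNullable(x).orElse(d)) — a minimal sketch, with a hypothetical getOrDefault helper standing in for getFromJsonResponse:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class OptionalDefaultDemo {

    // Mirrors getFromJsonResponse: a missing key falls back to the default value
    // instead of producing a NullPointerException.
    static String getOrDefault(Map<String, Object> jsonData, String key, String defaultValue) {
        return Optional.ofNullable(jsonData.get(key)).orElse(defaultValue).toString();
    }

    public static void main(String[] args) {
        Map<String, Object> volumeInfo = new HashMap<>();
        volumeInfo.put("title", "Head First Java");

        System.out.println(getOrDefault(volumeInfo, "title", ""));          // present key
        System.out.println(getOrDefault(volumeInfo, "publisher", "n/a"));   // absent key -> default
    }
}
```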

Adding caching on top of the Google Books API call

We can create a cache object using the CacheBuilder API provided by the Guava library. It provides methods to set properties such as:

  • the maximum number of items in the cache,
  • the time to live of a cache entry, based on its last write time or last access time,
  • the TTL for refreshing a cache entry,
  • recording stats on the cache, such as hits, misses and loading time, and
  • providing loader code to fetch the data on a cache miss or cache refresh.
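Before looking at the Guava version, the load-on-miss idea in the last bullet can be sketched with plain JDK classes. This is a toy illustration (no size limit, no TTL), with hypothetical names, showing that the loader runs only on a miss:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class MiniCacheDemo {

    // computeIfAbsent invokes the loader lambda only when the key is missing,
    // which is the same contract Guava's CacheLoader.load() fulfils.
    static final ConcurrentHashMap<String, String> cache = new ConcurrentHashMap<>();
    static final AtomicInteger loads = new AtomicInteger();

    static String get(String key) {
        return cache.computeIfAbsent(key, k -> {
            loads.incrementAndGet();     // count misses that reach the loader
            return "value-for-" + k;     // stand-in for the expensive fetch
        });
    }

    public static void main(String[] args) {
        get("isbn-1");
        get("isbn-1"); // hit: loader is not invoked again
        get("isbn-2");
        System.out.println("loads=" + loads.get()); // loads=2: one per distinct key
    }
}
```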

So what we would ideally want is for a cache miss to invoke the API written above, i.e. getBookDetailsFromGoogleBooks. We would also want to store a maximum of 1000 items and expire items 24 hours after their last access. The piece of code which builds the cache looks like:

private static LoadingCache<String, Optional<Book>> cache = CacheBuilder.newBuilder()
  .maximumSize(1000)
  .expireAfterAccess(24, TimeUnit.HOURS)
  .recordStats()
  .build(new CacheLoader<String, Optional<Book>>() {
      @Override
      public Optional<Book> load(String s) throws IOException {
          return getBookDetailsFromGoogleBooks(s);
      }
  });

It's important to note that the maximum number of items you store in the cache impacts the heap used by your application. So you have to choose this value carefully, based on the size of each object you are going to cache and the maximum heap memory allocated to your application.
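The App class below calls BookService.getBookDetails and BookService.getCacheStats, which are not shown in the listing above. They are thin wrappers around the cache; a plausible sketch (method names taken from the calls below, bodies assumed) looks like:

```java
// Hypothetical wrappers inside BookService. getBookDetails() delegates to the
// LoadingCache, so a miss transparently triggers getBookDetailsFromGoogleBooks()
// via the CacheLoader defined above.
public static Optional<Book> getBookDetails(String isbn13) throws ExecutionException {
    return cache.get(isbn13);
}

public static CacheStats getCacheStats() {
    return cache.stats(); // available because recordStats() was enabled on the builder
}
```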

Let's put this into action and see what the cache stats report:

package info.sanaulla;

import com.google.common.cache.CacheStats;
import info.sanaulla.model.Book;
import info.sanaulla.service.BookService;

import java.io.IOException;
import java.util.Properties;
import java.util.concurrent.ExecutionException;

public class App 
{
  public static void main( String[] args ) throws IOException, ExecutionException {
    Book book = BookService.getBookDetails("9780596009205").get();
    System.out.println(Util.getObjectMapper().writeValueAsString(book));
    book = BookService.getBookDetails("9780596009205").get();
    book = BookService.getBookDetails("9780596009205").get();
    book = BookService.getBookDetails("9780596009205").get();
    book = BookService.getBookDetails("9780596009205").get();
    CacheStats cacheStats = BookService.getCacheStats();
    System.out.println(cacheStats.toString());
  }
}

And the output we would get is:

{"isbn13":"9780596009205","authors":[{"name":"Kathy Sierra"},{"name":"Bert Bates"}],"publisher":"\"O'Reilly Media, Inc.\"","title":"Head First Java","summary":"An interactive guide to the fundamentals of the Java programming language utilizes icons, cartoons, and numerous other visual aids to introduce the features and functions of Java and to teach the principles of designing and writing Java programs.","pageCount":688,"publishedDate":"2005-02-09"}
CacheStats{hitCount=4, missCount=1, loadSuccessCount=1, loadExceptionCount=0, totalLoadTime=3744128770, evictionCount=0}
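The stats line can be turned into a single hit-rate figure: 4 hits out of 5 total requests is 0.8 (Guava's CacheStats also exposes this directly via hitRate()). A small stdlib-only sketch of the arithmetic:

```java
public class HitRateDemo {

    // hit rate = hits / (hits + misses), the same figure CacheStats.hitRate() reports
    static double hitRate(long hitCount, long missCount) {
        return (double) hitCount / (hitCount + missCount);
    }

    public static void main(String[] args) {
        // Figures from the CacheStats output above: 4 hits, 1 miss -> 5 requests in total
        System.out.println("hitRate=" + hitRate(4, 1)); // prints hitRate=0.8
    }
}
```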

This is a very basic usage of Guava cache, which I wrote as I was learning to use it. I have also made use of other Guava APIs, such as Optional, which wraps existent or non-existent (null) values into objects. This code is available on GitHub: https://github.com/sanaulla123/Guava-Cache-Demo. There are concerns, such as how it handles concurrency, that I haven't gone into in detail. But under the hood it uses a segmented concurrent hash map, so that gets are always non-blocking, while the number of concurrent writes is decided by the number of segments (configurable via CacheBuilder.concurrencyLevel()).

Some useful links related to this:
http://guava-libraries.googlecode.com/files/ConcurrentCachingAtGoogle.pdf

Book Review: Murach’s Java Servlets And JSP 3rd Edition

Murach’s Java Servlets and JSP is the ONLY book you need to learn web app development in Java using JSP and servlets. The book covers all the concepts required to build a complete web application in Java. You will find topics covering:
– UI development using HTML and JavaScript
– Building servlets for handling requests
– Using JSP to create UI templates
– Building a data access layer to communicate with the DB

It's a completely hands-on guide; mere reading will be of no help. The book also covers concepts and techniques related to secure programming, as well as some advanced concepts in servlets.

A few among its numerous salient features:
– A completely hands-on guide
– Highly suitable for people who are familiar with the Java language
– Focuses on best practices wherever relevant. For example, the chapters explaining JSP include a guideline not to mix Java code with JSP and to use the JSP Standard Tag Library (JSTL) instead.
– Lots of reusable code snippets, useful if someone is looking to implement a subset of a feature explained.

I would highly recommend this book to:
“Any developer familiar with the Java programming language looking to learn web application development using servlets and JSP.”

One can purchase the latest edition here [though currently only the imported edition is available].

PS: I got a copy of the book in return for the review.