Sunday, 27 October 2013

Running Hadoop locally without installation

If we want to take Hadoop for a test-drive without installing the whole distribution, we can do it quite easily.

First of all, let's create a maven project with the following dependencies:
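A minimal dependency section might look like this (the hadoop-core artifact and version 1.2.1 are assumptions; any older 1.x release fits the Windows caveat below):

```xml
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-core</artifactId>
    <version>1.2.1</version>
</dependency>
```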

There is a known issue with running newer versions on Windows, so an older one is chosen.
Cygwin also needs to be installed when running on Windows.

We will create a job to count the words in files (it's a well-known example taken from the official tutorial).

Our mapper would look like:
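A minimal sketch, following the official WordCount tutorial the post refers to (the class name is an assumption, as the original listing is not shown):

```java
public static class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // split the line into words and emit each word with the value of one
        StringTokenizer tokenizer = new StringTokenizer(value.toString());
        while (tokenizer.hasMoreTokens()) {
            word.set(tokenizer.nextToken());
            context.write(word, ONE);
        }
    }
}
```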

The mapper splits each line of the file into words and passes on each word as the key with a value of one.

Here comes the reducer:
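Again a sketch in the spirit of the official tutorial (the class name is an assumption):

```java
public static class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        // sum all the ones emitted for this word
        int sum = 0;
        for (IntWritable value : values) {
            sum += value.get();
        }
        context.write(key, new IntWritable(sum));
    }
}
```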

The reducer receives all values for a given key and counts them.

All that is left is a main class that will run the job:
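A sketch of the runner, assuming the mapper and reducer classes are called WordCountMapper and WordCountReducer and that input/output paths are passed as program arguments:

```java
public class WordCount {

    public static void main(String[] args) throws Exception {
        Job job = new Job(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordCountReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        job.waitForCompletion(true);
    }
}
```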

We are setting the job's mapper, reducer and classes for the key and value.
Input and output paths are set as well.
You can run it directly and check the output file with the result.
The whole project can be found on github.

Saturday, 21 September 2013

Transaction management with Spring Data JPA

Spring provides an easy way to manage transactions.
Let's see how to make our methods transactional without using any XML configuration.

We will start with Spring Java config for our application:

We have an entity representing a bank account:
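A sketch of how the entity might look (the field names are assumptions):

```java
@Entity
public class Account {

    @Id
    @GeneratedValue
    private Long id;

    private String number;

    private BigDecimal balance;

    // constructors, getters and setters omitted for brevity
}
```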

We also have a repository from Spring Data JPA for Account objects:
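A sketch of such a repository (the findByNumber query method is an assumption added for illustration):

```java
public interface AccountRepository extends JpaRepository<Account, Long> {

    // a derived query built by Spring Data from the method name
    Account findByNumber(String number);
}
```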

The TransferService allows transferring money from one account to another:
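A sketch of how such a service might look (the method and exception names are assumptions; findOne reflects the Spring Data API of that era):

```java
@Service
public class TransferService {

    @Autowired
    private AccountRepository accountRepository;

    @Transactional
    public void transfer(Long toAccountId, Long fromAccountId, BigDecimal amount) {
        // first add the money to the target account...
        Account to = accountRepository.findOne(toAccountId);
        to.setBalance(to.getBalance().add(amount));

        // ...then check the funds; throwing here rolls the whole transaction back
        Account from = accountRepository.findOne(fromAccountId);
        if (from.getBalance().compareTo(amount) < 0) {
            throw new IllegalStateException("Insufficient funds");
        }
        from.setBalance(from.getBalance().subtract(amount));
    }
}
```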

Just for the sake of the example, we add money to one account and, before subtracting from the second one, check whether it has enough funds.
If the method weren't transactional, we would introduce a major bug.
However, the @Transactional annotation makes the method eligible for rollback if an exception is thrown.
It's also important to note that many methods from the repository are transactional with default propagation, so the transaction from our service will be reused.
Let's make sure that it works by writing an integration test:

If we remove the @Transactional annotation, the test will fail.
Be aware that managing transactions with Spring has some traps that are described in this article.
The whole project can be found at github.

Thursday, 29 August 2013

Changing application behavior at runtime with JMX

Sometimes we need to be able to change the behavior of our application without a restart.
JMX, apart from its monitoring capabilities, is a perfect solution for this.
Spring provides great JMX support that will ease our task.

Let's start with a simple service whose behavior we will change at runtime.

DiscountService calculates a discount based on globalDiscount - its value is hardcoded for simplicity; in a more realistic example it would probably be read from a configuration file or database.

First of all, in order to expose methods that manage globalDiscount, we need to add the @ManagedResource annotation to our class and mark the methods with @ManagedOperation.
We could also use @ManagedAttribute if we treated these methods as a simple getter and setter for globalDiscount.

Class with needed methods and annotations would look like:
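A sketch of the managed class (the discount logic and the initial value are assumptions):

```java
@ManagedResource
public class DiscountService {

    private volatile int globalDiscount = 5; // hardcoded for simplicity

    public int calculateDiscountedPrice(int price) {
        return price - price * globalDiscount / 100;
    }

    @ManagedOperation
    public int getGlobalDiscount() {
        return globalDiscount;
    }

    @ManagedOperation
    public void setGlobalDiscount(int globalDiscount) {
        this.globalDiscount = globalDiscount;
    }
}
```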

In the Spring configuration we just need to define the bean for DiscountService and enable exporting MBeans with an MBean server.

We can run the application with:

Now we're ready to manage our service with jconsole:

Our service is exposed locally, but if we want to be able to connect to it remotely, we will need to add the following beans to the Spring configuration:
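A sketch using Spring's RmiRegistryFactoryBean and ConnectorServerFactoryBean (the port, object name and service URL are assumptions):

```xml
<bean id="registry" class="org.springframework.remoting.rmi.RmiRegistryFactoryBean">
    <property name="port" value="1099"/>
</bean>

<bean class="org.springframework.jmx.support.ConnectorServerFactoryBean" depends-on="registry">
    <property name="objectName" value="connector:name=rmi"/>
    <property name="serviceUrl"
              value="service:jmx:rmi://localhost/jndi/rmi://localhost:1099/jmxconnector"/>
</bean>
```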

Service is now exposed via RMI.
We can invoke the exposed methods programmatically, which allows us to write some scripts and manage services without using jconsole.

Let's write an integration test to check that it works correctly.
We will need a Spring config for the test with the RMI client.

The test will increment the value of globalDiscount.
Take a closer look at exposed methods invocation, which is very cumbersome, especially if the method has parameters.

The whole project can be found at github.

Sunday, 21 July 2013

Caching with Spring Cache

From time to time we need to use a cache in our application.
Instead of writing it on our own and reinventing the wheel, we can use Spring Cache.
Its biggest advantage is unobtrusiveness - apart from a few annotations, our code stays intact.

Let's see how to do it.
At first, let's have a look at a fake service that is the reason for using a cache (in real life it might be a database call, a web service, etc.).

We will cache the invocations of isVipClient(...).
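A sketch of the slow service (the sleep and the "VIP" prefix check are assumptions standing in for the expensive call):

```java
public class SlowService {

    public boolean isVipClient(String clientId) {
        simulateSlowCall();                 // stands in for a database or web service call
        return clientId.startsWith("VIP");  // fake logic for illustration
    }

    private void simulateSlowCall() {
        try {
            Thread.sleep(3000);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```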

Let's start with adding dependencies to our project:

We'd like to use xml-free spring config, so we will define our beans in java:
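A sketch of such a configuration (the cache name "vipCache" is an assumption):

```java
@Configuration
@EnableCaching
public class CacheConfig {

    @Bean
    public SlowService slowService() {
        return new SlowService();
    }

    @Bean
    public CacheManager cacheManager() {
        SimpleCacheManager cacheManager = new SimpleCacheManager();
        // a cache backed by a ConcurrentHashMap
        cacheManager.setCaches(Arrays.asList(new ConcurrentMapCache("vipCache")));
        return cacheManager;
    }
}
```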

We have our SlowService defined as a bean. We also have the cacheManager along with a cache name and its implementation - in our case it's based on ConcurrentHashMap. We can't forget about enabling caching via annotation.

The only thing to do is to add annotation @Cacheable with a cache name to our method:
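The cache name must match the one defined in the cache manager ("vipCache" here is an assumption):

```java
@Cacheable("vipCache")
public boolean isVipClient(String clientId) {
    simulateSlowCall();
    return clientId.startsWith("VIP");
}
```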

How to test our cache?
One way to do it is to check if the double invocation of our method with the same parameter will actually execute the method only once.
We will use springockito to inject a spy into our class and verify its behaviour.
We need to start with a spring test context:

Basically, the context will read the configuration from the Java class and replace the slowService bean with a spy. It would be nicer to do it with an annotation, but at the time of this writing @WrapWithSpy doesn't work.

Here's our test:

This was a very basic example; Spring Cache offers more advanced features like cache eviction and update.

The whole project can be found at github.

Saturday, 29 June 2013

Handling command line arguments with args4j

From time to time, we need to write a tool that is using command line arguments as input.
Building an interface similar to Unix command line tools is not a trivial task, but with args4j it becomes quite easy.

We're going to write a fake tool called FileCompressor whose invocation would look like:

where -i is the input file name, -o the output file name and -p the priority of the process.

Let's start with defining the dependencies of our project:

We'll need args4j, the rest is for testing.

Our fake FileCompressor requires a configuration that will be populated by args4j.

The configuration object has fields that will be mapped to command line arguments with args4j annotations.
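A sketch of such a configuration class (the field names are assumptions; the options match the invocation described above):

```java
public class FileCompressorConfig {

    @Option(name = "-i", usage = "input file name", required = true)
    private String inputFileName;

    @Option(name = "-o", usage = "output file name", required = true)
    private String outputFileName;

    @Option(name = "-p", usage = "priority of the process")
    private int priority;

    // getters omitted for brevity
}
```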

And here is the class that will be invoked:
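A sketch of the entry point (the FileCompressor collaborator and the printUsage helper are assumptions):

```java
public class FileCompressorRunner {

    public static void main(String[] args) {
        FileCompressorConfig config = new FileCompressorConfig();
        CmdLineParser parser = new CmdLineParser(config);
        try {
            parser.parseArgument(args);
            new FileCompressor().compress(config);
        } catch (CmdLineException e) {
            System.err.println(e.getMessage());
            printUsage(parser);
        }
    }

    private static void printUsage(CmdLineParser parser) {
        // args4j generates the usage text from the @Option annotations
        parser.printUsage(System.err);
    }
}
```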

First of all, the arguments are parsed and the configuration object is populated with them.
If there is a parsing problem, the exception is printed along with the usage (args4j can automatically generate the usage of our application based on the annotations used).

Testing our application requires some effort.
To begin with, we would like to mock the FileCompressor completely. There is no setter for it, so we'll use Spring's ReflectionTestUtils together with Mockito.

We've also prepared some test data.

Let's check if the configuration object is populated properly and if the collaborator was invoked:

We would also like to know if the usage was printed when invalid arguments were passed.
To do that we need to use a spy, as we want to keep the original behavior of populating arguments, but we want to verify that the printUsage method was invoked.

The last thing to check is if the exception was printed to the output stream when invalid arguments were passed.
Similarly we need to use a spy.

The whole project can be found at github.

Sunday, 19 May 2013

Testing JavaScript with Jasmine

When developing a web application, sooner or later we need to deal with JavaScript. To keep high code quality we must write unit tests.
Jasmine is a nice testing framework that allows us to write tests in a BDD manner.
Let's see how to use it in our project.

First of all, we need to incorporate it into our build. There is a maven plugin that will let us execute JavaScript tests within the test build phase. We will add it to our pom.xml:
Apart from executing tests within the build, we can also run them during development.
The maven goal jasmine:bdd starts a Jetty server, and at http://localhost:8234 we can see the results of executing the test fixtures.
Reloading the page will execute the latest version of our code.

Let's see it in action. We have a simple JavaScript function residing in src/main/javascript/simple.js that increments the number passed as an argument.
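A minimal sketch of simple.js, assuming the function is called increment and throws for non-numeric input (both the name and the border-case behaviour are assumptions):

```javascript
// increments the number passed as an argument
function increment(number) {
    if (typeof number !== 'number') {
        throw new Error('Not a number');
    }
    return number + 1;
}
```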

The test (in Jasmine called spec) is located in src/test/javascript/simple_spec.js:
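A sketch of the spec, assuming the function under test is called increment:

```javascript
describe('increment', function () {

    it('is defined', function () {
        expect(increment).toBeDefined();
    });

    it('increments a number by one', function () {
        expect(increment(1)).toBe(2);
    });

    it('throws for non-numeric input', function () {
        expect(function () { increment('a'); }).toThrow();
    });
});
```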

A test begins with a call to the global Jasmine function describe. 
Then the specific test cases are defined by calling function it.
We're testing the sunny-day scenario and a border case that throws an exception.
At the beginning, as a sanity check, we test whether the function is defined.
Jasmine has a lot of advanced features; we can even use mocks, expectations and argument matchers.

Nothing stops us from writing tests first and going through red-green-refactor cycle.
Let's write a function that will count the number of specific elements on our page.

The behaviour of our component is defined by the following spec:

Before testing the case with one element, we add it to the document. In a more complex case we could load the HTML content.

To implement it, we'll use jQuery (we need to add it to src/main/javascript):
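A sketch of the counting function, assuming it is called countElements (the real name is not shown in the post):

```javascript
function countElements(selector) {
    // delegate to jQuery: the length of the matched set is the element count
    return $(selector).length;
}
```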

After executing mvn test we'll see a nice output:

The whole project can be found at github.

Thursday, 25 April 2013

Testing Spring Integration

Adding Spring Integration to our project (apart from many advantages) brings drawbacks as well. One of them is more difficult testing.

Let's try to test the flow from MessageRouter to Persister via JSONHandler (please refer to the previous post).

First of all, it won't be a unit test but an integration test, as we need to start up the spring context and test the flow between components.

Let's prepare spring xml config:
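A sketch of such a test config, replacing real beans with Mockito mocks created via factory methods (the bean ids, package and imported resource name are assumptions):

```xml
<import resource="classpath:application-context.xml"/>

<!-- override real beans with Mockito mocks -->
<bean id="jsonHandler" class="org.mockito.Mockito" factory-method="mock">
    <constructor-arg value="pl.mjedynak.JSONHandler"/>
</bean>
<bean id="persister" class="org.mockito.Mockito" factory-method="mock">
    <constructor-arg value="pl.mjedynak.Persister"/>
</bean>
```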

We're importing the application config and overriding beans with mockito mocks.

The test would look like:

Spring injects all the required components. We mock the behaviour, send a message to the channel and then verify the interactions. The mocks need to be reset as well if we want to have more test methods.

By looking at the source code we don't see which object is mocked. To solve this problem (and simplify the code as well) we can use springockito.
Then in our config file we only need to import the application config without overriding any beans.

The overriding part is done in the test by using annotations:

To use springockito we need to change the context loader to SpringockitoContextLoader. 
The @ReplaceWithMock annotation is self-descriptive.
The context is dirtied after execution of each test method which is basically equivalent to resetting the mocks.
The whole project can be found at github.

Wednesday, 20 March 2013

Spring Integration vs plain Java code

Adding frameworks to our system may solve some problems and cause too much complexity at the same time.

Let's imagine a simple flow:

We have a MessageRouter object that routes a received message to either XMLHandler or JSONHandler. The processed messages from both handlers are then passed to the Persister, which stores them.

Modelling it is not that difficult. Here's our MessageRouter class:
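A condensed, self-contained sketch of the whole plain-Java flow (class and method names are assumptions, since the original listings are not shown) - note how each class holds a direct reference to the next one:

```java
import java.util.ArrayList;
import java.util.List;

// stores every message it receives
class Persister {
    final List<String> store = new ArrayList<>();
    void persist(String message) { store.add(message); }
}

class XMLHandler {
    private final Persister persister;
    XMLHandler(Persister persister) { this.persister = persister; }
    void handle(String message) { persister.persist("XML:" + message); }
}

class JSONHandler {
    private final Persister persister;
    JSONHandler(Persister persister) { this.persister = persister; }
    void handle(String message) { persister.persist("JSON:" + message); }
}

class MessageRouter {
    private final XMLHandler xmlHandler;
    private final JSONHandler jsonHandler;

    MessageRouter(XMLHandler xmlHandler, JSONHandler jsonHandler) {
        this.xmlHandler = xmlHandler;
        this.jsonHandler = jsonHandler;
    }

    // route by inspecting the payload: XML starts with '<', otherwise assume JSON
    void route(String message) {
        if (message.startsWith("<")) {
            xmlHandler.handle(message);
        } else {
            jsonHandler.handle(message);
        }
    }
}
```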

And finally the persister:

We can see it in action by running the following class:
Everything seems fine here, but we have a problem - tight coupling.
Message router knows about handlers and handlers know about persister.
We would gain loose coupling and high cohesion if they weren't aware of each other, which would automatically make them concentrate on one task only.

Spring Integration can help us achieve this.
It is a separate module of Spring that enables lightweight messaging within the application and supports Enterprise Integration Patterns.

All the arrows from the picture above will be represented as channels.

The XML Spring context configuration for the channels looks pretty straightforward:
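A sketch of the channel definitions using the Spring Integration namespace (the channel names are assumptions):

```xml
<int:channel id="inputChannel"/>
<int:channel id="xmlChannel"/>
<int:channel id="jsonChannel"/>
<int:channel id="persisterChannel"/>
```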

Actually, it's not even required, as Spring can create them by default.

Apart from the channels, our MessageRouter's only role is to return the name of the channel to which the message will be passed.
The handlers and the persister also need to become Service Activators (their methods will be invoked when a message goes through the channel).
The config for those:
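A sketch of the router and service activator definitions (the bean ids, channel names and method names are assumptions):

```xml
<int:router input-channel="inputChannel" ref="messageRouter" method="route"/>

<int:service-activator input-channel="xmlChannel" ref="xmlHandler"
                       method="handle" output-channel="persisterChannel"/>
<int:service-activator input-channel="jsonChannel" ref="jsonHandler"
                       method="handle" output-channel="persisterChannel"/>
<int:service-activator input-channel="persisterChannel" ref="persister" method="persist"/>
```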
To see it in action we can run the following class:
Notice that the code is concise and simple.
The components do not depend on each other.
What is more, if we wanted to modify the flow and move Persister component in front of the MessageRouter, we would need to change the xml config to:

Changing the flow in the first version that is using plain Java code would require much more modifications.

Nevertheless, we have increased the complexity of our application. Now we depend on a framework and need to maintain additional configuration in an XML file.

Another big disadvantage is testing. We could easily unit test the previous code using mocks. Now we need to test the Java code and Spring Integration configuration as well, which is not that simple.
I'll show how to do it in the next post.

Friday, 15 February 2013

MarkLogic Java client

MarkLogic is a NoSQL document database that allows us to handle XML efficiently.

Let's take a look at how to set up a MarkLogic database instance on a local machine and write a simple application that will perform CRUD and search operations on XML documents.

First of all, we will need to download the MarkLogic server (an account is required).
Installation and starting procedures are described here - they're pretty straightforward.
Once the server is started, we need to create a new database with a REST API instance and a user with write access - follow this link.
REST is used by the Java client as the communication protocol, but we can also use it manually in our browser.

Once the database is created, we can start writing the client code.

Let's start with setting up required dependencies:
To use the MarkLogic Java client we need to specify the MarkLogic maven repository. We'll also use the xml-matchers library to compare created XML documents.

Here's an example of an XML document that will represent a person:
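For example, such a document could be shaped like this (the element names are assumptions):

```xml
<person>
    <firstName>John</firstName>
    <lastName>Smith</lastName>
</person>
```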
Let's define an interface that will allow some simple CRUD and searching operations:
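A sketch of such an interface (the name and method signatures are assumptions; documents are passed around as raw XML strings):

```java
public interface PersonDocumentRepository {

    void create(String uri, String personXml);

    String read(String uri);

    void update(String uri, String personXml);

    void delete(String uri);

    List<String> findByLastName(String lastName);
}
```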
The sample implementation could look like:
In order to perform CRUD operations we need a DocumentManager object (in our case XMLDocumentManager, as we're handling XML). It is a thread-safe object (it can be shared across multiple threads) and its usage is quite intuitive. Each operation needs a specific handle object - as our interface declares String, we'll use StringHandle, which is populated with the result by the manager.
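A fragment of such an implementation, limited to the CRUD part (the class name is an assumption; the search part additionally needs a QueryManager, as described below):

```java
public class MarkLogicPersonDocumentRepository {

    private final XMLDocumentManager documentManager;

    public MarkLogicPersonDocumentRepository(XMLDocumentManager documentManager) {
        this.documentManager = documentManager;
    }

    public void create(String uri, String personXml) {
        documentManager.write(uri, new StringHandle(personXml));
    }

    public String read(String uri) {
        // the manager populates the given handle with the document content
        return documentManager.read(uri, new StringHandle()).get();
    }

    public void delete(String uri) {
        documentManager.delete(uri);
    }
}
```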

To do query operations, a QueryManager is required. There are many types of queries; we'll use searching by element value.
It's a little bit more complicated than simple CRUD operations - a SearchHandle object is initially populated by running the query on the query manager.
Then we iterate over each of the SearchHandle's results, represented by MatchDocumentSummary objects, and retrieve their URIs, which are given to the DocumentManager that reads the full document.
Please note that the number of returned documents has been limited to 10.

The integration test (it requires a running MarkLogic server):
In the setUp() method, DatabaseClientFactory creates a DatabaseClient based on the given credentials (they need to be the same as those used when setting up the database).
Once we have the client we can create managers needed by the implementation.

One thing to note: when the manager cannot find a document, it throws a ResourceNotFoundException.

The whole project can be found at github.

Wednesday, 6 February 2013

Spring Data JPA sample project

In the previous post I showed how to set up a sample project with JPA and Hibernate.
Even though it wasn't difficult, there was one main disadvantage - we needed to do a lot of coding around our DAO objects even if we wanted only simple operations.
Spring Data JPA helps us reduce data access coding.

Let's start with defining dependencies:

Compared to the previous project, there are more dependencies because of Spring. spring-test is needed to allow our test to use the spring context. And this time we're going to use HSQLDB.

persistence.xml is much smaller because the persistence configuration will be defined in the spring context.

Please note that Hibernate is still the persistence provider. The Person class is exactly the same as before.

The context is defined as:

We need to specify the dataSource, entityManagerFactory and transactionManager beans.
The pl.mjedynak package will be scanned by Spring for autowiring.
Spring Data JPA introduces the concept of a repository, which is a higher level of abstraction than a DAO.
In our context we define a package with the repositories.

The only thing we need to do to be able to manage our Person class is to create an interface extending CrudRepository - it gives us basic operations like save, find, delete, etc.
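In its simplest form the interface can stay empty (Person's id type is assumed to be Long):

```java
public interface PersonRepository extends CrudRepository<Person, Long> {
}
```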

We can of course expand it with more sophisticated methods if we want.

The integration test is almost the same as before, except that it needs to use the spring context.

The whole project can be found at:

Friday, 25 January 2013

Hibernate JPA sample project

Here's a simple example of a project that uses JPA with Hibernate as default implementation.

Let's start with the required dependencies:

We're going to use the embedded Derby database.

We will also need the persistence.xml placed in the META-INF directory on our classpath:
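A sketch of how it might look (the persistence unit name, package and JDBC URL are assumptions):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
    <persistence-unit name="personPersistenceUnit" transaction-type="RESOURCE_LOCAL">
        <provider>org.hibernate.ejb.HibernatePersistence</provider>
        <class>pl.mjedynak.Person</class>
        <properties>
            <property name="javax.persistence.jdbc.driver" value="org.apache.derby.jdbc.EmbeddedDriver"/>
            <property name="javax.persistence.jdbc.url" value="jdbc:derby:memory:testdb;create=true"/>
            <property name="hibernate.hbm2ddl.auto" value="create-drop"/>
        </properties>
    </persistence-unit>
</persistence>
```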

We're setting Hibernate as our persistence provider and Derby as the JDBC driver.
Entity class Person is also specified as belonging to this persistent unit.

It's defined as:
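A sketch of the entity (the field names are assumptions):

```java
@Entity
public class Person {

    @Id
    @GeneratedValue
    private Long id;

    private String name;

    // constructors, getters and setters omitted for brevity
}
```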

In order to manage Person we need some kind of DAO. Let's define an interface PersonDao:

For simplicity, it has only methods for adding a person and finding all persons.

The sample implementation could look like:

And the most important part - an integration test that checks if everything is glued together correctly:

The whole project can be found at:

To sum up:
it's quite easy to create a JPA with Hibernate project; however, every entity class needs its own DAO (unless we use a generic one).
In the next post I'll take a look at a Spring Data JPA project that solves that impediment.