When working with legacy code, we should write unit tests before making any changes, so that we don't break existing functionality. However, writing tests for legacy code is rarely easy.
Let's take a look at a simple example that illustrates the challenges we may face:
We have a class that calculates a discount for a customer based on the customer's name and the amount of product being bought.
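The class could look something like this (a sketch: the discount percentages and the stubbed CustomerService body are my inventions so that the example compiles):

```java
// Stands in for the original service; in the real legacy code the
// static method below performs a database lookup.
class CustomerService {
    static boolean isImportantCustomer(String name) {
        return "Alice".equals(name); // pretend the database says so
    }
}

public class DiscountCalculator {

    // 10% for important customers, 5% for bulk orders, otherwise nothing
    public int discountPercentage(String customerName, int amount) {
        if (CustomerService.isImportantCustomer(customerName)) {
            return 10;
        }
        return amount > 100 ? 5 : 0;
    }
}
```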
The problem hides in the static method call CustomerService.isImportantCustomer(). Old code tends to be full of static methods; let's say that the one in our example calls the database. We need to mock it in order to write a proper unit test.
First of all, we extract the static call into a separate method:
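After the extraction the class could look like this (still a sketch with hypothetical names; the extracted method is deliberately package-private, so that a test in the same package can override it):

```java
// Stub standing in for the real service, which would hit the database.
class CustomerService {
    static boolean isImportantCustomer(String name) {
        return "Alice".equals(name);
    }
}

public class DiscountCalculator {

    public int discountPercentage(String customerName, int amount) {
        if (isImportantCustomer(customerName)) {
            return 10;
        }
        return amount > 100 ? 5 : 0;
    }

    // Extracted wrapper around the static call - a test can now override it.
    boolean isImportantCustomer(String name) {
        return CustomerService.isImportantCustomer(name);
    }
}
```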
Refactoring with an IDE (extracting methods, renaming, etc.) is considered safe. Once we've done that, we can use a nice Mockito feature: spies.
If we declare the tested class as a spy, we can stub its methods the same way as with standard mocks. The test for our class could look like this:
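A sketch of such a test, assuming JUnit 4 and Mockito on the classpath (the class and method names follow the hypothetical discount example):

```java
import org.junit.Test;
import org.mockito.Mockito;

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.doReturn;

public class DiscountCalculatorTest {

    @Test
    public void shouldGiveDiscountToImportantCustomer() {
        // a spy wraps a real instance, so unstubbed methods keep their behaviour
        DiscountCalculator calculator = Mockito.spy(new DiscountCalculator());

        // doReturn().when() does not invoke the real method,
        // so the database is never touched
        doReturn(true).when(calculator).isImportantCustomer("John");

        assertEquals(10, calculator.discountPercentage("John", 1));
    }
}
```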
The Mockito API for spies is similar to the one for mocks (the syntax is different for overriding void methods).
Monday, 4 June 2012
Sunday, 13 May 2012
Testing getters and setters with openpojo
Testing getters and setters is a somewhat controversial topic. However, if we decide to do it - or are forced to - we can ease the task with the openpojo library.
Standard getters and setters can usually be generated by your IDE, but testing them is manual work: it takes time and is boring. So if the code was generated, why can't the testing be automated?
openpojo doesn't generate unit tests, but it lets you test your POJOs with very little effort.
Let's take a look at an example. We have a Person class which is a POJO with generated getters and setters.
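The Person class might look like this (the fields are hypothetical):

```java
public class Person {

    private String name;
    private int age;

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public int getAge() {
        return age;
    }

    public void setAge(int age) {
        this.age = age;
    }
}
```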
To test it we need to create just one class:
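For example (a sketch assuming JUnit 4 and openpojo on the classpath; the package name is a placeholder, and the exact builder API may differ between openpojo versions):

```java
import com.openpojo.validation.Validator;
import com.openpojo.validation.ValidatorBuilder;
import com.openpojo.validation.rule.impl.GetterMustExistRule;
import com.openpojo.validation.rule.impl.SetterMustExistRule;
import com.openpojo.validation.test.impl.GetterTester;
import com.openpojo.validation.test.impl.SetterTester;
import org.junit.Test;

public class PojoTest {

    // the package containing our POJOs - adjust to your project
    private static final String POJO_PACKAGE = "com.example.pojo";

    @Test
    public void shouldHaveWellBehavedGettersAndSetters() {
        Validator validator = ValidatorBuilder.create()
                .with(new GetterMustExistRule())  // rule: every field has a getter
                .with(new SetterMustExistRule())  // rule: every field has a setter
                .with(new GetterTester())         // test: getter returns the field value
                .with(new SetterTester())         // test: setter assigns the field value
                .build();
        validator.validate(POJO_PACKAGE);
    }
}
```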
Our class defines which rules openpojo will apply and which tests it will execute. We also specify which package to test - if we have many classes, we can test them all in one go. In our case we're only checking that getters and setters exist and that they behave in a conventional way.
Running the class with any coverage tool will show us that the get and set methods are executed.
There are more rules we can apply (such as checking that there are no public fields, no primitives, etc.), and we can write our own rules as well.
Thursday, 19 April 2012
JMS with Spring configured in Java
Handling JMS using the standard JMS API is a tedious task - you need to write a lot of boilerplate code.
Spring provides an abstraction layer that eases this pain.
In the following example I'm using ActiveMQ (which needs to be started before running any of the code below) but Spring works with other major JMS implementations as well.
In order to send a JMS message we can use Spring's JmsTemplate. Here's our sender that uses it:
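A sketch of the sender (Spring JMS on the classpath; the class name is an assumption):

```java
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.Session;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.jms.core.MessageCreator;
import org.springframework.stereotype.Component;

@Component
public class Sender {

    @Autowired
    private JmsTemplate jmsTemplate;

    public void send(final String text) {
        // the template handles connections, sessions and producers for us
        jmsTemplate.send(new MessageCreator() {
            public Message createMessage(Session session) throws JMSException {
                return session.createTextMessage(text);
            }
        });
    }
}
```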
Spring beans need to be configured in a context. Let's define the Spring configuration in Java code:
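A sketch of such a configuration class (the broker URL and queue name are assumptions):

```java
import javax.jms.ConnectionFactory;
import javax.jms.Destination;

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.command.ActiveMQQueue;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.core.JmsTemplate;

@Configuration
public class JmsConfig {

    @Bean
    public ConnectionFactory connectionFactory() {
        return new ActiveMQConnectionFactory("tcp://localhost:61616");
    }

    @Bean
    public Destination destination() {
        return new ActiveMQQueue("example.queue");
    }

    @Bean
    public JmsTemplate jmsTemplate() {
        JmsTemplate jmsTemplate = new JmsTemplate(connectionFactory());
        jmsTemplate.setDefaultDestination(destination());
        return jmsTemplate;
    }
}
```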
The JMS template must have a connection factory with a broker URL, and a destination (in our case a queue).
The XML configuration equivalent would look like:
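Something like this (the same hypothetical broker URL and queue name; these definitions go inside the standard <beans> root element):

```xml
<bean id="connectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
    <property name="brokerURL" value="tcp://localhost:61616"/>
</bean>

<bean id="destination" class="org.apache.activemq.command.ActiveMQQueue">
    <constructor-arg value="example.queue"/>
</bean>

<bean id="jmsTemplate" class="org.springframework.jms.core.JmsTemplate">
    <property name="connectionFactory" ref="connectionFactory"/>
    <property name="defaultDestination" ref="destination"/>
</bean>
```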
Java configuration provides not only compile-time checking but in my opinion it is more straightforward and easier to control.
Unfortunately we can't escape XML completely: we need to turn on component scanning so that Spring reads the configuration from the classes in our package:
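A minimal context file that turns on component scanning could look like this (the base package is a placeholder):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:context="http://www.springframework.org/schema/context"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd
           http://www.springframework.org/schema/context
           http://www.springframework.org/schema/context/spring-context.xsd">

    <context:component-scan base-package="com.example.jms"/>

</beans>
```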
The example code that would run the sender:
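A sketch of the runner (the context file name and message text are assumptions):

```java
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class SenderRunner {

    public static void main(String[] args) {
        ApplicationContext context = new ClassPathXmlApplicationContext("context.xml");
        Sender sender = context.getBean(Sender.class);
        sender.send("Hello JMS!");
    }
}
```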
Apart from the sender we can also create a receiver. One solution is to make it asynchronous by implementing the MessageListener interface.
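A sketch of such a receiver:

```java
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

import org.springframework.stereotype.Component;

@Component
public class Receiver implements MessageListener {

    // called asynchronously by the listener container for each message
    public void onMessage(Message message) {
        try {
            System.out.println("Received: " + ((TextMessage) message).getText());
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    }
}
```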
The Spring config looks as follows:
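The listener container could be wired up like this (a sketch; the queue name is the same hypothetical one as for the sender):

```java
import javax.jms.ConnectionFactory;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

@Configuration
public class ReceiverConfig {

    @Bean
    public DefaultMessageListenerContainer listenerContainer(
            ConnectionFactory connectionFactory, Receiver receiver) {
        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);
        container.setDestinationName("example.queue");
        container.setMessageListener(receiver);
        return container;
    }
}
```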
Our receiver is set in the message listener container with the connection factory and the destination.
I've omitted the receiver's XML configuration - it's the same as the sender's - and the code that runs it is analogous.
Here's the list of Maven dependencies:
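Something along these lines (the version numbers are examples from around the time of writing - use whatever is current for you):

```xml
<dependencies>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-jms</artifactId>
        <version>3.1.1.RELEASE</version>
    </dependency>
    <dependency>
        <groupId>org.apache.activemq</groupId>
        <artifactId>activemq-core</artifactId>
        <version>5.5.1</version>
    </dependency>
</dependencies>
```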
Monday, 2 April 2012
Testing asynchronous calls with awaitility
Testing asynchronous systems is not an easy task. Let's take a simple example:
We have a class that creates a file with a given name. The interesting part is that it does so asynchronously, using a thread pool, and returns to the caller immediately.
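The class could look like this (a sketch; the shutdown helper is my addition so the examples can wait for the pool cleanly):

```java
import java.io.File;
import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class FileCreator {

    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    // returns immediately - the file is created in a background thread
    public void createFile(final String fileName) {
        executor.submit(new Runnable() {
            public void run() {
                try {
                    new File(fileName).createNewFile();
                } catch (IOException e) {
                    throw new RuntimeException(e);
                }
            }
        });
    }

    // helper for the examples: block until pending tasks finish
    public void shutdownAndWait() {
        executor.shutdown();
        try {
            executor.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```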
Let's try to create a test for it. Our first attempt could look like this:
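A sketch of that first attempt, assuming JUnit 4 (the file name is arbitrary):

```java
import java.io.File;

import org.junit.Test;

import static org.junit.Assert.assertTrue;

public class FileCreatorTest {

    @Test
    public void shouldCreateFile() throws Exception {
        new FileCreator().createFile("test.txt");
        Thread.sleep(2000); // wait and hope the background thread has finished
        assertTrue(new File("test.txt").exists());
    }
}
```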
The horrible thing about it is the Thread.sleep() invocation. Tests should be fast; making them wait unnecessarily is a very poor solution. And what if the test sometimes fails on overloaded hardware? Are we going to sleep even longer?
To eliminate unneeded waiting, we may come up with the concept of a validator:
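One possible validator simply polls the condition until a timeout (a sketch):

```java
import java.io.File;

public class FileValidator {

    // polls until the file exists or the timeout elapses
    public static boolean waitForFile(File file, long timeoutMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (file.exists()) {
                return true;
            }
            try {
                Thread.sleep(50); // poll interval
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return false;
    }
}
```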
We no longer sleep for a long time, but the code has been significantly polluted. Of course we could refactor the validator and make it more reusable, but why reinvent the wheel? There is a nice, small library - awaitility - that does the same for us.
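With awaitility on the classpath, the test collapses to something like this (a sketch; the package of await() differs between awaitility versions):

```java
import java.io.File;
import java.util.concurrent.Callable;
import java.util.concurrent.TimeUnit;

import org.junit.Test;

import static com.jayway.awaitility.Awaitility.await;

public class FileCreatorAwaitilityTest {

    @Test
    public void shouldCreateFile() {
        new FileCreator().createFile("test.txt");
        // polls until the condition is true, or fails after the timeout
        await().atMost(5, TimeUnit.SECONDS).until(new Callable<Boolean>() {
            public Boolean call() {
                return new File("test.txt").exists();
            }
        });
    }
}
```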
In a very expressive way, we achieve the same result. Timeout, polling delay and polling interval are of course configurable.
Friday, 23 March 2012
logback - successor of log4j
The good old log4j seems to be the standard logging framework for Java applications, despite some serious disadvantages such as boilerplate configuration, a lack of good documentation and an overcomplicated architecture.
The authors of log4j set off on another journey and created its successor, logback, which addresses the old problems and adds a lot of enhancements.
Configuration is now more concise (it can even be written in Groovy) and well documented.
The slf4j API is used natively, so the underlying implementation can be swapped easily.
The issue with many instances of RollingFileAppender writing to the same file was also resolved.
In order to add logback to your project you need to add two dependencies:
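For example (the version number is from around the time of writing - use the current release):

```xml
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-core</artifactId>
    <version>1.0.3</version>
</dependency>
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.0.3</version>
</dependency>
```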
You're basically ready to go, because a default configuration is applied when no other is found.
When the following class is run:
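A sketch of such a class, logging through the slf4j API (the class name is my choice):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LogbackDemo {

    private static final Logger logger = LoggerFactory.getLogger(LogbackDemo.class);

    public static void main(String[] args) {
        logger.debug("debug message");
        logger.info("info message");
        logger.warn("warn message");
        logger.error("error message");
    }
}
```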
it will print something like this:
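With the default configuration the output lands on the console, along these lines (exact timestamps and logger-name formatting will differ):

```
14:05:12.345 [main] DEBUG LogbackDemo - debug message
14:05:12.347 [main] INFO  LogbackDemo - info message
14:05:12.347 [main] WARN  LogbackDemo - warn message
14:05:12.348 [main] ERROR LogbackDemo - error message
```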
But the coolest feature is automatic reloading of the configuration file.
When we add this example configuration (saved as logback.xml) to the classpath:
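A minimal configuration with reloading enabled could look like this (the pattern mirrors logback's defaults; scanPeriod controls the reload interval):

```xml
<configuration scan="true" scanPeriod="5 seconds">

    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <root level="debug">
        <appender-ref ref="STDOUT"/>
    </root>

</configuration>
```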
we can change it on the fly, and logback will apply the changes automatically (at a configured interval, in our case 5 seconds) without restarting the application.
Sunday, 18 March 2012
Getting result from thread's execution with Future
To make things go faster, we parallelize our computations. But what if we need the result of a thread's execution?
Let's say we have a service that buys some product. It needs to fetch the price and quantity of the product. Fetching the price usually takes longer, so we delegate this task to a thread, while we are dealing with quantity.
To keep things simple, our PriceChecker class will just simulate doing something meaningful:
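A sketch of the simulation (the price and the delay are made up):

```java
import java.math.BigDecimal;

public class PriceChecker {

    public BigDecimal checkPrice() {
        try {
            Thread.sleep(1000); // simulate a slow price lookup
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return new BigDecimal("99.95");
    }
}
```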
Now it would be good to somehow get the result of the checkPrice() invocation. Runnable's run() is a void method, so we would have to do something like this:
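A sketch of that workaround (the price lookup is simulated inline so the snippet is self-contained):

```java
import java.math.BigDecimal;

public class RunnableApproach {

    // has to be a field (not a local variable), so that run() can assign it
    private volatile BigDecimal price;

    public BigDecimal fetchPrice() {
        Thread checker = new Thread(new Runnable() {
            public void run() {
                try {
                    Thread.sleep(200); // simulate the slow price lookup
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                price = new BigDecimal("99.95");
            }
        });
        checker.start();

        // poll until the field is set - this is the ugly part
        while (price == null) {
            try {
                Thread.sleep(10);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        return price;
    }
}
```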
This approach has a lot of drawbacks. We have to check in a loop whether the price has already been set. What's more, the price cannot be a final local variable - it has to be a field instead.
To deal with this kind of problem, the Future interface should be used. Basically, it lets us get the result of a thread's execution. Let's take a look at the actual usage in the context of our example:
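A sketch of the Future-based version (the slow lookup is again simulated inline):

```java
import java.math.BigDecimal;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class FutureApproach {

    public static BigDecimal fetchPrice() {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        // Callable, unlike Runnable, returns a result and may throw
        Future<BigDecimal> futurePrice = executor.submit(new Callable<BigDecimal>() {
            public BigDecimal call() throws Exception {
                Thread.sleep(200); // simulate the slow price lookup
                return new BigDecimal("99.95");
            }
        });
        try {
            // get() blocks; this overload gives up after the timeout
            return futurePrice.get(5, TimeUnit.SECONDS);
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            executor.shutdown();
        }
    }
}
```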
First of all, we're using Callable, which is similar to Runnable but capable of returning a result. Notice that it can also throw an exception, while run() cannot. When we submit the callable to an executor service, we get back a Future object. Its get() method blocks until the computation is finished. If we want to limit how long we wait, there is an overloaded version of get() that takes a timeout as a parameter.
The runner for both cases is pretty straightforward:
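For the Future variant it could be as simple as this (the quantity is made up, and the price lookup is simulated inline to keep the snippet self-contained):

```java
import java.math.BigDecimal;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class Runner {

    static BigDecimal totalCost() {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        Future<BigDecimal> price = executor.submit(new Callable<BigDecimal>() {
            public BigDecimal call() throws Exception {
                Thread.sleep(200); // stands in for PriceChecker.checkPrice()
                return new BigDecimal("99.95");
            }
        });
        try {
            int quantity = 5; // fetched while the price is being computed
            return price.get().multiply(new BigDecimal(quantity));
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            executor.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println("Total: " + totalCost());
    }
}
```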
Saturday, 10 March 2012
Handling read and write operations on shared object
We often come across the following problem: in a multithreaded environment, a shared object has frequent read operations while writes are rare. If we could use a concurrent collection, that would be the best option in most cases - but sometimes we can't.
Let's imagine that this shared object comes from a 3rd-party library. In our example it's the Book class that I used in previous posts.
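For readers who missed those posts, the Book class is roughly this (a reconstruction, not the exact original):

```java
public class Book {

    private final String title;
    private int numberOfPages;

    public Book(String title, int numberOfPages) {
        this.title = title;
        this.numberOfPages = numberOfPages;
    }

    public int getNumberOfPages() {
        return numberOfPages;
    }

    public void setNumberOfPages(int numberOfPages) {
        this.numberOfPages = numberOfPages;
    }

    @Override
    public String toString() {
        return title + " (" + numberOfPages + " pages)";
    }
}
```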
Most of the time we want to show information about the book, but occasionally the author modifies the content, and the number of pages needs to be changed.
Using synchronized on the methods that read/write the book would be a poor solution - readers would needlessly block one another.
Instead, we can use a ReadWriteLock, which lets us differentiate between read and write operations. Reads are concurrent - all reading threads can access the object at the same time. Only writes are exclusive and block other threads.
Everything we do after acquiring a lock must be put in a try block, so that unlocking happens even if an exception is thrown.
Let's see it in action:
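A self-contained sketch (the book data is inlined so the snippet compiles on its own; note the try/finally around every locked section):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadWriteLockDemo {

    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    private int pages = 300;

    String readBook() {
        lock.readLock().lock(); // many readers may hold this at once
        try {
            System.out.println(Thread.currentThread().getName() + " is reading");
            return "Some Book (" + pages + " pages)";
        } finally {
            lock.readLock().unlock();
        }
    }

    void updatePages(int newPages) {
        lock.writeLock().lock(); // exclusive - blocks readers and writers
        try {
            System.out.println(Thread.currentThread().getName() + " is writing");
            pages = newPages;
            System.out.println("update successful");
        } finally {
            lock.writeLock().unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        final ReadWriteLockDemo demo = new ReadWriteLockDemo();
        ExecutorService pool = Executors.newFixedThreadPool(10);
        for (int i = 0; i < 100; i++) {
            final int task = i;
            pool.submit(new Runnable() {
                public void run() {
                    if (task % 10 == 0) {
                        demo.updatePages(300 + task); // every tenth task writes
                    } else {
                        System.out.println(demo.readBook());
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }
}
```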
We have 10 threads performing 100 tasks, only 10% of which are write tasks. When we run it, we can see that read operations are concurrent (lines from reading threads and the book info interleave from time to time) and write operations are exclusive (the lines from a writing thread and the successful update always appear in the correct order).
The example with the book is purely educational and not very realistic - in real life we would probably guard some kind of large collection. It's worth reading the ReentrantReadWriteLock and ReadWriteLock javadocs, because sometimes using these locks brings more overhead than plain mutual exclusion.