A major benefit of building on the JVM is the wide range of infrastructure built natively for it. Combined with first-class support for threading, this allows entire scaled-down environments to be run in-memory for testing. There are numerous advantages to this, including:
- Faster feedback
- Consistent environment across the team
- Easy debugging – set breakpoints anywhere in the stack
- Write and run integration tests for your adapters (as in Ports and Adapters) without needing a full environment
I’ve been regularly testing apps against HSQL and H2 for quite a while, and WireMock was built specifically to provide in-memory REST services to test against. More recently I’ve been running Cassandra, Zookeeper and Kafka (all Apache projects) in-situ with my apps. Virtually everything I’ve built over the last couple of years has been on top of Dropwizard, which fits well with this model given its embedded web server and relatively quick startup time.
JUnit rules are an extremely handy way of managing environment components. For the unfamiliar, JUnit rule classes allow you to abstract away code you might otherwise put in your @BeforeClass, @AfterClass, @Before and @After methods. Most (all?) environment components are essentially daemons that need to be started, stopped and sometimes reset at the right moments, so rules are ideal wrappers to manage this lifecycle.
WireMock and Dropwizard ship with rule implementations, so including them in your setup is as simple as:
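For WireMock, a single rule field (the no-argument constructor starts the server on its default port, 8080):

```java
@Rule
public WireMockRule wireMock = new WireMockRule(); // defaults to port 8080
```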
and
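for Dropwizard, a class rule that boots the whole application against a test config (MyApp, MyAppConfiguration and the config file name below are just placeholders):

```java
@ClassRule
public static final DropwizardAppRule<MyAppConfiguration> APP =
        new DropwizardAppRule<>(MyApp.class, "test-config.yml");
```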
For tools that don’t come with their own rules, you can create one by subclassing JUnit’s ExternalResource class.
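For example, a rule wrapping an embedded Cassandra server might look something like this (EmbeddedCassandra here is a stand-in for whatever wrapper you use around the daemon):

```java
import org.junit.rules.ExternalResource;

public class CassandraRule extends ExternalResource {

    // EmbeddedCassandra is a placeholder for however you wrap the daemon
    private EmbeddedCassandra cassandra;

    @Override
    protected void before() throws Throwable {
        cassandra = new EmbeddedCassandra();
        cassandra.start();
    }

    @Override
    protected void after() {
        cassandra.stop();
    }
}
```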
I find it useful to hang test utility methods off these rules too.
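For instance, a helper on the Cassandra rule above for resetting state between tests (the session() accessor belongs to the hypothetical wrapper, but the idea applies to any component):

```java
// Clears out the given tables so each test starts from a known state
public void truncateTables(String... tables) {
    for (String table : tables) {
        cassandra.session().execute("TRUNCATE " + table);
    }
}
```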
If you have more than a couple of components in your environment, the number of rules required can become a bit unwieldy. Also, including each rule as a class field as in the above examples gives you no control over the order in which they’re evaluated, which can be a problem if there are dependencies between the components (e.g. Kafka requires a Zookeeper server). JUnit’s RuleChain solves both of these problems. On my current project we’ve created an “environment” class, itself a test rule, which composes the individual components and imposes a specific ordering. It looks something like this:
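Here ZooKeeperRule, KafkaRule and CassandraRule are the kind of ExternalResource wrappers sketched above, so treat the details as indicative:

```java
import org.junit.rules.RuleChain;
import org.junit.rules.TestRule;
import org.junit.runner.Description;
import org.junit.runners.model.Statement;

public class TestEnvironment implements TestRule {

    public final ZooKeeperRule zooKeeper = new ZooKeeperRule();
    public final KafkaRule kafka = new KafkaRule(zooKeeper);
    public final CassandraRule cassandra = new CassandraRule();

    // ZooKeeper starts first (outermost rule), then Kafka, then Cassandra
    private final RuleChain chain = RuleChain
            .outerRule(zooKeeper)
            .around(kafka)
            .around(cassandra);

    @Override
    public Statement apply(Statement base, Description description) {
        return chain.apply(base, description);
    }
}
```

Tests then declare a single TestEnvironment field annotated with @Rule (or @ClassRule, if it’s static) and read whatever ports or connection details they need from its components.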
Gotchas
Many libraries aren’t built with this kind of usage in mind, so a little bit of extra complexity is sometimes required. Issues I’ve encountered are:
- Cassandra manages some state in static variables, meaning that it survives restarts inside a single JVM. We worked around this by only starting Cassandra once per test run (by making the Cassandra server a static member of the rule and only starting it if it’s not already running).
- You’ll be pulling in lots of extra dependencies, and there will inevitably be clashes. I’ve been making heavy use of mvn dependency:tree while working on this. I’d like to find a good way to do cross-platform JVM forking as a solution to this and the previous point.
- Where one service depends on another and startup happens on a background thread, it’s possible to start in a bad state. Polling and sleeps are often adequate, if not great, solutions to this.
- Some servers have long or volatile startup times. I’ve found that the Kafka + Zookeeper combination can take anywhere between 10 seconds and over a minute to stabilise, which makes it hard to get into a TDD groove.
- The more services you run in your environment, the greater the risk they’ll try to bind to port numbers already in use. To counter this I try to ensure that random, free ports are selected for each service, then make sure the port numbers are accessible via the rule so that they can be passed to the app under test. Some libraries let you pass 0 as the port number parameter and will then select a free port for you. Where this isn’t possible, you can do something like this:
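Binding a java.net.ServerSocket to port 0 lets the OS pick a free port (there’s a brief window in which another process could grab it before your service binds, but in practice that’s rarely a problem):

```java
// Bind to port 0 so the OS picks a free port, then release it for the service to use
private static int findFreePort() throws IOException {
    try (ServerSocket socket = new ServerSocket(0)) {
        return socket.getLocalPort();
    }
}
```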
Despite these obstacles I’ve found that the ability to run an embedded test environment is a valuable addition to a project.