Category Archives: design

Concurrency pattern: Concurrent multimaps in Java

Preamble

Maintaining state is one of the most important ways in which we can increase the performance of our code. I don’t mean using a database or some goliath system somewhere. I mean local memory caches. Often they can be more than adequate to allow us to maintain state and provide a dramatic improvement in performance.

Before I start with the content of this post, let me state the obvious, or at least what should be obvious: the whole domain of caching is an incredibly difficult and inexhaustible area of study and work. That is why dedicated distributed cache providers have sprung up all over the place, and why companies normally resort to them in favour of in-memory caches.

However, a combination of in-memory and distributed caches is often in use, and this post focuses on one aspect of in-memory caches: concurrent multimaps in Java. It is the resource I wish I had had when I tackled this problem numerous times in the past. The post focuses exclusively on copy on write multimap implementations, as that approach allows the read operation to be lock free, which can be a significant advantage depending on what you want to do on read.

Singular value caches

When an in-memory cache is desired, one always resorts to some kind of map structure. If you’re storing singular key value pairs then creating a cache can be as easy as picking a map implementation, though you still have the check-then-act operation of checking whether a value exists, returning it if so, and otherwise populating and then returning it, which can result in some blocking. Nonetheless, these problems have already been solved a thousand times over.
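
To make the check-then-act race concrete, here is the naive form, shown only for contrast; computeValue() is a hypothetical expensive factory method.

V v = cache.get(k);
if (v == null) {          // check: another thread may observe null at the same time
    v = computeValue(k);  // potentially duplicated expensive work
    cache.put(k, v);      // act: last writer wins
}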

For example, Google Guava’s MapMaker provides an excellent implementation of the memoization pattern for a cache, as follows, which is probably the most complex case of a simple singular key value pair cache.

package name.dhruba.kb.concurrency.mapmaker;

import java.util.concurrent.ConcurrentMap;

import com.google.common.base.Function;
import com.google.common.collect.MapMaker;

public class CacheWithExpensiveValues<K, V> {

    private final ConcurrentMap<K, V> cache = new MapMaker().makeComputingMap(new Function<K, V>() {
        @Override
        public V apply(K input) {
            return acquireExpensiveCacheValue();
        }
    });

    public V get(K key) { return cache.get(key); }
    private V acquireExpensiveCacheValue() { return null; }

}

This implementation, similar in concept to the memoization pattern put forward by Brian Goetz in Java Concurrency in Practice, guarantees that the value for a given key is acquired/resolved only once in total during the lifetime of the cache, in the event that it hasn’t already been computed, which can be very useful if creating/computing the value is an expensive call. Threads which request an uncomputed value while it is being computed wait until the computation already in progress has finished.

This can be said to be strongly consistent in its guarantees. If you are willing to compromise on the guarantees that a cache makes, making it a weakly consistent cache, you may be able to achieve faster performance in some cases by relying solely on atomic CAS operations.
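
As a minimal sketch of the weakly consistent alternative, the following relies solely on the atomic putIfAbsent(). The compromise is that two threads may both compute the value for the same key, with only one result being kept; the class and method names here are illustrative.

package name.dhruba.kb.concurrency.weak;

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class WeaklyConsistentCache<K, V> {

    private final ConcurrentMap<K, V> cache = new ConcurrentHashMap<K, V>();

    public V get(K key) {
        V value = cache.get(key);
        if (value == null) {
            value = acquireExpensiveCacheValue(key);
            // CAS-style publish: if another thread won the race, discard our
            // freshly computed value and return theirs instead
            V existing = cache.putIfAbsent(key, value);
            if (existing != null) {
                value = existing;
            }
        }
        return value;
    }

    private V acquireExpensiveCacheValue(K key) { return null; }

}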

Multi-value caches

A standard singular key value pair cache is fairly straightforward these days. But what happens if you suddenly realise that you actually need multiple values per key? There’s a little bit more to it than first meets the eye. If you are well informed about what’s out there, your knee jerk reaction might be to reach for Google Guava multimaps and create something similar to the example below.

package name.dhruba.kb.concurrency.guava;

import java.util.List;

import com.google.common.collect.ArrayListMultimap;
import com.google.common.collect.ListMultimap;
import com.google.common.collect.Multimaps;

public class MultiMapCacheExample<K, V> {

    private final ListMultimap<K, V> cache = Multimaps.synchronizedListMultimap(ArrayListMultimap
            .<K, V> create());

    public List<V> get(K k) {
        return cache.get(k);
    }

    public List<V> remove(K k) {
        return cache.removeAll(k);
    }

    public void put(K k, V v) {
        cache.put(k, v);
    }

    public boolean remove(K k, V v) {
        return cache.remove(k, v);
    }

}

However, the astute programmer soon realises the inadequacies of such a solution. The synchronised wrapper pattern here is very similar to that used in the JDK: it synchronises all the calls in the interface. It also synchronises the entirety of every method, meaning that all paths through any given method must contend for and acquire a lock. To put it another way, no path of execution through any method is non-blocking.

As a result, this implementation is likely to perform very poorly under heavy concurrent load. There will be a lot of contention and the cache will only be able to serve one operation at a time. So where do we go from here? Googling didn’t bring much success on the subject of concurrent multimaps in Java when I was looking for what was out there already, so I decided to explore this area from first principles. Below I present the process of iteratively developing an efficient concurrent multimap over the course of a few implementations, eventually making it as non-blocking as possible.

It’s interesting to read why Google Guava has not and will not implement a concurrent multimap, though I’m not sure I agree. I think a couple of general purpose concurrent multimaps, or at the very least a copy on write multimap, would be of value to the public, as I’ve seen this pattern quite a lot over the years. But admittedly it wouldn’t just be one implementation; it would need to support a range of backing collections.

Concurrent multimaps

In the following example implementations I present only the most important mutative calls in the map interface as they are the most challenging and the best calls for illustration. Bear in mind also when reading through the implementations the following design considerations.

  • Mutative versus an immutable copy on write approach
  • Size of critical sections and thereby the degree of blocking
  • Strongly consistent or weakly consistent in mutual exclusion guarantees
  • When removing the last value for a key should the multimap remove the key and associated empty collection?

Weakly consistent implementations are very common in the industry but are prone to interleaving. For example, if a put() is in progress and someone calls remove() on the key altogether then, after the remove has been invoked, the put() will put the key value association back in, which may not be desirable at all. Or perhaps put() adds to a value collection that is no longer referenced because the key has been removed. These methods should ideally be mutually exclusive, and the final implementation achieves this quality. Bear in mind, though, that for certain use cases weakly consistent guarantees are acceptable; it is for you to say what is and isn’t acceptable for your use case.
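
To make the interleaving concrete, here is the lost update schedule sketched as comments; the thread names and ordering are illustrative.

// Thread A: put(k, v)                 Thread B: remove(k)
//
// A: list = cache.get(k);             // A obtains the current value list
//                                     B: cache.remove(k); // key and list evicted
// A: list.add(v);                     // A adds v to the now orphaned list:
//                                     // the value is silently lost, or the
//                                     // association is resurrected if A retries
//                                     // with putIfAbsent()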

Fully blocking multimap

The fully blocking implementation is equivalent to the synchronised wrapper approach because it synchronises the entirety of all the methods. This is without doubt the poorest performing implementation though on the plus side it has minimal allocation unlike the copy on write implementations that follow.

Advantages
  • Strongly consistent.
  • Doesn’t allocate any more than it needs to (unlike the copy on write pattern).
Disadvantages
  • Very poor performance.
  • Uses a HashMap which isn’t thread safe and so offers no visibility guarantees.
  • All calls – reads/writes are blocking.
  • All paths through the blocking calls are blocking.
package name.dhruba.kb.concurrency.multimap;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class FullyBlockingMutativeArrayListMultiMap<K, V> {

    private final Map<K, List<V>> cache = new HashMap<K, List<V>>();

    public synchronized List<V> get(K k) {
        return cache.get(k);
    }

    public synchronized List<V> remove(K k) {
        return cache.remove(k);
    }

    public synchronized void put(K k, V v) {
        List<V> list = cache.get(k);
        if (list == null) {
            list = new ArrayList<V>();
            cache.put(k, list);
        }
        list.add(v);
    }

    public synchronized boolean remove(K k, V v) {
        List<V> list = cache.get(k);
        if (list == null) {
            return false;
        }
        if (list.isEmpty()) {
            cache.remove(k);
            return false;
        }
        boolean removed = list.remove(v);
        if (removed && list.isEmpty()) {
            cache.remove(k);
        }
        return removed;
    }

}

Copy on write multimap using synchronisation

This is an initial implementation of a copy on write approach, but without using the JDK copy on write collections. It is strongly consistent but it still synchronises too much on writes.

Advantages
  • Strongly consistent.
  • Uses concurrent hash map so we can have non-blocking read.
Disadvantages
  • The synchronisation lock blocks on the entire cache.
  • The blocking calls are entirely blocking so all paths through them will block.
  • Concurrent hash map is blocking itself although at a fine grained level using stripes.
package name.dhruba.kb.concurrency.multimap;

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class CopyOnWriteArrayListMultiMap<K, V> {

    private final ConcurrentMap<K, List<V>> cache = new ConcurrentHashMap<K, List<V>>();

    public List<V> get(K k) {
        return cache.get(k);
    }

    public synchronized List<V> remove(K k) {
        return cache.remove(k);
    }

    public synchronized void put(K k, V v) {
        List<V> list = cache.get(k);
        if (list == null || list.isEmpty()) {
            list = new ArrayList<V>();
        } else {
            list = new ArrayList<V>(list);
        }
        list.add(v);
        cache.put(k, list);
    }

    public synchronized boolean remove(K k, V v) {
        List<V> list = cache.get(k);
        if (list == null) {
            return false;
        }
        if (list.isEmpty()) {
            cache.remove(k);
            return false;
        }
        // copy first so that concurrent readers never observe a partially
        // mutated list; mutating the published list in place would defeat
        // the copy on write approach
        List<V> copy = new ArrayList<V>(list);
        boolean removed = copy.remove(v);
        if (removed) {
            if (copy.isEmpty()) {
                cache.remove(k);
            } else {
                cache.put(k, copy);
            }
        }
        return removed;
    }

}

Copy on write multimap but using the JDK CopyOnWriteArrayList

Here we opt to use the copy on write array list from the JDK. There is no synchronisation in the class itself (only within the backing structures) but it is dangerously prone to interleaving and therefore weakly consistent. Personally I wouldn’t be happy about put() and remove() not being mutually exclusive and interleaving through each other. That, to me, would be unacceptable. Amazingly, I’ve seen this implementation all too often at work.

Advantages
  • Uses ConcurrentHashMap for thread safety and visibility.
  • Uses CopyOnWriteArrayList for list thread safety and visibility.
  • No blocking in the class itself. Instead the backing JDK classes handle blocking for us.
  • Blocking has been reduced to key level granularity instead of being at the cache level.
Disadvantages
  • Prone to interleaving. It is weakly consistent and does not guarantee mutually exclusive and atomic calls. The remove(K) call can interleave through the lines of the put(K, V) method, and key value pairs can potentially be added back in if a remove(K) is called part way through a put(K, V) call. To be strongly consistent, remove(K) and put(K, V) need to be mutually exclusive.
package name.dhruba.kb.concurrency.multimap;

import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.CopyOnWriteArrayList;

public class JdkCopyOnWriteArrayListMultiMap<K, V> {

    private final ConcurrentMap<K, List<V>> cache = new ConcurrentHashMap<K, List<V>>();

    public List<V> get(K k) {
        return cache.get(k);
    }

    public List<V> remove(K k) {
        return cache.remove(k);
    }

    public void put(K k, V v) {
        List<V> list = cache.get(k);
        if (list == null) {
            list = new CopyOnWriteArrayList<V>();
            List<V> oldList = cache.putIfAbsent(k, list);
            if (oldList != null) {
                list = oldList;
            }
        }
        list.add(v);
    }

    public boolean remove(K k, V v) {
        List<V> list = cache.get(k);
        if (list == null) {
            return false;
        }
        if (list.isEmpty()) {
            cache.remove(k);
            return false;
        }
        boolean removed = list.remove(v);
        if (removed && list.isEmpty()) {
            cache.remove(k);
        }
        return removed;
    }

}

Partially blocking copy on write multimap

From the previous implementation we now return to a strongly consistent one, but this time we block only on certain paths through the put() and remove() methods, at the cost of a little additional allocation. However, the lock in use is still a global one, which means that operations on different keys become sequential, which is obviously not desirable.

Advantages
  • Strongly consistent.
  • Use of ConcurrentHashMap for thread safety and visibility guarantees.
  • The get(K) and remove(K) calls don’t block at all in this class.
  • The put(K, V) and remove(K, V) methods do block, but only on certain paths. There are paths through these methods which won’t block at all. The put(K, V) method only blocks if putIfAbsent() fails, and remove(K, V) only blocks if there is something there to remove.
Disadvantages
  • We allocate a list initially in put(K, V) which may not be needed.
  • ConcurrentHashMap still blocks, although at a finer level using stripes.
  • The blocking synchronisation we are using still blocks the entire cache. What we really want is to block only the keys that hash to the value bucket we are currently working with. A more fine grained blocking strategy is called for, which we’ll see in the next implementation.
package name.dhruba.kb.concurrency.multimap;

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class PartiallyBlockingCopyOnWriteArrayListMultiMap<K, V> {

    private final ConcurrentMap<K, List<V>> cache = new ConcurrentHashMap<K, List<V>>();

    public List<V> get(K k) {
        return cache.get(k);
    }

    public List<V> remove(K k) {
        synchronized (cache) {
            return cache.remove(k);
        }
    }

    public void put(K k, V v) {
        List<V> list = Collections.singletonList(v);
        List<V> oldList = cache.putIfAbsent(k, list);
        if (oldList != null) {
            synchronized (cache) {
                list = cache.get(k);
                if (list == null || list.isEmpty()) {
                    list = new ArrayList<V>();
                } else {
                    list = new ArrayList<V>(list);
                }
                list.add(v);
                cache.put(k, list);
            }
        }
    }

    public boolean remove(K k, V v) {
        List<V> list = cache.get(k);
        if (list == null) {
            return false;
        }
        synchronized (cache) {
            list = cache.get(k);
            if (list == null) {
                return false;
            }
            if (list.isEmpty()) {
                cache.remove(k);
                return false;
            }
            // copy first: the list may be the immutable singleton list put in
            // place by put(), and mutating the published list in place would
            // break lock free readers
            List<V> copy = new ArrayList<V>(list);
            boolean removed = copy.remove(v);
            if (removed) {
                if (copy.isEmpty()) {
                    cache.remove(k);
                } else {
                    cache.put(k, copy);
                }
            }
            return removed;
        }
    }

}

Striped lock copy on write multimap

The final implementation: strongly consistent, non-blocking backing structure, fine grained locking at key level, and locking only on the necessary paths. This example uses a striped lock provider. Its purpose is to take a key as input and provide a lock as output to lock on. It is consistent in that it always provides the same lock for the same key, guaranteeing the mutual exclusion that is necessary.

It takes the number of locks desired as a constructor input (by default 2048), which means we can decide how many locks we want to make available in the distribution. It then hashes keys consistently across that distribution of locks, and also provides a better key distribution than non-concurrent hash maps. The concept behind the striped lock provider and its implementation is a very interesting topic and will form a post of its own in the future. Stay tuned!
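
Pending that post, here is a minimal sketch of what IntrinsicStripedLockProvider might look like; the power of two lock count and the rehashing function are my own assumptions, not necessarily what the real implementation does.

package name.dhruba.kb.concurrency.striping;

public class IntrinsicStripedLockProvider {

    private final Object[] locks;

    public IntrinsicStripedLockProvider() {
        this(2048);
    }

    public IntrinsicStripedLockProvider(int numLocks) {
        // a power of two count allows cheap bit masking instead of modulo
        if (Integer.bitCount(numLocks) != 1) {
            throw new IllegalArgumentException("numLocks must be a power of two");
        }
        locks = new Object[numLocks];
        for (int i = 0; i < numLocks; i++) {
            locks[i] = new Object();
        }
    }

    public Object getLockForKey(Object key) {
        // spread the hash so that poor hashCode() implementations still
        // distribute reasonably; the same key always yields the same lock
        int h = key.hashCode();
        h ^= (h >>> 20) ^ (h >>> 12);
        h ^= (h >>> 7) ^ (h >>> 4);
        return locks[h & (locks.length - 1)];
    }

}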

Advantages
  • Strongly consistent. Implements correct mutual exclusion of calls.
  • Uses NonBlockingHashMap instead of ConcurrentHashMap so the backing cache member does not block at all. Far more efficient and scalable than ConcurrentHashMap.
  • The read calls are completely non-blocking even at the cache structure level.
  • The put(K, V) and remove(K, V) methods do block, but only on certain paths. There are paths through these methods which won’t block at all. The put(K, V) method only blocks if putIfAbsent() fails, and remove(K, V) only blocks if there is something there to remove.
  • And to save the best for last – there is no longer any blocking at the cache level. We now apply mutual exclusion only at the key level.

This implementation has the best of all worlds really as long as the copy on write approach is acceptable to you.

Disadvantages
  • Fundamentally being a copy on write approach it does more allocation than a mutative approach.
package name.dhruba.kb.concurrency.multimap;

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ConcurrentMap;

import name.dhruba.kb.concurrency.striping.IntrinsicStripedLockProvider;

import org.cliffc.high_scale_lib.NonBlockingHashMap;

public class StripedLockArrayListMultiMap<K, V> {

    private final IntrinsicStripedLockProvider stripedLockProvider = new IntrinsicStripedLockProvider();
    private final ConcurrentMap<K, List<V>> cache = new NonBlockingHashMap<K, List<V>>();

    public List<V> get(K k) {
        return cache.get(k);
    }

    public List<V> remove(K k) {
        Object lock = stripedLockProvider.getLockForKey(k);
        synchronized (lock) {
            return cache.remove(k);
        }
    }

    public void put(K k, V v) {
        List<V> list = Collections.singletonList(v);
        List<V> oldList = cache.putIfAbsent(k, list);
        if (oldList != null) {
            Object lock = stripedLockProvider.getLockForKey(k);
            synchronized (lock) {
                list = cache.get(k);
                if (list == null || list.isEmpty()) {
                    list = new ArrayList<V>();
                } else {
                    list = new ArrayList<V>(list);
                }
                list.add(v);
                cache.put(k, list);
            }
        }
    }

    public boolean remove(K k, V v) {
        List<V> list = cache.get(k);
        if (list == null) {
            return false;
        }
        Object lock = stripedLockProvider.getLockForKey(k);
        synchronized (lock) {
            list = cache.get(k);
            if (list == null) {
                return false;
            }
            if (list.isEmpty()) {
                cache.remove(k);
                return false;
            }
            // copy first: the list may be the immutable singleton list put in
            // place by put(), and mutating the published list in place would
            // break lock free readers
            List<V> copy = new ArrayList<V>(list);
            boolean removed = copy.remove(v);
            if (removed) {
                if (copy.isEmpty()) {
                    cache.remove(k);
                } else {
                    cache.put(k, copy);
                }
            }
            return removed;
        }
    }

}

Conclusion

When designing concurrent structures it is important not to resort blindly to what’s out there; for custom concurrent data structures there will be no ready made solution. In those cases, concurrent patterns such as this one are invaluable, and best practices such as reducing critical sections, restricting blocking to the paths that need it, reducing the granularity of the locks being used, and selecting the right backing structures are absolutely key to an efficient concurrent data structure. If you have any feedback on how to do better, or if I’ve made any mistakes, please do let me know. Enjoy and thanks for reading.

Links

FYI – it seems Joe Kearney has done an alternative implementation that does not rely on copy on write.

Update [05/07/2011]: Code updated with bug fixes for edge cases.
Update [17/04/2013]: After a long wait I finally fixed the bug that Charlie reported below. Thanks Charlie!

API Pattern: Exception transformer

Exception transformation is an API pattern that I came across some time back, originally from an able colleague of mine, which, although very simple, has proved immensely useful to me on a couple of occasions. Checked exceptions are widespread in Java. Imagine the situation where you are calling numerous methods on a class (say, for the sake of example, a service class), each of which throws a checked exception (say ServiceException).

Service class that throws checked exceptions

package name.dhruba.kb.patterns.exceptions;

class Service {

    static class ServiceException extends Exception {}

    void serviceMethod1() throws ServiceException {}
    void serviceMethod2() throws ServiceException {}

}

Client with repetitive error handling logic

Your class as a client now has to deal with this exception every time you call a service method and also for every different method you call.

package name.dhruba.kb.patterns.exceptions;

import name.dhruba.kb.patterns.exceptions.Service.ServiceException;

class Client {

    private Service service;

    void callServiceMethod1Normally() {
        try {
            service.serviceMethod1();
        } catch (ServiceException e) {
            throw new RuntimeException("calling service method 1 failed", e);
        }
    }

    void callServiceMethod2Normally() {
        try {
            service.serviceMethod2();
        } catch (ServiceException e) {
            throw new RuntimeException("calling service method 2 failed", e);
        }
    }

}

Exception transformer abstraction

However, your exception handling strategy may be the same across your use of the service class, with only a different message each time. Instead of repetitively duplicating that handling logic (try/catch) around every service call, you can abstract it out as the following class does. Note that it is only shown as a separate class for the purposes of incrementally describing the pattern. For best effect it should ideally be contained within your client class as a static inner class.

package name.dhruba.kb.patterns.exceptions;

import name.dhruba.kb.patterns.exceptions.Service.ServiceException;

abstract class ExceptionTransformer {

    abstract void call() throws ServiceException;

    void transform(String message) {
        try {
            call();
        } catch (ServiceException e) {
            throw new RuntimeException(message, e);
        }
    }

}

New client using exception transformer

Now using the new exception transformer our exception handling logic is simplified to only the logic that differs between the client methods.

package name.dhruba.kb.patterns.exceptions;

import name.dhruba.kb.patterns.exceptions.Service.ServiceException;

class ClientUsingExceptionTransformer {

    private Service service;

    void callServiceMethod1UsingTransformer() {
        new ExceptionTransformer() {
            @Override
            void call() throws ServiceException {
                service.serviceMethod1();
            }
        }.transform("calling service method 1 failed");
    }

    void callServiceMethod2UsingTransformer() {
        new ExceptionTransformer() {
            @Override
            void call() throws ServiceException {
                service.serviceMethod2();
            }
        }.transform("calling service method 2 failed");
    }

}

Variations

This pattern can be easily varied to suit your personal exception handling styles. Here’s another variation where different checked exceptions are thrown by different service methods and handled by only logging them this time.

package name.dhruba.kb.patterns.exceptions;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class ClientWithVariations {

    static final Logger logger = LoggerFactory.getLogger(ClientWithVariations.class);

    static class Service {

        static class ServiceHungry extends Exception {}
        static class ServiceSleepy extends Exception {}

        void hungryMethod() throws ServiceHungry {}
        void sleepyMethod() throws ServiceSleepy {}

    }

    private Service service;

    void callHungryMethod() {
        new ExceptionTransformer() {
            @Override
            void call() throws Exception {
                service.hungryMethod();
            }
        }.transform("method was too hungry to respond :(");
    }

    void callSleepyMethod() {
        new ExceptionTransformer() {
            @Override
            void call() throws Exception {
                service.sleepyMethod();
            }
        }.transform("method was too sleepy to respond :(");
    }

    static abstract class ExceptionTransformer {

        abstract void call() throws Exception;

        void transform(String message) {
            try {
                call();
            } catch (Exception e) {
                logger.error(message, e);
            }
        }

    }

}

This pattern really shows its value most effectively when you have numerous methods using it and also when there are multiple checked exceptions to handle resulting in multiple catch blocks all over the place.

Do you have any exception handling API patterns of your own? I know Joshua Bloch has suggested a few in Effective Java, of which one comes to mind: an exception throwing method can be wrapped by another method which returns false in the catch block and true otherwise, which can be quite useful if you don’t want to pollute the rest of the code with knowledge of this handling logic. By the way, before anyone mentions it, I’m not suggesting that converting checked exceptions to runtime exceptions, or suppressing them by logging, is always the right thing to do. Thanks for reading.
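
As a quick sketch of that boolean returning variant, using the Service class from earlier (the method name is illustrative):

    boolean tryServiceMethod1() {
        try {
            service.serviceMethod1();
            return true;
        } catch (ServiceException e) {
            // the outcome is reported as a boolean; callers stay free of try/catch
            return false;
        }
    }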

P.S. This pattern will be particularly nice with closures in Java 8. And the multicatch in Java 7 will certainly also help make code more concise.
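
For illustration, here is a fragment sketching how Java 7 multicatch would collapse the repeated catch blocks in the variation above into a single handler:

    try {
        service.hungryMethod();
        service.sleepyMethod();
    } catch (ServiceHungry | ServiceSleepy e) {
        logger.error("service call failed :(", e);
    }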

JMockit: No holds barred testing with instrumentation over mocking

Introduction

The transition from JUnit to stubs to mocking occurred long ago in the world of testing. Traditionally, mocking has used JDK proxies for interface based classes and cglib for non-final concrete classes. This article draws attention to the still ongoing transition from mocking to instrumentation for testing. In particular, it highlights how to achieve complete access to your API by being able to instrument, and thereby mock, areas of your code that were previously inaccessible.

In exploring this it becomes evident that testing, as it stood prior to instrumentation, required one to alter the design of production code fundamentally to accommodate testing. Prime examples of such practices have been the non-production related use of interfaces, the widening of method visibility, and avoidance of the final and static keywords. Consequently, the practices of overusing dependency injection, interfaces and instance based state, all for testing, have become all too common and established under the purported benefit that they make code more testable.

Examples

A clear delineation between production code and testing code finally becomes possible with JMockit which uses the java.lang.instrument package of JDK5 and ASM to modify bytecode at runtime and thereby has access to all regions that are accessible in the bytecode itself.

The following sections illustrate the most powerful features of JMockit that distinguish it from proxy based mocking tools with each code example being self contained and runnable – complete with imports. In each case the test subject API is contained within the test case itself for ease of reference and not having to switch back and forth. All source code is also available as a self contained eclipse maven project. Update [11/11/2009]: This download now has all fixes and enhancements to code and maven pom file suggested by, Rogério Liesenfeld, the author of JMockit.

The term ‘mocking’ is used here loosely to refer to mocking through instrumentation, as it is common vocabulary. Strictly speaking, however, there is a distinction between mocking, which creates a second instance of the test subject, and instrumentation, which mostly modifies or redefines the original test subject. Here, these terms will be used interchangeably. Also, please excuse the slightly contrived and convoluted nature of the test subject API.

Mock classes that don’t implement interfaces

Classes that don’t implement interfaces cannot be mocked using JDK proxies and generally have to be deferred to cglib for subclassing. In this case neither is necessary. Here the @RunWith annotation loads the javaagent contained in jmockit.jar, through the entry point of JMockit.class, into the JVM prior to the test case running. The @Mocked annotation designates which classes you’d like instrumented. The Expectations block encloses the specifications of how you’d like your classes instrumented. The end result is that the user name of ‘joe’ is replaced with ‘fred’.

package jmockit.examples;

import static org.junit.Assert.assertEquals;
import mockit.Expectations;
import mockit.Mocked;
import mockit.integration.junit4.JMockit;

import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(JMockit.class)
public class MockNoInterfaceClassTest {

    static class User {
        String name() {
            return "joe";
        }
    }

    @Mocked
    User user;

    @Test
    public void mockNoInterfaceFinalClass() {

        new Expectations() {
            {
                user.name();
                returns("fred");
            }
        };

        assertEquals("fred", user.name());

    }
    
}

Mock final classes and methods

As no proxying is done JMockit is able to modify the bytecode of the test subject without needing to subclass the subject.

package jmockit.examples;

import static org.junit.Assert.assertEquals;
import mockit.Expectations;
import mockit.Mocked;
import mockit.integration.junit4.JMockit;

import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(JMockit.class)
public class MockFinalClassTest {

    static final class User {
        final String name() {
            return "joe";
        }
    }

    @Mocked
    User user;

    @Test
    public void mockFinalClass() {

        new Expectations() {
            {
                user.name();
                returns("fred");
            }
        };

        assertEquals("fred", user.name());

    }
    
}

Mock internal (hidden) instantiations

The User might have a dependency on Address and, as illustrated below, neither the User nor the test class may have access to what the Address is doing. This is referred to here as an internal or hidden invocation or instantiation. Since only the behaviour of Address is of any interest, only that class is marked for instrumentation. Note particularly that no dependency injection of any kind is necessary to make this code testable. JMockit can instrument internally managed dependencies.


package jmockit.examples;

import static org.junit.Assert.assertEquals;
import mockit.Expectations;
import mockit.Mocked;
import mockit.integration.junit4.JMockit;

import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(JMockit.class)
public class MockInternalInstantiationTest {

    static class User {
        Address address() {
            return new Address();
        }
    }

    static class Address {
        String postCode() {
            return "sw1";
        }
    }

    @Mocked
    Address address;

    @Test
    public void mockInternalInstantiation() {

        new Expectations() {
            {
                new Address().postCode();
                returns("w14");
            }
        };

        String postCode = new User().address().postCode();
        assertEquals("w14", postCode);

    }
    
}

Mock static methods

Static methods can be instrumented in JMockit in exactly the same way as instance methods.

package jmockit.examples;

import static org.junit.Assert.assertEquals;
import mockit.Expectations;
import mockit.Mocked;
import mockit.integration.junit4.JMockit;

import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(JMockit.class)
public class MockStaticMethodTest {

    static class User {
        static String name() {
            return "joe";
        }
    }

    @Mocked
    @SuppressWarnings("unused")
    private final User user = null;

    @Test
    public void mockStaticMethod() {

        new Expectations() {
            {
                User.name();
                returns("fred");
            }
        };

        assertEquals("fred", User.name());

    }

}

Mock native methods

Being able to alter the behaviour of native methods is one of the most powerful features of JMockit. Frequently native code isn’t mockable in the traditional way due to being final or static. However in the example below the JMockit API provides a means of substituting the original native method with one that you define. The end result here is that Runtime.getRuntime().availableProcessors() returns 999. That’s a far cheaper way to approximate Azul.

package jmockit.examples;

import static org.junit.Assert.assertEquals;
import mockit.Mock;
import mockit.MockUp;
import mockit.integration.junit4.JMockit;

import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(JMockit.class)
public class MockNativeMethodTest {

    @Test
    public void mockNativeMethod() {
        new MockUp<Runtime>() {
            @Mock
            @SuppressWarnings("unused")
            int availableProcessors() {
                return 999;
            }
        };
        assertEquals(999, Runtime.getRuntime().availableProcessors());
    }

}

Mock private methods

Private methods are instrumentable through reflection based utilities provided by JMockit. Here the user name is a private field with no mutator methods to alter it. It is also a final field. Note that, despite all that, the Expectations block is able to modify the field, record an expected invocation of the accessor method and specify a return value for it. Although the API is slightly different from that for visible methods, it is nevertheless concise and intuitive.

package jmockit.examples;

import static org.junit.Assert.assertEquals;
import mockit.Deencapsulation;
import mockit.Expectations;
import mockit.Mocked;
import mockit.NonStrict;
import mockit.integration.junit4.JMockit;

import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(JMockit.class)
public class MockPrivateMethodTest {

    static class User {

        private final String name = "joe";

        @SuppressWarnings("unused")
        private final String name() {
            return name;
        }

    }

    @Mocked
    @NonStrict
    User user;

    @Test
    public void mockPrivateMethod() {
        new Expectations() {
            {
                invoke(user, "name");
                returns("fred");
            }
        };
        assertEquals("fred", Deencapsulation.invoke(user, "name"));
    }

}

Mock default constructors

Here the default constructor of an object is substituted by a constructor specified in the test case itself. The only additional API necessary here is the registration of the new constructor body with JMockit prior to invoking the constructor. The new constructor body is denoted by a special method named $init, which is named after its form in the class file.

package jmockit.examples;

import static org.junit.Assert.assertEquals;
import mockit.Mock;
import mockit.MockUp;
import mockit.integration.junit4.JMockit;

import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(JMockit.class)
public class MockDefaultConstructorTest {

    static class User {

        String name;

        public User() {
            this.name = "joe";
        }

        String name() {
            return name;
        }

    }

    private static final User user = new User();

    @Test
    public void mockDefaultConstructor() {
        new MockUp<User>() {
            @Mock
            @SuppressWarnings("unused")
            public void $init() {
                user.name = "fred";
            }
        };
        new User();
        assertEquals("fred", user.name());
    }

}

Mock static initialiser blocks

In the same way that constructors can be redefined, so can static initialiser blocks. The only difference is that the new static initialiser block must be represented by a method called $clinit which, again, is named after its form in the class file, where the compiler has compacted all static initialiser blocks into one.

package jmockit.examples;

import static org.junit.Assert.assertEquals;
import mockit.Mock;
import mockit.MockUp;
import mockit.integration.junit4.JMockit;

import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(JMockit.class)
public class MockStaticInitialiserBlockTest {

    static class UserInitialiser {
        static {
            User.name("joe");
        }
    }

    static class User {

        private static String name;

        static void name(String name) {
            User.name = name;
        }

    }

    @Test
    public void mockStaticInitialiserBlock() throws ClassNotFoundException {
        new MockUp<UserInitialiser>() {
            @Mock
            @SuppressWarnings("unused")
            public void $clinit() {
                User.name("fred");
            }
        };
        Class.forName(UserInitialiser.class.getName());
        assertEquals("fred", User.name);
    }

}

Verify internal (hidden) method invocations

Verifications can be an unintuitive and confusing concept to grasp, so here is a brief account of the JMockit test run lifecycle. The Expectations block puts in place the record phase. Once that block has ended, it automatically switches to the replay phase. The execution phase is the test subject API invocations themselves. So where do the verifications fit in? Well, if Expectations can be thought of as pre-expectations then verifications are post-expectations. Note, however, that verifications can only complete those expectations that haven’t already been recorded in the Expectations block.

In the example below the creation and population of the User pojo is entirely hidden. So the test case has no pre-expectations of return values or exceptions being thrown. The only thing the test case would like to verify is that the User was populated through specific method invocations, in a given order, with a specified set of arguments. The Verifications block verifies exactly that, although one can also have optional verifications that are in any order with arguments matching supplied patterns. Note that in the verification of the setName() method the test case specifies that the invocation must have taken place with an argument of exactly ‘fred’. With setAge(), however, it specifies only that the method should have been invoked with any integer. These are hamcrest matchers and they add incredible leeway in specifying invocation criteria.

package jmockit.examples;

import mockit.FullVerificationsInOrder;
import mockit.Mocked;
import mockit.integration.junit4.JMockit;

import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(JMockit.class)
public class VerifyInternalMethodsTest {

    static class User {

        String name;
        int age;

        public void setName(String name) {
            this.name = name;
        }

        public void setAge(int age) {
            this.age = age;
        }

    }

    static class UserService {

        void populateUser() {
            User user = new User();
            user.setName("fred");
            user.setAge(31);
        }
    }

    @Mocked
    User user;

    @Test
    public void verifyInternalMethods() {
        new UserService().populateUser();
        new FullVerificationsInOrder() {
            {
                User user = new User();
                user.setName("fred");
                user.setAge(withAny(1));
            }
        };

    }

}

Mock method invokes real method

Here the mock method is invoking the real method in the test subject. JMockit allows a special ‘it’ instance variable in a mock class that is set to the real class instance for each indirect call to a mock method. Unusual and interesting, although it’s not immediately obvious why one might use this feature. It is also disabled by default as it can potentially result in stack overflow, with each method calling the other in an infinite recursive loop.

package jmockit.examples;

import static org.junit.Assert.assertEquals;
import mockit.Expectations;
import mockit.Mock;
import mockit.MockClass;
import mockit.Mockit;
import mockit.integration.junit4.JMockit;

import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(JMockit.class)
public class MockInvokesRealMethodTest {

    static class User {
        String name() {
            return "joe";
        }
    }

    @MockClass(realClass = User.class)
    static class MockUser {
        User it;

        @Mock(reentrant = true)
        String name() {
            return it.name();
        }
    }

    @BeforeClass
    public static void beforeClass() {
        Mockit.setUpMocks(MockUser.class);
    }

    @AfterClass
    public static void afterClass() {
        Mockit.tearDownMocks();
    }

    @Test
    public void mockInvokesRealMethod() {

        new Expectations() {
            {
                new User().name();
            }
        };

        assertEquals("joe", new User().name());
    }

}

Mock JDK randomness

Finally, the power of JMockit is put to use mutating certain JDK methods that are otherwise inaccessible and whose behaviours differ widely. This is done to show that the reach of JMockit extends even to the most diverse behaviours in the JDK, and also, of course, for amusement and intrigue.

package jmockit.examples;

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertSame;

import java.security.SecureRandom;

import mockit.Expectations;
import mockit.Mock;
import mockit.MockUp;
import mockit.Mocked;
import mockit.integration.junit4.JMockit;

import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(JMockit.class)
public class MockJdkRandomnessTest {

    @Test
    public void mockSystemNanoTime() {
        new MockUp<System>() {
            @Mock
            @SuppressWarnings("unused")
            long nanoTime() {
                return 0L;
            }
        };
        assertSame(0L, System.nanoTime());
    }

    @Test
    public void mockMathRandom() {
        new MockUp<Math>() {
            @Mock
            @SuppressWarnings("unused")
            double random() {
                return 0D;
            }
        };
        assertEquals(0D, Math.random(), 0D);
    }

    @Mocked
    SecureRandom secureRandom;

    @Test
    public void mockSecureRandom() {

        new Expectations() {
            {
                new SecureRandom().nextInt();
                returns(0);
            }
        };
        assertSame(0, new SecureRandom().nextInt());
    }

}

Conclusion

JMockit, by way of instrumentation, creates a new genre of testing toolkit that exposes the fundamental limitations and weaknesses of the preceding genre of traditional mocking tools. Critically, the design of production code should remain unaffected by the testing of that code. Interfaces should only be used when necessary for production code, and there should be no penalty for using the final or static keywords or other language features. By operating at a lower level of abstraction, and at a different stage of the VM lifecycle, JMockit is able to reinstate the use of the language in its entirety*.

*There’s one language feature which JMockit does not support due to the fact that it is not distinctly supported in the bytecode itself. This is left as a quiz for the reader. 10 points to the first winner. Sorry – points are all I have to give.

Download

All source code above is available as a self contained eclipse maven project. Update [11/11/2009]: This download now has all fixes and enhancements to code and maven pom file suggested by, Rogério Liesenfeld, the author of JMockit.

Implementing timed concurrent execution of tasks with contextual failure reporting

Overview

The natural progression in implementing task execution is normally as follows although one may skip steps if well-informed.

  • Implement sequential execution of tasks.
  • Realise that sequential execution is too slow so implement basic concurrent execution of tasks using manual creation and management of background threads with manual completion checking.
  • Realise that java.util.concurrent.ExecutorService exists and migrate above to it and use manual completion checking using futures.
  • Realise that java.util.concurrent.CompletionService exists so migrate above to it and start using automatic completion checking.
  • Realise that two problems still exist.
    • Certain tasks fail or timeout holding up the invoking method indefinitely.
    • When tasks fail or timeout there is no contextual information i.e. a way to tell which tasks specifically failed or timed out.

What now?

This article explores the end stage of the evolution of this thought process which has the following requirements.

  • Tasks must be executed concurrently.
  • A timeout must be imposed on task execution.
  • In the event of task timeout or failure contextual information must be accessible for informative error reporting i.e. to tell which tasks failed or timed out.

The design of the domain is what enables contextual information to be made available in the event of a timeout or failure giving precise information to the calling client.
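
Before the full implementation (behind the link below), here is a minimal sketch of one way to satisfy these requirements; it is not necessarily the approach the full article takes, and the NamedTask type is illustrative. ExecutorService.invokeAll() with a timeout returns futures in the same order as the submitted tasks, which is what makes contextual reporting possible.

package name.dhruba.kb.concurrency.tasks;

import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class TimedTaskRunner {

    // illustrative task type carrying its own name for error reporting
    static class NamedTask implements Callable<String> {
        final String name;
        NamedTask(String name) { this.name = name; }
        @Override
        public String call() throws Exception { return name; }
    }

    void runAll(List<NamedTask> tasks) throws InterruptedException {
        ExecutorService executor = Executors.newFixedThreadPool(4);
        try {
            // tasks still running when the timeout elapses are cancelled
            List<Future<String>> futures = executor.invokeAll(tasks, 5, TimeUnit.SECONDS);
            for (int i = 0; i < futures.size(); i++) {
                NamedTask task = tasks.get(i);
                Future<String> future = futures.get(i);
                if (future.isCancelled()) {
                    System.err.println("task timed out: " + task.name);
                    continue;
                }
                try {
                    future.get(); // non-blocking here: the future is complete
                } catch (ExecutionException e) {
                    System.err.println("task failed: " + task.name + ": " + e.getCause());
                }
            }
        } finally {
            executor.shutdown();
        }
    }

}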

Continue reading

Using MockServletContext and ContextLoader with Spring 2.5 TestContext framework

Other than carrying quite possibly the longest blog post title here so far, this topic has recently been a thorn in my side at work. Here I present the problem and a few different solutions to it. I’ve noticed that a few others on the Spring forum have been asking the same question, so I hope this helps them too. Please note that the code examples are deliberately kept as short as is necessary to illustrate the solutions. If adopting these solutions you may want to enhance the API yourself. Prior to stating the problem, imagine the following ant based web application directory structure.

Project directory structure

project/
|-- java
|-- tests
`-- webapp
    `-- WEB-INF
        |-- bar-context.xml
        |-- foo-context.xml
        |-- classes
        `-- lib

The Problem

The problem was to create a base class for Spring integration tests that recognised the mock servlet context when loading Spring context files and resources. Being a webapp, the problem specification required that all paths to Spring context files and resources be relative to the web root (in this case ‘webapp/’) and therefore start with ‘/WEB-INF/’. I also wanted to do this using the new Spring 2.5 TestContext framework and JUnit 4, both to use their improved APIs to make life easier and because I didn’t want to base all tests written in the future on outdated APIs and versions.

Spring 2.0 and JUnit 3 Solution

import junit.framework.TestCase;

import org.springframework.core.io.FileSystemResourceLoader;
import org.springframework.mock.web.MockServletContext;
import org.springframework.web.context.ConfigurableWebApplicationContext;
import org.springframework.web.context.support.XmlWebApplicationContext;

public abstract class AbstractJUnit3SpringIntegrationTest extends TestCase {

    protected static ConfigurableWebApplicationContext context;

    @Override
    protected void setUp() throws Exception {
        context = new XmlWebApplicationContext();
        context.setConfigLocations(new String[] { "/WEB-INF/foo-context.xml", "/WEB-INF/bar-context.xml" });
        context.setServletContext(new MockServletContext("/webapp", new FileSystemResourceLoader()));
        context.refresh();
        context.registerShutdownHook();
    }

    @Override
    protected void tearDown() throws Exception {
        context.close();
    }

    protected Object getBean(String beanName) {
        return context.getBean(beanName);
    }

}

Note that the above could be enhanced, if necessary, by allowing the child classes to specify the context file locations they require, and also by providing a strongly typed, generified getBean() method which would remove the need for casting from Object on every invocation. It is also possible to make the setup and teardown happen only once per class if desired. See elsewhere on this site for sample code.

Spring 2.0 and JUnit 4 Solution

import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.springframework.core.io.FileSystemResourceLoader;
import org.springframework.mock.web.MockServletContext;
import org.springframework.web.context.ConfigurableWebApplicationContext;
import org.springframework.web.context.support.XmlWebApplicationContext;

public abstract class AbstractJunit4SpringIntegrationTest {

    protected static ConfigurableWebApplicationContext context;

    @BeforeClass
    public static void setUpBeforeClass() throws Exception {

        String[] contexts = new String[] { "/WEB-INF/foo-context.xml" };
        context = new XmlWebApplicationContext();
        context.setConfigLocations(contexts);
        context.setServletContext(new MockServletContext("/webapp", new FileSystemResourceLoader()));
        context.refresh();

    }

    @AfterClass
    public static void tearDownAfterClass() {
        context.close();
    }

    protected Object getBean(String beanName) {
        return context.getBean(beanName);
    }

}

The difference with JUnit 4 is that no inheritance from TestCase is required and the context setup and teardown are done only once, at the beginning and the end respectively. The same suggestions for enhancement apply here as before.

Spring 2.5 and JUnit 4 Solution

There were a number of choices to make when implementing this using the Spring 2.5 TestContext framework and JUnit 4. I will refer to the Spring 2.5 TestContext framework as TCF to relieve RSI somewhat. The TCF API and documentation do not really offer a way of solving this problem out of the box. However, by using lower level building blocks, it is possible to achieve the desired effect. Let us look at how TCF works normally in the vanilla fashion.

import org.junit.runner.RunWith;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = { "/WEB-INF/foo-context.xml", "/WEB-INF/bar-context.xml" })
public abstract class AbstractSpringTCFTest {
}

Of course that won’t work because /WEB-INF is not on the classpath and a mock servlet context has not been registered. The following example shows how this can be done. The abstract spring base class needs a custom context loader applied to it.

import org.junit.runner.RunWith;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(loader = MockServletContextWebContextLoader.class, locations = {
        "/WEB-INF/foo-context.xml", "/WEB-INF/bar-context.xml" })
public abstract class AbstractSpringTCFTest {
}

The custom context loader is what sets the mock servlet context and also resolves certain API type compatibility issues. The key lines are those that set the servlet context and register the annotation config processors.

import org.springframework.beans.factory.support.BeanDefinitionRegistry;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.AnnotationConfigUtils;
import org.springframework.core.io.FileSystemResourceLoader;
import org.springframework.mock.web.MockServletContext;
import org.springframework.test.context.support.AbstractContextLoader;
import org.springframework.web.context.ConfigurableWebApplicationContext;
import org.springframework.web.context.support.XmlWebApplicationContext;

public class MockServletContextWebContextLoader extends AbstractContextLoader {

    @Override
    public final ConfigurableApplicationContext loadContext(String... locations) throws Exception {
        ConfigurableWebApplicationContext context = new XmlWebApplicationContext();
        context.setServletContext(new MockServletContext("/webapp", new FileSystemResourceLoader()));
        context.setConfigLocations(locations);
        context.refresh();
        AnnotationConfigUtils.registerAnnotationConfigProcessors((BeanDefinitionRegistry) context.getBeanFactory());
        context.registerShutdownHook();
        return context;
    }

    @Override
    protected String getResourceSuffix() {
        return "-context.xml";
    }

}

So there we have it: the final solution using Spring 2.5 TCF and JUnit 4. I’ll be adding further enhancements and simplifications at a later date. Of course, none of this hackery would be necessary if we were simply using classpath based resources. Classpath based access allows a uniform way of accessing resources of any kind through Spring in any environment, whether that be ant, eclipse or a container. The Spring resource locator prefixes classpath: and classpath*: find items on the classpath, with the latter asterisk based syntax matching all occurrences on the filesystem, including inside jars.
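
For illustration, with classpath based resources the vanilla TCF example would have needed no custom loader at all; the file names here are illustrative.

import org.junit.runner.RunWith;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

@RunWith(SpringJUnit4ClassRunner.class)
// classpath: resolves a single matching resource; classpath*: would aggregate
// every match on the classpath, including those inside jars
@ContextConfiguration(locations = { "classpath:foo-context.xml", "classpath:bar-context.xml" })
public abstract class AbstractClasspathSpringTest {
}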

Update1: It turns out that there is a jira issue for this particular use case, which is fantastic news, and I believe we have Geoff, the commenter on this post, to thank for filing it. If you’d like to see this happen please add a vote and comment to the jira issue. It is set to be completed by 3.0M2, which seems a little distant right now; it would have been great to have it in a point release. However, better late than never.

Update2: Geoff please find my response below.

Originally the code in this post was implemented at my workplace, where it does work. However, when I got around to trying it out in an isolated example project, I too had the same problem: the lack of dependency injection. And yes, you can’t swap the refresh line and the annotations line, due to a fatal exception.

Our work code base has extended certain Spring classes to incorporate its own custom logic, so we are in fact using a custom subclass of XmlWebApplicationContext. However, I did try using XmlWebApplicationContext directly and it still worked at work. As to why it works at my workplace and not in an isolated project, I really haven’t had time to debug. I think it’s great that this has finally been filed as a jira, which I’d been meaning to do for quite some time, because whether or not this problem can be solved with custom code, there should be native support for it in the Spring framework. If I find out more about this issue I will post here and on the forums to let you all know.

Update3: I’ve synchronised the minor differences between my work context loader and the example loader here (I’ve added the modifyLocations() method). Please retry and let me know. I’m unable to try this myself for the time being due to lack of time and a suitable environment.

Update4: Removed the most recent modifications as they don’t work. It’s highly likely that this only works for me because at my workplace we are using a custom patched build of Spring as well as our own classes which extend Spring (which I wasn’t aware of to this degree previously), so in order to get this to work on a vanilla distribution please follow the forum link that GeoffM posted below or the jira ticket. Also vote on the jira ticket if you can. Thanks.

Spring UG Meeting on Nov 12

The next Spring UG meeting is on Nov 12 and features Sam Brannen on Developing Web Applications with OSGi. Excerpt follows.

If you’re a Java web developer, you’re certainly familiar with monolithic WAR deployments and library bloat, and you’ve probably thought numerous times, “There must be a better way.” Well, there is! By building on the benefits of an OSGi runtime environment and combining the Spring and Spring-DM programming models, the SpringSource dm Server offers enterprise web developers exciting new opportunities.

This session will focus on developing web applications in an OSGi environment and will include a discussion of the migration path from a standard Java EE WAR to a fully OSGi-enabled web application packaged as a Web Module within a PAR. We will begin with an overview of deployment and packaging options available on the dm Server and then take a closer look at each supported web deployment model from Standard WARs to Shared Libraries WARs, Shared Services WARs, and finally Web Modules. Attendees will walk away with a solid understanding of how to both develop and deploy next generation web applications on the SpringSource dm Server.

This sounds very interesting. Sam Brannen, incidentally, created the Spring TestContext framework, which has transformed Spring’s testing support from what it was before, both from an API and an ease of use perspective. I’ve already registered and I’m going to try to attend this one as it is in the evening and it is free. Come along if you can.

Java project structural best practices

The structure of a Java project has gone through a number of transitions over time. Here are some conventional examples.

Basic project

Here is a simple Java project created in eclipse for basic needs. Although lightweight and simple, it offers no place to put resources.

eclipse-project/
|-- bin
`-- src

Ant web project

Here is the legacy ant project layout from before maven came along. It provides a place to put resources. However, the problem is that anything you put directly in WEB-INF is not classpath accessible and, specific to eclipse, you cannot add WEB-INF as a source folder on your editor classpath because your output folder, WEB-INF/classes/, is nested within it. A sketch of the problem follows the layout below.

ant-project/
|-- src
|-- tests
`-- webapp
    `-- WEB-INF
        `-- classes
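
A quick illustration of the issue, assuming a hypothetical app.properties placed directly under webapp/WEB-INF/:

import java.io.InputStream;

public class WebInfClasspathDemo {

    public static void main(String[] args) {
        // WEB-INF itself is not on the classpath - only WEB-INF/classes is - so this
        // lookup prints null unless the file is copied under WEB-INF/classes
        InputStream in = WebInfClasspathDemo.class.getResourceAsStream("/app.properties");
        System.out.println(in);
    }

}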

Basic maven project

This is the standard maven layout and provides clean separation of every distinct responsibility of a project structure. It is very well defined, has no classpath problems and is my favourite.

basic-maven-project/
|-- pom.xml
|-- src
|   |-- main
|   |   |-- java
|   |   |-- resources
|   |   `-- webapp
|   |       `-- WEB-INF
|   `-- test
|       |-- java
|       `-- resources
`-- target
    |-- classes
    `-- test-classes

Advanced maven project

The advanced maven project structure is offered by M2Eclipse, the maven plugin for eclipse, and has one advantage over the previous structure: it offers different output directories for eclipse and the command line. This is useful when you build both in the editor and on the command line and don’t want the two to conflict with, clear or overwrite each other in the output directory. It also helps editors and build tools work independently when their settings differ.

advanced-maven-project/
|-- pom.xml
|-- src
|   |-- main
|   |   |-- java
|   |   `-- resources
|   `-- test
|       |-- java
|       `-- resources
|-- target
|   |-- classes
|   `-- test-classes
`-- target-eclipse
    |-- classes
    `-- test-classes

Best practices

So what must one consider when creating a new project structure? Sometimes the use of maven is not permitted, either due to a legacy environment or due to a heavy investment in ant already. That is when it becomes necessary to think in terms of concepts and the pros and cons of different structures, outside the domain of any specific build tool. The following recommendations make life a hell of a lot easier.

  • Separate sources from tests. Yes I have seen them both together and it is a very bad idea.
  • Separate resources from the output folder. In its simplest form this can be a resources/ folder by itself. The benefit is that, because it is not nested within your output folder, it can be added as a source folder within your editor and made classpath accessible to both your editor and your build tool. This provides a uniform classpath based access mechanism to all of its contents, independent of the tool used. The build tool can then be configured to copy its contents to the output folder.
  • Optionally have separate output directories for different tools. At times, for example, it is necessary for eclipse and ant to operate on the same project independently and in isolation, especially when you are testing your build environment with various tools and settings. Most of the time this is unnecessary, and so it remains optional.
  • When accessing resources from your application, always use classpath based paths. Spring makes this dead easy by providing various locator patterns that also interpret wildcards (classpath:, classpath*:, file:). If you store your resources in their own folder and add that folder to the classpath, you can access its contents simply with a leading slash (see the sketch after this list). The build tool or editor will then copy your resources into the output folder, where the same classpath based access continues to apply, whether deployed or otherwise.
  • When using components (jars) containing resources such as spring context files and property files, let the application do the aggregating. A component should only ever provide config and resources, never load them, and it should never reach into the application to load application specific content. Control should lie with the end-user application and the components should remain subordinate, not least because the same component may be used in more than one disparate application.
  • Use Spring to your benefit. Spring provides a uniform mechanism to load resources of different kinds; use it instead of loading resources manually. Resist the temptation to innovate on the classpath and its integration with build tools – reinventing the wheel on such common ground rarely pays off.
  • Standardise on your editor if possible (non-essential) and, if using source control, check in your editor project configuration. It makes new check outs far more informative as to the project structure and classpath entries. If omitted or ignored, it has to be reconfigured from scratch every time.
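
To make the classpath based access point above concrete, here is a minimal sketch assuming a hypothetical config/app.properties file kept under a resources/ folder that is on the classpath. It uses Spring’s Resource abstraction rather than manual file handling:

import java.util.Properties;

import org.springframework.core.io.DefaultResourceLoader;
import org.springframework.core.io.Resource;
import org.springframework.core.io.support.PropertiesLoaderUtils;

public class ResourceAccessDemo {

    public static void main(String[] args) throws Exception {
        // the build tool or editor copies resources/ into the output folder, so the
        // same classpath location works unchanged in ant, eclipse or a container
        Resource resource = new DefaultResourceLoader()
                .getResource("classpath:config/app.properties");
        Properties props = PropertiesLoaderUtils.loadProperties(resource);
        System.out.println(props);
    }

}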

This was a blog post on a rather basic topic. However, I felt it was worth a mention at least once, as there will always be legacy projects out there in need of inspiration. Classpath problems are fundamental: where they exist they tend to impact the application as a whole, together with its components, and cause prolonged heartache. It’s worth having a clearly defined and justified strategy to begin with so that, later on down the line, your app can grow and change with ease.

Free event: In The Brain of Eric Evans on Domain Driven Design

Free event: In The Brain of Eric Evans, on Domain Driven Design, London, UK, October 2nd, 18:30 – 20:00

Eric Evans will give his ideas and views on software architecture and design and will explain the principles and ideas that form the foundation of Domain Driven Design.

Attendance is free for registered participants. You can register and find more information here: http://skillsmatter.com/podcast/home/domain-driven-design

Source: Design & DDD UK User Group

This is in the evening so I’m going to try and go, work permitting.