One thing I didn't mention is that the objects in the queue are coming in via Web service calls, marshaled by XFire's Aegis binding. No big deal, right? Except that the objects are exposed in our API using Java interfaces, not concrete classes. In other words, our code is something like this:
public interface Node { ... }

public class NodeImpl implements Node { ... }

public class OurService {
    public void enqueueNode(Node node) { ... }
}
How does XFire handle this? At first glance, it doesn't seem like a problem, but when XFire is converting a SOAP call into Java objects, how does it know what type of object to instantiate? XFire doesn't magically know that our implementation of the Node interface is NodeImpl. And in fact, early versions of XFire did not support using interfaces in a service's API.
The trick XFire has used since version 1.0 RC (way back when) is to create a dynamic proxy class for the interface. This is really great because it allows interfaces to "just work", but each proxied object carries a significant amount of memory overhead.
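XFire's actual proxy machinery is more elaborate, but the mechanism can be sketched with the JDK's own java.lang.reflect.Proxy: a generic invocation handler backed by a Map implements arbitrary getters and setters, which is exactly the kind of per-object baggage (handler plus backing map) that makes proxies heavier than plain beans. The Node interface and its accessors here are hypothetical stand-ins, not our real API.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for the interface exposed by our service.
interface Node {
    String getName();
    void setName(String name);
}

public class ProxyDemo {
    // Back the interface with a Map of property values -- roughly how a
    // generic proxy can implement getters/setters it has never seen.
    static Node newNodeProxy() {
        Map<String, Object> state = new HashMap<>();
        InvocationHandler handler = (proxy, method, args) -> {
            String name = method.getName();
            if (name.startsWith("set")) {
                state.put(name.substring(3), args[0]);
                return null;
            }
            if (name.startsWith("get")) {
                return state.get(name.substring(3));
            }
            throw new UnsupportedOperationException(name);
        };
        return (Node) Proxy.newProxyInstance(
                Node.class.getClassLoader(),
                new Class<?>[] { Node.class },
                handler);
    }

    public static void main(String[] args) {
        Node node = newNodeProxy();
        node.setName("root");
        System.out.println(node.getName()); // prints "root"
    }
}
```

Every proxy instance here drags along its own HashMap and handler on top of the proxied values, so the overhead scales with the number of queued objects.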
Now, back to our queue. If these proxies are taking so much memory, what can we do about it? Do we have to change our API to use concrete classes? Once again, XFire comes to the rescue. You can use XFire settings to configure your service so that, even though you use interfaces, XFire will always instantiate the class that you tell it to. The details are all here.
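The idea behind that setting, independent of XFire's actual configuration format, is an interface-to-implementation mapping: tell the binding which concrete class to instantiate for each interface, and it can build plain objects instead of proxies. A minimal sketch of that concept (the ImplementationRegistry class and its methods are illustrative, not XFire's API):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-ins for the service's interface and implementation.
interface Node { }
class NodeImpl implements Node { }

// Maps each interface to the concrete class the binding should instantiate,
// so deserialization can create real objects instead of dynamic proxies.
class ImplementationRegistry {
    private final Map<Class<?>, Class<?>> impls = new HashMap<>();

    <T> void register(Class<T> iface, Class<? extends T> impl) {
        impls.put(iface, impl);
    }

    @SuppressWarnings("unchecked")
    <T> T newInstance(Class<T> iface) throws ReflectiveOperationException {
        Class<?> impl = impls.get(iface);
        if (impl == null) {
            throw new IllegalArgumentException("No implementation for " + iface);
        }
        return (T) impl.getDeclaredConstructor().newInstance();
    }
}

public class RegistryDemo {
    public static void main(String[] args) throws Exception {
        ImplementationRegistry registry = new ImplementationRegistry();
        registry.register(Node.class, NodeImpl.class);
        Node node = registry.newInstance(Node.class);
        System.out.println(node.getClass().getSimpleName()); // prints "NodeImpl"
    }
}
```

The resulting NodeImpl instances are ordinary objects with ordinary fields, which is where the memory savings come from.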
Now for the results: using dynamic proxies, our queue could hold about 1,000 elements (using a JVM configured with a maximum heap size of 13 GB). After the change was made, our queue could hold 10,000 elements - a 10x increase! Now that's what I'm talking about :-).