Jetty/Howto/Prevent Memory Leaks
Well, the most obvious cause of a memory problem is a memory leak in your application :) But if you've investigated thoroughly with tools such as jconsole, YourKit, JProfiler, jvisualvm or any of the other profiling and analysis tools out there, and you can eliminate your code as the source of the problem, read on.
Preventing WebApp Classloader Pinning
There's a class of memory leak problems caused by code keeping references to a webapp classloader. These generally fall into two categories: static fields and daemon threads. In the former case, a static field is initialized with the value of the classloader, which happens to be a webapp classloader; as the webapp is undeployed and redeployed, the static reference lives on, meaning that the webapp classloader can never be garbage collected. In the latter case, a thread is started as a daemon thread and so lives outside the lifecycle of the webapp; threads hold a reference to the context classloader of the thread that created them, thus leading to a memory leak if that classloader belongs to a webapp.
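The daemon-thread case can be seen in miniature with plain JDK calls. This is an illustrative sketch, not Jetty code: in a container, the creating thread's context classloader would be the webapp's, and the long-lived daemon would keep that loader (and every class it loaded) reachable:

```java
public class ThreadPinningSketch {
    public static void main(String[] args) {
        // A thread inherits its creator's context classloader at construction
        Thread daemon = new Thread(() -> {
            try {
                Thread.sleep(Long.MAX_VALUE); // stand-in for a long-lived background task
            } catch (InterruptedException ignored) {
            }
        });
        daemon.setDaemon(true);
        daemon.start();

        // The daemon holds a reference to the creator's context classloader
        // for its entire lifetime; if that were a webapp classloader, the
        // webapp could never be collected after undeploy
        System.out.println(daemon.getContextClassLoader()
            == Thread.currentThread().getContextClassLoader()); // prints "true"

        daemon.interrupt();
    }
}
```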
We provide a number of workaround classes that pre-emptively invoke the problematic code with jetty's classloader, thereby ensuring a webapp's classloader is not pinned. Note that as some of the problematic code creates threads, you should in general be selective about which preventers you enable, and only use those that are specific to your application.
|AppContextLeakPreventer||The call to AppContext.getAppContext() will keep a static reference to the context classloader. AppContext can be invoked in many different places by the jre.|
|AWTLeakPreventer||The java.awt.Toolkit class has a static field holding the default toolkit. Creating the default toolkit causes the creation of an EventQueue, which has a classloader field initialized with the thread context classloader.|
|DOMLeakPreventer||DOM parsing can cause the webapp classloader to be pinned, due to the static RuntimeException field of com.sun.org.apache.xerces.internal.parsers.AbstractDOMParser. Note that the bug report specifically mentions that a heap dump may not identify the GCRoot as the uncollected loader, making it difficult to identify the cause of the leak.|
|DriverManagerLeakPreventer||java.sql.DriverManager keeps a static reference to the classloader, see java.sql.DriverManager.getCallerClassLoader().|
|GCThreadLeakPreventer||Calls to sun.misc.GC.requestLatency create a daemon thread which keeps a reference to the context classloader. A known caller of this method is the RMI implementation. See [http://stackoverflow.com/questions/6626680/does-java-garbage-collection-log-entry-full-gc-system-mean-some-class-called].|
|Java2DLeakPreventer||sun.java2d.Disposer keeps a reference to the classloader.|
|LDAPLeakPreventer||If the com.sun.jndi.LdapPoolManager class is loaded and the system property com.sun.jndi.ldap.connect.pool.timeout is set to a nonzero value, a daemon thread is started that keeps a reference to the context classloader.|
|LoginConfigurationLeakPreventer||The javax.security.auth.login.Configuration class keeps a static reference to the thread context classloader.|
|SecurityProviderLeakPreventer||Some security providers, such as sun.security.pkcs11.SunPKCS11, start a daemon thread which will trap the thread context classloader.|
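All of these preventers follow the same idea: touch the problematic code once, early, while the thread context classloader is the container's rather than a webapp's, so any reference it captures is harmless. Here is a minimal sketch of that pattern; the class names are illustrative, not Jetty's, and the URLClassLoader merely stands in for a webapp classloader:

```java
import java.net.URL;
import java.net.URLClassLoader;

public class PreventerSketch {
    static class Pinned {
        // Static init captures whichever context classloader is current the
        // first time this class is initialized -- this models the JRE code
        // that the real preventers work around
        static final ClassLoader CAPTURED =
            Thread.currentThread().getContextClassLoader();
    }

    // Returns true if the simulated webapp loader ended up pinned
    static boolean demo() throws Exception {
        // Stand-in for a webapp classloader
        ClassLoader webapp = new URLClassLoader(new URL[0],
            PreventerSketch.class.getClassLoader());
        ClassLoader saved = Thread.currentThread().getContextClassLoader();
        try {
            // Preventer step: force initialization NOW, with the container's
            // classloader as the thread context classloader
            Thread.currentThread().setContextClassLoader(
                PreventerSketch.class.getClassLoader());
            Class.forName(Pinned.class.getName(), true,
                PreventerSketch.class.getClassLoader());

            // Webapp code touching the class later is too late to pin it
            Thread.currentThread().setContextClassLoader(webapp);
            return Pinned.CAPTURED == webapp;
        } finally {
            Thread.currentThread().setContextClassLoader(saved);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("webapp loader pinned: " + demo()); // prints "false"
    }
}
```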
How to Configure
Each preventer can be individually enabled by adding an instance to a Server with the addBean(Object) call. Here's an example of how to do it in code with the org.eclipse.jetty.util.preventers.AppContextLeakPreventer:
Server server = new Server();
server.addBean(new AppContextLeakPreventer());
The equivalent in xml can be added to the $JETTY_HOME/etc/jetty.xml file, or to any jetty xml file that configures a Server instance. Note, however, that if you have more than one Server instance in your jvm, you should configure these preventers on only one of them. Here's the code example above expressed in xml:
<Configure id="Server" class="org.eclipse.jetty.server.Server">
  <Call name="addBean">
    <Arg>
      <New class="org.eclipse.jetty.util.preventers.AppContextLeakPreventer"/>
    </Arg>
  </Call>
</Configure>
JSP Bugs: Permgen Problems
The JSP engine in Jetty is Jasper. Jasper was originally developed under the Apache Tomcat project, but over time has been forked by several other projects. All Jetty versions up to 6 used Apache-based Jasper exclusively; Jetty 6 used Apache Jasper only for JSP 2.0. With the advent of JSP 2.1, Jetty 6 switched to the Jasper from Sun's Glassfish project, which is now the reference implementation.
All forks of Jasper suffer from a problem whereby permgen space comes under pressure when jsp tag files are used. This is a consequence of the classloading architecture of the jsp implementation: each jsp file is compiled and its class loaded in its own classloader, to allow for hot replacement. Each jsp that references a tag file will compile the tag if necessary and then load it using the jsp's own classloader. If many jsps refer to the same tag file, the tag's class is loaded over and over again into permgen space, once for each jsp. The relevant Glassfish bug report is bug #3963, and the equivalent Apache Tomcat report is bug #43878. The Apache Tomcat project has closed its report with status WON'T FIX; however, the Glassfish report is still open and scheduled to be fixed. When the fix becomes available, the Jetty project will pick it up and incorporate it into our releases.
Garbage Collection Problems
One symptom of a cluster of jvm-related memory issues is an OutOfMemoryError accompanied by a message such as "java.lang.OutOfMemoryError: requested xxxx bytes for xxx. Out of swap space?".
Sun bug #4697804 describes how this can happen when the garbage collector needs to allocate a little more space during its run and tries to grow the heap, but fails because the machine is out of swap space. One suggested workaround is to ensure that the jvm never tries to resize the heap, by setting the minimum heap size equal to the maximum heap size:
java -Xmx1024m -Xms1024m
Another workaround is to ensure you have configured sufficient swap space on your device to accommodate all programs you are running concurrently.
Another issue related to jvm bugs is exhaustion of native memory. The symptoms to look out for are the process size growing while heap usage remains relatively constant. Native memory can be consumed by a number of things: the JIT compiler is one, and nio ByteBuffers are another. Sun bug #6210541 discusses a still-unsolved problem whereby, in some circumstances, the jvm itself allocates a direct ByteBuffer that is never garbage collected, effectively eating native memory. Guy Korland's blog discusses this problem here and here. As the JIT compiler is one consumer of native memory, a lack of available memory may manifest in the JIT as OutOfMemory exceptions such as "Exception in thread "CompilerThread0" java.lang.OutOfMemoryError: requested xxx bytes for ChunkPool::allocate. Out of swap space?".
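Because direct buffers live outside the heap, heap graphs won't show them; since Java 7 you can at least observe the "direct" and "mapped" pools via JMX. A small sketch of that diagnostic (the allocation here is only to make the pool counters move):

```java
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.nio.ByteBuffer;
import java.util.List;

public class DirectBufferUsage {
    public static void main(String[] args) {
        // This 1 MB lives in native memory, outside the -Xmx heap
        ByteBuffer direct = ByteBuffer.allocateDirect(1024 * 1024);

        // The "direct" and "mapped" buffer pools are exposed via JMX
        List<BufferPoolMXBean> pools =
            ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class);
        for (BufferPoolMXBean pool : pools) {
            System.out.println(pool.getName() + ": count=" + pool.getCount()
                + " used=" + pool.getMemoryUsed() + " bytes");
        }
    }
}
```

The same beans are visible in jconsole under java.nio:type=BufferPool, which helps distinguish buffer growth from JIT or other native-memory consumers.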
By default, Jetty will allocate and manage its own pool of direct ByteBuffers for io if the nio SelectChannelConnector is configured. It also allocates MappedByteBuffers to memory-map static files via the DefaultServlet settings. However, you could be vulnerable to this jvm ByteBuffer allocation problem if you have disabled either of these options. For example, if you're on Windows, you may have disabled the use of memory-mapped buffers for the static file cache on the DefaultServlet to avoid the file-locking problem.