I'm getting OutOfMemoryError
If your Hudson instance starts failing with OutOfMemoryError, there are three possibilities.
- Your Hudson instance is growing in data size and needs a larger heap. In this case, just give it a bigger heap.
- Your Hudson instance is temporarily processing a large amount of data (such as test reports) and needs more headroom in memory. In this case, too, just give it a bigger heap.
- Your Hudson instance is leaking memory, in which case we need to fix the leak.
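In the first two cases, the fix is to raise the JVM maximum heap size. A minimal sketch, assuming Hudson is launched as a standalone war (the heap size is illustrative; service wrappers and servlet containers have their own ways of passing JVM options):

```shell
# Raise the maximum heap to 2 GB; pick a value that fits your data size.
java -Xmx2g -jar hudson.war
```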
Which category your OutOfMemoryError falls into is not always obvious, but here are a few useful techniques to diagnose the problem.
- Use VisualVM, attach it to the running instance, and observe the memory usage. Does memory max out while Hudson is loading? If so, it probably just needs a bigger heap. Or is usage slowly creeping up? If so, it may be a memory leak.
- Do you consistently see the OutOfMemoryError around the same phase of a build? If so, maybe it just needs more memory.
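If you prefer to watch the trend without a GUI, jstat (which ships with the JDK) can sample heap occupancy over time; the process ID below is illustrative:

```shell
# Print heap occupancy percentages for process 12345 every 5 seconds.
# An old-generation column (O) that keeps rising across many GC cycles
# suggests a leak rather than a one-time spike.
jstat -gcutil 12345 5000
```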
If you think it is a memory leak, the Hudson team needs a heap dump to be able to fix the problem. There are several ways to obtain one.
- Run the JVM with -XX:+HeapDumpOnOutOfMemoryError so that it automatically produces a heap dump when it hits an OutOfMemoryError.
- You can run jmap -heap:format=b pid, where pid is the process ID of the target Java process. Only do this on Java 6, as earlier versions have issues.
- Use VisualVM, attach it to the running instance, and obtain a heap dump.
- If your Hudson runs at "http://server/hudson/", request "http://server/hudson/heapDump" with your browser and the heap dump will be downloaded. (Requires 1.395 or newer.)
- If you are familiar with one of many Java profilers, they normally offer this capability, too.
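The command-line options above translate into invocations roughly as follows; the dump path, process ID, and server URL are illustrative and depend on your installation:

```shell
# Dump automatically on OutOfMemoryError (set when launching Hudson):
java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/tmp -jar hudson.war

# Dump on demand from a running Java 6 process (12345 is the target pid):
jmap -heap:format=b 12345

# Download a dump over HTTP on Hudson 1.395 or newer:
wget -O hudson.hprof http://server/hudson/heapDump
```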
Once you obtain the heap dump, please post it somewhere, then open an issue (or find an existing duplicate) and attach a pointer to it.
In the past, the distributed build support has often been a source of leaks (as it involves distributed garbage collection). To check for this possibility, visit a link like http://yourserver/hudson/computer/YOURSLAVENAME/dumpExportTable. If it shows too many objects, they may be leaking.

Analyzing the heap dump yourself
If you cannot let us inspect your heap dump, we will need to ask you to diagnose the leak yourself.
- Find the objects with the biggest retained size. Often they are various Maps, arrays, or buffers.
- Find the path from that object to a GC root, so you can see which Hudson object owns those big objects.
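If you have no profiler at hand, jhat (bundled with JDK 6) can support both steps: it loads the dump and serves a browsable view with a heap histogram for the biggest objects and reference chains back to the GC roots. The file name is illustrative:

```shell
# Serve the heap dump at http://localhost:7000/ for browsing.
# -J-Xmx2g gives jhat itself enough heap to load a large dump.
jhat -J-Xmx2g hudson.hprof
```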
Report a summary of those findings to the mailing list and we'll take it from there.