MemoryAnalyzer

About

The Eclipse Memory Analyzer tool (MAT) is a fast and feature-rich heap dump analyzer that helps you find memory leaks and analyze high memory consumption issues.

With Memory Analyzer one can easily

  • find the biggest objects, as MAT provides reasonable accumulated size (retained size)
  • explore the object graph, both inbound and outbound references
  • compute paths from the garbage collector roots to interesting objects
  • find memory waste, such as redundant String objects, empty collection objects, etc.

Getting Started

Installation

See the download page (http://eclipse.org/mat/downloads.php) for installation instructions.

Basic Tutorials

Both the Basic Tutorial chapter in the MAT documentation (http://help.eclipse.org/neon/topic/org.eclipse.mat.ui.help/gettingstarted/basictutorial.html) and the Eclipse Memory Analyzer Tutorial by Lars Vogel (http://www.vogella.com/articles/EclipseMemoryAnalyser/article.html) are good first reading if you are just starting with MAT.

Further Reading

See MemoryAnalyzer/Learning Material for a collection of presentations and web articles on Memory Analyzer; they are a good resource for learning as well.


Getting a Heap Dump

HPROF dumps from Sun Virtual Machines

The Memory Analyzer can work with HPROF binary formatted heap dumps. Those heap dumps are written by Sun HotSpot and any VM derived from HotSpot. Depending on your scenario, your OS platform and your JDK version, you have different options to acquire a heap dump.

Non-interactive

If you run your application with the VM flag -XX:+HeapDumpOnOutOfMemoryError, a heap dump is written on the first OutOfMemoryError. There is no overhead involved unless an OOM actually occurs. This flag is a must for production systems, as it is often the only way to further analyze the problem.

As per this article (http://stackoverflow.com/questions/542979/using-heapdumponoutofmemoryerror-parameter-for-heap-dump-for-jboss), the heap dump is generated in the "current directory" of the JVM by default. It can be explicitly redirected with -XX:HeapDumpPath=, for example -XX:HeapDumpPath=/disk2/dumps. Note that the dump file can be huge, up to gigabytes, so ensure that the target file system has enough space.
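
A minimal launch line combining both flags might look as follows (myapp.jar is a placeholder for your own application):

   java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/disk2/dumps -jar myapp.jar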


Interactive

As a developer, you want to trigger a heap dump on demand. On Windows, use JDK 6 and JConsole. On Linux and Mac OS X, you can also use jmap, which comes with JDK 5.

Via MAT:

  • tutorial: http://community.bonitasoft.com/blog/effective-way-fight-duplicated-libs-and-version-conflicting-classes-using-memory-analyzer-tool

Via Java VM parameters:

  • -XX:+HeapDumpOnOutOfMemoryError writes heap dump on OutOfMemoryError (recommended)
  • -XX:+HeapDumpOnCtrlBreak writes heap dump together with thread dump on CTRL+BREAK
  • -agentlib:hprof=heap=dump,format=b combines the above two settings (old way; not recommended as the VM frequently dies after CTRL+BREAK with strange errors)

Via Tools:

  • Sun (Linux, Solaris; not on Windows) JMap Java 5: jmap -heap:format=b <pid>
  • Sun (Linux, Solaris; Windows see link) JMap Java 6: jmap.exe -dump:format=b,file=HeapDump.hprof <pid>
  • Sun (Linux, Solaris) JMap with Core Dump File: jmap -dump:format=b,file=HeapDump.hprof /path/to/bin/java core_dump_file
  • Sun JConsole: Launch jconsole.exe and invoke operation dumpHeap() on the HotSpotDiagnostic MBean (a programmatic equivalent is sketched after the table below)
  • SAP JVMMon: Launch jvmmon.exe and call the menu entry for dumping the heap

The heap dump will be written to the working directory.

   Vendor / Release      VM Parameter                                     VM Tools
                         On OoM   On Ctrl+Break          Agent            JMap                           JConsole
   Sun, HP   1.4.2_12    Yes      Yes                    Yes
             1.5.0_07    Yes      Yes (Since 1.5.0_15)   Yes              Yes (Only Solaris and Linux)
             1.6.0_00    Yes      Yes                                     Yes                            Yes
   SAP       1.5.0_07    Yes      Yes                    Yes              Yes (Only Solaris and Linux)
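
The JConsole dumpHeap() operation can also be invoked programmatically. Here is a minimal sketch, assuming a HotSpot-based JVM (JDK 6 or later) where the com.sun.management diagnostic MBean is available; the class name HeapDumper and the output file name are illustrative:

   import java.lang.management.ManagementFactory;
   import com.sun.management.HotSpotDiagnosticMXBean;

   public class HeapDumper {
       public static void main(String[] args) throws Exception {
           // Obtain the same HotSpotDiagnostic MBean that JConsole uses
           HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                   ManagementFactory.getPlatformMBeanServer(),
                   "com.sun.management:type=HotSpotDiagnostic",
                   HotSpotDiagnosticMXBean.class);
           // true = dump only live objects, i.e. force a GC before dumping
           bean.dumpHeap("HeapDump.hprof", true);
       }
   }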

System Dumps and Heap Dumps from IBM Virtual Machines

Memory Analyzer can also read memory-related information from IBM system dumps and from Portable Heap Dump (PHD) files once the IBM DTFJ feature is installed. With DTFJ installed, File > Open Heap Dump should offer the following file types:

  • All known formats
  • HPROF binary heap dumps
  • IBM 1.4.2 SDFF
  • IBM Javadumps
  • IBM SDK for Java (J9) system dumps
  • IBM SDK for Java Portable Heap Dumps

For a comparison of dump types, see Debugging from dumps. System dumps are simply operating system core dumps; therefore, they are a superset of portable heap dumps. System dumps are far superior to PHDs, particularly because of more accurate GC roots and thread-based analysis, and, unlike PHDs, system dumps contain memory contents just as HPROF dumps do. Older versions of IBM Java (e.g. < 5.0SR12, < 6.0SR9) required running jextract on the operating system core dump, which produced a zip file containing the core dump, the XML or SDFF file, and the shared libraries. The IBM DTFJ feature still supports reading these jextracted zips; however, newer versions of IBM Java do not require jextract for use in MAT, since DTFJ is able to directly read each supported operating system's core dump format. Simply ensure that the operating system core dump file ends with the .dmp suffix so that it is visible in the MAT Open Heap Dump selection. It is also common to zip core dumps because they are large and compress very well; if a core dump is compressed with .zip, the IBM DTFJ feature in MAT is able to decompress the ZIP file and read the core from inside (just like a jextracted zip).

The only significant downsides of system dumps compared to PHDs are that they are much larger, they usually take longer to produce, they may be useless if they are taken manually in the middle of an exclusive event that manipulates the underlying Java heap, such as a garbage collection, and they sometimes require operating system configuration (Linux, AIX) to ensure that they are not truncated.

In recent versions of IBM Java (> 6.0.1), by default, when an OutOfMemoryError is thrown, IBM Java produces a system dump, PHD, javacore, and Snap file on the first occurrence for that process (although the core dump is often suppressed by the default core ulimit of 0 on operating systems such as Linux). For the next three occurrences, it produces only a PHD, javacore, and Snap. If you only plan to use system dumps, and you have configured your operating system correctly as per the links above (particularly core and file ulimits), then you may disable PHD generation with -Xdump:heap:none. For versions of IBM Java older than 6.0.1, you may switch from PHDs to system dumps using -Xdump:system:events=systhrow,filter=java/lang/OutOfMemoryError,request=exclusive+prepwalk -Xdump:heap:none (a full launch line is sketched below).
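
For example, a complete launch line for an older IBM Java using the options above (myapp.jar is again a placeholder for your own application):

   java -Xdump:system:events=systhrow,filter=java/lang/OutOfMemoryError,request=exclusive+prepwalk -Xdump:heap:none -jar myapp.jar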

In addition to an OutOfMemoryError, system dumps may be produced using operating system tools (e.g. gcore in gdb for Linux, gencore for AIX, Task Manager for Windows, SVCDUMP for z/OS, etc.), using the IBM Java APIs, using the various options of -Xdump, using Java Surgery, and more.
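
As a minimal sketch of the operating-system route on Linux, assuming gdb's gcore utility is installed and the Java process id is 12345 (an illustrative value), one could write the core and rename it with the .dmp suffix mentioned above:

   gcore -o core 12345      # writes core.12345
   mv core.12345 core.dmp   # the .dmp suffix makes it visible in MAT's Open Heap Dump dialog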

Versions of IBM Java older than IBM JDK 1.4.2 SR12, 5.0 SR8a and 6.0 SR2 are known to produce inaccurate GC root information.

What if the Heap Dump is NOT Written on OutOfMemoryError?

Heap dumps are not written on OutOfMemoryError for the following reasons:

  • Application creates and throws OutOfMemoryError on its own
  • Another resource like threads per process is exhausted
  • C heap is exhausted

As for the C heap, a telltale sign that you will not get a heap dump is when the failure happens in C code (eArray.cpp in the example below):

   # An unexpected error has been detected by SAP Java Virtual Machine:
   # java.lang.OutOfMemoryError: requested 2048000 bytes for eArray.cpp:80: GrET*. Out of swap space or heap resource limit exceeded (check with limits or ulimit)?
   # Internal Error (\\...\hotspot\src\share\vm\memory\allocation.inline.hpp, 26), pid=6000, tid=468

C heap problems may arise for different reasons, e.g. out-of-swap-space situations, exhausted process limits, or address space limitations, e.g. heavy fragmentation or simply its depletion on machines with limited address space, such as 32-bit machines. The hs_err file will give you more information on this type of error. A Java heap dump would not be of any help here anyway.

Also, please note that a heap dump is written only on the first OutOfMemoryError. If the application catches it and continues to run, the next OutOfMemoryError will never cause a heap dump to be written! The short sketch below illustrates this caveat.
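
A minimal sketch of the caveat (the class name OomTwice is illustrative): run it with java -XX:+HeapDumpOnOutOfMemoryError OomTwice and note that only one .hprof file appears, even though two OutOfMemoryErrors are thrown.

   import java.util.ArrayList;
   import java.util.List;

   public class OomTwice {
       public static void main(String[] args) {
           for (int attempt = 1; attempt <= 2; attempt++) {
               try {
                   List<byte[]> hog = new ArrayList<byte[]>();
                   while (true) {
                       hog.add(new byte[1 << 20]); // fill the heap until it overflows
                   }
               } catch (OutOfMemoryError e) {
                   // the heap dump is written for attempt 1 only;
                   // hog goes out of scope here, so the GC reclaims the memory
                   System.out.println("Caught OOM #" + attempt + ", continuing");
               }
           }
       }
   }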

Extending Memory Analyzer

Memory Analyzer is extensible, so new queries and dump formats can be added. Please see MemoryAnalyzer/Extending_Memory_Analyzer for details.
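
To give a flavor of what a query extension looks like, here is a minimal sketch of a custom query, assuming the org.eclipse.mat.query API and registration through MAT's query extension mechanism as described on that page; the class name HeapSizeQuery is illustrative:

   import org.eclipse.mat.query.IQuery;
   import org.eclipse.mat.query.IResult;
   import org.eclipse.mat.query.annotations.Argument;
   import org.eclipse.mat.query.results.TextResult;
   import org.eclipse.mat.snapshot.ISnapshot;
   import org.eclipse.mat.util.IProgressListener;

   public class HeapSizeQuery implements IQuery {
       @Argument
       public ISnapshot snapshot; // the open heap dump, injected by MAT

       public IResult execute(IProgressListener listener) throws Exception {
           long used = snapshot.getSnapshotInfo().getUsedHeapSize();
           return new TextResult("Used heap: " + used + " bytes");
       }
   }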
