
(Last updated: 2009/03/07.)

Below is a collection of goofs, mistakes, and bad decisions made in developing plugins for the Eclipse platform. Many are standard Java programming problems, some are specific to Eclipse. The intent here is not to pick on the perpetrators (in most cases they in fact eagerly contributed the blooper information!) but rather to help other developers avoid similar pitfalls. The bloopers have been contributed from the community and may have been discovered in Eclipse SDK code or in third party plugins. Since all of Eclipse is implemented as plugins, the issues are generally relevant.

Each blooper is structured as a statement of the scenario followed by suggested techniques for avoiding the problem(s). In some cases there are clear steps, in others there really is no solution except to follow the advice of a wise doctor and "don't do that".

The set of bloopers is (sadly) always growing. This page is intended as a resource for developers to consult to build their general knowledge of problems, techniques, etc. Check back often and contribute your own bloopers.

String.substring()

The java.lang.String.substring(...) method is usually implemented by creating a new String object that points back to the same underlying char[] as the receiver, but with different offset and length values. Therefore, if you take a very large string, create a substring of length 1, then discard the large string, the little substring may still hold onto the very large char[].

A nasty variant of this blooper is when the substring is later interned by calling String.intern(). On some VMs, this means the large char[] object is now held onto forever by the VM's intern pool. Kiss that memory good-bye, because there's no way to free it again.

Avoidance techniques:

In situations where you know you are creating a small substring and then throwing the large string away, force a copy of the string to be created by calling new String(substring). This seems counter-intuitive from a performance perspective because it creates extra objects, but it can be worthwhile if the substring is being retained for a long period. In one particular case in the Eclipse JDT plugins, copying the substring yielded a 550KB space savings. Not bad for a one line fix!
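
A minimal sketch of the idea (readVeryLargeFile() is a hypothetical placeholder for whatever produces the big string; note that some newer VM class libraries copy on substring anyway, in which case the explicit copy is redundant):

   // The small substring otherwise pins the entire char[] of the large string.
   String large = readVeryLargeFile();                   // hypothetical source of a huge string
   String leaky = large.substring(0, 10);                // may still share the large char[]
   String compact = new String(large.substring(0, 10));  // forces a copy of just 10 characters
   // Retain 'compact' (not 'leaky') if the large string is about to be discarded.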

Unbuffered I/O

With most flavours of java.io.InputStream and java.io.OutputStream, buffering doesn't come for free. This means that every single read and write call may result in disk or network I/O. Similarly in Eclipse, the streams returned by methods such as org.eclipse.core.resources.IFile#getContents, or created by opening an InputStream on an Eclipse URL are not buffered.

Avoidance techniques:

The solution in this case is simple. Just wrap the stream in a java.io.BufferedInputStream or BufferedOutputStream. If you have a good idea of the number of bytes that need reading or writing, you can even set the stream's buffer size appropriately.
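
A minimal sketch, assuming file is an org.eclipse.core.resources.IFile and with exception handling elided:

   // IFile#getContents returns an unbuffered stream; wrap it before reading.
   InputStream in = new BufferedInputStream(file.getContents(), 8192); // 8KB is an arbitrary buffer size
   try {
       // ... read from 'in' ...
   } finally {
       in.close();
   }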

Strings in the plug-in registry

In Eclipse 2.0.* and before, it was generally assumed that there would be hundreds of plugins and that the plugin registry, while sizeable, could reasonably be held in memory. As Eclipse-based products came to market we discovered that developers were taking the plugin model to heart and were creating hundreds of plugins for one product. Our assumptions were being tested...

One of the key failings in this area was the use of Strings. (Note this is actually a more general problem, but it reared its ugly head in a very tangible way here.) All aspects of plugins (extensions, extension points, plugins/fragments, ...) are defined in terms of String identifiers. When the platform starts it parses the plugin.xml/fragment.xml files and builds a registry. This registry is essentially a complete parse tree of all parsed files (i.e., a mess of Strings). In general the String identifiers are not needed for human readability but rather for code-based access and matching. Unfortunately, Strings are one of the least space-efficient data forms in Java (e.g., a 25-character string requires approximately 90 bytes of storage).

Further, typical code patterns for registry access involve the declaration of some constant, for example:

   public static final String ID = "org.eclipse.core.resources";

And then the use of this constant to access the registry:

   Platform.getRegistry().getPlugin(ID);

In this case, the character sequence "org.eclipse.core.resources" (26 characters) is stored as UTF8 in the constant pool of each class using the constant and, in typical JVMs, on first use, the UTF8 encoding is used to create and intern a real String object. Note that this String object is equal but not identical to the one created during registry parsing. The net result is that the total space required for this identifier use case is:

   (space for "org.eclipse.core.resources" * 2) + (space for UTF8 * number, N, of loaded referencing classes)
   ((44 + 2 * 26) * 2) + (26 * N) = 192 + 26N bytes (e.g., 218 bytes for N = 1, growing with each referencing class)

Obviously as platform installs move from hundreds to thousands of plugins this approach does not scale.

Avoidance Techniques:

The first thing to observe is that this was a design flaw. The initial design should not have relied on the registry being omnipresent. The second observation is that Strings as identifiers are easy to read but terribly inefficient. Third, changing the behaviour in a fundamental way is difficult as much of the implementation is dictated by API (which cannot be changed).

With those points in mind, there are several possible approaches for better performance.

  1. Intern the registry strings: This is perhaps the easiest to implement. Since the strings used in the methods are intern()'d in the system's symbol table, the registry can share the strings by intern()'ing its strings there as well. This costs a little more at parse time but saves one copy of each string, or (44 + 2 * M) bytes for a string of M characters. One side effect of this is the performance degradation of intern(). On some JVM implementations the performance of intern() degrades dramatically. Interning the registry strings eagerly and early seeds the intern() table, increasing the collision rate.
  2. Use a private intern table: Within the registry there are many duplicate strings. These can be eliminated without overloading the system's intern() table by using a secondary table. The duplication between the strings in the code and those in the registry would not be eliminated.
  3. Avoid strings: In general the ids are used for matching/looking up elements in the registry. The only compelling reason to use Strings is so they are humanly readable in the plugin.xml files. Some sort of mechanism which retains the needed information but uses primitive types (e.g., int) as keys would address the issue without losing the usability. Unfortunately, this approach is very attractive but difficult after the fact as most of the platform runtime's API is specified in terms of string ids.
  4. Swap out the registry: The registry is typically used only when plugins are activated. As such, most or all of it could be written to disk and reread on demand.

In Eclipse 3.1, the fourth approach was taken. All registry data structures are now loaded from disk on demand, and flushed from memory when not in use by employing soft references (java.lang.ref.SoftReference). For an application that is not consulting the registry, memory usage for the extension registry has effectively been reduced to zero.
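
The following is a generic sketch of the soft-reference idiom, not the actual registry implementation; RegistryData and loadFromDisk() are hypothetical placeholders:

   private SoftReference cache; // holds a RegistryData, reclaimable by the GC under memory pressure

   synchronized RegistryData getData() {
       RegistryData data = (cache == null) ? null : (RegistryData) cache.get();
       if (data == null) {
           data = loadFromDisk();              // re-read the persisted structure on demand
           cache = new SoftReference(data);
       }
       return data;
   }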

Excessive crawling of the extension registry

As described in the previous blooper, the Eclipse extension registry is now loaded from disk on demand, and discarded when no longer referenced. This speed/space trade-off has created the possibility of a whole new category of performance blooper for clients of the registry. For example, here is a block of code that was actually discovered in a third-party plugin:

   IExtensionRegistry registry = Platform.getExtensionRegistry();
   IExtensionPoint[] points = registry.getExtensionPoints();
   for (int i = 0; i < points.length; i++) {
       IExtension[] extensions = points[i].getExtensions();
       for (int j = 0; j < extensions.length; j++) {
           IConfigurationElement[] configs = extensions[j].getConfigurationElements();
           for (int k = 0; k < configs.length; k++) {
               if (configs[k].getName().equals("some.name")) {
                   // do something with this config
               }
           }
       }
   }

Prior to Eclipse 3.1, the above code was actually not that terrible. Although the extension registry has been loaded lazily since Eclipse 2.1, it always stayed in memory once loaded. If the above code ran after the registry was in memory, most of the registry API calls were quite fast. This is no longer true. In Eclipse 3.1, the above code will now cause the entire extension registry, several megabytes for a large Eclipse-based product, to be loaded into memory. While this is an extreme case, there are plenty of examples of code that is performing more registry access than necessary. These inefficiencies were not apparent with a memory-resident extension registry.

Avoidance techniques:

Avoid calling extension registry API when not needed. Use shortcuts as much as possible. For example, directly call IExtensionRegistry.getExtension(...) rather than IExtensionRegistry.getExtensionPoint(...).getExtension(...).

Some extra shortcut methods were added in Eclipse 3.1 to help clients avoid unnecessary registry access. For example, to find the plugin ID (namespace) for a configuration element, clients would previously call IConfigurationElement.getDeclaringExtension().getContributor().getName(). It is much more efficient to call the new IConfigurationElement.getContributor().getName() method directly, saving the IExtension object from potentially being loaded from disk.
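
A sketch of the shortcut style (the extension point ID and element name are examples):

   IExtensionRegistry registry = Platform.getExtensionRegistry();
   // Ask only for the elements of the one extension point of interest.
   IConfigurationElement[] configs = registry.getConfigurationElementsFor("org.example.somePoint");
   for (int i = 0; i < configs.length; i++) {
       if (configs[i].getName().equals("some.name")) {
           String contributor = configs[i].getContributor().getName(); // no IExtension loaded from disk
           // ... do something with this config ...
       }
   }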

Message catalog keys

The text messages required for a particular plugin are typically contained in one or more Java properties files. These message bundles have key-value pairs where the key is some useful token that humans can read and the value is the translated text of the message. Plugins are responsible for loading and maintaining the messages. Typically this is done on demand, either when the plugin is started or when the first message from a particular bundle is needed. Loading one message typically loads all messages in the same bundle.

There are several problems with this situation:

  1. Again we have the inefficient use of Strings as identifiers. Other than readability in the properties file, having human readable keys is not particularly compelling. Assuming the use of constants, int values would be just as functional.
  2. Similarly, the use of String keys requires the use of Hashtables to store the loaded message bundles. Some array based structure would be more efficient.
  3. The Eclipse SDK contains tooling which helps users "externalize" their Strings. That is, it replaces embedded Strings with message references and builds the entries in the message bundles. This tool can generate the keys for the messages as they are discovered. Unfortunately, the generated keys are based on the fully qualified class/method name where the string was discovered. This makes for quite long keys (e.g., keys greater than 90 characters long were discovered in some of the Debug plugins).

Avoidance Techniques: There are several facets to this problem but the basic lesson here is to understand the space you are using. Long keys are not particularly useful and just waste space. String keys are good for developers but end-users pay the space cost. Mechanisms like bundle loading/management which are going to be used throughout the entire system should be well thought out and supplied to developers rather than leaving it up to each plugin to do its own (inefficient) implementation.

With that in mind, below are some of the many possible alternatives:

  1. Shorter keys: Clearly the message keys should be useful but not excessively long.
  2. Use the Eclipse 3.1 message bundle facility, org.eclipse.osgi.util.NLS. This API binds each message in your catalog to a Java field, eliminating the notion of keys entirely, yielding a huge memory improvement over the basic Java PropertyResourceBundle.
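
A minimal sketch of the NLS pattern from the last point (class, bundle, and field names are examples):

   public class Messages extends NLS {
       private static final String BUNDLE_NAME = "org.example.internal.messages"; // messages.properties
       public static String Example_fileNotFound; // bound from the key "Example_fileNotFound"
       static {
           NLS.initializeMessages(BUNDLE_NAME, Messages.class);
       }
       private Messages() {
           // no instances
       }
   }

Client code then refers to the field directly (Messages.Example_fileNotFound) or binds arguments with NLS.bind(Messages.Example_fileNotFound, fileName).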

Eager preference pages

Defining preference keys and setting preference defaults is often done in preference pages. As a result, the preference page classes are loaded when either the key or the value is needed. Those preference page classes sometimes contain extensive UI code. The net result is that code is loaded that is typically never used, since users rarely consult preference pages once acceptable values are set.

Avoidance Techniques: Refactor your code to move the preference constants and initialization code into dedicated classes. Preference page classes will then only be loaded on demand by the workbench's lazy loading mechanism when needed. In Eclipse 3.0 or newer it is recommended to use the org.eclipse.core.runtime.preferences extension point.
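
A sketch of such a dedicated initializer, registered through the extension point above (the plugin ID, keys, and values are examples):

   public class ExamplePreferenceInitializer extends AbstractPreferenceInitializer {
       public void initializeDefaultPreferences() {
           IEclipsePreferences defaults = new DefaultScope().getNode("org.example.plugin");
           defaults.putBoolean("showWarnings", true); // hypothetical preference key
           defaults.putInt("timeoutSeconds", 30);     // hypothetical preference key
       }
   }

The preference page class itself is then only loaded when the user actually opens the page.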

Too much work on activation

Plugins are activated as needed. Typically this means that a plugin is activated the first time one of its classes is loaded. On activation, the plugin's runtime class (aka plugin class) is loaded and instantiated and the startup() lifecycle method called. This gives the plugin a chance to do rudimentary initialization and hook itself into the platform more tightly than is allowed by the extension mechanisms in the plugin.xmls.

Unfortunately, developers seize the opportunity and do all manner of work. Also unfortunate is the fact that activation is done in a context-free manner. For example, at activation time the JDT Core plugin does not know why it is being activated. It might be because someone is trying to compile/build some Java, or it might be because class C in some other plugin subclasses a JDT class and C is being loaded. In the former case it would be reasonable for JDT Core to load/initialize required state, create new structures etc. In the latter this would be completely unreasonable. Because a bundle never knows what causes it to be activated, it is difficult for a bundle to optimize for this. JDT Core handles this by populating its in-memory model lazily, ensuring that initialization work is only performed when necessary.

We have seen cases where literally hundreds of classes and megabytes of code have been loaded (not to mention all the objects created) just to check and see that there was nothing to do.

This behavior impacts platform startup time if the plugins in question contribute to the current UI/project structure, or imposes lengthy delays in the user's workflow when they suddenly (often unknowingly) invoke some new function requiring the errant plugin to be activated.

Avoidance Techniques:

The platform provides lazy activation of plugins. Plugins are responsible for efficiently creating their internal structures according to the function required. The startup() method is not the time or place to be doing large scale initialization.
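
A sketch of keeping startup cheap and building state lazily, using the 3.0-style start(BundleContext) lifecycle method (ExampleModel is a hypothetical placeholder for whatever heavy structure the plugin maintains):

   public class ExamplePlugin extends Plugin {
       private ExampleModel model; // hypothetical heavyweight in-memory model

       public void start(BundleContext context) throws Exception {
           super.start(context);
           // Keep this cheap: no model loading, no file I/O, no registry crawling.
       }

       public synchronized ExampleModel getModel() {
           if (model == null)
               model = ExampleModel.load(getStateLocation()); // built on first real use
           return model;
       }
   }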

Decorators

The UI plugin provides a mechanism for decorating resources with icons and text (e.g., adding the little 'J' on Java projects or the CVS version number to the resource label). Plugins contribute decorators by extending a UI extension point, specifying the kind of element they would like to decorate. When a resource of the identified kind is displayed, all installed decorators are given a chance to add their bit to the visual presentation. This model/mechanism is simple and clean.

There are performance consequences however:

  • Early plugin activation: In many scenarios, plugins get activated well before their function is actually needed. Further, because of the "Too much work at activation" blooper, the activated plugins often did way more work than was required. In many cases whether or not a resource should be decorated is predicated on a simple test (e.g., does it have a particular persistent property). These require almost no code and certainly no complicated domain/model structures.
  • Resource leaks: The mechanism can leak images even if individual decorators are careful. decorateImage() wants to return an image. If a decorator simply creates a new image and returns it (i.e., without remembering it) then there is no way of disposing it. To counter this, decorators typically maintain a list of the images they have provided. Unfortunately, this list grows monotonically if they still create (but remember) a new image for every decoration request. To counter this, well-behaved decorators cache the images they supply based on some key. The key is typically a combination of the base image provided and the decoration they add. This key then allows decorators to return an already allocated image if the net result of the requested decoration is the same as some previous result. Since decorators are chained, all decorators must have this good behaviour. If just one decorator in a chain returns a new image, then the caching strategies of all following decorators are foiled and once again resources are leaked.
  • Threading: Decorators run in the foreground, which causes problems for some people (e.g., CVS). To work around this, heavy-weight decorators have a background thread which computes the decorations and then issues a label change event to update the UI. This does not scale. When a label changed event is posted, all decorators are run again. This allows the decorators following the heavy-weight contributor to add their decoration. The net result is a flurry of label change events, decoration requests and UI updates, most of which do little or nothing. Further, the problem gets worse quickly as heavy-weight decorators are added.
  • Code complexity: While this is not directly a performance problem, it does lead to performance issues as the code here is complex and hard to test. To do decorators correctly, plugin writers have to write their own caching code as well as their own threading code (assuming they have heavy decorator logic). Both chunks of code are complicated, error prone and likely very much the same from plugin to plugin. Prime candidates for inclusion in the base mechanism.

Avoidance techniques: The UI team tackled this problem by providing more decorator infrastructure.

  • The semantic level of the decorator API was raised so that decorators described their decorations rather than directly acting. This allows the UI mechanisms to manage a central image cache and create fewer intermediate image results by applying all decorations at once.
  • The Workbench also manages a background decoration thread. All heavy-weight decorators are run together in the background and their results combined and presented in one label changed event.
  • Static decoration information can now be declared in the plugin.xml. This allows plugins to contribute decorators without loading/running any of their code (a big win!!). The plugin describes the conditions for decoration (based on the existence of properties, resource types, etc) and the decoration image and position. The Workbench does the rest.
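
A sketch of a lightweight decorator that plugs into this infrastructure, contributed through the org.eclipse.ui.decorators extension point (the helper method and image descriptor are hypothetical):

   public class ExampleDecorator implements ILightweightLabelDecorator {
       public void decorate(Object element, IDecoration decoration) {
           // Describe the decoration; the workbench applies it and manages the image cache.
           if (element instanceof IResource && hasInterestingProperty((IResource) element))
               decoration.addOverlay(ExampleImages.OVERLAY); // an ImageDescriptor, never a raw Image
       }
       // addListener/removeListener/isLabelProperty/dispose omitted for brevity
   }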

PDE cycle detection

PDE Core used to have a linear list of plug-in models generated by parsing manifest files. Meanwhile, the manifest editor has a small 'Issues and Action Items' area in the Overview page. Among other things, this area shows problems related to the plug-in to which the manifest file belongs. One of the problems that can be detected is cyclical plug-in dependencies. When opened, this section will initiate a cycle detection computation.

The cycle detection computation follows the dependency graph trying to find closures. It follows the graph by looping through the plug-in IDs, looking up the plug-in models that match the IDs, then recursively following their dependencies. In the original implementation, each ID->model lookup was done linearly (by iterating over the flat list of models).

Avoidance techniques:

In a large product with 600 plug-ins and a convoluted dependency tree, we got complaints that the manifest editor took 3 minutes to open in some cases!! After performance analysis, we replaced the linear lookup with a hash table (using the plug-in ID as the lookup key). The opening time was reduced to 3 seconds (worst case scenario)! And we already had this table in place for other purposes; the actual fix took 2 minutes to do.
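
An illustrative sketch of the change (the model type and accessor names approximate PDE's, but this is not its exact code):

   // Before: O(n) scan over the flat model list for every ID lookup.
   IPluginModelBase findModel(String id, IPluginModelBase[] models) {
       for (int i = 0; i < models.length; i++)
           if (id.equals(models[i].getPluginBase().getId()))
               return models[i];
       return null;
   }

   // After: O(1) lookup from a table built once while the manifests are parsed.
   Map modelsById = new HashMap(); // plug-in ID -> IPluginModelBase
   IPluginModelBase findModel(String id) {
       return (IPluginModelBase) modelsById.get(id);
   }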

Too many resource change listeners

PRE_AUTO_BUILD and POST_AUTO_BUILD resource change listeners have a non-trivial cost associated with them. This is because a new tree layer must be created to allow the listener to make changes to the tree. It was discovered that of the five BUILD listeners that were typically running, four of them were from the org.eclipse.team.cvs plug-ins. See the bug report (http://dev.eclipse.org/bugs/show_bug.cgi?id=27351) for more details.

Avoidance techniques:

Minimize use of these listeners. Some ideas:

  • POST_CHANGE listeners have trivial cost... switch to POST_CHANGE where possible
  • Two listeners cost more than one. Try to create just one and delegate the work from there.
  • Consider removing listeners when they are not applicable. For example, if you are listening for changes on a particular file or directory, you may be able to remove that listener when the applicable resource is not present.
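
A sketch of the cheaper style suggested above (the listener body is a placeholder):

   IWorkspace workspace = ResourcesPlugin.getWorkspace();
   IResourceChangeListener listener = new IResourceChangeListener() {
       public void resourceChanged(IResourceChangeEvent event) {
           // One listener; delegate to the interested parties from here.
       }
   };
   workspace.addResourceChangeListener(listener, IResourceChangeEvent.POST_CHANGE);
   // ... and when the resource of interest goes away:
   workspace.removeResourceChangeListener(listener);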

Calling Bundle.start()

One of the core APIs of the OSGi runtime is the org.osgi.framework.Bundle interface. Each Bundle instance represents a single bundle (aka plug-in) in an Eclipse or OSGi instance. This class largely replaces the Plugin class from the original Eclipse runtime. As clients started to migrate over to the OSGi runtime, they began using the Bundle class, and in particular its Bundle.start() method to start up a bundle. However, if you look at the fine print in the javadoc of the start() method, it specifies behaviour that many don't expect:

   Persistently record that this bundle has been started. When the
   Framework is restarted, this bundle must be automatically started.

This goes against the general Eclipse philosophy of lazy activation. In Eclipse 3.0 and 3.1, the Eclipse OSGi implementation actually did not obey this specification and continued to follow the lazy activation principle. In Eclipse 3.2, this "bug" was fixed and the start() method now obeys the spec. The end result is that anyone calling Bundle.start() without later calling Bundle.stop() essentially causes an activation leak. In all future sessions, that bundle, and any bundle it implicitly loads due to class references, will be started immediately. This can lead to startup times getting steadily longer as all these started bundles are activated.

Avoidance techniques:

The most obvious solution is to avoid using Bundle.start(). Think hard about why you need to activate that bundle. If you need to access some class or extension in that bundle, then explicit activation is not needed because it will happen automatically. If you are relying on some side-effect of that bundle's activation code, consider using an explicit class or extension reference instead to avoid such a brittle dependency. If you really must activate that bundle, then you must also take responsibility for stopping that bundle when no longer needed. Essentially in this case you are taking that bundle's lifecycle into your own hands. The general lesson from this blooper is that you should always fully read and understand the specification of methods you are using. A method may at first glance appear to be doing what you want, but may have side-effects that you did not anticipate.
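
If explicit activation really is unavoidable, one option on OSGi R4.1 and later frameworks (Eclipse 3.3+) is a transient start, which activates the bundle for the current session only without persistently marking it as auto-started; a sketch with a hypothetical bundle ID:

   Bundle bundle = Platform.getBundle("org.example.somebundle"); // hypothetical bundle ID
   if (bundle != null && bundle.getState() != Bundle.ACTIVE) {
       // START_TRANSIENT avoids the persistent auto-start record described above.
       bundle.start(Bundle.START_TRANSIENT);
   }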

JAR signing and verification

Some people distributing and deploying Eclipse want to sign the content cryptographically so that the recipient can verify that it came from a trusted source. Java has a signing and verification mechanism built directly into the class libraries for doing this kind of signing. The problem is, the performance of certain class library methods varies greatly when JARs are signed, because they perform expensive verification whenever a signed JAR is encountered. Even worse, there is often no way to disable this verification, even in a trusted environment where you are certain about the origin of the JARs being used. Here are some signing-related performance bloopers that were encountered when testing Eclipse with signed JARs:

  • URLClassLoader - this class will verify any JAR that it loads, and there is no way to avoid it.
  • java.util.jar.JarFile - the default JarFile(String) and JarFile(File) constructors will verify any signed JARs, even if you weren't planning on loading or running any code in the JAR you are opening.
  • java.util.jar.Manifest - the manifest of a signed JAR can be quite large, because signatures for each entry in the JAR are stored in the manifest. Most code that is opening a manifest is only interested in the typically small set of main attributes (Manifest.getMainAttributes()) at the start of the file. However, as soon as you instantiate Manifest, the entire manifest file is loaded and parsed. This can be very expensive for large signed JARs, which can sometimes have 500KB of signatures in them.

Avoidance techniques:

Be aware of the consequences of these class library methods when signing is enabled. Consider whether your particular application needs to perform verification, and if not, find an alternative that avoids verification. If you are loading a class or other code to be executed, you likely want the verification to proceed. On the other hand, if you are opening content from a trusted source, or looking at JAR entries that do not contain executable code, then signing may not be necessary. java.util.jar.JarFile has additional constructors with a boolean flag for verification, so it is an easy fix to disable verification in this case. When reading a manifest file, consider doing a light-weight parse to find the main attribute you are looking for, to avoid parsing the entire file. The first blank line in a manifest file indicates the end of the main attributes. For example, the OSGi class org.eclipse.osgi.framework.util.Headers performs an efficient parse of the main attributes of a manifest file.
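
A sketch of opening a trusted JAR without verification, using the boolean constructor mentioned above (the path and entry name are examples):

   JarFile jar = new JarFile(new File("/path/to/trusted.jar"), false); // false = skip signature verification
   try {
       ZipEntry entry = jar.getEntry("plugin.xml"); // example entry, assumed to exist
       if (entry != null) {
           InputStream in = jar.getInputStream(entry);
           // ... read and close 'in' ...
       }
   } finally {
       jar.close();
   }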

Selective loading of JAR files

The JDT Java model uses a least recently used (LRU) cache to bound the amount of memory that it consumes. Reading a very large JAR file would cause this cache to repeatedly overflow, causing the same large JAR file to be read over and over again. While this could be resolved by increasing the Java heap size, it resulted in poor performance in situations where memory was constrained.

Avoidance techniques:

When using a bounded cache, be aware of the thrashing that can occur when operating on data sets that are too large to fit in the cache. The solution that was found in JDT was to read large JAR files selectively, rather than reading the entire file and causing the cache to overflow. The result is that interesting portions remain in the cache longer without consuming lots of memory. User editing experience is thus significantly improved on large workspaces containing big JARs. Our experiments show that the memory requirement for developing Eclipse in Eclipse 3.2 can be lowered to only 128MB (i.e., by passing -Xmx128m to the VM) as opposed to the 256MB currently specified in the eclipse.ini file.
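
For readers unfamiliar with bounded caches, the following is a generic LRU sketch (it is not JDT's implementation; the point of the actual fix was to keep cached entries small by reading JARs selectively):

   // Access-ordered LinkedHashMap that evicts the least recently used entry.
   Map cache = new LinkedHashMap(16, 0.75f, true) {
       protected boolean removeEldestEntry(Map.Entry eldest) {
           return size() > MAX_ENTRIES; // MAX_ENTRIES is a hypothetical tuning constant
       }
   };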

Not closing streams from zip and jar files

The javadoc of java.util.zip.ZipFile#close() specifies that it closes all streams opened on that file. Unfortunately the implementation has never done this (see the Java bug at http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6735255). This means that if you are trusting the spec and never explicitly closing your streams on zip or jar files, you are leaking memory.

Avoidance techniques:

The solution is easy: whenever you open a stream on a ZipFile or JarFile, ensure you explicitly close the stream in addition to closing the file. This will ensure all memory is properly freed.
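
A sketch of the safe pattern, assuming file is a java.io.File and the entry name is an example:

   ZipFile zip = new ZipFile(file);
   try {
       ZipEntry entry = zip.getEntry("some/entry.txt"); // example entry, assumed to exist
       InputStream in = zip.getInputStream(entry);
       try {
           // ... read from 'in' ...
       } finally {
           in.close(); // explicit close; do not rely on ZipFile.close() to do it
       }
   } finally {
       zip.close();
   }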


Back to Eclipse Project
