Hudson-ci/features/Memory Performance

The basic idea behind the performance analysis and fixes is to free up memory that is not actively being used. While builds are running some pinning of memory is inevitable, but once a build is finished its data need not be held in memory; it can be read from disk when requested. Summary information may be cached in a data structure to service frequently used queries. Similarly, a Job definition is mostly static information that need not be pinned: it too can be loaded from disk, with a few key attributes cached.
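As an illustration of this split, a small always-resident summary record could be served from a map while the full build record stays on disk until something actually asks for it. This is only a sketch; the class and method names below (BuildSummaryCache, readSummaryFromDisk) are hypothetical, not Hudson's actual API.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical sketch: a tiny, always-resident summary per build, while the
    // full build data stays on disk until explicitly requested.
    class BuildSummaryCache {
        static final class BuildSummary {
            final int number;
            final String result;
            final long durationMillis;
            BuildSummary(int number, String result, long durationMillis) {
                this.number = number;
                this.result = result;
                this.durationMillis = durationMillis;
            }
        }

        private final Map<Integer, BuildSummary> summaries = new ConcurrentHashMap<>();

        BuildSummary summaryOf(int buildNumber) {
            // Frequent queries (status columns, trend graphs) are answered from the
            // summary; the heavyweight build record is not loaded here at all.
            return summaries.computeIfAbsent(buildNumber, this::readSummaryFromDisk);
        }

        private BuildSummary readSummaryFromDisk(int n) {
            // Placeholder: parse only the few needed fields from the build record on disk.
            return new BuildSummary(n, "SUCCESS", 0L);
        }
    }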

Basic considerations:

1. There should be no change to APIs; plugins should be minimally impacted, if at all.
2. We should not trade one problem for another. If nothing is held in memory, I/O could spike when views are rendered.
3. Changes should be incrementally testable.
4. Changes need to be published back to the Hudson repository.

Design Considerations

The current plan is to replace some of the model objects with ‘Lazy’ versions that act as proxies for the real objects and load them from disk only as needed. The real object is weakly held and thus available for GC. The higher an object sits in the model hierarchy (i.e. the closer it is to the Hudson object), the larger the object graph that is freed, but conversely the higher the I/O impact when it has to be reloaded. On its own, though, this approach swings us from one extreme, where everything is cached, to the other, where nothing is cached.
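A minimal sketch of the proxy idea might look like the following. The names (LazyBuild, RealBuild, loadFromDisk) are illustrative assumptions, not Hudson's real classes.

    import java.lang.ref.WeakReference;

    // Hypothetical sketch of a 'Lazy' proxy: it remembers only where the real
    // object lives on disk and holds the loaded object through a WeakReference,
    // so the garbage collector may reclaim it whenever memory is needed.
    class LazyBuild {
        private final String buildDir;
        private WeakReference<RealBuild> ref = new WeakReference<>(null);

        LazyBuild(String buildDir) {
            this.buildDir = buildDir;
        }

        synchronized RealBuild resolve() {
            RealBuild real = ref.get();
            if (real == null) {
                // Never loaded, or already collected: re-read from disk on demand.
                real = RealBuild.loadFromDisk(buildDir);
                ref = new WeakReference<>(real);
            }
            return real;
        }
    }

    // Stand-in for the heavyweight model object.
    class RealBuild {
        static RealBuild loadFromDisk(String dir) {
            // Placeholder for parsing the build record and artifacts under 'dir'.
            return new RealBuild();
        }
    }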

This is where a strongly-referenced cache comes in: it holds onto frequently used objects based on an LRU list, a timed eviction policy, or both. Its parameters can then be fine-tuned to position the system anywhere between the two extremes. A separate cache may be constructed for each type of model object.
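One common way to sketch such a cache in Java (an illustration, not the actual Hudson implementation) is an access-ordered LinkedHashMap whose capacity becomes the tuning knob; a timed eviction policy could be layered on top of this.

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Hypothetical sketch of the strongly-referenced cache: least-recently-used
    // entries are dropped once a configurable capacity is exceeded. Dropping the
    // strong reference leaves only the proxy's weak reference, so the real object
    // becomes eligible for garbage collection.
    class LruCache<K, V> extends LinkedHashMap<K, V> {
        private final int maxEntries;

        LruCache(int maxEntries) {
            super(16, 0.75f, true);   // accessOrder = true yields LRU iteration order
            this.maxEntries = maxEntries;
        }

        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            return size() > maxEntries;
        }
    }

With one instance per model type, for example new LruCache<Integer, RealBuild>(100) for builds and a separately sized one for jobs, the capacity of each cache can be tuned independently.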

Design Details

We will start with the Project or Job object, put a Lazy decorator on it, and then move down to its children, viz. Builds or Runs, and so on, until we get down to the build artifacts such as log files and console output. This means that even when a ‘real’ Job object is constructed, it won’t construct a ‘real’ Build; instead it will hold onto a Build proxy that is lightweight and does not trigger creation of its children and grandchildren.
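Continuing the sketch from above, and again using hypothetical names rather than Hudson's real classes, the decorated Job would hand out lightweight proxies for its builds instead of fully materialized Build objects.

    import java.util.Map;
    import java.util.TreeMap;

    // Hypothetical sketch of the top-down decoration: the Job knows where its
    // builds live on disk and creates cheap LazyBuild proxies (see the sketch
    // above) rather than real Build objects.
    class LazyJob {
        private final String jobDir;
        private final Map<Integer, LazyBuild> builds = new TreeMap<>();

        LazyJob(String jobDir) {
            this.jobDir = jobDir;
        }

        LazyBuild getBuild(int number) {
            // Constructing the proxy parses nothing and creates none of the
            // build's children; disk I/O happens only when resolve() is called.
            return builds.computeIfAbsent(number,
                    n -> new LazyBuild(jobDir + "/builds/" + n));
        }
    }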

Some Results
