
WTP Performance Tests


WTP Performance Cross Team

The goal of the team is to revive the WTP performance tests. Test execution had to be moved to another server, and there are still problems with setting up the execution environment. It was decided to restart the WTP performance tests with something small: each team will contribute a few simple tests, and once this setup is working, we will continue building on top of it.

Team Lead: Kaloyan Raev

Each WTP project team contributes one person who will participate in the WTP Performance Cross Team. See the list of participants below.

How To

Latest findings on how to run performance tests are documented on the How To wiki page.

Status Meeting

Meetings are held on Mondays, 12:00 noon to 1:00 PM Eastern Time.

Dial-in numbers:

Toll Free: 877-421-0030
Toll (USA): 770-615-1247
Participant Passcode: 269746

Meeting Minutes

Next meeting: to be scheduled on demand.


Open Action Items

  • Top action item: Refactor Ant scripts for automating performance test execution - see bug 252334.
  • Patch Platform code - see bug 244986.
  • Define a single test per project (see Initial Tests section).
  • Write thoughts about best practices and priorities for measurement (see Best Practices section).
  • Review whether the available performance test suites are semantically relevant as performance tests.

Team Members

Team           | Participant
---------------|------------------
Common         | Gary Karasiuk
Server         | Angel Vera
Webservices    | Mark Hutchinson
Source Editing | Nick Sandonato
JEE Tools      | Jason Peterson
EJB Tools      | Kaloyan Raev
JSF Tools      | Raghu Srinivasan
Dali           | Neil Hauge
Releng         | David Williams

Initial Tests

Each component team should identify a single test case that works correctly.

Team           | Test case
---------------|----------------------------------------------------------
Common         | org.eclipse.wst.common.tests.performance.PerformancePlugin.HTMLValidator
Server         | org.eclipse.wst.server.tests.performance.StartupExtensionTestCase
Webservices    | plugin: (unspecified); vmargs: -Dorg.eclipse.jst.server.tomcat.50=<tomcat 5 install location> -Dtomcat50Dir=<tomcat 5 install location>
Source Editing | org.eclipse.wst.xml.ui.tests.performance.FormatTest
JEE Tools      | org.eclipse.jst.j2ee.tests.performance.TestCaseSAXParser
EJB Tools      | org.eclipse.jst.j2ee.tests.performance.EJBProjectCreationTestCase
JSF Tools      |
Dali           | org.eclipse.jpt.core.tests.internal.performance.JpaPerformanceTests
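
The Webservices row above leaves the test plugin unspecified, but its vmargs show that the test locates the Tomcat 5 installation through system properties. The following is a minimal sketch of how a test could resolve that location; the helper class and the fallback order are illustrative assumptions, and only the two property names come from the table.

    // Illustrative helper (not part of the original page): resolves the
    // Tomcat 5 install location from the VM arguments listed above.
    public final class TomcatLocation {

        private TomcatLocation() {
            // utility class, not instantiable
        }

        // Returns the Tomcat 5 install directory, failing fast when the VM
        // arguments are missing so the test does not fail later in obscure ways.
        public static java.io.File tomcat5InstallDir() {
            String dir = System.getProperty("tomcat50Dir");
            if (dir == null) {
                dir = System.getProperty("org.eclipse.jst.server.tomcat.50");
            }
            if (dir == null) {
                throw new IllegalStateException(
                        "Launch the tests with -Dtomcat50Dir=<tomcat 5 install location>"
                        + " and -Dorg.eclipse.jst.server.tomcat.50=<tomcat 5 install location>");
            }
            return new java.io.File(dir);
        }
    }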


Latest Results

  • WTP 2.0
  • WTP 2.0.1 (run on the new system as a basis for reference)

Known Issues

  • Performance Test Bugs and Enhancements
  • Improve the actual tests. Why do some take so long? Why do some results fluctuate? Why do some have drastic performance degradations? Are the tests even still valid?

Best Practices

  • The first iterations of the measurement for-loop (see the sketch after this list) generally take more time because the code has not yet been optimized by the JIT compiler. This can introduce variance into the measurements, especially if other tests run first and change something that affects the JIT's optimization of the measured code. A simple way to stabilize the measurements is to run the code a few times before measuring begins. Caches also need special caution, as they can affect the measurements.
  • As a rule of thumb, the measured code should take at least 100 ms on the target machine for the measurements to be meaningful; for example, the system clock on Windows and Linux 2.4 advances in 10 ms steps. When a single run is too short, the measured code can be invoked repeatedly to accumulate elapsed time, but keep in mind that the JIT may then optimize it more aggressively than it would in real-world scenarios.
  • There needs to be a set of larger-scale tests that adopting products can use to verify that the performance of WTP has not regressed; think of these as assurance performance tests. They would work on large workspaces and cover the operations that can take a long time (import, clean build, and deploy). These tests need to measure both elapsed time and memory, and typically take minutes to run.
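
The measurement loop referenced in the first bullet above follows the standard org.eclipse.test.performance pattern described in "How to Write an Eclipse Performance Test" (listed under Reference Material). The following is a minimal sketch; the class name, the warm-up count of 3, and the sample count of 10 are illustrative assumptions, not values prescribed by this page.

    import org.eclipse.test.performance.PerformanceTestCase;

    public class FormatPerformanceTest extends PerformanceTestCase {

        public void testFormat() {
            // Warm-up runs: give the JIT compiler a chance to optimize the
            // measured code paths and populate caches before timing starts.
            for (int i = 0; i < 3; i++) {
                doFormat();
            }
            // Measured runs: each iteration is one sample. If a single run is
            // shorter than ~100 ms, repeat the operation inside
            // startMeasuring/stopMeasuring to accumulate enough elapsed time.
            for (int i = 0; i < 10; i++) {
                startMeasuring();
                doFormat();
                stopMeasuring();
            }
            commitMeasurements();
            assertPerformance();
        }

        // Hypothetical stand-in for the operation under test.
        private void doFormat() {
            // ... invoke the code being measured ...
        }
    }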

Reference Material

  • Performance wiki page
  • How to Write an Eclipse Performance Test
  • Graph Generation Tool HowTo
  • Sample Eclipse Performance Results
  • Article: Automating Eclipse PDE Unit Tests using Ant
  • Bugzilla: (performance tests) No Results for WTP Tests
  • Eclipse-dev post: fingerprint changes coming
  • EclipseCon 2005: Continuous Performance - Monitoring Performance with Automated Tests

Back to Web Tools Wiki Home
