Platform UI/Performance Tests

From Eclipsepedia

Latest revision as of 12:28, 17 November 2009


Running UI Performance Tests

Got some performance tests in the red? Past the "scratching my head" phase, and into the "can I run those on my computer" phase? Then this page is for you.

The typical steps for dealing with these test failures, covered in the sections below, are: set up "before" and "after" test environments, investigate the differences, and check the fix using the test framework.

Some general suggestions:

  • If your CPU has power-saving options, disable them before running performance tests.
  • Use profilers, but don't trust profilers. Check them with direct measurements.
  • If a method takes 0.5% of CPU time, don't bother optimizing it.

Setup Test Environments

Some points on setting up environments:

  • Keep "Before" and "After" environments as separate Eclipse installs
  • For getting source code, see the "Checking out the Plug-ins" section.
  • Current tests from CVS HEAD might not work with the "Before" build. In this case, use the "Team" -> "Replace With" -> "Another Branch or Version" command. For comparison with 3.4, I used version "R3_4".
  • On Windows there is an additional step: the fragment "org.eclipse.test.performance.win32" has to be checked out into both the "Before" and "After" workspaces.

Most of the UI performance tests expect a certain setup, and some are generated dynamically. As a result, most of the tests cannot be run directly; rather, the top-level test suites have to be run.

The two top-level test suites are:

  • JFacePerformanceSuite for JFace, and
  • UIPerformanceTestSuite for everything else.

However, it takes considerable time (an hour or so) to run the full test suite. You can comment out unneeded tests. Alternatively, there is an enhancement request [271456] that would allow tests to be filtered using runtime arguments.
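Until that enhancement is available, a name-based filter of the kind it proposes could look roughly like the sketch below. This is purely illustrative: the class, the "test.name" property, and the test names are invented for the example and are not part of the real framework.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class TestNameFilter {
    // Hypothetical helper: keep only tests whose name contains the pattern.
    // An empty pattern keeps everything, mirroring "run the full suite".
    public static List<String> filter(List<String> allTests, String pattern) {
        if (pattern == null || pattern.isEmpty()) {
            return allTests;
        }
        return allTests.stream()
                       .filter(name -> name.contains(pattern))
                       .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // e.g. launching with -Dtest.name=Editor would keep only editor tests
        String pattern = System.getProperty("test.name", "");
        List<String> tests = Arrays.asList("OpenCloseEditorTest", "ResizeWindowTest");
        System.out.println(filter(tests, pattern));
    }
}
```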

The output of the performance tests will appear in the console and will look like this:

Console with test results.

Once the "before" and "after" environments are set up, you can run the failing test(s) and hopefully see a difference similar to what you see on the Eclipse download page.

Investigate the Differences

Unless the cause of the performance problem is obvious ("ah, yes, we added that great feature X"), you'll need a profiler to start with. There are a number of profilers out there. I currently use YourKit, as it works well for my tasks and offers [integration] with Eclipse.

However, the results from the profiled runs have to be taken with caution. Depending on the nature of the code, instrumentation can heavily skew the results and turn method calls that take 0.5% of the actual run time into 40%. (This is true for all profilers that I've tried so far.)

So, before making happy noises and rushing to optimize that poor method, I'd suggest verifying profiler results with direct measurements.

The direct measurement idea is simple: now that we have a suspect method, we can add a few lines of code to measure its effects directly. For instance, for elapsed time, we can put timing statements around the method calls.
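For example, a minimal elapsed-time measurement in plain Java. Here suspectMethod() is a hypothetical stand-in for the method under investigation:

```java
public class DirectTiming {
    // Measure the elapsed wall-clock time of a piece of work in nanoseconds.
    public static long timeNanos(Runnable work) {
        long start = System.nanoTime();
        work.run();
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        // suspectMethod() stands in for the real method under suspicion
        long elapsed = timeNanos(DirectTiming::suspectMethod);
        System.out.println("suspectMethod: " + elapsed / 1_000_000 + " ms");
    }

    private static void suspectMethod() {
        double sum = 0;
        for (int i = 0; i < 1_000_000; i++) {
            sum += Math.sqrt(i);
        }
        // never true; just keeps the loop from being optimized away
        if (sum < 0) throw new IllegalStateException();
    }
}
```

Repeating the run a few times and discarding the first (warm-up) measurement makes the numbers more stable.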

Check Using the Test Framework

OK, you think you have found the cause and made the fix. At this point you have probably developed a healthy mistrust of all measurements and are wondering about the impact of your changes on the tests as they run on the build machines.

To ease your mind, you can verify the effects of the fix by using the test framework, essentially the same setup as on a build machine.

Setting up the test framework

To set up the test environment download "eclipse-Automated-Tests-*.zip" from the "JUnit Plugin Tests and Automated Testing Framework" category on the [download page].

Testing framework on the download page.

The Eclipse version and the test framework have to be from the same build. The downloaded archive contains a file, readme.html, that explains how to use the test framework.

For the UI performance tests:

  • Unzip the test framework into a clean directory, say "C:/ET". On Windows, use a directory with a short name at the root of the hard drive so that paths won't exceed the 255-character limit.
  • Copy the SDK build into this directory ("C:/ET"). If it is an M-build, the archive has to be renamed after the tag of the original I-build, for instance:

The unzipped framework contains eclipse-junit-tests-I20090313-0100.zip. The corresponding SDK build archive is named eclipse-SDK-3.5M6-win32.zip. The SDK zip file needs to be renamed to eclipse-SDK-I20090313-0100-win32.zip.
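The renaming rule amounts to a small string manipulation. The helper below is purely illustrative (not part of the test framework); it assumes the standard eclipse-junit-tests-&lt;tag&gt;.zip and eclipse-SDK-&lt;version&gt;-&lt;platform&gt;.zip naming shown above:

```java
public class SdkRename {
    // Derive the renamed SDK archive name from the junit-tests archive name.
    public static String renamedSdk(String junitTestsZip, String sdkZip) {
        // extract the build tag, e.g. "I20090313-0100"
        String tag = junitTestsZip
            .replace("eclipse-junit-tests-", "")
            .replace(".zip", "");
        // extract the platform suffix, e.g. "win32"
        String suffix = sdkZip.substring(sdkZip.lastIndexOf('-') + 1)
                              .replace(".zip", "");
        return "eclipse-SDK-" + tag + "-" + suffix + ".zip";
    }

    public static void main(String[] args) {
        System.out.println(renamedSdk("eclipse-junit-tests-I20090313-0100.zip",
                                      "eclipse-SDK-3.5M6-win32.zip"));
        // prints eclipse-SDK-I20090313-0100-win32.zip
    }
}
```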

  • Copy an unzip.exe file into "C:/ET", or make sure it is in the OS search path

Running tests

To run the UI performance test suite (includes UI and JFace) use the command:

C:\ET>runtests "-Dtest.target=performance" uiperformance

To limit runs to a specific test, the source code has to be modified (more on this below). Alternatively, once the enhancement request [271456] is implemented, the name of the test could be specified in the properties file.

For instance, if the properties file name is properties.txt it could contain a line similar to:


Then the command to run the test would be:

C:\ET>runtests "-Dtest.target=performance" -properties properties.txt uiperformance

or the Linux version of:

./runtests -os linux -ws gtk -arch x86 "-Dtest.target=performance" -properties properties.txt uiperformance

Running the test framework requires learning some magic incantations. However, once those incantations are in your spell book, it becomes a fairly trivial (albeit time-consuming) task.

Running tests with modified code

The easiest way is to repackage the SDK zip file and/or the test framework zip file to include the modified plug-ins. This process has to preserve the qualifiers on the versions of the original plug-ins.

The steps look like:

  • In the test framework find the qualifier version of the modified plug-in. For instance, let's say that the "org.eclipse.ui.tests.performance" plug-in was modified. Its full name in the test framework we downloaded is "org.eclipse.ui.tests.performance_1.1.0.I20090209-2000".

The part "I20090209-2000" is the qualifier. Write down this number.

  • Note whether the plug-in is provided in JARed form or as a directory
  • Now we are ready to create the modified plug-in. In the Eclipse instance that contains the modified plug-in, use "Export" -> "Plug-in Development" -> "Deployable plug-ins and fragments". In the "Options" tab, be sure to specify "qualifier replacement" and enter the qualifier found above. Also make sure that "Package plug-ins as individual JAR files" corresponds to the desired state: JARed or directory.
  • Take the zip file from the framework (either eclipse-junit-tests-*.zip or eclipse-SDK-*.zip) that contains the modified plug-in and unzip it somewhere
  • In the unzipped archive, delete the old plug-in and copy in the new file
  • Create a new zip from the modified contents; copy it back to the test framework directory
  • Before re-running tests, delete all subdirectories of the test framework to avoid confusion with previous results (that were run with unmodified code)
  • Run the tests. To reduce confounding variables, I suggest rebooting your computer before running the final "before" and "after" tests and giving it a few minutes after the reboot to fully wake up.
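The qualifier lookup from the first step is just string splitting on the plug-in's full name. The helper below is illustrative only, assuming the &lt;id&gt;_&lt;major&gt;.&lt;minor&gt;.&lt;micro&gt;.&lt;qualifier&gt; naming shown above:

```java
public class PluginQualifier {
    // The full name has the form <id>_<major>.<minor>.<micro>.<qualifier>;
    // the qualifier is everything after the last '.'.
    // A trailing ".jar" is stripped first if the plug-in is provided as a JAR.
    public static String qualifier(String fullName) {
        String name = fullName.endsWith(".jar")
                ? fullName.substring(0, fullName.length() - 4)
                : fullName;
        return name.substring(name.lastIndexOf('.') + 1);
    }

    public static void main(String[] args) {
        System.out.println(
            qualifier("org.eclipse.ui.tests.performance_1.1.0.I20090209-2000"));
        // prints I20090209-2000
    }
}
```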

Links


Automated Testing

EclipseCon 2005: Continuous Performance

Performance Tests HowTo

Back to Platform UI home