Performance/Automated Tests
The Eclipse performance test plug-in (org.eclipse.test.performance) provides infrastructure for instrumenting programs to collect performance data and to assert that performance doesn't drop below a baseline. The infrastructure is supported on Windows, Linux, and Mac OS X.
The first part of this document describes how performance tests are written and executed; the second part explains how performance data is collected in a database and how this database is installed and configured.
Contents
- 1 Writing Performance Tests
- 1.1 Setting up the environment
- 1.2 Writing a performance test case
- 1.3 Participating in the performance summary (aka "Performance Fingerprint")
- 1.4 Running a performance test case (from a launch configuration)
- 1.5 Running a performance test case (from the command-line)
- 1.6 Running a performance test case (within the Automated Testing Framework on each build)
- 1.7 Running a performance test case (within the Automated Testing Framework, locally)
- 2 Related Pages
Writing Performance Tests
Setting up the environment
- check out the following plug-ins from http://git.eclipse.org/c/platform/eclipse.platform.releng.git.
- org.eclipse.test.performance
- org.eclipse.test.performance.win32 (for Windows only)
- you need org.junit
- add org.eclipse.test.performance to your test plug-in's dependencies
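If your test plug-in uses an OSGi bundle manifest, the dependency can be declared in META-INF/MANIFEST.MF roughly as follows (a minimal sketch; a real manifest contains additional headers and possibly version constraints):
Require-Bundle: org.junit,
 org.eclipse.test.performance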
Writing a performance test case
A performance test case is an ordinary JUnit TestCase. Create a test case with test methods along the lines of:
public void testMyOperation() {
    Performance perf= Performance.getDefault();
    PerformanceMeter performanceMeter= perf.createPerformanceMeter(perf.getDefaultScenarioId(this));
    try {
        for (int i= 0; i < 10; i++) {
            performanceMeter.start();
            toMeasure();
            performanceMeter.stop();
        }
        performanceMeter.commit();
        perf.assertPerformance(performanceMeter);
    } finally {
        performanceMeter.dispose();
    }
}
or create a test case extending PerformanceTestCase, a convenience class that makes the use of PerformanceMeter transparent:
public class MyPerformanceTestCase extends PerformanceTestCase {
    public void testMyOperation() {
        for (int i= 0; i < 10; i++) {
            startMeasuring();
            toMeasure();
            stopMeasuring();
        }
        commitMeasurements();
        assertPerformance();
    }
}
Notes:
- The scenario id passed to createPerformanceMeter(...) must be unique within a single test run and must be the same for each build. This enables comparisons between builds. The Performance#getDefaultScenarioId(...) methods are provided for convenience.
- PerformanceMeter supports repeated measurements through multiple invocations of the start(), stop() sequence. The call to commit() is required before evaluating with assertPerformance(), and dispose() is required to release the meter.
- The first iterations of the above for-loop will generally take more time, because the code has not yet been optimized by the JIT compiler. This can introduce variance into the measurements, especially if other tests run beforehand and change something that affects the JIT's optimization of the measured code. A simple way to stabilize the measurements is to run the code a few times before the measurements start (see the sketch after these notes). Caches also need special attention, as they can affect the measurements.
- As a rule of thumb, the measured code should take at least 100 ms on the target machine for the measurements to be meaningful. For example, Windows' and Linux 2.4's system time increases in 10 ms steps. In some cases the measured code can be invoked repeatedly to accumulate the elapsed time; keep in mind, however, that the JIT could then optimize more aggressively than in real-world scenarios.
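For illustration, here is the first example again with a few warm-up runs added before the measured loop, as suggested in the notes above (a sketch; toMeasure() stands for the operation under test, and the iteration counts are arbitrary):
public void testMyOperation() {
    Performance perf= Performance.getDefault();
    PerformanceMeter performanceMeter= perf.createPerformanceMeter(perf.getDefaultScenarioId(this));
    try {
        // warm up: give the JIT compiler a chance to optimize the measured code and fill caches
        for (int i= 0; i < 3; i++)
            toMeasure();
        // the actual measurements
        for (int i= 0; i < 10; i++) {
            performanceMeter.start();
            toMeasure();
            performanceMeter.stop();
        }
        performanceMeter.commit();
        perf.assertPerformance(performanceMeter);
    } finally {
        performanceMeter.dispose();
    }
}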
Participating in the performance summary (aka "Performance Fingerprint")
If the number of performance tests grows large, it becomes harder to get a good overview of the performance characteristics of a build. A solution for this problem is a performance summary chart that condenses a small subset of key performance tests into a chart that fits onto a single page. Currently the performance infrastructure supports two levels of summaries: one global summary and any number of "local" summaries. A local summary is typically associated with a component. A summary bar chart shows the performance development of about 20 tests relative to a reference build in an easy-to-grasp red/green presentation.
Depending on the total number of components, every Eclipse component can tag one or two tests for inclusion in the global summary and up to 20 for a local performance summary. Tests marked for the global summary are automatically included in a local summary.
Marking a test for inclusion is done by passing a performance meter into the method Performance.tagAsGlobalSummary(...) or Performance.tagAsSummary(...). Both methods should be called outside of the start()/stop() calls and must be called before the call to commit():
// ....
Performance perf= Performance.getDefault();
PerformanceMeter pm= perf.createPerformanceMeter(perf.getDefaultScenarioId(this));
perf.tagAsGlobalSummary(pm, "A Short Name", Dimension.CPU_TIME);
try {
    // ...
In order to keep the overview graph small, only a single dimension (CPU_TIME, USED_JAVA_HEAP, etc.) of the test's data is shown, and only a short name is used to label the data (instead of the rather long scenario ID). Both the short label and the dimension must be supplied in the calls to tagAsGlobalSummary and tagAsSummary. The available dimensions can be found in org.eclipse.test.performance.Dimension.
The PerformanceTestCase provides similar methods that must be called before startMeasuring():
public class MyPerformanceTestCase extends PerformanceTestCase {
    public void testMyOperation() {
        tagAsSummary("A Short Name", Dimension.CPU_TIME);
        for (int i= 0; i < 10; i++) {
            startMeasuring();
            toMeasure();
            stopMeasuring();
        }
        commitMeasurements();
        assertPerformance();
    }
}
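Besides assertPerformance(), which checks against the stored baseline, PerformanceTestCase also offers band-based assertions. The following sketch assumes the assertPerformanceInRelativeBand(Dimension, int, int) convenience method, which checks a single dimension against a percentage band around the reference value (the band values are illustrative):
public class MyPerformanceTestCase extends PerformanceTestCase {
    public void testMyOperation() {
        tagAsSummary("A Short Name", Dimension.CPU_TIME);
        for (int i= 0; i < 10; i++) {
            startMeasuring();
            toMeasure();
            stopMeasuring();
        }
        commitMeasurements();
        // accept CPU time between 20% below and 10% above the reference value
        assertPerformanceInRelativeBand(Dimension.CPU_TIME, -20, 10);
    }
}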
Running a performance test case (from a launch configuration)
- Create a new JUnit Plug-in Test launch configuration for the test case.
- Adjust -Xms and -Xmx if needed. Note that during automated production runs, the settings default to what the eclipse.ini specifies. If your test needs different settings (larger or smaller), be sure to set both (-Xms and -Xmx) in your test.xml file, using the vmargs property (see the sketch after this list).
- Run the launch configuration.
- By default, the measured averages of the performance meter are written to the console on commit(). This is suppressed if performance tests are configured to store their data in the database (see below).
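For the automated runs, the heap settings are passed via the vmargs property in test.xml; a sketch of how this could look inside the performance-suite target described further below (the memory values are only examples):
<ant target="ui-test" antfile="${library-file}" dir="${eclipse-home}">
  <property name="data-dir" value="${your-performance-folder}"/>
  <property name="plugin-name" value="${plugin-name}"/>
  <property name="classname" value="<your fully qualified test case class name>"/>
  <property name="vmargs" value="-Xms256M -Xmx256M"/>
</ant>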
Running a performance test case (from the command-line)
- This method is of particular interest if you want to do precise measurements of a specific build.
- Export the following plug-ins as 'Deployable plug-ins' to the Eclipse installation directory (not its 'plugins' directory); choose to deploy as 'a directory structure':
- org.eclipse.test
- org.eclipse.test.performance
- org.eclipse.test.performance.win32 (for Windows only)
- your test plug-in(s)
Copy the following .bat file and customize it to your needs (Windows):
@echo off
REM Installation paths
SET ECLIPSEPATH=c:\eclipse
SET JVMPATH=c:\jdk\jdk1.4.2_05
REM Paths, relative to ECLIPSEPATH
SET BUILD=I200411050810
SET WORKSPACE=workspace\performance
REM Test
SET TESTPLUGIN=org.eclipse.jdt.text.tests
SET TESTCLASS=org.eclipse.jdt.text.tests.performance.OpenQuickOutlineTest
REM For headless tests use: org.eclipse.test.coretestapplication
SET APPLICATION=org.eclipse.test.uitestapplication
REM Add -clean when the installation changes
SET OPTIONS=-console -consolelog -showlocation
SET JVMOPTIONS=-Xms256M -Xmx256M

ECHO Build:     %ECLIPSEPATH%\%BUILD%
ECHO Workspace: %ECLIPSEPATH%\%WORKSPACE%
ECHO Test:      %TESTPLUGIN%\%TESTCLASS%

%JVMPATH%\bin\java %JVMOPTIONS% -cp %ECLIPSEPATH%\%BUILD%\startup.jar org.eclipse.core.launcher.Main %OPTIONS% -application %APPLICATION% -data %ECLIPSEPATH%\%WORKSPACE% -testPluginName %TESTPLUGIN% -className %TESTCLASS%

pause
After testing the setup you may want to close other applications to avoid distortion of the measurements.
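On Linux or Mac OS X, a rough equivalent of the above batch file could look like the following shell script (a sketch; the paths, build id, and test class are illustrative and mirror the Windows example):
#!/bin/sh
# Installation paths (adjust to your machine)
ECLIPSEPATH=$HOME/eclipse
JVMPATH=$HOME/jdk/jdk1.4.2_05
# Paths, relative to ECLIPSEPATH
BUILD=I200411050810
WORKSPACE=workspace/performance
# Test
TESTPLUGIN=org.eclipse.jdt.text.tests
TESTCLASS=org.eclipse.jdt.text.tests.performance.OpenQuickOutlineTest
# For headless tests use: org.eclipse.test.coretestapplication
APPLICATION=org.eclipse.test.uitestapplication
# Add -clean when the installation changes
OPTIONS="-console -consolelog -showlocation"
JVMOPTIONS="-Xms256M -Xmx256M"

echo "Build:     $ECLIPSEPATH/$BUILD"
echo "Workspace: $ECLIPSEPATH/$WORKSPACE"
echo "Test:      $TESTPLUGIN/$TESTCLASS"

"$JVMPATH/bin/java" $JVMOPTIONS -cp "$ECLIPSEPATH/$BUILD/startup.jar" org.eclipse.core.launcher.Main \
  $OPTIONS -application $APPLICATION -data "$ECLIPSEPATH/$WORKSPACE" \
  -testPluginName $TESTPLUGIN -className $TESTCLASS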
Running a performance test case (within the Automated Testing Framework on each build)
If the test.xml of your test plug-in already exists and looks similar to the one in org.eclipse.jdt.text.tests, add targets similar to those shown below. The performance target is the entry point for performance testing, just as the run target is for correctness testing.
<!-- This target defines the performance tests that need to be run. -->
<target name="performance-suite">
  <property name="your-performance-folder" value="${eclipse-home}/your_performance_folder"/>
  <delete dir="${your-performance-folder}" quiet="true"/>
  <ant target="ui-test" antfile="${library-file}" dir="${eclipse-home}">
    <property name="data-dir" value="${your-performance-folder}"/>
    <property name="plugin-name" value="${plugin-name}"/>
    <property name="classname" value="<your fully qualified test case class name>"/>
  </ant>
</target>

<!-- This target runs the performance test suite. Any actions that need to happen -->
<!-- after all the tests have been run should go here. -->
<target name="performance" depends="init,performance-suite,cleanup">
  <ant target="collect" antfile="${library-file}" dir="${eclipse-home}">
    <property name="includes" value="org*.xml"/>
    <property name="output-file" value="${plugin-name}.xml"/>
  </ant>
</target>
Notes:
- Do not include an empty performance target in your test.xml file. Committers sometimes add one as a reminder of where it would go if/when they have performance tests, but some tools in the Maven/Tycho build (eclipse-cbi-plugin) produce a 'test.properties' file that describes the test bundles and their characteristics, so an empty performance target causes a lot of noise and confusion (if not, later on, processing complications). (bug 442455)
- Performance measurements are primarily run on a dedicated performance measurement machine maintained by the Releng team.
- If you have created a new source folder, do not forget to include it in the build (add it to the build.properties file).
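For example, if you added a (hypothetical) source folder named src-performance for the performance tests, the corresponding build.properties entries could look like this:
source.. = src/,\
           src-performance/
output.. = bin/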
Running a performance test case (within the Automated Testing Framework, locally)
- Modify test.xml as described above.
- Download a build and its accompanying test framework plug-ins contained in eclipse-test-framework-*.zip
- Unzip the Eclipse SDK and the eclipse-test-framework ZIP to install your target Eclipse (on Windows you need Info-ZIP UnZip version 5.41 or later installed and added to the path).
- Export your test plug-in as a 'Deployable plug-in' to the target Eclipse installed above; choose to export as directory structure.
- Open a terminal or command window and execute the following (on a single line):
java -jar <an eclipse install>/startup.jar -application org.eclipse.ant.core.antRunner -file <target eclipse install/plugins/your test plug-in id_version>/test.xml performance -Dos=<os> -Dws=<ws> -Darch=<arch> -Declipse_home=<target eclipse install> "-Dvmargs=-Xms256M -Xmx256M" -logger org.apache.tools.ant.DefaultLogger
- The JUnit results are written to an XML file in the root of the target Eclipse and the performance measurements are written to the console.
Related Pages
- Performance (see especially its section with links to related pages)
- Platform UI/Performance Tests (see for some early work on the framework, including a link to an EclipseCon2005 presentation!)
- Another 2005 writeup.
- A good overview tutorial by Lars Vogel.