


WTP Performance Tests/2008-07-21


Attendance (Y = present):
Gary Karasiuk - Y
Angel Vera - Y
Mark Hutchinson - Y
Nick Sandonato - Y
Carl Anderson - Y
Kaloyan Raev - Y
Raghu Srinivasan - Y
Neil Hauge - Y
David Williams


First meeting
Explain current problems
Discuss how we would like the WTP performance tests to look.


Old ANT scripts are complicated and convoluted. They run, but the graph generation does not occur. There is a sample of the data generated without the graphics, which is still useful. Kaloyan will post the link on the wiki.
Is there a benefit in running the performance tests and publishing their results?
- Yes!
Problem area
Resources "man hours"
Old, complicated ANT script that for some reason can't extract result data from the Derby DB.
Run a set of tests in a controlled environment that enables the WTP subprojects to analyze the responsiveness of the tool and its subprojects.
Teams should own the perf tests, like they do the JUnit tests. This responsibility should not be centralized.
Tests should be run in a central, controlled environment.
There is good value in having tests run repeatedly, but they should be meaningful. Tests should be automated and run frequently enough.
Hardware available
At the moment we have one Windows Server 2003 machine (donated by Kaloyan).
We can start running the automated perf tests on it. If we want to do this on other platforms we should make a list of possible machines the perf team can gather.
Short term goals
We are first going to try to get this running on Windows and then later expand to other platforms if we are successful.
If there is a manual step to generate the graphs, we will attempt that first and then look into automating the tasks.
It was suggested that the automation would have to be written in an ANT script from the ground up (forget about the ant script by JohnL).
We want to reuse as much as we can. We should use the same framework as the Platform team does.
Start with 1 test per component
Let's start with looking at how the Platform team does their performance tests.
Long term goals
Add RedHat
Add Windows XP/Vista
How about ad-hoc perf testing? Incorporate it as part of our smoke tests? Performance snapshots: the tester clicks a menu item to start recording a snapshot...?!
Action Takers
Each week we will look at the "man hours" that each team can contribute and we will determine what contribution can be made to the action items.
The person contributing will need to summarize their actions to the team in an email.

Action items

Gary - will spend one hour to see how the Platform team does their performance tests.
Everybody else - do the same as Gary, or at least spend some time reading the Performance wiki page and related links.
Gary - put info on the wiki about how ad-hoc perf testing is done.

Gary - As promised I spent some time looking into how the platform runs their performance tests. I think I got about as far as Kaloyan did. I was able to write and then run a performance test. This test wrote results into a Derby database. I was able to examine the database and verify that the results were written correctly.

I then spent a significant amount of time debugging the org.eclipse.test.performance.ui.Main application, trying to get it to generate some graphs. This program is very specific to the Platform (e.g., a number of standard Eclipse components are hard-coded into it) and would take some work to make usable by others. I was hitting various NPEs while trying to run it against my database. In the time I had set aside, I couldn't get it to generate results.
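For reference, the Platform framework that Gary describes works by having a JUnit test subclass org.eclipse.test.performance.PerformanceTestCase; the framework writes the collected samples into the configured results database (Derby in the setup above). A minimal sketch follows, assuming the standard framework API; the operation being measured and the class name are hypothetical placeholders, and the test must run inside an Eclipse plug-in test harness rather than as a plain JUnit test:

```java
// Sketch of a WTP component performance test, assuming the Eclipse
// org.eclipse.test.performance framework (the same one the Platform
// team uses). "MyComponentPerformanceTest" and the measured operation
// are hypothetical placeholders, not actual WTP code.
import org.eclipse.test.performance.Dimension;
import org.eclipse.test.performance.PerformanceTestCase;

public class MyComponentPerformanceTest extends PerformanceTestCase {

    public void testOperation() {
        // Optionally tag this test so it appears in the generated
        // summary, reporting elapsed process time.
        tagAsSummary("My Component Operation", Dimension.ELAPSED_PROCESS);

        // Repeat the measurement several times so the framework can
        // average out noise.
        for (int i = 0; i < 10; i++) {
            startMeasuring();
            // ... invoke the component operation under test here ...
            stopMeasuring();
        }

        // Persist the samples to the results database (Derby, when
        // the eclipse.perf.dbloc system property is set) and compare
        // against a stored baseline, if one exists.
        commitMeasurements();
        assertPerformance();
    }
}
```

This matches the "1 test per component" starting point above: each team would contribute one such test for its own component.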


Performance wiki page
Performance results for WTP 2.0
Performance results for WTP 2.0.1 (run on the new system as a basis for reference)
