Gary promised to make it for the Common Tools team by next week. The other team with pending test cases is JSF Tools.
Have we succeeded in running the performance test procedure on a single test?
Nick sent some very useful information to Kaloyan, who is now able to do a successful manual run of the graph generation tool.
We still have problems with the automated procedure, specifically with the CVS download of the graph generation tool.
The Ant scripts need refactoring so that the main tasks are easy to execute automatically:
set a baseline - not possible at the moment without manually manipulating the DB
run a test and compare against a baseline - almost possible now, but still needs some optimization in terms of where the scripts download tools and plug-ins. For example, the org.eclipse.test.performance plugin is downloaded from the already released build and has to be patched manually.
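A refactored build file might expose the two tasks above as explicit top-level targets. The sketch below only illustrates that shape; all target and parameter names are hypothetical, not taken from the actual scripts:

```xml
<project name="performance" default="run-and-compare">

  <!-- Hypothetical target: run the tests and tag the results as the
       baseline, so no manual DB manipulation is needed. -->
  <target name="set-baseline" depends="run-tests">
    <antcall target="store-results">
      <param name="results.tag" value="baseline"/>
    </antcall>
  </target>

  <!-- Hypothetical target: run the tests, then compare against the
       previously stored baseline tag. -->
  <target name="run-and-compare" depends="run-tests">
    <antcall target="compare-results">
      <param name="reference.tag" value="baseline"/>
    </antcall>
  </target>

  <target name="run-tests">
    <!-- download/patch org.eclipse.test.performance, launch the tests -->
  </target>

  <target name="store-results">
    <!-- write results to the DB under ${results.tag} -->
  </target>

  <target name="compare-results">
    <!-- diff the latest run against ${reference.tag} -->
  </target>
</project>
```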
We need to discuss what exactly we want to measure.
Kaloyan mentioned that we should take JIT compiler optimizations into account.
Gary expressed the opinion that we should measure big things: complete user tasks, such as executing a build, finishing a wizard, opening big files in an editor, etc.
David said that we should also take into account small operations (less than a second) that are executed repeatedly in bigger operations or are critical to execute quickly. We should have two types of performance tests:
assurance tests - for major user tasks that take significant time
diagnostic tests - for tiny operations that are critical to execute quickly
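David's diagnostic tests tie back to Kaloyan's JIT remark: a sub-second operation has to be run many times to time it reliably, and the first runs must be discarded as warm-up so the JIT has a chance to compile the hot path. A minimal sketch of such a timing loop (the class and method names are ours, not the Eclipse performance framework's):

```java
// Hypothetical diagnostic-test harness: times a tiny operation by
// averaging over many runs, after a warm-up phase for the JIT.
public class DiagnosticTimer {

    // Returns the average nanoseconds per run of the task.
    static long averageNanos(Runnable task, int warmupRuns, int measuredRuns) {
        for (int i = 0; i < warmupRuns; i++) {
            task.run();                       // warm-up: results discarded
        }
        long start = System.nanoTime();
        for (int i = 0; i < measuredRuns; i++) {
            task.run();                       // measured runs
        }
        return (System.nanoTime() - start) / measuredRuns;
    }

    public static void main(String[] args) {
        // Example tiny operation: building a 1000-element string.
        long avg = averageNanos(() -> {
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < 1000; i++) sb.append(i);
        }, 10000, 1000);
        System.out.println("average ns per run: " + avg);
    }
}
```

An assurance test, by contrast, would time one complete user task (a build, a wizard) once or a few times; averaging over thousands of runs only makes sense for the tiny, repeated operations.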