WTP Performance Tests/2008-09-29
Latest revision as of 07:02, 6 October 2008

== Attendees ==

{| border=1 cellpadding=5
|-
| Gary Karasiuk
| Y
|-
| Angel Vera
| 
|-
| Mark Hutchinson
| Y
|-
| Nick Sandonato
| 
|-
| Carl Anderson
| Y
|-
| Kaloyan Raev
| Y
|-
| Raghu Srinivasan
| 
|-
| Neil Hauge
| Y
|-
| David Williams
| Y
|}

== Agenda ==

:Has every team identified a single test case?
:Do we have success running the performance test procedure on a single test?
:[[WTP Performance Tests#Best Practices|Best practices]] for writing Eclipse performance tests

== Minutes ==

;Has every team identified a single test case?
:Gary promised to provide one for the Common Tools team by next week. The other team with a pending test case is JSF Tools.

;Do we have success running the performance test procedure on a single test?
:Nick sent very useful information to Kaloyan, who is now able to do a successful manual run of the graph generation tool.
:There are still problems with the automatic procedure, specifically with the CVS download of the graph generation tool.
:The Ant scripts need refactoring to make it easy to automatically execute some main tasks:
:*set a baseline - not possible at the moment without manually manipulating the DB
:*run a test and compare against a baseline - almost possible now, but still needs some optimization in where the scripts download tools and plugins. For example, the org.eclipse.test.performance plugin is downloaded from the already released build and has to be patched manually.

;[[WTP Performance Tests#Best Practices|Best practices]] for writing Eclipse performance tests
:We need to discuss what exactly we want to measure.
:Kaloyan mentioned that we should take JIT compiler optimizations into account.
:Gary expressed the opinion that we should measure big things - complete user tasks, like executing a build, finishing a wizard, or opening big files in an editor.
:David said that we should also take into account small operations (less than a second) that are executed repeatedly in bigger operations or are critical to execute quickly. We should have two types of performance tests:
:*assurance tests - for major user tasks that take significant time
:*diagnostic tests - for tiny operations that are critical to execute quickly
:All thoughts should be captured in the [[WTP Performance Tests#Best Practices|Best practices]] section of the [[WTP Performance Tests|main performance tests wiki page]].

;Other thoughts
:We should summarize all our open tasks and see whether we can handle them in the short term.
:In any case, we must document all our findings.
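As an illustration of the JIT concern raised above, a diagnostic-style measurement of a sub-second operation should discard its first iterations so the JIT compiler has a chance to optimize the code under test before timing begins. The sketch below is plain, self-contained Java, not the org.eclipse.test.performance API; the <code>workload()</code> method is a hypothetical stand-in for the operation a real WTP test would exercise.

```java
// Minimal warm-up-then-measure sketch (assumed workload; not the
// org.eclipse.test.performance API used by the actual WTP tests).
public class WarmupMeasurement {

    // Hypothetical sub-second operation under test.
    static long workload() {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 10_000; i++) {
            sb.append(i);
        }
        return sb.length();
    }

    public static void main(String[] args) {
        final int warmupRuns = 20;   // discarded: lets the JIT optimize first
        final int measuredRuns = 10; // kept: steady-state measurements only

        // Warm-up phase: results are thrown away.
        for (int i = 0; i < warmupRuns; i++) {
            workload();
        }

        // Measurement phase: time only the JIT-optimized runs.
        long totalNanos = 0;
        for (int i = 0; i < measuredRuns; i++) {
            long start = System.nanoTime();
            workload();
            totalNanos += System.nanoTime() - start;
        }
        System.out.println("average ns per run: " + (totalNanos / measuredRuns));
    }
}
```

Without the warm-up phase, the first interpreted runs would dominate the average and a "diagnostic" test would report compilation cost rather than steady-state cost.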

== Action items ==

:Refactor the Ant scripts ('''open''')
:Summarize the latest findings (Kaloyan)
:Write up thoughts about best practices and priorities for measurement (Gary and others)
:Summarize the list of open action items (Kaloyan)

== References ==