WTP Performance Tests/2008-07-21
Attendees
Name | Attended
Gary Karasiuk | Y
Angel Vera | Y
Mark Hutchinson | Y
Nick Sandonato | Y
Carl Anderson | Y
Kaloyan Raev | Y
Raghu Srinivasan | Y
Neil Hauge | Y
David Williams |
Agenda
- First meeting
- Explain current problems
- Discuss what we would like WTP performance tests to look like.
Minutes
- Summary
- The old ANT scripts are complicated and convoluted. They run, but the graphs are not generated. A sample of the data generated without the graphs is still useful; Kaloyan will post the link on the wiki.
- Is there a benefit to running the performance tests and publishing their results?
- Yes!
- Problem area
- Limited resources ("man hours").
- The old, complicated ANT script cannot, for an unknown reason, extract the result data from the Derby DB (see the sketch below for the kind of query involved).
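For context, reading results out of an embedded Derby database is normally a plain JDBC query, which makes the extraction failure puzzling. The following is a minimal sketch only: the database path, the RESULTS table, and its column names are hypothetical placeholders, since the real schema is not described in these minutes.

<pre>
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Minimal sketch: dump rows from an embedded Derby database over JDBC.
// The database path, table, and column names are hypothetical.
public class DumpPerfResults {
    public static void main(String[] args) throws Exception {
        // Register the embedded driver (required before JDBC 4 auto-loading).
        Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
        Connection con = DriverManager.getConnection("jdbc:derby:/path/to/perfDB");
        try {
            Statement st = con.createStatement();
            ResultSet rs = st.executeQuery("SELECT TEST_NAME, ELAPSED_MS FROM RESULTS");
            while (rs.next()) {
                System.out.println(rs.getString("TEST_NAME") + "\t" + rs.getLong("ELAPSED_MS"));
            }
        } finally {
            con.close();
        }
    }
}
</pre>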
- Goal/mandate
- Run a set of tests in a controlled environment that enables the WTP subprojects to analyze the responsiveness of the tool and of each subproject.
- Teams should own the perf tests, like they do the junit tests. This responsibility should not be centralized.
- Tests should be run in a centrally controlled environment.
- There is good value to have tests running repeatedly, but they should be meaningful. Tests should be automated and run frequently enough.
- Hardware available
- At the moment we have one Windows Server 2003 machine (donated by Kaloyan).
- We can start running the automated perf tests on it. If we want to do this on other platforms, we should make a list of possible machines the perf team can gather.
- Short-term goals
- We are first going to try to get this running on Windows and then later expand to other platforms if we are successful.
- If there is a manual step to generate the graphs, we will attempt that first and then look into automating the task.
- It was suggested that the automation would have to be written as an ANT script from the ground up (forget about the ANT script by JohnL).
- We want to reuse as much as we can. We should use the same framework as the Platform team does.
- Start with 1 test per component
- Let's start by looking at how the Platform team does their performance tests (a sketch of a Platform-style test follows this list).
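To make the reuse point concrete, here is a rough sketch of what a single component test could look like on the Platform's org.eclipse.test.performance framework. Only the measure/commit/assert cycle comes from that framework; the test class and the operation being timed are hypothetical.

<pre>
import org.eclipse.test.performance.Dimension;
import org.eclipse.test.performance.PerformanceTestCase;

// Sketch of a Platform-style performance test; the scenario under
// test (openEditorOnLargeFile) is a hypothetical placeholder.
public class SampleEditorPerformanceTest extends PerformanceTestCase {

    public void testOpenEditor() throws Exception {
        tagAsSummary("Open editor on large file", Dimension.ELAPSED_PROCESS);
        // Run the scenario several times so the framework can average the runs.
        for (int i = 0; i < 10; i++) {
            startMeasuring();
            openEditorOnLargeFile();
            stopMeasuring();
        }
        commitMeasurements();
        assertPerformance();
    }

    private void openEditorOnLargeFile() {
        // Placeholder for the real component scenario.
    }
}
</pre>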
- Long term goals
- Add Red Hat
- Add Windows XP/Vista
- What about ad-hoc perf testing? Could it be incorporated as part of our smoke tests? One idea is performance snapshots: the tester clicks a menu item to start recording a snapshot.
- Action Takers
- Each week we will look at the "man hours" each team can contribute and determine what contribution can be made to the action items.
- The person contributing will need to summarize their actions to the team in an email.
Action items
- Gary - will spend one hour to see how the Platform team does their performance tests
- Everybody else - do the same, or at least spend some time reading the Performance wiki page and related links.
- Gary - put info on the wiki about how ad-hoc perf testing is done.
References
- Performance wiki page
- Performance results for WTP 2.0
- Performance results for WTP 2.0.1 (run on the new system as a basis for reference)