== WTP Performance Cross Team ==
The goal of the team is to revive the WTP performance tests. Test execution had to be moved to another server, and there are still problems with setting up the execution environment. It was decided to restart the WTP performance tests with something small: each team will contribute a few simple tests, and once these run successfully, we will continue building on top of them.
Team Lead: Kaloyan Raev
Each WTP project team contributes one person who will participate in the WTP Performance Cross Team. See the list of participants below.
Latest findings on how to run performance tests are documented on the How To wiki page.
Meetings are held on Mondays, 12 noon to 1 PM Eastern Time.
Toll Free: 877-421-0030
Toll (USA): 770-615-1247
Participant Passcode: 269746
Full list of phone numbers
Next meeting: [[WTP Performance Tests/2008-10-13|2008-10-13]]

=== Meeting Minutes ===
:[[WTP Performance Tests/2008-10-06|2008-10-06]]
=== Open Action Items ===
* Patch Platform code - see bug 244986.
* Refactor the Ant scripts that automate performance test execution.
* Define a single test per project (see the Initial Tests section).
* Write up thoughts about best practices and priorities for measurement (see the Best Practices section).
* Review the available performance test suites to determine whether they are semantically relevant as performance tests.
=== Participants ===
{| border="1"
! Project !! Participant
|-
| Source Editing || Nick Sandonato
|-
| JEE Tools || Carl Anderson
|-
| EJB Tools || Kaloyan Raev
|-
| JSF Tools || Raghu Srinivasan
|}
=== Initial Tests ===
Each component should continue to identify a single test case that is known to work correctly. A sketch of what such a test might look like follows the table below.
{| border="1"
! Project !! Test
|-
| Webservices || plugin: org.eclipse.jst.ws.tests.performance
|}
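As a starting point, here is a minimal sketch of what such a single test might look like, using the org.eclipse.test.performance framework described in the resources below. OpenEditorPerformanceTest and doOpenEditor() are hypothetical names, and the warm-up and iteration counts are illustrative, not prescribed:

<pre>
import org.eclipse.test.performance.PerformanceTestCase;

public class OpenEditorPerformanceTest extends PerformanceTestCase {

    public void testOpenEditor() throws Exception {
        // Warm-up runs: give the JIT a chance to compile the code under
        // test before any data point is recorded (see Best Practices).
        for (int i = 0; i < 2; i++) {
            doOpenEditor();
        }
        // Measured runs: each startMeasuring()/stopMeasuring() pair
        // records one data point.
        for (int i = 0; i < 10; i++) {
            startMeasuring();
            doOpenEditor();
            stopMeasuring();
        }
        commitMeasurements();
        assertPerformance();
    }

    // Hypothetical operation under test; a real test would exercise the
    // component being measured, e.g. open a file from a prepared test
    // workspace in the corresponding editor.
    private void doOpenEditor() throws Exception {
        // ... exercise the component being measured ...
    }
}
</pre>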
=== Best Practices ===
* Performance Test Bugs and Enhancements
* Improve the actual tests. Why do some take so long? Why do some results fluctuate? Why do some show drastic performance degradations? Are the tests even still valid?
* The first iterations of a measured loop generally take more time because the code has not yet been optimized by the JIT compiler. This can introduce variance into the measurements, especially if other tests run first and change something that affects the JIT's optimization of the measured code. A simple way to stabilize the measurements is to run the code a few times before measuring starts (see the sketch after this list). Caches also need special caution, as they can affect the measurements.
* As a rule of thumb, the measured code should take at least 100 ms on the target machine for the measurements to be meaningful; for example, the system clock on Windows and Linux 2.4 advances in 10 ms steps. In some cases the measured code can be invoked repeatedly to accumulate elapsed time; however, keep in mind that the JIT may optimize such a loop more aggressively than it would in real-world scenarios.
* There needs to be a set of "larger scale" tests that adopting products can use to ensure that the performance of WTP has not regressed - think of these as assurance performance tests. They would work on large workspaces and cover the operations that can take a long time (import, clean build, and deploy). These tests need to measure memory as well as elapsed time, and typically take minutes to run.
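To make the warm-up and accumulation points above concrete, here is a small framework-independent sketch in plain Java. operationUnderTest() is a hypothetical stand-in for the measured code, and the iteration counts are arbitrary:

<pre>
public class MeasurementSketch {

    public static void main(String[] args) {
        // Warm-up: run the operation until the JIT has had a chance to
        // compile it, so the slow first iterations are not measured.
        for (int i = 0; i < 1000; i++) {
            operationUnderTest();
        }

        // Accumulate repetitions so the total elapsed time is well above
        // the ~10 ms clock granularity mentioned above.
        final int repetitions = 1000;
        long start = System.nanoTime();
        for (int i = 0; i < repetitions; i++) {
            operationUnderTest();
        }
        long elapsedMs = (System.nanoTime() - start) / 1000000;
        System.out.println("total: " + elapsedMs + " ms, average: "
                + ((double) elapsedMs / repetitions) + " ms per run");
    }

    // Hypothetical stand-in for the code being measured.
    private static void operationUnderTest() {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 100; i++) {
            sb.append(i);
        }
        sb.reverse();
    }
}
</pre>

The Eclipse performance framework used in the Initial Tests sketch handles the data collection itself, but the same warm-up and accumulation considerations apply to how the measured code is invoked; the framework also collects memory-related dimensions alongside elapsed time, which is relevant for the larger-scale assurance tests.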
=== Resources ===
* Performance wiki page
* How to Write an Eclipse Performance Test
* Graph Generation Tool HowTo
* Sample Eclipse Performance Results
* Article: Automating Eclipse PDE Unit Tests using Ant
* Bugzilla: (performance tests) No Results for WTP Tests