WTP Performance Cross Team
The goal of the team is to revive the WTP Performance tests. Test execution had to be moved to another server, and there are still problems with setting up the execution environment. It was decided to restart the WTP Performance tests with something small: each team will contribute a few simple tests, and once those run successfully we will continue building on top of them.
Team Lead: Kaloyan Raev
Each WTP project team contributes one person who will participate in the WTP Performance Cross Team. See the list of participants below.
How To
Latest findings on how to run performance tests are documented on the How To wiki page.
Status Meeting
Meetings are held on Mondays, 12 noon to 1 PM Eastern Time.
Dial-in numbers:
Toll Free: 877-421-0030
Toll (USA): 770-615-1247
Participant Passcode: 269746
Meeting Minutes
Next meeting: will be scheduled on demand
- October
- 2008-10-20
- 2008-10-13
- 2008-10-06
- September
- 2008-09-29
- 2008-09-15
- 2008-09-08
- August
- 2008-08-25
- 2008-08-18
- 2008-08-11
- 2008-08-04
- July
- 2008-07-28
- 2008-07-21
Open Action Items
- **Top action item:** Refactor the Ant scripts for automating performance test execution; see bug [252334](https://bugs.eclipse.org/252334).
- Patch the Platform code; see bug [244986](https://bugs.eclipse.org/244986).
- Define a single test per project (see the Initial Tests section).
- Write up best practices and priorities for measurement (see the Best Practices section).
- Review the available performance test suites to determine whether they are still semantically relevant as performance tests.
Team Members
Team | Participant |
---|---|
Common | Gary Karasiuk |
Server | Angel Vera |
Webservices | Mark Hutchinson |
Source Editing | Nick Sandonato |
JEE Tools | Jason Peterson |
EJB Tools | Kaloyan Raev |
JSF Tools | Raghu Srinivasan |
Dali | Neil Hauge |
Releng | David Williams |
Initial Tests
Each component should identify a single test case that works correctly. The Webservices entry also requires Tomcat-related vmargs, which are consumed as sketched after the table.
Team | Testcase |
---|---|
Common | org.eclipse.wst.common.tests.performance.PerformancePlugin.HTMLValidator |
Server | org.eclipse.wst.server.tests.performance.StartupExtensionTestCase |
Webservices | plugin: org.eclipse.jst.ws.tests.performance; test: org.eclipse.jst.ws.tests.axis.tomcat.v50.perfmsr.PerfmsrClientAxisTC50; vmargs: -Dorg.eclipse.jst.server.tomcat.50=&lt;tomcat 5 install location&gt; -Dtomcat50Dir=&lt;tomcat 5 install location&gt; |
Source Editing | org.eclipse.wst.xml.ui.tests.performance.FormatTest |
JEE Tools | org.eclipse.jst.j2ee.tests.performance.TestCaseSAXParser |
EJB Tools | org.eclipse.jst.j2ee.tests.performance.EJBProjectCreationTestCase |
JSF Tools | |
Dali | org.eclipse.jpt.core.tests.internal.performance.JpaPerformanceTests.testFacetInstallUninstallPerformance |
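
The Webservices entry above is the only one that needs extra vmargs. As a hedged illustration of how those arguments might be consumed, here is a sketch in plain JUnit; the class and field names are hypothetical, and only the two system property names come from the table.

```java
import junit.framework.TestCase;

// Hypothetical base class showing how a test such as
// PerfmsrClientAxisTC50 could pick up the Tomcat 5 install location
// passed through the vmargs listed in the table above.
public abstract class TomcatPerformanceTestBase extends TestCase {

    protected String tomcat50Dir;

    protected void setUp() throws Exception {
        super.setUp();
        // The table passes the same location under two property names;
        // accept either, and fail fast with a clear message if absent.
        tomcat50Dir = System.getProperty("tomcat50Dir",
                System.getProperty("org.eclipse.jst.server.tomcat.50"));
        assertNotNull("Launch with -Dtomcat50Dir=<tomcat 5 install location>",
                tomcat50Dir);
    }
}
```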
Latest Results
Known Issues
- [Performance Test Bugs and Enhancements](https://bugs.eclipse.org/bugs/buglist.cgi?query_format=advanced&short_desc_type=allwordssubstr&short_desc=%5Bperformance+tests%5D&classification=WebTools&long_desc_type=allwordssubstr&long_desc=&bug_file_loc_type=allwordssubstr&bug_file_loc=&status_whiteboard_type=allwordssubstr&status_whiteboard=&keywords_type=allwords&keywords=&bug_status=UNCONFIRMED&bug_status=NEW&bug_status=ASSIGNED&bug_status=REOPENED&emailtype1=exact&email1=&emailtype2=substring&email2=&bugidtype=include&bug_id=&votes=&chfieldfrom=&chfieldto=Now&chfieldvalue=&cmdtype=doit&order=Reuse+same+sort+as+last+time&known_name=My+Bugs&query_based_on=My+Bugs&field0-0-0=noop&type0-0-0=noop&value0-0-0=)
- Improve the actual tests. Why do some take so long? Why do some results fluctuate? Why do some have drastic performance degradations? Are the tests even still valid?
Best Practices
- The first iterations of a measured loop will generally take more time because the code has not yet been optimized by the JIT compiler. This can introduce variance into the measurements, especially if other tests run first and change something that affects the JIT's optimization of the measured code. A simple way to stabilize the measurements is to run the code a few times before measuring starts (see the sketch after this list). Caches also need special caution, as they can affect the measurements.
- As a rule of thumb, the measured code should take at least 100 ms on the target machine for the measurements to be relevant; for example, the system clock on Windows and Linux 2.4 advances in 10 ms steps. In some cases the measured code can be invoked repeatedly to accumulate elapsed time, but keep in mind that the JIT may optimize such a loop more aggressively than it would in real-world scenarios.
- There needs to be a set of "larger scale" tests that adopting products can use to ensure that the performance of WTP has not regressed; think of these as assurance performance tests. They would work on large workspaces and cover the operations that can take a long time (import, clean build, and deploy). These tests need to measure both elapsed time and memory use, and typically take minutes to run.
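
As a minimal illustration of the first two points, here is a hedged sketch of a test written against the `org.eclipse.test.performance` framework (see the "How to Write an Eclipse Performance Test" guide under Reference Material). The class name, iteration counts, and the `doOperation()` helper are hypothetical; only the `PerformanceTestCase` calls come from the framework.

```java
import org.eclipse.test.performance.PerformanceTestCase;

// Hypothetical sketch: warm up before measuring, then accumulate
// several measured iterations so the total elapsed time stays well
// above the ~100 ms resolution threshold discussed above.
public class ExamplePerformanceTest extends PerformanceTestCase {

    public void testOperation() {
        // Warm-up runs: give the JIT compiler a chance to optimize the
        // measured code path before any timing is recorded.
        for (int i = 0; i < 3; i++) {
            doOperation();
        }
        // Measured runs: each startMeasuring()/stopMeasuring() pair
        // records one sample; several samples smooth out variance.
        for (int i = 0; i < 10; i++) {
            startMeasuring();
            doOperation();
            stopMeasuring();
        }
        commitMeasurements();
        assertPerformance();
    }

    // Placeholder for the code under measurement.
    private void doOperation() {
        // ... invoke the operation being measured ...
    }
}
```

Note that any caches touched by `doOperation()` should be cleared (or deliberately warmed) between iterations, since a cache hit in later runs can make the operation look faster than a cold run would be in the field.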
Reference Material
- Performance wiki page
- [How to Write an Eclipse Performance Test](http://dev.eclipse.org/viewcvs/index.cgi/org.eclipse.test.performance/doc/Performance%20Tests%20HowTo.html?view=co)
- [Graph Generation Tool HowTo](http://dev.eclipse.org/viewcvs/index.cgi/org.eclipse.releng.basebuilder/plugins/org.eclipse.test.performance.ui/readme.html?view=co)
- [Sample Eclipse Performance Results](http://download.eclipse.org/eclipse/downloads/drops/R-3.4-200806172000/performance/performance.php)
- Article: [Automating Eclipse PDE Unit Tests using Ant](http://www.eclipse.org/articles/article.php?file=Article-PDEJUnitAntAutomation/index.html)
- Bugzilla (performance tests): [No Results for WTP Tests](https://bugs.eclipse.org/bugs/show_bug.cgi?id=244986)
- Eclipse-dev announcement, fingerprint changes coming: http://dev.eclipse.org/mhonarc/lists/eclipse-dev/msg08403.html
- [EclipseCon 2005: Continuous Performance - Monitoring Performance with Automated Tests](http://www.eclipsecon.org/2005/presentations/EclipseCon2005_13.2ContinuousPerformance.pdf)