Latest revision as of 00:17, 21 November 2010

WTP Performance Cross Team

The goal of the team is to revive the WTP Performance tests. Test execution had to be moved to another server, and there are still problems with setting up the execution environment. It was decided to restart the WTP Performance tests with something small: each team will contribute some simple tests, and once those are running successfully, we will continue building on top of them.

Team Lead: Kaloyan Raev

Each WTP project team contributes one person who will participate in the WTP Performance Cross Team. See the list of participants below.

How To

Latest findings on how to run performance tests are documented on the How To wiki page.

Status Meeting

Meetings are held on Mondays, 12 noon to 1 PM Eastern Time.

Dial-in numbers:

Toll Free: 877-421-0030
Toll (USA): 770-615-1247
Participant Passcode: 269746

Meeting Minutes

Next meeting: will be scheduled on demand

October
2008-10-20
2008-10-13
2008-10-06
September
2008-09-29
2008-09-15
2008-09-08
August
2008-08-25
2008-08-18
2008-08-11
2008-08-04
July
2008-07-28
2008-07-21

Open Action Items

  • Top action item: Refactor Ant scripts for automating performance test execution - see bug 252334.
  • Patch Platform code - see bug 244986.
  • Define a single test per project (see Initial Tests section).
  • Write thoughts about best practices and priorities for measurement (see Best Practices section).
  • Review the available performance test suites to determine whether they are still semantically relevant as performance tests.

Team Members

Team            Participant
Common          Gary Karasiuk
Server          Angel Vera
Webservices     Mark Hutchinson
Source Editing  Nick Sandonato
JEE Tools       Jason Peterson
EJB Tools       Kaloyan Raev
JSF Tools       Raghu Srinivasan
Dali            Neil Hauge
Releng          David Williams

Initial Tests

Each component team should identify a single test case that is known to work correctly.

Team            Testcase
Common          org.eclipse.wst.common.tests.performance.PerformancePlugin.HTMLValidator
Server          org.eclipse.wst.server.tests.performance.StartupExtensionTestCase
Webservices     plugin: org.eclipse.jst.ws.tests.performance
                test: org.eclipse.jst.ws.tests.axis.tomcat.v50.perfmsr.PerfmsrClientAxisTC50
                vmargs: -Dorg.eclipse.jst.server.tomcat.50=<tomcat 5 install location> -Dtomcat50Dir=<tomcat 5 install location>
Source Editing  org.eclipse.wst.xml.ui.tests.performance.FormatTest
JEE Tools       org.eclipse.jst.j2ee.tests.performance.TestCaseSAXParser
EJB Tools       org.eclipse.jst.j2ee.tests.performance.EJBProjectCreationTestCase
JSF Tools
Dali            org.eclipse.jpt.core.tests.internal.performance.JpaPerformanceTests
                (testFacetInstallUninstallPerformance)
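One of the open action items is refactoring the Ant scripts that automate performance test execution. As a very rough sketch only, a headless launch of a single test class from the table above might look like the following Ant target. The launcher jar version, the application id, and all property names (eclipse.home, workspace, tomcat5.home) are assumptions that would need adapting to the actual setup, not a description of the existing scripts.

```xml
<!-- Hypothetical sketch: launch one performance test class headlessly.
     Launcher version, application id, and properties are placeholders. -->
<target name="run-performance-test">
  <java jar="${eclipse.home}/plugins/org.eclipse.equinox.launcher.jar"
        fork="true" failonerror="false">
    <arg line="-application org.eclipse.test.uitestapplication"/>
    <arg line="-data ${workspace}"/>
    <arg line="-testPluginName org.eclipse.jst.ws.tests.performance"/>
    <arg line="-className org.eclipse.jst.ws.tests.axis.tomcat.v50.perfmsr.PerfmsrClientAxisTC50"/>
    <!-- vmargs required by the Webservices test (see table above) -->
    <jvmarg value="-Dorg.eclipse.jst.server.tomcat.50=${tomcat5.home}"/>
    <jvmarg value="-Dtomcat50Dir=${tomcat5.home}"/>
  </java>
</target>
```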

Latest Results

WTP 2.0
WTP 2.0.1 (run on the new system as a basis for reference)

Known Issues

  • Performance Test Bugs and Enhancements
  • Improve the actual tests. Why do some take so long? Why do some results fluctuate? Why do some have drastic performance degradations? Are the tests even still valid?

Best Practices

  • The first iterations of a measured loop generally take more time because the code has not yet been optimized by the JIT compiler. This can introduce variance into the measurements, especially if other tests run first and change the JIT's optimization of the measured code. A simple way to stabilize the measurements is to run the code a few times before measurement starts. Caches also need special caution, as they can affect the measurements.
  • As a rule of thumb, the measured code should take at least 100 ms on the target machine for the measurements to be meaningful. For example, the system clock on Windows and Linux 2.4 advances in 10 ms steps. In some cases the measured code can be invoked repeatedly to accumulate elapsed time; keep in mind, however, that the JIT could then optimize it more aggressively than in real-world scenarios.
  • There needs to be a set of "larger scale" tests that adopting products can use to ensure that the performance of WTP has not regressed. Think of these as assurance performance tests. They would work on large workspaces and cover the operations that can take a long time (import, clean build, and deploy). These tests need to measure both elapsed time and memory, and typically take minutes to run.
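The warm-up and accumulation advice above can be illustrated with plain Java timing. This is a minimal sketch using System.nanoTime() rather than the Eclipse performance framework; the workload (summing an array) and the iteration counts are illustrative placeholders, not part of any actual WTP test suite.

```java
public class WarmupSketch {

    // The "measured code": any operation whose runtime we want to sample.
    static long workload(int[] data) {
        long sum = 0;
        for (int v : data) {
            sum += v;
        }
        return sum;
    }

    public static void main(String[] args) {
        int[] data = new int[1_000_000];
        java.util.Arrays.fill(data, 1);

        // 1. Warm-up: run the code a few times first so the JIT has a
        //    chance to compile it before measurement starts.
        for (int i = 0; i < 10; i++) {
            workload(data);
        }

        // 2. Accumulate elapsed time over repeated invocations so the
        //    total comfortably exceeds the clock granularity (e.g. 10 ms
        //    steps on Windows and Linux 2.4).
        int runs = 50;
        long checksum = 0;
        long start = System.nanoTime();
        for (int i = 0; i < runs; i++) {
            checksum += workload(data);
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        // Use the result so the JIT cannot eliminate the loop entirely.
        System.out.println("checksum=" + checksum);
        System.out.println("total elapsed: " + elapsedMs + " ms over " + runs + " runs");
    }
}
```

Printing the accumulated checksum matters: if the result of the measured code is never used, the JIT may eliminate the work being measured.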

Reference Material

Performance wiki page
How to Write an Eclipse Performance Test
Graph Generation Tool HowTo
Sample Eclipse Performance Results
Article Automating Eclipse PDE Unit Tests using Ant
Bugzilla: (performance tests) No Results for WTP Tests
Eclipse-dev fingerprint changes coming: http://dev.eclipse.org/mhonarc/lists/eclipse-dev/msg08403.html
EclipseCon 2005 - Continuous Performance - Monitoring Performance with Automated Tests


Back to Web Tools Wiki Home