{{warning|Note: Most of the data in this FAQ is obsolete, but it may still contain some interesting tidbits.}}
  
'''Eclipse Platform Release Engineering FAQ'''

If you would like additional content added to this FAQ, please send a note to the [https://dev.eclipse.org/mailman/listinfo/platform-releng-dev platform releng developers mailing list].
  
==What hardware comprises the platform-releng build infrastructure?==
The eclipse platform build infrastructure consists of:
*2 linux build machines (2GHz, 1GB and PowerMac G5, 2GB)
*2 junit test machines, one windows (2GHz, 1GB), one linux (2.66GHz, 1.2GB)
*4 performance test machines - one set of slower machines (one windows, one linux, 2GHz, 512MB) and one set of fast machines (one windows, one linux, 3GHz, 2GB)
*1 linux cvs test server (500MHz, 500MB)
*1 database server with apache derby installed to store performance test results (2.5GHz, 500MB)

==Is the Eclipse platform build process completely automated?==
Yes, the Eclipse build process starts with a cron job on our main build machine that initiates a shell script that checks out the builder projects:
*org.eclipse.releng.eclipsebuilder -&gt; scripts to build each type of drop
*org.eclipse.releng.basebuilder -&gt; a subset of eclipse plugins required to run the build
*several small cvs projects on an internal server that store login credentials, publishing information and jdks
Fetch and build scripts are generated automatically by the [[PDE/Build | org.eclipse.pde.build]] bundle in org.eclipse.releng.basebuilder. Fetch scripts specify the version of code to retrieve from the repository. Build scripts are also generated automatically to compile all java code, determine compile order and manage dependencies. We create a master feature of all the non-test bundles and features used in the build. This feature is signed and packed, and then we create a p2 repo from the packed master feature. Metadata for the products is published to the repository. We then use the director application to provision the contents of the SDKs etc. to different directories and then zip or tar them up. We also use custom build scripts that you can see in org.eclipse.releng.eclipsebuilder.
After the SDKs are built, the automated junit and performance testing begins. Tests occur over ssh for Linux machines, and rsh for Windows machines. Each component team contributes their own tests. Once the tests are completed, the results are copied back to the build machine. Also, the images for the performance tests are generated.
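As a rough sketch, the kickoff script's checkout step amounts to something like the following; the cvs root is the one quoted in the antRunner invocation later in this FAQ, and the loop and echo are illustrative, not the real script:

```shell
# Illustrative sketch only: echo the checkout commands for the two
# builder projects named above. The cvs root is taken from the
# bootstrap antRunner invocation quoted later in this FAQ.
cvsRoot=":ext:someuser@dev.eclipse.org:/cvsroot/eclipse"
for proj in org.eclipse.releng.eclipsebuilder org.eclipse.releng.basebuilder; do
    echo "cvs -d $cvsRoot checkout $proj"
done
```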
==What is the latest version of org.eclipse.releng.basebuilder?==
 
See [[Platform-releng-basebuilder|this document]], which describes the correct tag of org.eclipse.releng.basebuilder to use in your builds.

==I would like to recompile eclipse on a platform that is not currently supported. What is the best approach to this? How can I ensure that the community can use my work?==
The best approach is to use the source build drop and modify the scripts to support the platform that you are interested in. Then [https://bugs.eclipse.org open a bug] with product ''Platform'' and component ''Releng'', attaching the patches to the scripts that were required to build the new drop. If you are interested in contributing a port to eclipse, here is the [http://www.eclipse.org/eclipse/platform-releng/contributingEclipsePorts.html procedure].
==How does the platform team sign their builds?==
See [[Platform-releng-signedbuild]].
  

==How long does the build take to complete?==
It takes three hours and 20 minutes for all the drops to be produced. JUnit (eight hours) and performance tests (twelve hours) run in parallel after the windows, mac, linux and sdk test drops have been built. It takes another two hours for the performance results to be generated.
  
==What's the difference between an integration and nightly build?==
With integration builds, we specify a version of the plugin to retrieve in the map files (org.eclipse.releng/maps). With nightly builds, we retrieve the plugins specified in the map files from the HEAD stream. The types of builds we run are described in [http://download.eclipse.org/eclipse/downloads/build_types.html build_types.html].

==When is the next build?==
Please refer to the [http://www.eclipse.org/eclipse/platform-releng/buildSchedule.html build schedule].
  
==How do I run a build using the scripts in org.eclipse.releng.eclipsebuilder?==
This document provides an overview of [http://dev.eclipse.org/viewcvs/index.cgi/*checkout*/org.eclipse.releng.eclipsebuilder/readme.html?rev=HEAD&amp;content-type=text/html how to build eclipse components using the releng scripts].

==How do I automate my builds with PDE Build?==
This document describes [http://dev.eclipse.org/viewcvs/index.cgi/*checkout*/org.eclipse.releng.basebuilder/readme.html?rev=HEAD&amp;content-type=text/html how to automate builds using PDE Build].

==When is the next milestone?==
Please refer to the [http://www.eclipse.org/eclipse/development/eclipse_project_plan_3_4.html Eclipse Platform Project Plan].

==I noticed that you promoted an integration build to a milestone build. The integration build had test failures but the milestone build doesn't show any. Why is this?==
See {{bug|134413}}.

We have a number of tests that intermittently fail. The reasons are
*issues with the tests themselves
*tests subject to timing issues
*tests that fail intermittently due to various conditions

The component teams are always trying to fix their tests, but unfortunately there are still some issues. When we promote a build to a milestone, we rerun the tests that failed. Many pass the second time because the tests initially failed due to a timing issue or intermittent condition. Or a team will have a broken test that doesn't warrant a rebuild for a milestone. In that case, the releng team sprinkles pixie dust over the build page to erase the red Xes, but leaves the appropriate build failures intact.

==Scripts to promote a 4.x stream build==
ssh to build.eclipse.org as pwebster, then run:

<pre>
./promote4x.sh ${buildId} ${milestonename}
</pre>

For example:

<pre>
./promote4x.sh I20110916-1615 M2
</pre>

Then update the index.html files to reflect the new build:

/home/data/httpd/download.eclipse.org/eclipse/downloads/index.html
<br>/home/data/httpd/download.eclipse.org/e4/downloads/index.html
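The promotion steps above can be condensed into a small sketch; the build id and milestone are the examples from the text, and since promote4x.sh itself lives on build.eclipse.org and is not reproduced here, the sketch only echoes the command it would run:

```shell
# Illustrative sketch of the promotion steps above; promote4x.sh is not
# reproduced here, so this only builds and echoes the command.
buildId=I20110916-1615    # the integration build being promoted (example from the text)
milestone=M2              # the milestone name
promoteCmd="./promote4x.sh $buildId $milestone"
echo "$promoteCmd"
# After promotion, update the download pages to reflect the new build:
#   /home/data/httpd/download.eclipse.org/eclipse/downloads/index.html
#   /home/data/httpd/download.eclipse.org/e4/downloads/index.html
```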
==How do I run the JUnit tests used in the build?==
With every build, there is an eclipse-Automated-Tests-${buildId}.zip file. You can [http://download.eclipse.org/eclipse/downloads/drops4/R-4.2.1-201209141800/automatedtesting.html follow the instructions] associated with this zip to run the JUnit tests on the platform of your choice.

==How do you run the tests on the Windows machine via rsh?==
To run the windows tests from the build machine, run:

<tt>
rsh ejwin2 start /min /wait c:\\buildtest\\N20081120-2000\\eclipse-testing\\testAll.bat c:\\buildtest\\N20081120-2000\\eclipse-testing winxplog.txt
</tt>
  
==How do I contribute a JUnit Plug-in Test to the build?==
See [http://www.eclipse.org/eclipse/platform-releng/junit-contributing.html Contributing JUnit Plug-in Tests].

==How do I contribute an example plugin to the build?==
Once you have your plugin ready, install the org.eclipse.releng.tools plug-in, and use the "Release" action in the context menu of the test plug-in project to update and commit the map file.

Send a request to platform-releng-dev@eclipse.org to add the test plugins to the build process. Include the following information:
*the plug-in ids

The Eclipse release engineering team will update the org.eclipse.sdk.examples-feature to include the new tests. They will also update org.eclipse.releng.eclipsebuilder/all/publishing/templateFiles/testManifest.xml to indicate the zip that should get a red X on the build page if there are compile errors associated with that plugin.

==How do I set up performance tests?==
Refer to the [http://dev.eclipse.org/viewcvs/index.cgi/%7Echeckout%7E/org.eclipse.test.performance/doc/Performance%20Tests%20HowTo.html performance tests how-to] or the even better [http://www.eclipsecon.org/2005/presentations/EclipseCon2005_13.2ContinuousPerformance.pdf Performance First talk] from EclipseCon 2005.

Baseline tests are run from a branch of the builder.

*3.8 builds are compared against 3.4 baselines in the perf_37x branch of org.eclipse.releng
*3.7 builds are compared against 3.4 baselines in the perf_36x branch of org.eclipse.releng
*3.6 builds are compared against 3.4 baselines in the perf_35x branch of org.eclipse.releng
*3.5 builds are compared against 3.4 baselines in the perf_34x branch of org.eclipse.releng
*3.4 builds are compared against 3.3 baselines in the perf_33x branch of org.eclipse.releng
*3.3 builds are compared against 3.2 baselines in the perf_32x branch of org.eclipse.releng

==How do I find data for old performance results on an existing build page?==
Refer to the "Raw data and Stats" link for a specific test.

For instance, for 3.3.1.1 results, [http://archive.eclipse.org/eclipse/downloads/drops/R-3.3.1.1-200710231652/performance/performance.php click here].

Then look at the [http://archive.eclipse.org/eclipse/downloads/drops/R-3.3.1.1-200710231652/performance/org.eclipse.jdt.ui.php jdt ui tests].

Then click on a [http://archive.eclipse.org/eclipse/downloads/drops/R-3.3.1.1-200710231652/performance/eclipseperflnx1_R3.3/org.eclipse.jdt.ui.tests.performance.views.CleanUpPerfTest.testSortMembersCleanUp().html specific test].

Then click "Raw data and Stats".

You will see the data for [http://archive.eclipse.org/eclipse/downloads/drops/R-3.3.1.1-200710231652/performance/eclipseperflnx1_R3.3/org.eclipse.jdt.ui.tests.performance.views.CleanUpPerfTest.testSortMembersCleanUp()_raw.html previous builds].
==How do I run the performance tests from the zips provided on the download page?==
See [https://bugs.eclipse.org/bugs/show_bug.cgi?id=91229#c3 here].

==Process to implement a new baseline==
Implement a new performance [[Platform-releng-new-baseline | baseline]] after a release.

==I'd like to run a build with the version of the code that was used for build X. How do I know what versions of code in the repository were used?==
There are several ways to approach this issue.
*Look at the directory.txt linked from the top of every build page. The directory.txt concatenates all the map files used in a build into a single file. Please note that every nightly build runs from HEAD; therefore it is impossible to replicate a nightly build.
*In addition, with every maintenance and integration build, the org.eclipse.releng project, which contains the map files for each project, is tagged with a version corresponding to that build. For instance, the version I20051018-0932 corresponds to the integration build of the same name. Please note that the builder projects (org.eclipse.releng.eclipsebuilder and org.eclipse.releng.basebuilder) are also tagged during an integration build with the corresponding build name, but are not reflected in the directory.txt because they are not included in the eclipse distribution itself.
*Finally, after each release the releng team tags all projects from the map files used in the release. So, all the projects used to construct version 3.1.1 are tagged R_3_1_1.

==I'm looking for an older build but can't find it on the download page==
Check out the archive site http://archive.eclipse.org/eclipse/downloads for vintage builds.

==How to regenerate performance results from build artifacts==
This assumes that the artifacts used to run the build and the resulting build directory itself still exist on the build machine.

*cd to the build directory on the build machine
*cd org.eclipse.releng.eclipsebuilder
*apply the patch to the builder in bug [https://bugs.eclipse.org/bugs/attachment.cgi?id=118705 256297]
*rerun the generation on the hacked builder
*on the build machine:
**at now
**cd /builds/${buildId}/org.eclipse.releng.eclipsebuilder; sh command.txt
**ctrl d

The process needs to be executed using at or cron if you are logged in remotely because the swt libraries are needed to run the generation. If you run the process while logged in remotely via ssh, the process will fail with "No more handles". The output logs for this process are in the posting directory under buildlogs/perfgen*.txt.

The user that's running the tests on the build machine needs to run "xhost +" in a terminal on the build machine.
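The "at now ... ctrl d" steps above can be condensed into a single submission; this sketch only builds and echoes the job command (the build id is illustrative), since submitting via at(1) detaches the job from the ssh session, which is why the SWT-based generator does not die with "No more handles":

```shell
# Sketch of the regeneration steps above as one at(1) submission.
buildId=I20110916-1615   # illustrative build id
jobCmds="cd /builds/$buildId/org.eclipse.releng.eclipsebuilder; sh command.txt"
# On the build machine you would submit it like this:
#   echo "$jobCmds" | at now
echo "$jobCmds"
```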
  
==I'm a platform committer and have made major changes today. I'd like to run a test build to ensure that my changes don't break the build. Can I request a test build from a web page?==
No, you cannot request an eclipse test build from a web page. Our build takes too long and much of the time the build machines are busy. Please send an email to the releng team requesting a test build and explaining the changes you have made. If you would like an expedited test build, please provide chocolate.

==What do I need to do to use org.eclipse.releng.basebuilder from HEAD now that the Xerces plugin has been removed?==
If you use the org.eclipse.releng.basebuilder from HEAD in your builds and you use TestVersionTracker to create a test.properties file (search for "TestVersionTracker" in your customTargets.xml build scripts), please read on.

Some older versions of the TestVersionTracker class required xerces on the classpath. We have removed the xerces jars from the HEAD stream only of org.eclipse.releng.basebuilder.

For any team broken by this, a &lt;generateTestProperties&gt; custom Ant task has been created which does the same. It is available in the org.eclipse.releng.basebuilder/plugins/org.eclipse.build.tools plug-in from HEAD. To use it, all you will need to do is the following replacement. Replace text such as this (taken from GEF):

<pre>
&lt;property name=&quot;build.compiler&quot; value=&quot;org.eclipse.jdt.core.JDTCompilerAdapter&quot;/&gt;
&lt;javac verbose=&quot;true&quot; failonerror=&quot;true&quot; srcdir=&quot;${builderDirectory}/tools&quot; destdir=&quot;${builderDirectory}/tools&quot;
  classpath=&quot;${eclipse.home}/plugins/org.apache.xerces_4.0.13/xercesImpl.jar:${eclipse.home}/plugins/org.apache.xerces_4.0.13/xmlParserAPIs.jar&quot;/&gt;
&lt;java classname=&quot;TestVersionTracker&quot;&gt;
  &lt;arg line=&quot;${workingDirectory}/eclipse/features/org.eclipse.gef.test_3.1.0/feature.xml ${buildDirectory} ${workingDirectory}/gef-testing/test.properties&quot; /&gt;
  &lt;classpath&gt;
    &lt;pathelement path=&quot;${eclipse.home}/plugins/org.apache.xerces_4.0.13/xercesImpl.jar:${eclipse.home}/plugins/org.apache.xerces_4.0.13/xmlParserAPIs.jar:${builderDirectory}/tools&quot; /&gt;
  &lt;/classpath&gt;
&lt;/java&gt;
</pre>

with this:

<pre>
&lt;generateTestProperties
  buildDirectory=&quot;${buildDirectory}&quot;
  featureId=&quot;org.eclipse.sdk.tests&quot;
  outputFile=&quot;${workingDirectory}/eclipse-testing/test.properties&quot;
/&gt;
</pre>

==How do I update from 3.1 to 3.1.1 or 3.1.2 using update manager?==
Refer to [http://www.eclipse.org/eclipse/platform-releng/updatesfor3.1.1.html this document].

==How do I update between integration builds?==
If you have at least 3.2M5 installed, you can update between integration builds by adding the following as a remote update site. This site provides the update jars for integration builds as of April 12, 2006. It will be cleaned up occasionally and stale builds removed.

http://download.eclipse.org/eclipse/testUpdates

==How performance tests are invoked in the build==
Performance tests run if the -skipTest or -skipPerf parameter isn't passed to the build when running it. Both JUnit and performance tests are invoked in the build by the testAll target in org.eclipse.releng.eclipsebuilder/buildAll.xml:

<pre>
<target name="testAll" unless="skip.tests">
  <waitfor maxwait="4" maxwaitunit="hour" checkevery="1" checkeveryunit="minute">
    <and>
      <available file="${postingDirectory}/${buildLabel}/checksum/eclipse-Automated-Tests-${buildId}.zip.md5" />
      <available file="${postingDirectory}/${buildLabel}/checksum/eclipse-SDK-${buildId}-win32.zip.md5" />
      <available file="${postingDirectory}/${buildLabel}/checksum/eclipse-SDK-${buildId}-linux-gtk.tar.gz.md5" />
      <available file="${postingDirectory}/${buildLabel}/checksum/eclipse-SDK-${buildId}-macosx-cocoa.tar.gz.md5" />
      <available file="${postingDirectory}/${buildLabel}/checksum/eclipse-${buildId}-delta-pack.zip.md5" />
      <available file="${postingDirectory}/${buildLabel}/checksum/eclipse-platform-${buildId}-win32.zip.md5" />
      <available file="${postingDirectory}/${buildLabel}/checksum/eclipse-platform-${buildId}-linux-gtk.tar.gz.md5" />
      <available file="${postingDirectory}/${buildLabel}/checksum/eclipse-platform-${buildId}-macosx-cocoa.tar.gz.md5" />
    </and>
  </waitfor>

  <property name="cvstest.properties" value="${base.builder}/../eclipseInternalBuildTools/cvstest.properties" />
  <antcall target="configure.team.cvs.test" />

  <!--replace buildid in vm.properties for JVM location settings-->
  <replace dir="${eclipse.build.configs}/sdk.tests/testConfigs" token="@buildid@" value="${buildId}" includes="**/vm.properties" />

  <antcall target="addnoperfmarker" />

  <parallel>
    <antcall target="test-JUnit" />
    <antcall target="test-performance" />
  </parallel>
</target>
</pre>

The test-performance target in buildAll.xml looks like this:

<pre>
<target name="test-performance" unless="skip.performance.tests">
  <echo message="Starting performance tests." />
  <property name="dropLocation" value="${postingDirectory}" />
  <ant antfile="testAll.xml" dir="${eclipse.build.configs}/sdk.tests/testConfigs" target="performanceTests" />
  <antcall target="generatePerformanceResults" />
</target>
</pre>
  
This calls the testAll.xml in org.eclipse.releng.eclipsebuilder/sdk.tests/testConfigs:

<pre>
<target name="performanceTests">
  <condition property="internalPlugins" value="../../../eclipsePerformanceBuildTools/plugins">
    <isset property="performance.base" />
  </condition>

  <property name="testResults" value="${postingDirectory}/${buildLabel}/performance" />
  <mkdir dir="${testResults}" />

  <parallel>
    <antcall target="test">
      <param name="tester" value="${basedir}/win32xp-perf" />
      <param name="cfgKey" value="win32xp-perf" />
      <param name="markerName" value="eclipse-win32xp-perf-${buildId}" />
    </antcall>
    <antcall target="test">
      <param name="tester" value="${basedir}/win32xp2-perf" />
      <param name="cfgKey" value="win32xp2-perf" />
      <param name="markerName" value="eclipse-win32xp2-perf-${buildId}" />
    </antcall>
    <antcall target="test">
      <param name="tester" value="${basedir}/rhelws5-perf" />
      <param name="sleep" value="120" />
      <param name="cfgKey" value="rhelws5-perf" />
      <param name="markerName" value="eclipse-rhelws5-perf-${buildId}" />
    </antcall>
    <antcall target="test">
      <param name="tester" value="${basedir}/sled10-perf" />
      <param name="sleep" value="300" />
      <param name="cfgKey" value="sled10-perf" />
      <param name="markerName" value="eclipse-sled10-perf-${buildId}" />
    </antcall>
</pre>

This invokes the tests in parallel on the performance test machines. A machine.cfg file in the same directory as the above file maps the "cfgKey" value to the hostname of the machine. The tests are invoked on the windows machines via rsh and on the linux machines via ssh.

<pre>
#Windows XP
win32xp-perf=epwin2
win32xp2-perf=epwin3

#RedHat Enterprise Linux WS 5
rhelws5-perf=eplnx2

#sled 10
sled10-perf=eplnx1
</pre>

This invokes all the tests in org.eclipse.releng.eclipsebuilder\eclipse\buildConfigs\sdk.tests\testScripts\test.xml on each machine. If a test bundle has a performance target in its test.xml, the performance tests for that machine will run. The test scripts use the values in (for example, when running on windows xp) org.eclipse.releng.eclipsebuilder\eclipse\buildConfigs\sdk.tests\testConfigs\win32xp2-perf\vm.properties, which specifies the database to write to as well as the port and url of that database.

When the performance tests complete, the results are generated.

<pre>
<target name="generatePerformanceResults">
  <mkdir dir="${buildDirectory}/${buildLabel}/performance" />
  <mkdir dir="${postingDirectory}/${buildLabel}/performance" />
  <taskdef name="performanceResults" classname="org.eclipse.releng.performance.PerformanceResultHtmlGenerator" />
  <condition property="configArgs" value="-ws gtk -arch ppc">
    <equals arg1="${os.arch}" arg2="ppc64" />
  </condition>
  <condition property="configArgs" value="-ws gtk -arch x86">
    <equals arg1="${os.arch}" arg2="i386" />
  </condition>
  <property name="configArgs" value="" />

  <java jar="${basedir}/../org.eclipse.releng.basebuilder/plugins/org.eclipse.equinox.launcher.jar" fork="true" maxmemory="512m" error="${buildlogs}/perfgenerror.txt" output="${buildlogs}/perfgenoutput.txt">
    <arg line="${configArgs} -consolelog -nosplash -data ${buildDirectory}/perf-workspace -application org.eclipse.test.performance.ui.resultGenerator
      -current ${buildId}
      -jvm ${eclipse.perf.jvm}
      -print
      -output ${postingDirectory}/${buildLabel}/performance/
      -config eplnx1,eplnx2,epwin2,epwin3
      -dataDir ${postingDirectory}/../../data/v38
      -config.properties ${eclipse.perf.config.descriptors}
      -scenario.pattern org.eclipse.%.test%" />
    <!-- baseline arguments are no longer necessary since bug https://bugs.eclipse.org/bugs/show_bug.cgi?id=209322 has been fixed...
      -baseline ${eclipse.perf.ref}
      -baseline.prefix R-3.4_200806172000
    -->
    <!-- add this argument to the list above when there are milestone builds to highlight
      -highlight.latest 3.3M1_
    -->
    <env key="LD_LIBRARY_PATH" value="${basedir}/../org.eclipse.releng.basebuilder" />
    <sysproperty key="eclipse.perf.dbloc" value="${dbloc}" />
  </java>
</target>
</pre>

Three important things about generating performance results:
# xhost + needs to be enabled in a terminal of the user generating the performance results
# The derby database needs to be running on port 1528 (there is an init script for that)
# X has to be open on the machine generating the results or you'll get the "SWT - no more handles" error

==Why should I use qualifiers?==
See the Core team's document [http://www.eclipse.org/equinox/documents/plugin-versioning.html Recommendation to version plug-ins].

==How do I incorporate qualifiers into the build process?==
Update your plugins and features to use qualifiers as described in the previous FAQ entry.

Additional steps that the platform releng team uses to incorporate qualifiers into the build process:

In our bootstrap script that kicks off the build, versionQualifier is set to an empty string by default:

<pre>
#versionQualifier - qualifier to use in plug-ins and features where specified
versionQualifier=""
</pre>

For nightly builds:

<pre>
if [ "$buildType" = "N" ]
then
        versionQualifier="-DforceContextQualifier=$buildId"
fi
</pre>

For nightly builds, we set the qualifier to the buildId. For integration builds, pde build sets it to the tag in the map file for plugins, and generates feature qualifiers.

The versionQualifier property is passed to the antRunner in our bootstrap script:

<pre>
$antRunner -buildfile all.xml $mail $testBuild $compareMaps -DbuildingOSGi=true -DmapVersionTag=$mapVersionTag -DbuildDirectory=$buildDirectory -DpostingDirectory=$postingDirectory -Dbootclasspath=$bootclasspath -DbuildType=$buildType -D$buildType=true -DbuildId=$buildId -Dbuildid=$buildId -DbuildLabel=$buildLabel -Dtimestamp=$timestamp -DmapCvsRoot=:ext:someuser@dev.eclipse.org:/cvsroot/eclipse -Dzipargs=-y $skipPerf $skipTest $tag $tagMaps -DjavacDebugInfo=true -DjavacSource=1.3 -DjavacTarget=1.2 -DcompilerArg="-enableJavadoc -encoding ISO-8859-1" $versionQualifier -DJ2SE-1.5=$bootclasspath_15 -listener org.eclipse.releng.build.listeners.EclipseBuildListener
</pre>
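The nightly-build fragments above combine into this runnable sketch; the build type and build id values are illustrative:

```shell
# Runnable sketch of the bootstrap logic above: nightly (N) builds force
# the context qualifier to the buildId; other build types leave it empty.
buildType=N
buildId=N20081120-2000   # illustrative nightly build id
versionQualifier=""
if [ "$buildType" = "N" ]
then
    versionQualifier="-DforceContextQualifier=$buildId"
fi
echo "$versionQualifier"
```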
 

==Why should I package plugins and features as jars?==
See the [http://www.eclipse.org/eclipse/platform-core/documents/3.1/run_from_jars.html Running Eclipse from JARs] document from the Core team.
 

==Debugging pde build with missing dependencies==
Set a breakpoint in the BuildTimeSite class, in the missingPlugins method. Then, in the Display view, evaluate

<pre>
state.getState().getResolverErrors(state.getBundle("org.eclipse.ui.workbench", null, false))
</pre>

to print the errors for the bundle in question.
 

==If I add a new plugin to a build, how do I ensure that javadoc will be included in the build?==
See the [[How_to_add_things_to_the_Eclipse_doc | adding javadoc]] document.
==Troubleshooting test failures==
If tests pass in a dev workspace but fail in the automated test harness, check the following:
*make sure that your build.properties is exporting all the necessary items
*check that plug-in dependencies are correct, since the test harness environment will only include depended-upon plug-ins
*check that file references do not depend on the current working directory, since that may be different in the test harness
 

To debug tests in the context of the automated test harness, add the following element to the test.xml file in your plug-in's directory in the test harness.
<source lang="xml">
<property
        name="extraVMargs"
        value="-Xdebug -Xnoagent -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=8000 -Djava.compiler=NONE"/>
</source>

Then,
*create a remote debugging launch target in your dev environment
*put a breakpoint somewhere in your code
*launch the tests from the command line, using the -noclean option to preserve the modified test.xml
*launch the debug target
 

==Troubleshooting tests that crash or time out, aka "(-1)DNF"==
The table "Unit Test Results" on testResults.php sometimes shows "(-1)DNF" instead of (0) or the number of failing tests. This means the tests '''D'''id '''N'''ot '''F'''inish, i.e., for some reason, no &lt;test-suite-name&gt;.xml file was produced.

Note that the absence of a DNF entry does not always mean that everything is alright! E.g. in {{bug|474161}}, one of the two SWT test suites was killed by a timeout, but since the other one passed, the testResults table currently doesn't show that (which needs to be fixed, see {{bug|210792}}).
 
 +
To get more information about crashes and timeouts, consult the "Console Output Logs" logs.php. In the main logs (e.g. linux.gtk.x86_64_8.0_consolelog.txt), look for entries like this:
 +
    [java] EclipseTestRunner almost reached timeout '7200000'.
 +
and
 +
    [java] Timeout: killed the sub-process
 +
 +
collect-results:
 +
[junitreport] the file /Users/hudsonBuild/workspace/ep46I-unit-mac64/workarea/I20150805-2000/eclipse-testing/test-eclipse/Eclipse.app/Contents/Eclipse/org.eclipse.equinox.p2.tests.AutomatedTests.xml is empty.
 +
[junitreport] This can be caused by the test JVM exiting unexpectedly
 +
 
 +
Search for keywords '''almost''', '''killed''', and '''JVM exiting unexpectedly''' to quickly find the relevant region in the console log.
 +
 
 +
Before the Ant task that drives the automated tests kills the test process, the EclipseTestRunner tries to produce a thread dump and take a screenshot (twice within 5 seconds). The stack traces end up in the *_consolelog.txt, and the screenshots are made available on the logs.php page, e.g.:
 +
Screen captures for tests timing out on linux.gtk.x86_64_8.0
 +
    timeoutScreens_org.eclipse.swt.tests.junit.AllBrowserTests_screen0.png
 +
 
 +
Also consult the "Individual * test logs" on the logs.php page (one .txt file per test suite). Stdout output goes into those files. Stderr output goes into the *_consolelog.txt.
 +
 
 +
Since Oxygen (4.7), the org.eclipse.test.performance bundle contains a class [http://git.eclipse.org/c/platform/eclipse.platform.releng.git/tree/bundles/org.eclipse.test.performance/src/org/eclipse/test/TracingSuite.java TracingSuite] that can be used instead of the normal JUnit 4 Suite runner. Just define a test suite with <code>@RunWith(TracingSuite.class)</code>, and you will get a message on System.out before each atomic test starts. The Javadoc of the class has all details.
  
<h4>Where can I learn how to use the packager?</h4>
+
==Eclipse Release checklist==
http://dev.eclipse.org/viewcvs/index.cgi/pde-build-home/doc/packager/doc.html?rev=1.3
+
* [[Eclipse/Release_checklist]]
 +
* [[Platform-releng/Checklist]]
  
<h4>Eclipsecon 2006 presentation</h4>
+
==Where are the p2 update sites for the Eclipse Project?==
<p>http://www.eclipsecon.org/2006/Sub.do?id=254</p>
+
See [http://wiki.eclipse.org/Eclipse_Project_Update_Sites the list]
  
<h4>Best releng practices from our Eclipsecon 2005 poster</h4>
+
==How do I use the p2 zipped repos on the build page to provision my install or a pde target?==
 +
http://wiki.eclipse.org/Equinox/p2/Equinox_p2_zipped_repos
  
  <p>1. Automate the build process from end-to-end and automate early.<br>
+
==How to avoid breaking the build==
    2. Use PDE Build in Eclipse to generate build scripts.<br>
+
    3. Automate JUnit testing and performance monitoring.<br>
+
    4. Automate build notifications (email)<br>
+
    5. Use gentle humiliation to encourage developers to contribute carefully.
+
    (Clown nose technique).<br>
+
    6. Ensure stability in the build process by thorough testing of builder changes.<br>
+
    7. Schedule builds at regular intervals and enforce rebuild policies.<br>
+
    8. Use map files in a central cvs repository.<br>
+
    9. Build on Linux.<br>
+
    10. Have fun!
+
  <br></p><br>
+
 
+
 
+
<h4>Useful reference documents</h4>
+
  
<p>Build and Test Automation for plug-ins and features http://eclipse.org/articles/Article-PDE-Automation/automation.html </p>  
+
[http://wiki.eclipse.org/Platform-releng-avoid-build-breakage Avoid breaking the build]
<p>        Eclipse's culture of shipping http://www.artima.com/lejava/articles/eclipse_culture.html </p>
+
<p>        The Eclipse Way: Proccesses that Adapt http://eclipsecon.org/2005/presentations/econ2005-eclipse-way.pdf
+
  
  
  
</p>
+
== How do you incorporate the p2MirrorURL into your repo at build time ==
  
<h4>Who are the platform-releng committers?</h4>
+
See this excellent document [[Equinox/p2/p2.mirrorsURL | p2.mirrorsURL]] written by Stephan Herrmann.
  
<p>The platform releng team consists of Sonia Dimitrov and Kim Moir. </p>
+
The p2.mirrorsURL should be added to your metadata so that p2 will see the list of available mirrors to choose during installation. Mirrors = less eclipse.org
 +
bandwidth utilization = happy Eclipse.org webmasters.  Always try to keep the sysadmins happy is my motto.
  
<h4>As the holiday season approaches, what gifts are appropriate for your favourite release engineer?</h4>
+
==Useful reference documents==
 +
*Build and Test Automation for plug-ins and features http://eclipse.org/articles/Article-PDE-Automation/automation.html
 +
*Eclipse's culture of shipping http://www.artima.com/lejava/articles/eclipse_culture.html
 +
*The Eclipse Way: Proccesses that Adapt http://eclipsecon.org/2005/presentations/econ2005-eclipse-way.pdf
 +
*[[Building]]
  
<p>We enjoy fine chocolate and [http://dev.eclipse.org/viewcvs/index.cgi/%7Echeckout%7E/platform-ui-home/curries.png Shaan] </p>
+
[[Category:Eclipse_Platform_Releng_Obsolete| ]]

Latest revision as of 05:36, 10 February 2022

Note: Most of the data in this FAQ is obsolete, but it may still contain some interesting tid-bits.


Eclipse Platform Release Engineering FAQ


Is the Eclipse platform build process completely automated?

Yes. The Eclipse build process starts with a cron job on our main build machine that initiates a shell script to check out the builder projects.

  • org.eclipse.releng.eclipsebuilder -> scripts to build each type of drop
  • org.eclipse.releng.basebuilder -> a subset of eclipse plugins required to run the build.
  • we also use several small CVS projects on an internal server that store login credentials, publishing information and JDKs

Fetch and build scripts are generated automatically by the org.eclipse.pde.build bundle in org.eclipse.releng.basebuilder. Fetch scripts specify the version of code to retrieve from the repository. Build scripts are also generated automatically to compile all java code, determine compile order and manage dependencies. We create a master feature of all the non-test bundles and features used in the build. This feature is signed and packed, and then we create a p2 repo from the packed master feature. Metadata for the products is published to the repository. We then use the director application to provision the contents of the SDKs etc. to different directories and then zip or tar them up. We also use custom build scripts that you can see in org.eclipse.releng.eclipsebuilder. After the SDKs are built, the automated JUnit and performance testing begins. Tests occur over ssh for Linux machines, and rsh for Windows machines. Each component team contributes their own tests. Once the tests are completed, the results are copied back to the build machine. Also, the images for the performance tests are generated.
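For illustration only, the cron-driven kickoff described above might look like a crontab entry along these lines (the script path and log location are hypothetical, not the real infrastructure):

```shell
# Hypothetical crontab entry: start the nightly build at 20:00
0 20 * * * /builds/scripts/startNightlyBuild.sh >> /builds/logs/nightly.log 2>&1
```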

What is the latest version of org.eclipse.releng.basebuilder?

See this document, which describes the correct tag of org.eclipse.releng.basebuilder to use in your builds.

I would like to recompile eclipse on a platform that is not currently supported. What is the best approach to this? How can I ensure that the community can use my work?

The best approach is to use the source build drop and modify the scripts to support the platform you are interested in. Then open a bug with product Platform and component Releng, attaching the patches to the scripts that were required to build the new drop. If you are interested in contributing a port to Eclipse, here is the procedure.


How does the platform team sign their builds?

See Platform-releng-signedbuild

How long does the build take to complete?

It takes three hours and 20 minutes for all the drops to be produced. JUnit tests (eight hours) and performance tests (twelve hours) run in parallel after the Windows, Mac, and Linux SDK and test drops have been built. It takes another two hours for the performance results to be generated.
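As a rough sketch of the total wall-clock time these figures imply, assuming the JUnit and performance tests only start once all drops are done (the text above says they actually start once the required drops are built, so this is an upper bound):

```python
# Rough wall-clock estimate from the durations quoted above, in minutes.
drops = 3 * 60 + 20        # all drops produced: 3h20
junit = 8 * 60             # JUnit tests: 8h (run in parallel with perf)
perf = 12 * 60             # performance tests: 12h
perf_results = 2 * 60      # generating performance results: 2h

# JUnit and performance tests run in parallel; result generation follows.
total = drops + max(junit, perf) + perf_results
print(total, round(total / 60, 1))  # minutes, hours
```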

When is the next build?

Please refer to the build schedule.

When is the next milestone?

Please refer to the Eclipse Platform Project Plan.

I noticed that you promoted an integration build to a milestone build. The integration build had test failures but the milestone build doesn't show any. Why is this?

See bug 134413.

We have a number of tests that intermittently fail. The reasons are

  • issues with the tests themselves
  • tests subject to timing issues
  • tests that fail intermittently due to various conditions

The component teams are always trying to fix their tests but unfortunately there are still some issues. When we promote a build to a milestone, we rerun the tests that failed. Many pass the second time because the tests initially failed due to a timing issue or intermittent condition. Or a team will have a broken test that doesn't warrant a rebuild for a milestone. In that case, the releng team sprinkles pixie dust over the build page to erase the red Xes, but leaves the appropriate build failures intact.

Scripts to promote a 4.x stream build

ssh to build.eclipse.org as pwebster

./promote4x.sh ${buildId} ${milestonename}

Example ./promote4x.sh I20110916-1615 M2

update the index.html to reflect the new build

/home/data/httpd/download.eclipse.org/eclipse/downloads/index.html
/home/data/httpd/download.eclipse.org/e4/downloads/index.html

How do I run the JUnit tests used in the build?

With every build, there is an eclipse-Automated-Tests-${buildId}.zip file. You can follow the instructions associated with this zip to run the JUnit tests on the platform of your choice.

How do you run the tests on the Windows machine via rsh

To run the windows tests from the build machine,

rsh ejwin2 start /min /wait c:\\buildtest\\N20081120-2000\\eclipse-testing\\testAll.bat c:\\buildtest\\N20081120-2000\\eclipse-testing winxplog.txt

How do I set up performance tests?

Refer to the performance tests how-to or, even better, the Performance First talk at EclipseCon 2007.

Baseline tests are run from a branch of the builder.

  • 3.8 builds are compared against 3.4 baselines in the perf_37x branch of org.eclipse.releng
  • 3.7 builds are compared against 3.4 baselines in the perf_36x branch of org.eclipse.releng
  • 3.6 builds are compared against 3.4 baselines in the perf_35x branch of org.eclipse.releng
  • 3.5 builds are compared against 3.4 baselines in the perf_34x branch of org.eclipse.releng
  • 3.4 builds are compared against 3.3 baselines in the perf_33x branch of org.eclipse.releng
  • 3.3 builds are compared against 3.2 baselines in the perf_32x branch of org.eclipse.releng

How do I find data for old performance results on existing build page?

Refer to the "Raw data and Stats" link for a specific test.

For instance for 3.3.1.1 results, click here

Then look at the jdt ui tests.

Then click on a specific test.

Then click "Raw data and Stats".

You will see the data for previous builds.

How do I run the performance tests from the zips provided on the download page?

See here.

Process to implement a new baseline

Implement new performance baseline after a release.

How to regenerate performance results from build artifacts

This assumes that the artifacts used to run the build and the resulting build directory itself still exist on the build machine.

  • cd to build directory on build machine
  • cd org.eclipse.releng.eclipsebuilder
  • apply patch to builder in bug [256297]
  • rerun generation on hacked builder
  • on build machine
    • at now
    • cd /builds/${buildId}/org.eclipse.releng.eclipsebuilder; sh command.txt
    • ctrl d


The process needs to be executed using at or cron if you are logged in remotely, because the SWT libraries are needed to run the generation. If you run the process while logged in remotely via ssh, the process will fail with "No more handles". The output logs for this process are in the posting directory under buildlogs/perfgen*.txt.

The user that's running the tests on the build machine needs to run "xhost +" in a terminal on the build machine.



How performance tests are invoked in the build

Performance tests run unless the -skipTest or -skipPerf parameter is passed to the build when running it. Both JUnit and performance tests are invoked in the build by the testAll target in org.eclipse.releng.eclipsebuilder/buildAll.xml

<target name="testAll" unless="skip.tests">
		<waitfor maxwait="4" maxwaitunit="hour" checkevery="1" checkeveryunit="minute">
			<and>
				<available file="${postingDirectory}/${buildLabel}/checksum/eclipse-Automated-Tests-${buildId}.zip.md5" />
				<available file="${postingDirectory}/${buildLabel}/checksum/eclipse-SDK-${buildId}-win32.zip.md5" />
				<available file="${postingDirectory}/${buildLabel}/checksum/eclipse-SDK-${buildId}-linux-gtk.tar.gz.md5" />
				<available file="${postingDirectory}/${buildLabel}/checksum/eclipse-SDK-${buildId}-macosx-cocoa.tar.gz.md5" />
				<available file="${postingDirectory}/${buildLabel}/checksum/eclipse-${buildId}-delta-pack.zip.md5" />
				<available file="${postingDirectory}/${buildLabel}/checksum/eclipse-platform-${buildId}-win32.zip.md5" />
				<available file="${postingDirectory}/${buildLabel}/checksum/eclipse-platform-${buildId}-linux-gtk.tar.gz.md5" />
				<available file="${postingDirectory}/${buildLabel}/checksum/eclipse-platform-${buildId}-macosx-cocoa.tar.gz.md5" />
			</and>
		</waitfor>

		<property name="cvstest.properties" value="${base.builder}/../eclipseInternalBuildTools/cvstest.properties" />
		<antcall target="configure.team.cvs.test" />

		<!--replace buildid in vm.properties for JVM location settings-->
		<replace dir="${eclipse.build.configs}/sdk.tests/testConfigs" token="@buildid@" value="${buildId}" includes="**/vm.properties" />

		<antcall target="addnoperfmarker" />

		<parallel>
			<antcall target="test-JUnit" />
			<antcall target="test-performance" />
		</parallel>
	</target>

The test-performance target in the buildAll.xml looks like this

	<target name="test-performance" unless="skip.performance.tests">

		<echo message="Starting performance tests." />
		<property name="dropLocation" value="${postingDirectory}" />
		<ant antfile="testAll.xml" dir="${eclipse.build.configs}/sdk.tests/testConfigs" target="performanceTests" />
		<antcall target="generatePerformanceResults" />
	</target>


This calls the testAll.xml in org.eclipse.releng.eclipsebuilder/sdk.tests/testConfigs

<target name="performanceTests">

		<condition property="internalPlugins" value="../../../eclipsePerformanceBuildTools/plugins">
			<isset property="performance.base" />
		</condition>

		<property name="testResults" value="${postingDirectory}/${buildLabel}/performance" />
		<mkdir dir="${testResults}" />

		<parallel>
			<antcall target="test">
				<param name="tester" value="${basedir}/win32xp-perf" />
				<param name="cfgKey" value="win32xp-perf" />
				<param name="markerName" value="eclipse-win32xp-perf-${buildId}" />
			</antcall>
			<antcall target="test">
				<param name="tester" value="${basedir}/win32xp2-perf" />
				<param name="cfgKey" value="win32xp2-perf" />
				<param name="markerName" value="eclipse-win32xp2-perf-${buildId}" />
			</antcall>
			<antcall target="test">
				<param name="tester" value="${basedir}/rhelws5-perf" />
				<param name="sleep" value="120" />
				<param name="cfgKey" value="rhelws5-perf" />
				<param name="markerName" value="eclipse-rhelws5-perf-${buildId}" />
			</antcall>
			<antcall target="test">
				<param name="tester" value="${basedir}/sled10-perf" />
				<param name="sleep" value="300" />
				<param name="cfgKey" value="sled10-perf" />
				<param name="markerName" value="eclipse-sled10-perf-${buildId}" />
			</antcall>

This invokes the tests in parallel on the performance test machines. In this case, there is a machine.cfg file in the same directory as the above file that maps the "cfgKey" value written above to the hostname of the machine. The tests are invoked on the windows machines via rsh and on the linux machines via ssh.

#Windows XP
win32xp-perf=epwin2
win32xp2-perf=epwin3

#RedHat Enterprise Linux WS 5
rhelws5-perf=eplnx2

#sled 10 
sled10-perf=eplnx1

This invokes all the tests in the org.eclipse.releng.eclipsebuilder\eclipse\buildConfigs\sdk.tests\testScripts\test.xml on each machine. If the test bundle has a performance target in its test.xml, the performance tests for that machine will run. The test scripts use the values in (for example, when running on Windows XP) org.eclipse.releng.eclipsebuilder\eclipse\buildConfigs\sdk.tests\testConfigs\win32xp2-perf\vm.properties, which specifies the database to write to as well as the port and URL of that database.
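For illustration, the database connection details in such a vm.properties boil down to the eclipse.perf.dbloc property; a hedged sketch (the host name is made up; 1528 is the Derby port mentioned elsewhere in this FAQ):

```properties
# Illustrative fragment of a vm.properties for performance tests
eclipse.perf.dbloc=net://perfdbhost:1528
```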

When the performance tests complete, the results are generated.

<target name="generatePerformanceResults">
		<mkdir dir="${buildDirectory}/${buildLabel}/performance" />
		<mkdir dir="${postingDirectory}/${buildLabel}/performance" />
		<taskdef name="performanceResults" classname="org.eclipse.releng.performance.PerformanceResultHtmlGenerator" />
		<condition property="configArgs" value="-ws gtk -arch ppc">
			<equals arg1="${os.arch}" arg2="ppc64" />
		</condition>
		<condition property="configArgs" value="-ws gtk -arch x86">
			<equals arg1="${os.arch}" arg2="i386" />
		</condition>
		<property name="configArgs" value="" />

		<java jar="${basedir}/../org.eclipse.releng.basebuilder/plugins/org.eclipse.equinox.launcher.jar" fork="true" maxmemory="512m" error="${buildlogs}/perfgenerror.txt" output="${buildlogs}/perfgenoutput.txt">
			<arg line="${configArgs} -consolelog -nosplash -data ${buildDirectory}/perf-workspace -application org.eclipse.test.performance.ui.resultGenerator
						-current ${buildId}
						-jvm ${eclipse.perf.jvm}
						-print					    
						-output ${postingDirectory}/${buildLabel}/performance/
						-config eplnx1,eplnx2,epwin2,epwin3
			            -dataDir ${postingDirectory}/../../data/v38
						-config.properties ${eclipse.perf.config.descriptors}
						-scenario.pattern org.eclipse.%.test%" />
			<!-- baselines arguments are no longer necessary since bug https://bugs.eclipse.org/bugs/show_bug.cgi?id=209322 has been fixed...
						-baseline ${eclipse.perf.ref}
						-baseline.prefix R-3.4_200806172000
			-->
			<!-- add this argument to list above when there are milestone builds to highlight 
			-highlight.latest 3.3M1_
			-->
			<env key="LD_LIBRARY_PATH" value="${basedir}/../org.eclipse.releng.basebuilder" />
			<sysproperty key="eclipse.perf.dbloc" value="${dbloc}" />
		</java>

Three important things about generating performance results:

  1. "xhost +" needs to be enabled in a terminal of the user generating the performance results
  2. The derby database needs to be running on port 1528 (there is an init script for that)
  3. X has to be open on the machine generating the results or you'll get the "SWT - no more handles" error

Why should I package plugins and features as jars?

See the Running Eclipse from JARs document from the Core team.

Debugging PDE build with missing dependencies

Set a breakpoint in the BuildTimeSite class, in the missingPlugins method. Then, in the Display view, evaluating state.getState().getResolverErrors(state.getBundle("org.eclipse.ui.workbench", null, false)) will print the errors for the bundle in question.

If I add a new plugin to a build, how do I ensure that javadoc will be included in the build?

See the adding javadoc document.


Troubleshooting test failures

If tests pass in a dev workspace but fail in the automated test harness, check to make sure that your build.properties is exporting all the necessary items; check that plug-in dependencies are correct, since the test harness environment will only include depended-upon plug-ins; and check that file references do not depend on the current working directory, since that may be different in the test harness.
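As an example of the first point, the test plug-in's build.properties must include everything the tests need at runtime in bin.includes; a minimal sketch (the data folder name is illustrative):

```properties
# Illustrative test plug-in build.properties
source.. = src/
output.. = bin/
bin.includes = META-INF/,\
               .,\
               test.xml,\
               testData/
```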

To debug tests in the context of the automated test harness, add the following element to the test.xml file in your plug-in's directory in the test harness.

<property
        name="extraVMargs" 
        value="-Xdebug -Xnoagent -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=8000 -Djava.compiler=NONE"/>

Then,

  • create a remote debugging launch target in your dev environment
  • put a breakpoint somewhere in your code
  • launch the tests from the command line, using the -noclean option to preserve the modified test.xml
  • launch the debug target

Troubleshooting tests that crash or time out, aka "(-1)DNF"

The table "Unit Test Results" on testResults.php sometimes shows "(-1)DNF" instead of (0) or the number of failing tests. This means the tests Did Not Finish, i.e., for some reason, no <test-suite-name>.xml file was produced.

Note that the absence of a DNF entry does not always mean that everything is all right! E.g. in bug 474161, one of the two SWT test suites was killed by a timeout, but since the other one passed, the testResults table currently doesn't show that (which needs to be fixed, see bug 210792).

To get more information about crashes and timeouts, consult the "Console Output Logs" on logs.php. In the main logs (e.g. linux.gtk.x86_64_8.0_consolelog.txt), look for entries like this:

    [java] EclipseTestRunner almost reached timeout '7200000'.

and

    [java] Timeout: killed the sub-process

collect-results:
[junitreport] the file /Users/hudsonBuild/workspace/ep46I-unit-mac64/workarea/I20150805-2000/eclipse-testing/test-eclipse/Eclipse.app/Contents/Eclipse/org.eclipse.equinox.p2.tests.AutomatedTests.xml is empty.
[junitreport] This can be caused by the test JVM exiting unexpectedly

Search for keywords almost, killed, and JVM exiting unexpectedly to quickly find the relevant region in the console log.

Before the Ant task that drives the automated tests kills the test process, the EclipseTestRunner tries to produce a thread dump and take a screenshot (twice within 5 seconds). The stack traces end up in the *_consolelog.txt, and the screenshots are made available on the logs.php page, e.g.:

Screen captures for tests timing out on linux.gtk.x86_64_8.0
   timeoutScreens_org.eclipse.swt.tests.junit.AllBrowserTests_screen0.png

Also consult the "Individual * test logs" on the logs.php page (one .txt file per test suite). Stdout output goes into those files. Stderr output goes into the *_consolelog.txt.

Since Oxygen (4.7), the org.eclipse.test.performance bundle contains a class TracingSuite that can be used instead of the normal JUnit 4 Suite runner. Just define a test suite with @RunWith(TracingSuite.class), and you will get a message on System.out before each atomic test starts. The Javadoc of the class has all details.

Eclipse Release checklist

  • Eclipse/Release_checklist
  • Platform-releng/Checklist

Where are the p2 update sites for the Eclipse Project?

See the list

How do I use the p2 zipped repos on the build page to provision my install or a pde target?

http://wiki.eclipse.org/Equinox/p2/Equinox_p2_zipped_repos

How to avoid breaking the build

Avoid breaking the build


How do you incorporate the p2MirrorURL into your repo at build time

See this excellent document p2.mirrorsURL written by Stephan Herrmann.

The p2.mirrorsURL should be added to your metadata so that p2 will see the list of available mirrors to choose during installation. Mirrors = less eclipse.org bandwidth utilization = happy Eclipse.org webmasters. Always try to keep the sysadmins happy is my motto.
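Concretely, the property lives in the repository's artifacts.xml; an illustrative fragment (the download.php path is a placeholder for your repo's path on the download server):

```xml
<!-- Fragment of a p2 repository's artifacts.xml -->
<properties size='1'>
  <property name='p2.mirrorsURL'
            value='http://www.eclipse.org/downloads/download.php?file=/path/to/your/repo&amp;format=xml'/>
</properties>
```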

Useful reference documents

  • Build and Test Automation for plug-ins and features http://eclipse.org/articles/Article-PDE-Automation/automation.html
  • Eclipse's culture of shipping http://www.artima.com/lejava/articles/eclipse_culture.html
  • The Eclipse Way: Processes that Adapt http://eclipsecon.org/2005/presentations/econ2005-eclipse-way.pdf
  • Building