SimRel/Simultaneous Release Reports FAQ
Common Software Repository Reports
[Note: this page is new, under development, and subject to frequent changes and updates]
This page describes a collection of automated checks and reports that run against p2 repositories or directories of jars.
The reports are currently run against the latest contents of the "staging" repository and placed in a shared space on the <a href="http://build.eclipse.org/indigo/simrel/">build machine</a>, where they can be viewed with a web browser.
These reports are temporary: they are removed each time new reports are created, which is normally whenever the staging repository is updated (one to three times a day, when busy).
While many projects have their own, similar tests (which, one way or another, provided the starting point for all of these tests), it is worth some effort to collect common tests in a common place to encourage reuse and improvement.
For now, the tests and scripts are in CVS, in the module org.eclipse.indigo.tests.
The code and scripts are not yet ready for "prime time"; that is, they cannot be reused directly, as they are, on other repositories. But over time, and with community help, they can be abstracted to run against many p2 repositories with just a few parameters.
Put another way, the ultimate goal is that the reports for the simultaneous release common repository become simply a final sanity check, and that all projects can perform these tests themselves, early in the development cycle. But ... until then, the reports will be run against, at least, the common staging repositories. Most of the tests can be used, with some copy/paste/hacking directly from an IDE's workbench, against a local repository on your file system. Start small. :)
If/when others have fixes, abstractions, or new tests, please open a bug in the cross-project component, so these tests and reports can be improved over time by community effort.
The code and scripts, as of right now, are oriented towards simply "producing reports". But if you browse the code, you will see some commented-out code in places that can cause "failed" flags to be set, which would be appropriate for some projects ... and, long term, maybe even for the common repository. Similarly, most are currently written as simple Ant tasks. With moderate effort, most could be converted to unit tests for more finely tuned testing and error reporting, if there were ever an advantage to that.
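To make the "simple Ant task" shape concrete, here is a rough sketch of how such a report task might be invoked from a build file. The task name, class name, and attribute names below are hypothetical illustrations, not the actual names used in org.eclipse.indigo.tests:

```xml
<project name="repo-reports" default="reports">
  <!-- Hypothetical taskdef: the real task classes live in org.eclipse.indigo.tests -->
  <taskdef name="repositoryReport"
           classname="org.example.reports.RepositoryReportTask"
           classpath="lib/reports.jar"/>

  <target name="reports">
    <!-- failOnError corresponds to the commented-out "failed" flags
         mentioned above: it would fail the build instead of only
         producing a report -->
    <repositoryReport repoDir="/path/to/local/repo"
                      outputDir="reports"
                      failOnError="false"/>
  </target>
</project>
```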
One big problem right now is the signature-checking scripts. These are literally shell scripts that call 'jarsigner -verify' one jar at a time, so they take forever: several hours, whereas all the other tests combined finish in about 10 minutes total. It would be much better to do the jar verification more directly in Java, so that processes and executables do not have to start and stop 5000 times. Volunteers? :)
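As a rough sketch of that in-process approach (not the project's actual code), the standard java.util.jar API can do the verification inside a single JVM: constructing a JarFile with verification enabled and reading each entry to end-of-stream triggers the signature check, after which getCodeSigners() reports whether that entry was signed. The class name and method structure here are illustrative:

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Enumeration;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

public class JarVerifier {

    // Returns true if every content entry in the jar is signed.
    // Each entry must be read to end-of-stream: the signature is only
    // verified as the bytes are streamed, and getCodeSigners() returns
    // null until the entry has been fully read.
    public static boolean isFullySigned(String jarPath) throws IOException {
        JarFile jar = new JarFile(jarPath, true); // true => verify signatures
        try {
            byte[] buffer = new byte[8192];
            Enumeration<JarEntry> entries = jar.entries();
            while (entries.hasMoreElements()) {
                JarEntry entry = entries.nextElement();
                // Directories and the signature files themselves are not signed
                if (entry.isDirectory() || entry.getName().startsWith("META-INF/")) {
                    continue;
                }
                InputStream in = jar.getInputStream(entry);
                try {
                    // Draining the stream performs the verification; a
                    // SecurityException here would mean a tampered entry.
                    while (in.read(buffer) != -1) {
                        // discard bytes
                    }
                } finally {
                    in.close();
                }
                if (entry.getCodeSigners() == null) {
                    return false; // unsigned entry found
                }
            }
            return true;
        } finally {
            jar.close();
        }
    }

    public static void main(String[] args) throws IOException {
        // One JVM, many jars: no per-jar process startup cost
        for (String path : args) {
            System.out.println(path + ": "
                    + (isFullySigned(path) ? "signed" : "UNSIGNED"));
        }
    }
}
```

Run once over a whole repository's plugins and features directories, this avoids the per-jar process startup that makes the shell-script version so slow.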