SimRel/Simultaneous Release Reports FAQ
Common Software Repository Reports
This page roughly describes a collection of automated checks and reports that are run against p2 repositories or directories of jars. Eventually it will become more of a "how to" and FAQ-oriented page.
The reports are run against the latest successful CLEAN BUILD on Hudson, so they can always be viewed there. There is even a link on the Hudson build instance's main page that points to the latest report. The report is also copied, along with the repository, to the staging area, so it can also be viewed under the staging repo.
While many projects have their own, similar tests (which, one way or another, provided the starting point for all of these tests), it is worth some effort to collect common tests in a common place to encourage reuse and improvement.
For now, the tests and scripts are in a Git repo. To load them into a workspace, an appropriate URL would be similar to
or for Gerrit access:
The reports can be run locally or adapted for your own builds.
Put another way, the ultimate goal is that the reports for the simultaneous release common repository become simply a final sanity check, and that all projects can perform these tests themselves, early in the development cycle. But ... until then, the reports will be run against, at least, the common staging repositories. Most of the tests can be used, with some copy/paste/hacking, directly from an IDE's workbench against a local repository on your file system. Start small. :)
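As a rough illustration of pointing a check at a local repository on your file system: a p2 repository typically keeps its jars under plugins/ and features/ folders, so a small helper can gather them for whatever test you want to run. The class and method names here are illustrative, not part of the actual report scripts.

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

public class LocalRepoScan {

    /**
     * Collect all jar files found directly under a p2 repository's
     * plugins/ and features/ folders, so a check can iterate over them.
     */
    public static List<File> collectJars(File repoRoot) {
        List<File> jars = new ArrayList<>();
        for (String dir : new String[] { "plugins", "features" }) {
            File folder = new File(repoRoot, dir);
            // listFiles returns null if the folder does not exist
            File[] files = folder.listFiles((d, name) -> name.endsWith(".jar"));
            if (files != null) {
                for (File f : files) {
                    jars.add(f);
                }
            }
        }
        return jars;
    }
}
```

From there, each jar in the returned list can be handed to whichever check you are experimenting with in the workbench.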
If/when others have fixes, abstractions, or new tests, please open a bug in the cross-project component, so these tests and reports can be improved over time by community effort.
The code and scripts, as of right now, are oriented towards simply "producing reports". But if you browse the code, you'll see some commented-out code in places that can cause "failed" flags to be set, which would be appropriate for some projects ... and, long term, maybe even for the common repository. Similarly, most of the checks are currently implemented as simple Ant tasks. It might be possible, with moderate effort, to convert most of them to unit tests for more finely tuned testing and error reporting, if there were ever an advantage to that.
One big problem right now is the signing-check scripts. These are literally shell scripts that call 'jarsigner -verify' one jar at a time, so they take forever: several hours, whereas all the other tests combined finish in about 10 minutes total. It would be much better to do the jar verification more directly with Java, so processes and executables do not have to start and stop 5000 times. Volunteers? :)
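One possible in-process approach, sketched here under the assumption that a pass/fail result per jar is all the report needs (class and method names are made up for illustration): opening each jar with java.util.jar.JarFile in verifying mode and reading every entry to the end forces the JDK to check the signature digests, with no external process launched.

```java
import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import java.util.Enumeration;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

public class JarVerifier {

    /**
     * Returns true if every content entry in the jar is signed and
     * verifies correctly. Opening the JarFile with verify=true and
     * fully reading each entry triggers digest verification; a
     * tampered entry causes a SecurityException during the read.
     */
    public static boolean isFullySigned(File jar) throws IOException {
        try (JarFile jarFile = new JarFile(jar, true)) {
            byte[] buffer = new byte[8192];
            Enumeration<JarEntry> entries = jarFile.entries();
            while (entries.hasMoreElements()) {
                JarEntry entry = entries.nextElement();
                if (entry.isDirectory() || entry.getName().startsWith("META-INF/")) {
                    // the manifest and signature files are not themselves signed
                    continue;
                }
                try (InputStream in = jarFile.getInputStream(entry)) {
                    while (in.read(buffer) != -1) {
                        // reading to end of stream forces digest verification
                    }
                } catch (SecurityException e) {
                    return false; // digest mismatch: entry modified after signing
                }
                if (entry.getCodeSigners() == null) {
                    return false; // entry carries no signature at all
                }
            }
        }
        return true;
    }
}
```

Running something like this over all 5000 jars inside a single JVM should avoid the per-jar process startup cost that makes the current shell scripts so slow, though the actual numbers would need measuring.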