
CBI/p2repoAnalyzers/Repo Reports

Revision as of 12:30, 1 October 2013

Common Software Repository Reports

This page roughly describes a collection of automated checks and reports that run against p2 repositories or directories of jars. Eventually it will become more of a "how to" and FAQ-oriented page.

The reports are currently run against the latest contents of the "staging" repository and placed in shared space on the build machine, where they can be viewed with a web browser.

These reports are very temporary: each set is removed the next time new reports are created, which is normally whenever the staging repository is updated (once to three times a day, when busy).

While many projects have their own similar tests (which, one way or another, provided the starting point for all of these), it is worth some effort to collect common tests in a common place to encourage reuse and improvement.

For now, the tests and scripts are in a Git repo. To load them into a workspace, an appropriate URL would be similar to


Or, for casual browsing, see org.eclipse.simrel.tests.git (browse, stats, fork on OrionHub).

The reports can be run locally or adapted for your own builds.

Put another way, the ultimate goal is that the reports for the simultaneous release common repository become simply a final sanity check, and that all projects can perform these tests themselves, early in the development cycle. But until then, the reports will be run against, at least, the common staging repositories. Most of the tests can be used, with some copy/paste/hacking, directly from an IDE's workbench against a local repository on your file system. Start small. :)
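Most of the checks only need a directory of jars to operate on, so pointing them at a local repository mostly means collecting the jar files under it. As a minimal, hypothetical sketch (the class and method names here are illustrative, not taken from the actual test suite), a local scan might start like this:

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

public class LocalRepoScan {

    /** Recursively collect all *.jar files under a local repository directory. */
    static List<File> findJars(File dir) {
        List<File> jars = new ArrayList<>();
        File[] children = dir.listFiles();
        if (children == null) {
            return jars; // not a directory, or unreadable
        }
        for (File child : children) {
            if (child.isDirectory()) {
                jars.addAll(findJars(child));
            } else if (child.getName().endsWith(".jar")) {
                jars.add(child);
            }
        }
        return jars;
    }

    public static void main(String[] args) {
        File repo = new File(args.length > 0 ? args[0] : ".");
        for (File jar : findJars(repo)) {
            // each check (signing, version format, pack200, etc.) would run here
            System.out.println(jar.getPath());
        }
    }
}
```

Run it with the path to a local repository (for example, the directory containing `plugins/` and `features/`) as the first argument.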

If/when others have fixes, abstractions, or new tests, please open a bug in the cross-project component, so these tests and reports can be improved over time by community effort.

The code and scripts, as of right now, are oriented towards simply "producing reports". But if you browse the code, you'll see some commented-out code in places that can set "failed" flags, which would be appropriate for some projects and, long term, maybe even for the common repository. Similarly, most checks are currently implemented as simple Ant tasks. With moderate effort, most could probably be converted to "unit tests" for more finely tuned testing and error reporting, if there was ever an advantage to that.

One big problem right now is the signing-check scripts. These are literally shell scripts that call 'jarsigner -verify' one jar at a time, so they take forever: several hours, whereas all the other tests combined finish in about 10 minutes total. It would be much better to do the jar verification in Java directly, so processes and executables do not have to start and stop 5000 times. Volunteers? :)
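One way to avoid launching `jarsigner` per jar is the standard `java.util.jar` API: opening a `JarFile` with verification enabled and fully reading each entry triggers signature checking in-process (a tampered entry throws `SecurityException`, and `JarEntry.getCodeSigners()` is null for unsigned entries). This is only a sketch of the idea, not the project's actual implementation:

```java
import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import java.util.Enumeration;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

public class JarVerifier {

    /** Returns true if every non-META-INF entry in the jar is signed and verifies. */
    static boolean isSigned(File jar) throws IOException {
        byte[] buffer = new byte[8192];
        try (JarFile jarFile = new JarFile(jar, true)) { // true = verify while reading
            Enumeration<JarEntry> entries = jarFile.entries();
            while (entries.hasMoreElements()) {
                JarEntry entry = entries.nextElement();
                if (entry.isDirectory() || entry.getName().startsWith("META-INF/")) {
                    continue; // signature files themselves are not signed
                }
                // Reading the stream to the end performs the verification;
                // a modified entry throws SecurityException here.
                try (InputStream in = jarFile.getInputStream(entry)) {
                    while (in.read(buffer) != -1) { /* drain */ }
                }
                if (entry.getCodeSigners() == null) {
                    return false; // entry present but unsigned
                }
            }
        } catch (SecurityException e) {
            return false; // a signature did not verify
        }
        return true;
    }

    public static void main(String[] args) throws IOException {
        for (String path : args) {
            System.out.println(path + " signed: " + isSigned(new File(path)));
        }
    }
}
```

Since the JVM and its classes are loaded once, checking thousands of jars this way avoids thousands of process start/stop cycles; the per-jar cost drops to the I/O of reading the jar itself.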
