
CBI/aggregator

Introduction

The CBI Aggregator is one of the deliverables of the CBI project. It is a tool to combine several p2 repositories. Among other things, it makes sure they all have consistent constraints (that is, can be "installed together") unlike a raw p2 mirror task.

The CBI Aggregator was formerly known as "b3 aggregator" from the now defunct "Eclipse b3 Project" (see bug 506726).

The purpose of the CBI Aggregator is to aggregate or combine several p2 repositories. The aggregator does a number of things above and beyond simply "p2 mirroring" the different repositories: first and foremost, it makes sure that all the bundles can be "installed together" (that is, all the constraints are met and none are contradictory); it ensures any "pack.gz" files are valid (i.e. can be unpacked into valid jars); and it offers custom categories, since in most cases the aggregated repository would not want to use the same categories as each individual repository. More details on its functionality follow.

The Aggregator combines repositories from various sources into a new aggregated p2 repository. It can also be configured to produce a hybrid p2/Maven2 repository. (This feature is currently labeled "experimental" in the CBI Aggregator, since it is not yet production quality).

There are many situations where using aggregated repositories is a good solution. The reasons vary from licensing issues to organizational requirements:

  1. Owners of a p2 repo for a given project may not be in a position to host all required or recommended components due to licensing issues - SVN support is one example: it requires components available through the main Eclipse p2 repo as well as third-party components. Hence users would normally have to visit several repos for a complete install, but by using the aggregator an institution could create a custom repository that has everything needed.
  2. Projects want to provide convenient access to their products - Installation instructions requiring the user to visit several repos for a complete install are not uncommon. An aggregated repo for all those locations provides a convenient one-stop strategy. The aggregation may support mirroring all consumed p2 repos or simply providing an indirection via a composite repo.
  3. Organizations or teams want control over internally used components - It may be necessary to have gated access to relevant p2 repos and do an organizational "healthcheck" of those before internal distribution. Furthermore, internally used aggregated repos can provide a common basis for all organizational users.
  4. Increase repository availability - by aggregating and mirroring what is used from multiple update sites into an internally controlled server (or several).
  5. Distributed Development Support - an overall product repository is produced by aggregating contributions from multiple teams.

The Aggregator tool is focused on supporting these specific requirements. The Aggregator is used in scenarios outside of the traditional "build domain", and this is reflected in the user interface, which does not delve into the details of "building" and should therefore be easy to use by non-build experts.

Furthermore, it is worth noting that:

  • the Aggregator is based on EMF models
  • headless execution of aggregation definitions (once they have been created) is possible using a command line packaging of the Aggregator

Functional Overview

The CBI Aggregator performs aggregation and validation of repositories. The input to the aggregator engine (that tells it what to do) is an aggr EMF model. Such a model is most conveniently created by using the CBI Aggregator editor. This editor provides both editing and interactive execution of aggregation commands. The editor is based on a standard EMF "tree and properties view" style editor where nodes are added to and removed from a tree, and the details of nodes are edited in a separate properties view. Once an aggr model has been created it is possible to use the command line / headless aggregator to perform aggregation (and other related commands). (Note that since the aggr is "just an EMF model", it can be produced via EMF APIs, transformation tools, etc., and thus support advanced use cases).

The model mainly consists of Contributions - specifications of what to include from different repositories, and Validation Repositories - repositories that are used when validating, but which are not included in the produced aggregation (i.e., they are not copied). Contributions and Validation Repositories are grouped into Validation Sets. Everything in a Validation Set will be validated as one unit, i.e. it must be possible to install everything in a Validation Set together. The model also contains specification of various processing rules (exclusions, transformation of names, etc.), and specification of Contacts - individuals/mailing-lists to inform when processing fails.

Here are some of the important features supported by the CBI Aggregator:

  • p2 and maven2 support — the aggregator can aggregate from and to both p2 and maven2 repositories.
  • Maven2 name mapping support — names in the p2 domain are automatically mapped to maven2 names using built-in rules. Custom rules are also supported.
  • Mirroring — artifacts from repositories are mirrored/downloaded/copied to a single location.
  • Selective mirroring — an aggregation can produce an aggregate consisting of a mix of references to repositories and mirrored repositories.
  • Cherry picking — it is possible to pick individual items when the entire content of a repository is not wanted. Detailed picking is supported as well as picking transitive closures like a product, or a category to get everything it contains/requires.
  • Pruning — it is possible to specify mirroring based on version ranges. This can be used to reduce the size of the produced result when historical versions are not needed in the aggregated result.
  • Categorization — categorization of installable units is important to the consumers of the aggregated repository. Categories are often chosen by repository publishers in a fashion that makes sense when looking at a particular repository in isolation, but when they are combined with others it can be very difficult for the user to understand what they relate to. An important task for the constructor of an aggregation is to be able to organize the aggregated material in an easily consumable fashion. The CBI Aggregator has support for category prefixing, category renaming, addition of custom categories, as well as adding and removing features in categories.
  • Validation — the CBI Aggregator validates the aggregated Validation Sets to ensure that everything in them is installable at the same time.
  • Blame Email — when issues are found during validation, the aggregator supports sending emails describing the issue. This is very useful when aggregating the result of many different projects. Advanced features include specifying contacts for parts of the aggregation, which is useful in large multi-layer project structures where issues may be related to the combination of a group of projects rather than one individual project - someone responsible for the aggregation itself should be informed about these cross-project issues. The aggregator supports detailed control over email generation, including handling of mock emails when testing aggregation scripts.

Git repository

Git repository: http://git.eclipse.org/c/cbi/org.eclipse.cbi.p2repo.aggregator.git/

Installation

It is best to start by downloading a fresh release of the Eclipse SDK. The latest version of the aggregator, from its own software repository, typically goes with the most recently released version of the Eclipse SDK or the Eclipse Platform. The aggregator builds do not typically keep up with Milestones. It might work, but "user beware".

The CBI aggregator can either be integrated with your Eclipse SDK or it can be installed as a standalone headless product (i.e. pure command line, without any graphical UI).

The instructions below show the URLs at the time this section was written (that is, not necessarily the current URLs).

Always check the CBI download page for the latest aggregator software repository.

Eclipse SDK installation

  • Start your Eclipse installation and open the Install New Software wizard. You'll find it under the top menu Help.
  • Click the Add... button and enter the URL http://download.eclipse.org/cbi/updates/aggregator/ide/4.7/ in the Location field.
  • Select the Eclipse CBI Aggregator Editor and click Next twice.
  • Accept the Eclipse Public License and click Finish
  • Restart the IDE once the installation is finished.

Headless installation

Installation of the headless version of the Aggregator is similar to any typical headless installation using the p2Director. The following steps focus on the installation of the headless Aggregator feature.

  1. Install CBI Aggregator with the following command:
    • <p2_DIRECTOR> -r <HEADLESS_REPO> -d <INSTALL_DIR> -p CBIProfile -i org.eclipse.cbi.p2repo.cli.product

where

      • <p2_DIRECTOR> is whatever method you use to invoke the p2Director application.
      • -r <HEADLESS_REPO> is the headless p2 update site: Current stable version is: http://download.eclipse.org/cbi/updates/aggregator/headless/4.7/
      • -d <INSTALL_DIR> is the chosen install location of the headless aggregator
      • -p CBIProfile is the name of the p2 profile
      • -i org.eclipse.cbi.p2repo.cli.product is the name of the headless CBI Aggregator
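
For illustration, one possible way to invoke the p2 director is through the launcher of an existing Eclipse installation; the install directory below is a placeholder, and the exact invocation depends on how you normally run the director:

eclipse -nosplash -application org.eclipse.equinox.p2.director -r http://download.eclipse.org/cbi/updates/aggregator/headless/4.7/ -d /opt/cbi-aggregator -p CBIProfile -i org.eclipse.cbi.p2repo.cli.product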

Getting started with standard examples

In the following sections we provide two simple examples that are easy to replicate and should highlight the most important features of the Aggregator. The first example deals with the creation of two variations of a p2 repo. The second shows the Aggregator's Maven support.

The *.aggr model files shown below can be created via File > New > CBI Aggregator > Repository Aggregation.

Aggregating a p2 repo

The first sample aggregation is built around Buckminster and its support for Subclipse. The objective of this aggregated repo is to:

  • provide a "one-stop shop" experience
  • conveniently pull in third-party components that are not hosted at Eclipse
  • provide this repo as an indirection mechanism if required

Mirroring contributions

Note: this section is out of date and has yet to be updated for the CBI aggregator.


(This example aggregation can be downloaded via the CBI project Git and opened in an appropriately set up workbench: [1]).

The background is that Buckminster provides support for Subclipse. In addition to all components hosted at Eclipse, a complete installation will also require Subclipse components from Tigris.org (http://subclipse.tigris.org/update_1.6.x). We want to create a repo that combines these components and makes them accessible from one location. We want to make several platform configurations available.


This example already includes some of the more advanced aggregation features. Stepping through the model view from the top node the following components can be identified:

  1. The top node Aggregation groups all model definitions regarding ValidationSets and Contributions. Looking at the properties section at the bottom we see that:
    • the top node has been provided with a mandatory Label with the value "Indigo + Buckminster for Subclipse"; this is also the label that is shown to users when accessing the aggregated repo via the p2 update manager
    • the Build Root identifies the location to which the artifacts of the aggregated repo will be deployed
  2. The aggregation is defined for three configurations (i.e. os=win32, ws=win32, arch=x86; etc)
    • any number of configurations can be defined
    • during the aggregation process all dependencies of the contributed components will be verified for all provided configurations, unless exceptions are defined (see below)
  3. We have one ValidationSet labeled "main". A ValidationSet constitutes everything that will be validated as one unit by the p2 planner.
  4. The main ValidationSet contains three different contributions.
  5. The first Contribution to the aggregation is labeled "Indigo". This contribution includes binary configuration-specific artifacts which are only available for Linux. If a simple contribution were defined, the aggregation would fail for all non-Linux configurations, and hence would fail as a whole.
    • this requires a definition of Valid Configurations Rules that state exceptions
    • the rules defined for the three components in question essentially state that the verification process for those components should only be performed for Linux-based configurations
    • one Mapped Repository is defined for this contribution (it can have multiple); all that is needed is a user-defined label and the URL of the repository that should be included
    • the result of this definition is that all categories, products, and features from the Indigo p2 repo will be included in the aggregated repo.
  6. The second Contribution is labeled "Subclipse" and deals with the inclusion of bundles provided by the Subclipse project.
    • this contribution represents the simplest example of a contribution
  7. The third Contribution is labeled "Buckminster (latest)". It shows another advanced feature - an Exclusion Rule.
    • remember that the objective of the sample repo is to provide convenient setup of Buckminster with Subclipse support, and since Buckminster's Subclipse and Subversive support are mutually exclusive, we want to exclude the features for Subversive from the aggregated repo to make it easier for the user.
    • this is done using an Exclusion Rule defined for each Installable Unit that should be excluded
  8. A list of all included repos is displayed at the bottom of the model editor view
    • this list allows browsing the contents of all repos
    • this part of the model is not editable

The aggregation can be run by right-clicking any node in the model and selecting Build Aggregation. This example was set up to use a mirroring approach for all contributed repos. Hence, the complete contents of all included repos can be found in the aggregated repo's target location specified under Build Root.

Check the next section for a slightly different approach.

Providing a repo indirection

Mirroring all repo artifacts of your aggregated contributions is a very valuable and important feature when performing aggregation, but there are also many cases where this is not necessary. It is possible to turn off artifact mirroring/copying by changing one property for a defined contribution. Each Mapped Repository has a boolean property called Mirror Artifacts which can be set to false in order to prevent copying all artifacts of the contributed repo to the aggregated repo.

The following buckminster_indigo_redirect.aggr is a variation of the first example with the Mirror Artifacts property set to false for all contributed repos. Running this aggregation will result in a composite repository that provides indirection to the contributed repos.

Creating a Maven-conformant p2 repo

A powerful feature of the Aggregator is the ability to create aggregated repos that can be consumed as Maven repos (i.e. providing the directory/file structure and artifacts required by Maven). An aggregated repository that supports Maven can be consumed both as a p2 and a Maven repo (at the same time). This flexibility is possible thanks to p2's separation of dependency meta-data from the actual location of the referenced artifacts.

In order to create a Maven-conformant aggregate repo all that is required is to set the Maven Result property of the Aggregation to true. The aggregation deployed to the Build Root location will be a Maven repo. In this repo all .source artifacts will be co-located with their main artifact, sharing the same artifactId but with the qualifier -sources, as expected by Maven.

Additionally, the Version Format property of the Aggregation allows selecting among three strategies for version numbers:

Normal
Version numbers are kept unmodified
Strict Maven
Use a '-' (dash) to separate three-part versions from any qualifier, to make the resulting versions parsable by Maven (so 3.12.2.v20161117-1814 will be converted to 3.12.2-v20161117-1814).
In previous versions this strategy was selected via property Strict Maven Versions which is now deprecated in favor of Version Format.
Maven Release
Cut off any qualifier, i.e., produce three-part version numbers to tell Maven that all artifacts are "releases".

If more fine-grained control over version formats is needed, each Maven Mapping can be fine-tuned with a pair of Version Pattern and Version Template, which will only be applied to artifacts matching the corresponding Maven Mapping. In this case, the existing version will be matched against the pattern, and if it matches, the version will be replaced by applying the corresponding template, where $1, $2 ... represent match groups from the pattern. As an example consider the case where the micro digit of a version should be omitted if it is equal to '0', so "4.12.0.v201504281640" will be translated to "4.12":

  • Version Pattern: ([^.]+)\.([^.]+)\.0(?:\..*)?
  • Version Template: $1.$2

If no version pattern is defined for any Maven Mapping, or if a defined version pattern does not match, the aggregator will fall back to the global Version Format strategy.
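
As an illustration only, the following standalone Java sketch (not part of the aggregator API; the class name and hard-coded version are made up) shows how such a Version Pattern and Version Template pair behaves when applied with ordinary Java regular expressions:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class VersionMappingDemo {
    public static void main(String[] args) {
        // Version Pattern and Version Template from the example above
        Pattern versionPattern = Pattern.compile("([^.]+)\\.([^.]+)\\.0(?:\\..*)?");
        String versionTemplate = "$1.$2";

        String version = "4.12.0.v201504281640";
        Matcher matcher = versionPattern.matcher(version);
        if (matcher.matches()) {
            // The pattern matches: apply the template; $1 and $2 refer to the match groups
            System.out.println(matcher.replaceAll(versionTemplate)); // prints 4.12
        } else {
            // No match: the aggregator would fall back to the global Version Format strategy
            System.out.println(version);
        }
    }
}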

A sample aggr file, SDK4Mvn.aggr, shows an easy-to-use example of how to create a Maven-conformant p2 repository (i.e. one that works with Maven or p2). The sample has a short description file that explains some of the details.

Headless support

You will need a headless installation of CBI with the Aggregator feature installed.

Running from the command line

Just type:
cbiAggr aggregate <options>
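
For example, a typical invocation could look like the following (the model path and build root are placeholders; the options are described in the next section):

cbiAggr aggregate --buildModel /path/to/my.aggr --action CLEAN_BUILD --buildRoot /tmp/aggregated-repo --logLevel INFO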

Note: if you install the aggregator into an Eclipse SDK or Platform (rather than a pure headless installation), you can run it from the command line by using

eclipse -application org.eclipse.cbi.p2repo.cli.headless aggregate <options>

For a detailed listing of the available options consult the next section.

Command line options

Option Value Default Description
--buildModel <path to build model> This value is required Appoints the aggregation definition that drives the execution. The value must be a valid path to a file (absolute or relative to current directory).
--action VALIDATE | BUILD | CLEAN | CLEAN_BUILD (default: BUILD) Specifies the type of the execution (VALIDATE was named VERIFY in 0.1):
  • VALIDATE - verifies model validity and resolves dependencies; no artifacts are copied or created
  • BUILD - performs the aggregation and creates the aggregated repository in the target location
  • CLEAN - cleans all traces of previous aggregations in the specified target location
  • CLEAN_BUILD - performs a CLEAN followed by a BUILD
--buildId <string> build-<timestamp in the format yyyyMMddHHmm> Assigns a build identifier to the aggregation. The identifier is used to identify the build in notification emails. Defaults to build-<timestamp>, where <timestamp> is formatted as yyyyMMddHHmm, e.g. build-200911031527
--buildRoot <path to directory> buildRoot declared in the aggregation model Controls the output. Defaults to the build root defined in the aggregation definition. The value must be a valid path to a directory (absolute or relative to current folder). If the desired directory structure does not exist then it is created.
--agentLocation <path to directory> <buildRoot>/p2 Controls the location of the p2 agent. When specified outside of the build root, the agent will not be cleaned out by the aggregator and thus retains its cache.
--production N/A N/A Indicates that the build is running in real production. That means that no mock emails will be sent. Instead, the contacts listed for each contribution will get emails when things go wrong.

CAUTION: Use this option only if you are absolutely sure that you know what you are doing, especially when using models "borrowed" from other production builds without changing the emails first.

--emailFrom <email> Address of build master Becomes the sender of the emails sent from the aggregator.
--emailFromName <name> If --emailFrom is set then this option sets the real name of the email sender.
--mockEmailTo <email> no value Becomes the receiver of the mock-emails sent from the aggregator. If not set, no mock emails are sent.
--mockEmailCC <email> no value Becomes the CC receiver of the mock-emails sent from the aggregator. If not set, no mock CC address is used.
--logURL <url> N/A The URL that will be pasted into the emails. Should normally point to a public URL for the aggregator's output log so that the receiver can browse the log for details on failures.
--logLevel DEBUG | INFO | WARNING | ERROR (default: INFO) Controls the verbosity of the console trace output. Defaults to global settings.
--eclipseLogLevel DEBUG | INFO | WARNING | ERROR (default: INFO) Controls the verbosity of the eclipse log trace output. Defaults to global settings.
--stacktrace --- no value Display stack trace on uncaught error.
--subjectPrefix <string>  ? The prefix to use for the subject when sending emails. Defaults to the label defined in the aggregation definition. The subject is formatted as: "[<subjectPrefix>] Failed for build <buildId>"
--smtpHost <host name> localhost The SMTP host to talk to when sending emails. Defaults to "localhost".
--smtpPort <port number> 25 The SMTP port number to use when talking to the SMTP host. Default is 25.
--packedStrategy COPY | VERIFY | UNPACK_AS_SIBLING | UNPACK | SKIP (default: value from the model) Deprecated (Use only with on-the-fly transformation from deprecated build models)

Controls how packed artifacts found in the source repository are mirrored. Defaults to the setting in the aggregation definition.

--trustedContributions <comma separated list> no value Deprecated (Use only with on-the-fly transformation from deprecated build models)

A comma separated list of contributions with repositories that will be referenced directly (through a composite repository) rather than mirrored into the final repository (even if the repository is set to mirror artifacts by default).

Note: this option is mutually exclusive with option --mavenResult (see below)

--validationContributions <comma separated list> no value Deprecated (Use only with on-the-fly transformation from deprecated build models)

A comma separated list of contributions with repositories that will be used for aggregation validation only rather than mirrored or referenced into the final repository.

--mavenResult --- no value Deprecated (Use only with on-the-fly transformation from deprecated build models)

Tells the aggregator to generate a hybrid repository that is compatible with p2 and maven2. Note: this option is mutually exclusive with option --trustedContributions (see above)

--mirrorReferences --- no value Mirror meta-data repository references from the contributed repositories. Note the current recommendation is to not have update sites in feature definitions, but instead in "products", that is, to allow adopters or consumers to specify where to get updates from and what "extras" to offer. See bug 358415. But this option can still be useful in some situations.
--referenceIncludePattern <regexp> no value Include only those references that match the given regular expression pattern.
--referenceExcludePattern <regexp> no value Exclude all references that match the given regular expression pattern. An exclusion takes precedence over an inclusion in case both patterns match a reference.

Aggregation model components and specific actions

This section provides an in-depth description and reference of the Aggregation model, listing all model components, properties and available actions.

Global actions

The following aggregator-specific actions are available via the context menu that can be invoked on any node in the Aggregation model editor:

  • Clean Aggregation - cleans all traces of previous aggregations in the specified target location (Build Root)
  • Validate Aggregation - verifies model validity and resolves dependencies; no artifacts are copied or created
  • Build Aggregation - performs the aggregation and creates the aggregated repository in the target location (Build Root)
  • Clean then Build Aggregation - performs a Clean followed by a Build

Aggregation

The root node of any aggregation model is the Aggregation node. It specifies a number of global properties including the Build Root (the target location of the aggregated repository) as well as the repo structure (maven-conformant or classic p2 setup). There are several child components, some of which can be referenced in other parts of the model: Configuration, Contact, Validation Set, Custom Category, Maven Mapping.

Property Value(s) Default Value Comment
Buildmaster <Contact>? - Specifies an optional build master that will be notified of progress and problems if sending of mail is enabled. This is a reference to any Contact that has been added to this Aggregation model.
Build Root <urn> - This is a required property specifying the target location of the aggregated repository.
Description <string> - An optional description of the aggregated repository that will be shown when accessing the aggregated repository via the p2 update manager.
Label <string> - A required label that will be used for the aggregated repository. This will be shown as the repo label when accessing the aggregated repository via the p2 update manager.
Maven Mappings <Maven Mapping>* - References Maven Mappings added as children to the Aggregation model root node. See Maven Mapping for details.
Maven Result true | false (default: false) Controls the output structure of the aggregated repo. If true, the aggregated repo will be Maven-conformant. Both the structure and meta-data of the aggregated repository will follow the conventions required by Maven.

NOTE that due to the flexibility of p2 (separation of meta-data about dependencies and location of artifacts) the aggregated repo will at the same time also function as a valid p2 repository.

Packed Strategy Copy | Verify | Unpack as Sibling | Unpack | Skip (default: Copy) This property controls how packed artifacts found in contributed repositories are handled when building the Aggregation:
  • Copy - if the source contains packed artifacts, copy and store them verbatim. No further action
  • Verify - same as copy but unpack the artifact and then discard the result
  • Unpack as Sibling - same as copy but unpack the artifact and store the result as a sibling
  • Unpack - use the packed artifact for data transfer but store it in unpacked form
  • Skip - do not consider packed artifacts. This option will require unpacked artifacts in the source
Sendmail true | false (default: false) Controls whether or not emails are sent when errors are detected during the aggregation process. A value of false disables sending of emails (including mock emails).
Type S | I | N | M | C | R (default: S) Indicates the Aggregation type. This is an annotation merely for the benefit of the build master. It is not visible in the resulting repo.
  • S - stable
  • I - integration
  • N - nightly
  • M - milestone
  • C - continuous
  • R - release

Configuration

An Aggregation may have one or more Configuration definitions. The aggregated repo will be verified for all added configurations. If dependency resolution for any of the given configurations fails, the aggregation as a whole fails. It is however possible to specify exceptions for individual Contributions.

A Configuration is a combination of the following properties:

Property Value(s) Default Value Comment
Architecture X86 | PPC | X86_64 | IA64 | IA64_32 | Sparc | PPC64 | S390 | S390x (default: X86) Specifies the architecture for which this configuration should be verified.
Enabled true | false (default: true) Disables (false) or enables (true) the configuration for the aggregation process.
Operating System Win32 | Linux | MacOSX | AIX | HPUX | Solaris | QNX (default: Win32) Specifies the operating system for which this configuration should be verified.
Window System Win32 | GTK | Carbon | Cocoa | Motif | Photon (default: Win32) Specifies the windowing system for which this configuration should be verified.

Validation Set

The aggregator uses the p2 planner tool to determine what needs to be copied. The validation set determines the scope of one such planning session per valid configuration. This guarantees that everything contained within a validation set will be installable into the same target. Sometimes it is desirable to have more than one version of a feature in a repository even though multiple versions cannot be installed. This can be done using one validation set for each version.

A validation set can extend other validation sets and thereby provides a convenient way of sharing common content that multiple validation sets make use of. An extended validation set will not be validated on its own; it will only be validated as part of the sets that extend it.

A validation set can have two types of children: Contribution and Validation Repository.

Versions prior to 0.2.0 did not have the validation set node. Older aggr files will therefore be converted and given one validation set called main.

Property Value(s) Default Value Comment
Description <string> - An optional description of the validation set. For documentation purposes only.
Enabled true | false (default: true) Disables (false) or enables (true) the validation set to be considered in the aggregation process. Note that this may lead to missing dependencies and hence verification errors.
Extends <Validation Set>* - Zero or more references to Validation Sets defined in the given Aggregation model that this validation set extends.
Label <string> - Mandatory label for this validation set.

Contribution

Contributions are the key element of any aggregation. Contributions specify which repositories (or parts thereof (category, feature, product, IU)) to include, and the constraints controlling their inclusion in the aggregated repository. A contribution definition may consist of several Mapped Repository and Maven Mapping components.

Property Value(s) Default Value Comment
Contacts <Contact>* - Zero or more references to Contacts defined in the given Aggregation model. The referenced contacts will be notified if aggregation errors related to the contribution as a whole occur.
Description <string> - Optional description of the contribution. This is for documentation purposes only and will not end up in the aggregated repository.
Enabled true | false (default: true) Disables (false) or enables (true) the contribution to be considered in the aggregation process. Note that this may lead to missing dependencies and hence verification errors.
Label <string> - Mandatory label for this contribution.
Maven Mappings <Maven Mapping>* - See Maven Mapping for details ...


Mapped Repository

A Contribution may define several Mapped Repositories defining the actual content of the contribution. The Aggregator provides fine-grained control over the contribution from each Mapped Repository through references to Products, Bundles, Features, Categories, Exclusion Rules, Valid Configuration Rules.

Property Value(s) Default Value Comment
Category Prefix <string> - A prefix added to the label of this repository when displayed in the p2 update manager. In the absence of custom categories this allows a useful grouping of repositories in an aggregated repository to be specified.
Description <string> - Description of the Mapped Repository. For documentation purposes only.
Enabled true | false (default: true) Controls whether a Mapped Repository is considered as part of the Contribution. Setting this property to false excludes the repository from the verification and aggregation process.
Location <URL> - This required property specifies the location of the repository that should be added to the enclosing Contribution.
Mirror Artifacts true | false (default: true) Controls whether the contents (artifacts) of the specified repository will be copied to the target location of the aggregated repository.
Nature <nature> p2 This specifies the nature of the repository, i.e. which loader should be used for loading the repository. The number of available repository loaders depends on installed extensions. By default, p2 and maven2 loaders are present in the installation.


Product

Defining Product components allows the addition of individual Eclipse products to the aggregation to be specified (as opposed to mapping the complete contents of a given Mapped Repository). This naturally requires that products are present in the repositories being mapped.

Property Value(s) Default Value Comment
Description <description> - Optional description of the mapping.
Enabled true | false (default: true) Controls whether a Product is considered as part of the Contribution. Setting this property to false excludes the product from the verification and aggregation process.
Name <product IU's id> - The id of the product to be included in the aggregation. A drop-down of available names is provided.
Valid Configurations <Configuration>* - References to zero or more configurations for which this product should be verified. If no references are given the product is verified for all Configurations defined for the aggregation.
Version Range <version range> 0.0.0 A version range that specifies acceptable versions for this installable unit. A pop-up editor is available.

Category

Defining Category components allows the addition of the content in specific categories (provided by the contributed repository) rather than the complete contents of a given Mapped Repository.

Property Value(s) Default Value Comment
Description <description> no value Optional description of the mapping.
Enabled true | false (default: true) Controls whether a repository Category is considered as part of the Contribution. Setting this property to false excludes the category from the verification and aggregation process.
Name <category IU id> - The id of the category to be included in the aggregation. A drop-down of available names is provided.
Label Override <string> - New category name used instead of the default name.
Valid Configurations <Configuration>* - References to zero or more configurations for which the category contents should be verified. If no references are given the category is verified for all Configurations defined for the aggregation.
Version Range <version range> 0.0.0 A version range that specifies acceptable versions for this installable unit. A pop-up editor is available.

Bundle

Defining Bundle components allows addition of individual Eclipse bundles to the aggregation to be specified (rather than the complete contents of a given Mapped Repository).

Property Value(s) Default Value Comment
Description <description> no value Optional description of the mapping.
Enabled true | false (default: true) Controls whether a referenced bundle is considered as part of the Contribution. Setting this property to false excludes the bundle from the verification and aggregation process.
Name <bundle IU id> - The id of the bundle to be included in the aggregation. A drop-down of available names is provided.
Valid Configurations <Configuration>* - References to zero or more configurations for which the referenced bundle has to be verified. If no references are given the bundle has to be verified for all Configurations defined for the aggregation.
Version Range <version range> 0.0.0 A version range that specifies acceptable versions for this installable unit. A pop-up editor is available.

Feature

Defining Feature components allows the addition of individual Eclipse features to the aggregation to be specified (rather than the complete contents of a given Mapped Repository). The features to include must be present in the mapped repository.

Furthermore, this component provides the means to group features implicitly into Custom Categories.

Property Value(s) Default Value Comment
Categories <Custom Category>* - Optionally references the Custom Category definitions into which the feature should be placed upon aggregation. The relationship to the custom category is bi-directional so adding the feature to a custom category will update this property automatically in the Custom Category definition, and vice versa.
Description <description> no value Optional description of the mapping.
Enabled true | false (default: true) Controls whether a referenced feature is considered as part of the Contribution. Setting this property to false excludes the feature from the verification and aggregation process.
Name <feature IU id> - The id of the feature to be included in the aggregation. A drop-down of available names is provided.
Valid Configurations <Configuration>* - References to zero or more configurations for which the referenced feature should be verified. If no references are given the feature is verified for all Configurations defined for the aggregation.
Version Range <version range> 0.0.0 A version range that specifies acceptable versions for this installable unit. A pop-up editor is available.

Exclusion Rule

Exclusion Rules provide an alternative way to control the content of the aggregated repository. An exclusion rule may specify the exclusion of any bundle, feature or product. The excluded IU will not be considered in the aggregation and verification process. Each exclusion rule can only reference one IU id.

Property Value(s) Default Value Comment
Description <string> - Description for documentation purposes.
Name <IU id> - The id of the installable unit to be excluded from the aggregation. A drop-down of available names is provided.
Version Range <version range> 0.0.0 A version range that specifies versions of this IU to be excluded. A pop-up editor is available.

Valid Configuration Rule

By default all contributed contents of a Mapped Repository will be verified for all Configurations defined for the aggregation. A Valid Configuration Rule provides more control over validation. When using a Valid Configuration Rule, the referenced IUs (product, feature, or bundle) will only be verified and aggregated for the configurations specified in the rule. The rule only applies if the whole repository is mapped (i.e. when no explicit features, products, bundles or categories are mapped, regardless of whether they are enabled or disabled).

Property Value(s) Default Value Comment
Description <string> - Description for documentation purposes.
Name <IU id> - The id of the IU to be included in the verification process. A drop-down of available names is provided.
Valid Configurations <Configuration>* - References to one or more configurations for which the referenced IU should be verified. This implicitly excludes verification and aggregation for all other Configurations defined as part of the aggregation model.
Version Range <version range> 0.0.0 A version range that specifies versions of this installable unit for which the rule should apply. A pop-up editor is available.

Maven Mapping (Contribution)

See Maven Mapping for a detailed description and list of properties.

Contact

Defines a reusable contact element which can be referenced in other parts of the model and may be used to send notifications about the aggregation process.

Property Value(s) Default Value Comment
Email <email> - The email to be used when notifying the contact.
Name <string> - Full name of the contact. May be displayed as label when referenced in other parts of the model.

Custom Category

A Custom Category provides a grouping mechanism for features in the aggregated repository. A custom category can be referenced by Features defined for inclusion from a Mapped Repository. The relationship between a Custom Category and a Feature is bi-directional. Thus, adding the feature to a custom category will update this property automatically in the Feature definition, and vice versa.

Property Value(s) Default Value Comment
Description <string> - Description of the category as displayed when accessing the aggregated repository.
Features <Feature>* - Features included in this category by reference.
Identifier <string> - Category id. Required Eclipse category property.
Label <string> - Label displayed for the category when accessing the aggregated repository via the p2 update manager.

Validation Repository

A Validation Repository declares a repository that should be used when validating dependencies for the aggregation but whose contents should not be included in the aggregated repository. It supports cases where the objective is to create a repository that is not self-sufficient, but rather a complement to something else (a typical use case is an aggregation of everything produced by an organization, validated against the main Eclipse repository).

Property Value(s) Default Value Comment
Enabled true | false (default: true) Controls whether a Validation Repository is considered during the verification and aggregation process. Setting this property to false excludes the repository from the verification and aggregation process.
Location <URL> - Specifies the location of the repository.
Nature <nature> p2 This specifies the nature of the repository, i.e. which loader should be used for loading the repository. The number of available repository loaders depends on installed extensions. By default, p2 and maven2 loaders are present in the installation.

Maven Mapping

The Aggregator supports the creation of Maven-conformant repositories. A Maven repository requires a structure and use of naming conventions that may have to be achieved by a transformation of the Bundle-SymbolicName (BSN) when working with Eclipse bundles. There is a default translation from BSN naming standard to Maven naming. If that is not satisfactory, custom transformations are supported by the definition of one or more Maven Mappings which can be defined at the Aggregator and the Contribution level.

This only applies when the Maven Result property of the Aggregator model is set to true. In that case all defined Maven Mappings are applied in the order in which they appear in the model starting from the most specific to the most generic. That means for each artifact that a Contribution adds to the aggregated repository:

  1. first Maven Mappings defined as children of a Contribution are applied in the order in which they appear as children of the parent Contribution node
  2. second Maven Mappings defined as children of the Aggregator model are applied in the order in which they appear as children of the parent Aggregator node
  3. finally the default Maven Mapping is applied

The most generic mapping is a default pattern that is applied whenever a Maven name is created. It does not need to be added explicitly to the model. A mapping is specified using a regular expression that is applied to each BSN, together with two replacements: one for the maven groupId and one for the maven artifactId. The group and artifact ids have an effect on the resulting Maven repo structure. The default pattern is:

^([^.]+(?:\.[^.]+(?:\.[^.]+)?)?)(?:\.[^.]+)*$

The default maven mapping uses the replacement $1 for groupId and $0 for artifactId. Hence, when applying the default maven mapping regular expression to a BSN, up to 3 segments (with dots as segment delimiters) are taken as the group id, and the entire BSN is taken as the artifact id. If this is applied to something like org.eclipse.cbi.p2repo.aggregator the groupId would be org.eclipse.cbi and the artifactId org.eclipse.cbi.p2repo.aggregator. The resulting Maven repo will have the folder structure:

org
   |
   eclipse
      |
      cbi 
         |
         org.eclipse.cbi.p2repo.aggregator
            |
            <folder name after version>
               |
               org.eclipse.cbi.p2repo.aggregator-<version>.jar
               ...

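As a rough illustration (a standalone sketch, not the aggregator's actual implementation; the class name is made up), the default mapping corresponds to applying the pattern like this:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class DefaultMavenMappingDemo {
    // Default name pattern: up to three dot-separated segments form the group id
    private static final Pattern NAME_PATTERN =
        Pattern.compile("^([^.]+(?:\\.[^.]+(?:\\.[^.]+)?)?)(?:\\.[^.]+)*$");

    public static void main(String[] args) {
        String bsn = "org.eclipse.cbi.p2repo.aggregator";
        Matcher matcher = NAME_PATTERN.matcher(bsn);
        if (matcher.matches()) {
            String groupId = matcher.group(1);    // $1 -> org.eclipse.cbi
            String artifactId = matcher.group(0); // $0 -> the entire BSN
            System.out.println(groupId + ":" + artifactId);
        }
    }
}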

Property Value(s) Default Value Comment
Name Pattern regex - A regular expression which is applied to the Bundle-SymbolicName (BSN). Pattern groups may be referenced in the replacement properties.
Group Id <string> (may contain pattern group references (i.e. $1, ...)) - Group Id applied in a maven mapping structure (groupId/artifactId). The Group Id replacement may contain pattern group references (i.e. $1, ...).
Artifact Id <string> (may contain pattern group references (i.e. $1, ...)) - Artifact Id applied in a maven mapping structure (groupId/artifactId). The Artifact Id replacement may contain pattern group references (i.e. $1, ...).
Version Pattern regex - An optional regular expression which is applied to the Version. Pattern groups may be referenced in the Version Template property.
Version Template <string> (may contain pattern group references (i.e. $1, ...)) - If Version Pattern matched the given Version, then this Version Template is used to construct the version as seen in Maven, useful in particular to shorten versions ending in ".0" to 2-part versions etc. The Version Template may contain pattern group references (i.e. $1, ...).

If no Version Pattern is given or if the Version Pattern doesn't match, then the global strategy from Version Format is applied.

Legacy format support

[Note: this section is not completely accurate with the current version of the CBI aggregator. It is certainly desired to offer this level of support, but that is not currently the case.]

As time passes, the aggregator model will undergo changes. All such changes will result in a new XML schema describing the aggregator model. The aggregator will always detect the schema in use by a model and, if needed, transform it into the current schema.

The transformation is fully automated when you run in headless mode but you will see a message like The build model file is obsolete, using the default transformation indicating that a transformation took place. The new model only exists in memory while the aggregation runs and is never stored on disk.

When you are opening an old model in the IDE, a transformation wizard will be presented where you are given a chance to specify a new file name for the transformed model. Please note that if you are using detached contributions (contributions that live in files of their own), and you want to keep it that way, then you should not change the name. If the name is changed, the aggregator has no choice but to create a new file that embeds all the contributions. This is because a detached contribution is referencing the aggregation file by name.

The behavior of the transformation wizard is:

  • If the name is unchanged, also retain the detached contributions
  • If a new name is given, embed all contributions in the new file

If you want a new name and still retain the detached contributions then you have two options

Either:

  1. Do the transformation using the same name
  2. Manually rename the main aggr file
  3. Manually search/replace and change the name in all your contributions

Or:

  1. Do the transformation using a different name
  2. Remove all your detached contributions
  3. Detach all contributions again

What else should be documented

  1. version editor (version formats...)
  2. how enable/disable on the context menu works
  3. drag&drop support
  4. 'available versions' tree
  5. context menu actions (mapping IUs from repository view, searching for IUs from 'available version' tree...)

Questions

  • How is blame-mail handled when running in the UI? Are the options "production" etc. available then?

When launched from the IDE, only the --buildModel and --action options are set (all other options have default values). Perhaps it's worth adding a launching dialog which would enable setting all options that are available in headless mode.

  • How do you know what the "log output url" is? How can it be set?

You can, for example, redirect a copy of the console output to a publicly available resource (a text file served by a http server). The public URL should be passed in the --logURL option so that a link to it would appear in the informational email.

  • Some configurations seem to be missing - or is the list open ended? (Sparc, Motif, etc.) - I assume the user can put anything there?

Only listed configurations are supported (they are a part of the model). Adding an option means changing the model and creating a new aggregator build.

  • It seems that it is possible to load maven repositories. I suppose the result of aggregation is a valid p2 repository (or a combination of p2/maven)??

Yes, it is possible to load p2 and maven2 repositories by default (if someone adds a custom loader then he can load any type of repository). The output is always p2 compatible, optionally combined with maven2 (with no extensibility - at least now). However, if a maven2 repo is loaded and the result is also maven2 compatible, it is not identical to the original (not all attributes loaded from maven are stored in the p2 model).

Information for developers of CBI aggregator

This section is to list "how to" information for developers (contributors and committers) of the code in the CBI Aggregator, not simply those using the CBI aggregator to combine p2 repositories.

Cloning the repo

The typical (Eclipse) methods apply:

git clone ssh://git.eclipse.org:29418/cbi/org.eclipse.cbi.p2repo.aggregator

or

git clone ssh://<userid>@git.eclipse.org:29418/cbi/org.eclipse.cbi.p2repo.aggregator

Building or Running locally

If desired, simply running mvn clean verify in the root of the repo will do a generic local build of the project. However, to do a clean build that is most like the production build (after some possible edits for local customizing), one may take advantage of the script file named runLocalBuild.sh in

.../org.eclipse.cbi.p2repo.releng.parent/localScripts

Similarly, in the same directory is a CBI Aggregator.launch file that will launch the CBI aggregator from Eclipse for running or debugging.

Production Build Jobs

The main build jobs for CBI aggregator are listed in a view at

https://ci.eclipse.org/cbi/view/p2RepoRelated/

The following list simply duplicates the documentation that describes the main jobs there; they are listed in roughly the order they are typically run in.

Job to run a build of CBI aggregator based on a Gerrit patch in master branch.

Job to run a build of CBI aggregator based on tip of master branch. This job is run manually when it is considered the right time to have a new "I-build" deployed to the two repository sites on the download server. Note: the "clean" in the name refers to only the "tmp" directory and the "local Maven Repository". If it is desired to literally wipe everything, then that should be done via the Jenkins "wipe workspace" function.

(Not always needed) this job is simply to remove some no longer needed repositories at the update sites for cbi.p2repo.aggregator.

This manual-only job creates the composites at .../cbi/updates/aggregator/ide/4.8 and .../cbi/updates/aggregator/headless/4.8.

We have a separate job for composites so that an I-build can be tested for a while before being added to the composite, which everyone will get pseudo-automatically.

The composite XML files are literally re-created each time, based on the most recent three I-builds (so "bad" I-builds should be removed before running this job!).

This is a "script only" job, but the repository was left in source control, since technically the script here could be run from its git version in ${WORKSPACE}/org.eclipse.cbi.p2repo.aggregator/org.eclipse.cbi.p2repo.releng.parent/buildScripts/writeComposites.sh

Standard XML Formatting

For this project, if a file that contains XML must be formatted, it must be formatted with the Eclipse XML Editor (this does NOT include EMF auto-created files!). So, first, naturally, the Eclipse XML Editor must be installed into your development environment.

Then, the default formatting preferences must be changed as follows:

  • Line width: 132
  • Split multiple attributes each on a new line (checked)
  • Preserve whitespace in tags with PCDATA content: true (checked)
  • Join lines: false (unchecked)
  • Insert whitespace before closing empty end-tags: true (checked)
  • Indent using spaces: true (checked)
  • Indentation size: 2

Below is an image of what the XML Editor formatting preference page ends up looking like: XmlEditorFormatPreferences.png

Updating the .target file for this project

When to update

Typically the .target file is updated after every release of the Eclipse Platform, since after every such release there should be a new releasable build that is deployed and added to the composite update site. Occasionally, such as when there is a major change to p2 during a development cycle, the project needs to be branched, with the maintenance branch based purely on released prerequisites and the master branch based on the unreleased version of the Platform and its prerequisites.

How to update

Principles

Ultimately, the .target file should be treated as a ".txt" file so that the comments, spacing, and order of elements are not lost. (But there are ways around doing this literally, as described below).

Also, by convention -- for this project -- the repository location URLs are listed as specifically as possible.

Similarly, the versions of features and bundles are listed as specifically as possible.

Before committing changes, at a minimum be sure to go to preferences and select Target Platform and "Reload" and/or "Edit" to make sure that all URLs and versions are found. This is simply an easy way to spot typos, etc.

Manual Method

The manual method is the hardest way to update the .target file, but is often the safest way to be sure to get prerequisites which "match" the Eclipse Platform. When using the manual method, the .target file is edited with an ordinary text editor and copy/paste is typically used to update the required values (when needed -- not all prerequisites will need to be updated each time, but it is best to check each one to be sure).

The URLs are updated based on the specific URLs of the released prerequisites. For example, the Oxygen.3 release of the Eclipse Platform (4.7.3) has a download page that lists both a general software site and a specific software site. For builds (and hence our .target file) we use the most specific URL available. (For updating our development installs, the general software site is typically fine).

To find out which versions are provided in that most recent site, one must simply go and look -- for example by logging in with a shell account, navigating to the directory, and using something like ls -l to list the features or bundles of interest, then copy/pasting the most recent version number into the .target file.

Or, one may also navigate to the web page of the software site and use the "Show Directory Contents" link to see features and bundles available along with their specific versions.

Tools Method

It is possible to use tools such as the Target Editor to simplify some of the editing, by using some pre- and post-steps that allow comments to be maintained, etc. It also requires that the Eclipse XML Editor be installed and the formatting set to the project's Standard XML Formatting.

  1. First, copy the current (old) .target file in org.eclipse.cbi.p2repo.releng.target to something like 'org.eclipse.cbi.p2repo.aggregator.prereqs.target.save.xml'.
  2. Then open the .target file with the Target Editor, update the URLs as needed, then press the reload and update buttons as needed.
  3. Save that .target file, and notice the original formatting (and comments) are lost.
  4. Now rename or copy the .target file to something that ends with ".xml", for example, 'org.eclipse.cbi.p2repo.aggregator.prereqs.target.xml' and then format it with the XML Editor.
  5. Now that you have two xml files, one old, one new, you can do a "compare with each other", and it should be a simple task to copy comments and formatting from one to the other.
  6. Once the comments and formatting have been restored, you can rename the ".xml file" back to simply the original ".target" file name.
Important: Selecting the right bundle versions
The P2 URL for Orbit should point to the latest available release of the current branch, e.g. the Oxygen.3a Orbit release for the Oxygen branch. While the latest Orbit release should be used, the bundle versions of dependencies from Orbit should not necessarily be the latest version. Instead, the bundle versions of dependencies (e.g. SLF4J) should match the bundle versions that the Platform depends upon.
