


EMF DiffMerge/Co-Evolution/Programmatic Usage

Principles

Co-Evolution supports implementing 'bridges', i.e., incremental model/data transformations of arbitrary complexity. Its primary goal is to enable engineers of different disciplines, working on the same system in a Model-Based Systems Engineering approach, to discuss and synchronize their (heterogeneous) models during co-engineering sessions within iterative processes.

Step 1: Define what must be synchronized

Co-Evolution classically assumes the existence of two data sets that need to be synchronized. These are arbitrary sets of data elements, for example: an EMF model, a model in some other arbitrary but known format, a subset of a model, or a subset of the union of models. One data set is given the role of source and the other one the role of target. Deviations or inconsistencies between them are expressed as differences on the target side between: a) the target data set as it is now, and b) the target data set as it should be according to the source.
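To make the notion of "differences on the target side" concrete, here is a minimal, hypothetical sketch in plain Java (not the Co-Evolution API): data elements are keyed by stable identifiers, and deviations are computed as the gap between the target data set as it is and the target data set as it should be according to the source.

```java
import java.util.*;

public class TargetDiffSketch {

    // Hypothetical helper: compares the actual target data set with the
    // expected one (both keyed by stable element identifiers) and reports
    // the deviations, expressed on the target side.
    static Map<String, String> diff(Map<String, String> actual, Map<String, String> expected) {
        Map<String, String> deviations = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : expected.entrySet()) {
            String current = actual.get(e.getKey());
            if (!Objects.equals(current, e.getValue())) {
                deviations.put(e.getKey(),
                    "expected " + e.getValue() + " but found " + current);
            }
        }
        return deviations;
    }

    public static void main(String[] args) {
        Map<String, String> targetAsIs = Map.of("e1", "Pump", "e2", "Valve");
        Map<String, String> targetAsItShouldBe = Map.of("e1", "Pump", "e2", "Valve v2");
        // Only e2 deviates, so only e2 is reported
        System.out.println(diff(targetAsIs, targetAsItShouldBe));
    }
}
```

The element names and the string-valued elements are illustrative only; in a real bridge the elements would be EMF model elements and the comparison would be delegated to EMF Diff/Merge.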

The data sets must comply with the following assumptions.

  • Their data elements can be identified uniquely and consistently through their life cycle. In other words, if an element is identified as E within a data set then there is no ambiguity as to which element of the data set is E. And even though the data set evolves, the identification of the element as E remains.
  • They can be read in Java and, for the target side, written in Java, all from the same Java Virtual Machine.
  • They are disjoint: no data element belongs to both data sets.
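The first assumption can be illustrated with a small, hypothetical sketch in plain Java (not part of the Co-Evolution API): a data set maps stable identifiers to elements, and an element's identity survives changes to its state.

```java
import java.util.*;

// Minimal sketch of the identification assumption: each element has a
// unique, stable identifier within its data set, so "the element known
// as E" is unambiguous even as the data set evolves.
public class DataSet {
    private final Map<String, Object> elementsById = new HashMap<>();

    public void add(String id, Object element) {
        if (elementsById.containsKey(id))
            throw new IllegalArgumentException("Identifier already in use: " + id);
        elementsById.put(id, element);
    }

    // Update preserves identity: the element identified as 'id' changes
    // state but remains the same element of the data set.
    public void update(String id, Object newState) {
        if (!elementsById.containsKey(id))
            throw new NoSuchElementException("Unknown element: " + id);
        elementsById.put(id, newState);
    }

    public Object get(String id) {
        return elementsById.get(id);
    }
}
```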

If a data set consists solely of EMF model elements, it naturally maps to an EMF Diff/Merge model scope. This only matters for the target side, though: on the source side, any Java Object that gives access to the data elements will do, as long as the assumptions above hold.

Step 2: Define how elements are traced

The synchronization mechanism requires a persistent trace that relies on the identification mechanism mentioned above. Although different implementations can be used, a default one is provided that supports arbitrarily complex mappings of EMF model elements that have identifiers (ID attributes or XML/XMI IDs assigned via EMF Resource-based persistence).

The most reliable way to ensure traceability of EMF model elements is to use meta-models that define ID attributes (see org.eclipse.emf.ecore.EAttribute#isID() and org.eclipse.emf.ecore.util.EcoreUtil#generateUUID()). In this situation, the default behavior is probably appropriate, so nothing more needs to be done.
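The principle can be sketched in plain Java, using java.util.UUID in place of org.eclipse.emf.ecore.util.EcoreUtil#generateUUID() so the example stands alone (the class and attribute names are hypothetical): an ID is assigned once at creation and never changes, so the element remains traceable across its whole life cycle.

```java
import java.util.UUID;

// Sketch of ID-based traceability. In an EMF meta-model this role would
// be played by an ID attribute (EAttribute#isID() == true) populated via
// EcoreUtil#generateUUID(); here a plain java.util.UUID stands in for it.
public class IdentifiedElement {
    private final String id = UUID.randomUUID().toString(); // assigned once, immutable
    private String name;

    public IdentifiedElement(String name) {
        this.name = name;
    }

    public String getId() {
        return id;
    }

    // State changes; identity does not.
    public void setName(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }
}
```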

Step 3: Define a mapping (model/data transformation)

A model transformation must be defined that takes as input a source data set and produces a corresponding target data set and a trace. It should be a simple one-shot (non-incremental) transformation.

It can be implemented with any Java-based technology, as long as that technology conforms to, or adapts to, the Co-Evolution APIs (see org.eclipse.emf.diffmerge.bridge.api.IBridge). By default, Co-Evolution includes a Mapping module that allows implementing transformations as sets of queries and rules without having to manage traces manually (see org.eclipse.emf.diffmerge.bridge.mapping.impl.MappingBridge). Co-Evolution also integrates the Transposer transformation framework from Polarsys Kitalpha. Other transformation technologies, such as Xtend, have also been experimented with.
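As an illustrative analogy only (this is not the actual org.eclipse.emf.diffmerge.bridge.mapping API, and all names below are hypothetical), the query/rule decomposition can be sketched in plain Java: queries select source elements, rules produce corresponding target elements, and the bridge records the source-to-target trace on the rule author's behalf.

```java
import java.util.*;
import java.util.function.Function;
import java.util.function.Predicate;

// Hypothetical mini-bridge mirroring the query/rule idea: the rule
// author writes selection and creation logic; the trace is maintained
// automatically by the bridge itself.
public class MiniMappingBridge<S, T> {
    private final Map<S, T> trace = new LinkedHashMap<>();
    private final List<Map.Entry<Predicate<S>, Function<S, T>>> rules = new ArrayList<>();

    // A query (Predicate) selects source elements; a rule (Function)
    // turns each selected element into a target element.
    public void addRule(Predicate<S> query, Function<S, T> rule) {
        rules.add(Map.entry(query, rule));
    }

    // One-shot execution: builds the target data set and the trace.
    public List<T> execute(Collection<S> source) {
        List<T> target = new ArrayList<>();
        for (S s : source)
            for (Map.Entry<Predicate<S>, Function<S, T>> r : rules)
                if (r.getKey().test(s)) {
                    T t = r.getValue().apply(s);
                    trace.put(s, t); // recorded by the bridge, not the rule
                    target.add(t);
                }
        return target;
    }

    public Map<S, T> getTrace() {
        return Collections.unmodifiableMap(trace);
    }
}
```

The real Mapping module is considerably richer (ordered rule application, structured data paths, incremental execution), but the division of labor is the same: transformation logic in queries and rules, trace management in the framework.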

Step 4: Make it incremental
