From Eclipsepedia


The objective of this page is to offer several model-to-model transformation benchmarks. Currently, any type of benchmark (e.g., benchmarks testing performance or functionality) may be contributed.


Each benchmark consists of:

  • A definition of the transformation:
    • Source and target metamodels.
    • Source and target samples.
    • A language-neutral specification of the transformation.
  • Implementations of the transformation using several languages, with execution time measures (for performance evaluation).

Synthetic Representation

The following table shows how information about the benchmarks could be represented. The first column contains the benchmark name and links to the language-neutral benchmark definition. This definition should be given in a wiki page such as M2M/Benchmarks/ABenchmark, and should point to the source and target metamodels as well as the source and target samples.

Each subsequent column corresponds to a language (e.g., ATL, QVT Operational Mapping, QVT Relations). For each benchmark, several implementations in the corresponding language may be given (e.g., v1, v2), each applying different optimizations. A language column may itself be split into sub-columns when the language has several implementations (e.g., different virtual machines). For each version and implementation, an execution time (e.g., in minutes) may be given along with relevant information about the machine (e.g., CPU, RAM).

Execution times on different architectures may be listed in a per-benchmark page. The measurements presented in the table on this page, however, should all be performed on a single architecture so that they can be compared.
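To keep the reported numbers comparable, each measurement could be taken with a small, uniform harness. The sketch below is a hedged suggestion, not an official tool of this page: it times an arbitrary transformation run (any Python callable standing in for the engine invocation) and records basic host information using only the standard library; detailed CPU/RAM figures would need platform-specific tooling.

```python
import platform
import time

def format_duration(seconds):
    """Render a duration the way the table does, e.g. '92 min 37 s' or '30 s'."""
    minutes, secs = divmod(round(seconds), 60)
    if minutes:
        return f"{minutes} min {secs} s" if secs else f"{minutes} min"
    return f"{secs} s"

def measure(run, *args, **kwargs):
    """Time one transformation run and collect basic architecture info.

    Returns (result, report): the callable's result, plus a dict with the
    wall-clock time and host details to accompany a table entry.
    """
    start = time.perf_counter()
    result = run(*args, **kwargs)
    elapsed = time.perf_counter() - start
    report = {
        "time_s": elapsed,
        "time_pretty": format_duration(elapsed),
        "cpu": platform.processor() or platform.machine(),
        "os": platform.system(),
    }
    return result, report
```

Using `time.perf_counter()` (a monotonic, high-resolution clock) rather than wall-clock dates avoids skew from clock adjustments; averaging several runs on the same machine would further stabilize the figures.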

Note that this is an early proposal and that this table is likely to evolve. Other, more appropriate representations may also be adopted later.

Model-to-model Transformation Benchmarks

Benchmark                       | ATL (Regular VM)           | ATL (EMFVM)                | QVT OM (Java)        | QVT R (ATL VM)
--------------------------------|----------------------------|----------------------------|----------------------|----------------------
RSM to TOPCASED                 | v0: 45 min; v1: ? min      | v0: 75 s; v1: 45 s         | v0: ? min; v1: ? min | v0: ? min; v1: ? min
UML2 API to Platform Ontologies | v0: 92 min 37 s; v1: ? min | v0: 29 min 53 s; v1: ? min | v0: ? min; v1: ? min | v0: ? min; v1: ? min

How to contribute

Benchmark definitions may be contributed by creating one page per benchmark, following the naming scheme M2M/Benchmarks/<benchmark-name>. Implementations should be contributed to the corresponding component (e.g., QVT OM if written in QVT Operational Mapping, ATL if written in ATL). The table above can then be completed with links to the definition and implementations, as well as with execution time measurements.