M2M/Benchmarks
Revision as of 09:28, 29 October 2007

The objective of this page is to offer several model-to-model transformation benchmarks. Currently, any type of benchmark (e.g., benchmarks that test performance or functionality) may be contributed.

Overview

Each benchmark consists of:

  • A definition of the transformation:
    • Source and target metamodels.
    • Source and target samples.
    • A language-neutral specification of the transformation.
  • Implementations of the transformation using several languages, with execution time measures (for performance evaluation).
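The execution time measures mentioned above could be collected with a small timing harness. The sketch below is only an illustration, not part of any benchmark on this page: `toy_transform` is a hypothetical stand-in for what would really be an ATL or QVT engine invocation on a source model.

```python
import time

def measure(transform, source_model, runs=3):
    """Run a transformation several times and return the best wall-clock time
    in seconds. Taking the best of several runs reduces noise from the OS and
    from JIT/VM warm-up."""
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        transform(source_model)
        best = min(best, time.perf_counter() - start)
    return best

# Hypothetical stand-in for a model-to-model transformation; a real benchmark
# would launch the transformation engine on a source model file instead.
def toy_transform(model):
    return [element.upper() for element in model]

seconds = measure(toy_transform, ["ClassA", "ClassB"])
print(f"best of 3 runs: {seconds / 60:.6f} min")
```

Reporting the hardware used (CPU, RAM) alongside such a number is what makes the measure comparable across implementations.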

Synthetic Representation

The following table shows how information about the benchmarks could be represented. The first column contains the benchmark name, and links to the language-neutral benchmark definition. This definition should be given in a wiki page like M2M/Benchmarks/ABenchmark. It should point to source and target metamodels as well as source and target samples.

Each of the remaining columns corresponds to a language (e.g., ATL, QVT Operational Mapping, QVT Relations). For each benchmark, several implementations in the corresponding language may be given (e.g., v1, v2), each implementing different optimizations. A language column may also be split into several implementations of that language when appropriate. For each version and implementation, an execution time (e.g., in minutes) may be given along with relevant hardware information (e.g., CPU, RAM).

Execution times on different architectures may be listed in a per-benchmark page. The measures presented in the table here should be performed on a single architecture, so that they may be compared.

Note that this is an early proposal, and that this table is likely to evolve. Other, more appropriate, representations may also be used later.

Model-to-model Transformation Benchmarks

Benchmark                       | ATL                                | QVT OM (Java) | QVT OM (ATL VM) | QVT R
--------------------------------|------------------------------------|---------------|-----------------|---------
RSM to TOPCASED                 | v1 (File:RSM2Topcased.zip): 45 min | v1: ?min      | v1: ?min        | v1: ?min
                                | v2: ?min                           | v2: ?min      | v2: ?min        |
UML2 API to Platform Ontologies | v1: ?min                           | v1: ?min      | v1: ?min        | v1: ?min
                                | v2: ?min                           | v2: ?min      | v2: ?min        |

How to contribute

Benchmark definitions may be contributed by creating a page per benchmark following the naming scheme: M2M/Benchmarks/<benchmark-name>. Implementations should be contributed to the corresponding component (e.g., QVT OM if written in QVT Operational Mapping, ATL if written in ATL). The table presented above can then be completed with links to the definition and implementations, as well as execution time measures.
