
M2M/Benchmarks


Latest revision as of 10:21, 14 February 2008

The objective of this page is to offer several model-to-model transformation [http://en.wikipedia.org/wiki/Benchmark_%28computing%29 benchmarks]. Currently, any [http://en.wikipedia.org/wiki/Benchmark_%28computing%29#Types_of_benchmarks type of benchmark] (e.g., one that tests performance or functionality) may be contributed.

==Overview==

Each benchmark consists of:

* A definition of the transformation:
** Source and target metamodels.
** Source and target samples.
** A language-neutral specification of the transformation.
* Implementations of the transformation using several languages, with execution time measures (for performance evaluation).

==Synthetic Representation==

The following table shows how information about the benchmarks could be represented. The first column contains the benchmark name and links to the language-neutral benchmark definition. This definition should be given in a wiki page such as M2M/Benchmarks/ABenchmark. It should point to the source and target metamodels, as well as to source and target samples.

Then, each column corresponds to a language (e.g., ATL, QVT Operational Mapping, QVT Relations). For each benchmark, several implementations in the corresponding language may be given (e.g., v1, v2), each implementing different optimizations. Each language column may be split into several implementations of the language when appropriate. For each version and implementation, an execution time (e.g., in minutes) may be given, along with relevant information about the machine (e.g., CPU, RAM).

Execution times on different architectures may be listed in a per-benchmark page. The measures presented in the table here should be performed on a single architecture, so that they may be compared.
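As an illustration of the kind of measure the table collects, the sketch below (plain Java, since the engines listed in the table run on the JVM) times one transformation run after a warm-up pass and prints basic environment information alongside the result. Note that <code>runTransformation</code> is a hypothetical placeholder, not a real engine API; an actual measure would invoke the ATL or QVT launcher at that point.

```java
// Minimal timing harness sketch for a benchmark run.
public class BenchmarkTimer {

    // Hypothetical placeholder for the transformation launch; replace with a
    // real engine invocation (e.g., an ATL or QVT OM launch) when measuring.
    static void runTransformation() {
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) sum += i; // simulated work
        if (sum < 0) throw new IllegalStateException("unreachable");
    }

    // One warm-up run (JIT compilation, caches), then one timed run.
    static long measureMillis() {
        runTransformation();                       // warm-up, not measured
        long start = System.nanoTime();
        runTransformation();                       // measured run
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        long millis = measureMillis();
        // Report the measure together with the environment, as the table suggests.
        System.out.println("execution time: " + millis + " ms");
        System.out.println("cpus: " + Runtime.getRuntime().availableProcessors());
        System.out.println("max heap: "
                + Runtime.getRuntime().maxMemory() / (1024 * 1024) + " MB");
    }
}
```

A single warm-up pass keeps the sketch short; a real measurement campaign would repeat the timed run several times and report an average, so that results obtained on one architecture remain comparable.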

Note that this is an early proposal, and that this table is likely to evolve. Other, more appropriate, representations may also be used later.

{|
|+ Model-to-model Transformation Benchmarks
|-
! rowspan=2 | Benchmarks
! colspan=2 | [[ATL]]
! colspan=2 | QVT OM
! rowspan=2 | QVT R
|-
! [http://dev.eclipse.org/viewcvs/index.cgi/org.eclipse.m2m/org.eclipse.m2m.atl/plugins/org.eclipse.m2m.atl.engine.vm/?root=Modeling_Project Regular VM]
! [http://dev.eclipse.org/viewcvs/index.cgi/org.eclipse.m2m/org.eclipse.m2m.atl/plugins/org.eclipse.m2m.atl.engine.emfvm/?root=Modeling_Project emfvm]
! [http://dev.eclipse.org/viewcvs/index.cgi/org.eclipse.m2m/org.eclipse.m2m.qvt.oml/?root=Modeling_Project Java]
! [http://www.eclipse.org/m2m/atl/usecases/QVT2ATLVM/ ATL VM]
|- <!-- First entry - v0 -->
! rowspan=4 | [[M2M/Benchmarks/RSM2TPC|RSM to TOPCASED]]
| rowspan=2 | [http://wiki.eclipse.org/images/1/1b/RSM2TPC_V0.zip v0]: 45 min
| rowspan=2 | [http://wiki.eclipse.org/images/1/1b/RSM2TPC_V0.zip v0]: 75 s
| colspan=2 | [http://www.eclipse.org/m2m/qvtom/FirstV0 v0]:
| rowspan=4 | [http://www.eclipse.org/m2m/qvtr/FirstV0 v0]: ?min
|-
| ?min
| ?min
|- <!-- First entry - v1 -->
| rowspan=2 | [http://wiki.eclipse.org/images/5/55/RSM2TPC_V1.zip v1]: ? min
| rowspan=2 | [http://wiki.eclipse.org/images/5/55/RSM2TPC_V1.zip v1]: 45 s
| colspan=2 | [http://www.eclipse.org/m2m/qvtom/FirstV1 v1]:
|-
| ?min
| ?min
|- <!-- Second entry - v0 -->
! rowspan=4 | [[M2M/Benchmarks/UML2PlatformOWL|UML2 API to Platform Ontologies]]
| rowspan=2 | [http://ssel.vub.ac.be/viewvc/PlatformKit/platformkit-java/transformations/UML2ToPackageAPIOntology.atl?revision=7380&view=markup v0]: 92 min 37 sec
| rowspan=2 | [http://ssel.vub.ac.be/viewvc/PlatformKit/platformkit-java/transformations/UML2ToPackageAPIOntology.atl?revision=7380&view=markup v0]: 29 min 53 sec
| colspan=2 | [http://www.eclipse.org/m2m/qvtom/FirstV0 v0]:
| rowspan=4 | [http://www.eclipse.org/m2m/qvtr/FirstV0 v0]: ?min
|-
| ?min
| ?min
|- <!-- Second entry - v1 -->
| rowspan=2 | [http://www.eclipse.org/m2m/atl/atlTransformations#FirstV1 v1]: ?min
| rowspan=2 | [http://www.eclipse.org/m2m/atl/atlTransformations#FirstV1 v1]: ?min
| colspan=2 | [http://www.eclipse.org/m2m/qvtom/FirstV1 v1]:
|-
| ?min
| ?min
|}

==How to contribute==

Benchmark definitions may be contributed by creating a page per benchmark, following the naming scheme M2M/Benchmarks/<benchmark-name>. Implementations should be contributed to the corresponding component (e.g., QVT OM if written in QVT Operational Mapping, ATL if written in ATL). The table above can then be completed with links to the definition and implementations, as well as with execution time measures.
