
M2M/Benchmarks

Revision as of 16:06, 10 September 2007

The objective of this page is to offer several model-to-model transformation benchmarks. Currently, any type of benchmark may be contributed (e.g., benchmarks that test performance or that test functionality).

Overview

Each benchmark consists of:

  • A definition of the transformation:
    • Source and target metamodels.
    • Source and target samples.
    • A language-neutral specification of the transformation.
  • Implementations of the transformation using several languages, with execution time measures (for performance evaluation).

Synthetic Representation

The following table shows how information about the benchmarks could be represented. The first column contains the benchmark name, and links to the language-neutral benchmark definition. This definition should be given in a wiki page like M2M/Benchmarks/ABenchmark. It should point to source and target metamodels as well as source and target samples.

Each of the remaining columns corresponds to a language (e.g., ATL, QVT Operational Mapping, QVT Relations). For each benchmark, several implementations in the corresponding language may be given (e.g., v1, v2), each implementing different optimizations. A language column may be split into several sub-columns when the language itself has several implementations. For each version and implementation, an execution time (e.g., in minutes) may be given along with relevant information (e.g., CPU, RAM).

Execution times on different architectures may be listed in a per-benchmark page. The measures presented in the table here should be performed on a single architecture, so that they may be compared.
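The kind of execution time measure described above could be collected with a small harness like the following sketch (Python; run_transformation is a hypothetical placeholder, since a real benchmark run would launch an actual ATL or QVT engine). Repeating the run and keeping the best time reduces the influence of transient system load, and reporting the machine alongside the measure keeps results comparable:

```python
import platform
import time

def run_transformation():
    # Hypothetical placeholder: stands in for invoking a real
    # transformation engine (e.g., an ATL or QVT Operational Mapping run).
    total = 0
    for i in range(100_000):
        total += i
    return total

def measure(runs=5):
    """Time the transformation several times and keep the best
    wall-clock time, so transient load does not inflate the measure."""
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        run_transformation()
        times.append(time.perf_counter() - start)
    return min(times)

if __name__ == "__main__":
    best = measure()
    # Report the time together with the architecture it was taken on,
    # as the page recommends (CPU, RAM, etc.).
    print(f"best time: {best:.4f} s on {platform.processor() or platform.machine()}")
```

This is only an illustration of the measurement protocol, not part of any benchmark definition; the function names and the number of runs are assumptions.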

Note that this is an early proposal, and that this table is likely to evolve. Other, more appropriate, representations may also be used later.

Model-to-model Transformation Benchmarks

                 | ATL                  | QVT OM   | QVT R
 Benchmarks      | Java     | ATL VM    |          |
 ----------------+----------+-----------+----------+----------
 First benchmark | v1: ?min | v1: ?min  | v1: ?min | v1: ?min
                 | v2: ?min | v2: ?min  | v2: ?min |

How to contribute

Benchmark definitions may be contributed by creating one page per benchmark, following the naming scheme M2M/Benchmarks/<benchmark-name>. Implementations should be contributed to the corresponding component (e.g., QVT OM if written in QVT Operational Mapping, ATL if written in ATL). The table presented above can then be completed with links to the definition and implementations, as well as with execution time measures.
