Each benchmark consists of:
- A definition of the transformation:
  - Source and target metamodels.
  - Source and target samples.
  - A language-neutral specification of the transformation.
- Implementations of the transformation in several languages, with execution time measures (for performance evaluation).
The following table shows how information about the benchmarks could be represented. The first column contains the benchmark name, which links to the language-neutral benchmark definition. This definition should be given in a wiki page such as M2M/Benchmarks/ABenchmark, and should point to the source and target metamodels as well as the source and target samples.
Each subsequent column corresponds to a language (e.g., ATL, QVT Operational Mapping, QVT Relations). For each benchmark, several implementations in the corresponding language may be given (e.g., v1, v2), each applying different optimizations; a language column may therefore be split into several implementation sub-columns when appropriate. For each implementation, an execution time (e.g., in minutes) may be given along with relevant hardware information (e.g., CPU, RAM).
Execution times on different architectures may be listed on a per-benchmark page. The measurements presented in this table should all be performed on a single architecture, so that they can be compared.
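To make such measurements reproducible and comparable, each implementation should be timed the same way. The following is a minimal sketch of a wall-clock timing harness; the class and method names are hypothetical, and the `Runnable` stands in for an actual invocation of an ATL or QVT transformation engine.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical timing harness: runs a transformation several times and
// reports the median wall-clock time, which is more robust to outliers
// than a single run or the mean.
public class BenchmarkTimer {

    public static long medianMillis(Runnable transformation, int runs) {
        // Warm-up run so JVM class loading and JIT compilation
        // do not skew the first measurement.
        transformation.run();

        List<Long> times = new ArrayList<>();
        for (int i = 0; i < runs; i++) {
            long start = System.nanoTime();
            transformation.run();
            times.add((System.nanoTime() - start) / 1_000_000);
        }
        times.sort(null);
        return times.get(times.size() / 2);
    }

    public static void main(String[] args) {
        // Placeholder workload; replace with the real transformation call.
        long ms = medianMillis(() -> {
            double x = 0;
            for (int i = 0; i < 1_000_000; i++) x += Math.sqrt(i);
        }, 5);
        System.out.println("median: " + ms + " ms");
    }
}
```

Reporting the median of several runs, after a warm-up, is one common convention; whichever protocol is chosen, it should be stated on the per-benchmark page together with the CPU and RAM of the machine used.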
Note that this is an early proposal, and that this table is likely to evolve. Other, more appropriate, representations may also be used later.
| Benchmarks | ATL | QVT OM | QVT R |
|---|---|---|---|
| First benchmark | v1: ?min | v1: | v1: ?min |
How to contribute
Benchmark definitions may be contributed by creating one page per benchmark, following the naming scheme M2M/Benchmarks/&lt;benchmark-name&gt;. Implementations should be contributed to the corresponding component (e.g., QVT OM if written in QVT Operational Mapping, ATL if written in ATL). The table above can then be completed with links to the definition and implementations, as well as with execution time measurements.