The objective of this page is to list several model-to-model transformation benchmarks. Each of these will consist of:
- Source and target metamodels
- Source and target samples
- A language-neutral specification of the transformation
- Implementations of the transformation using several languages
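To illustrate what one such implementation might look like, here is a minimal ATL sketch loosely based on the classic Families2Persons example. The metamodel names and attributes used below (Families!Member, firstName, lastName, the family references) are assumptions for illustration only, not part of any benchmark defined on this page.

```
-- Sketch of an ATL transformation: Member elements of a hypothetical
-- Families metamodel become Male/Female elements of a Persons metamodel.
module Families2Persons;
create OUT : Persons from IN : Families;

-- Assumed helper: a Member counts as female when its family references
-- it as mother or daughter (reference names are assumptions).
helper context Families!Member def: isFemale() : Boolean =
	not self.familyMother.oclIsUndefined() or
	not self.familyDaughter.oclIsUndefined();

rule Member2Male {
	from
		s : Families!Member (not s.isFemale())
	to
		t : Persons!Male (
			fullName <- s.firstName + ' ' + s.lastName
		)
}

rule Member2Female {
	from
		s : Families!Member (s.isFemale())
	to
		t : Persons!Female (
			fullName <- s.firstName + ' ' + s.lastName
		)
}
```

A benchmark's language-neutral specification would describe the mapping above in prose (or in a formalism such as OCL constraints), while each language column of the table holds concrete implementations like this one.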
The following table shows how information about the benchmarks could be represented. The first column contains the benchmark name, which links to the language-neutral benchmark definition. This definition should be given in a wiki page such as M2M/Benchmarks/ABenchmark, and should point to the source and target metamodels as well as the source and target samples.
Each subsequent column corresponds to a language (e.g., ATL, QVT Operational Mappings, QVT Relations). For each benchmark, several implementations in the corresponding language may be given (e.g., v1, v2, each implementing different optimizations), so a language column may be split into several sub-columns when appropriate. For each version and implementation, a running time (e.g., in minutes) may be given along with relevant information about the test machine (e.g., CPU, RAM).
Note that this is an early proposal, and that this table is likely to evolve. Other, more appropriate, representations may also be used later.
|Benchmarks|ATL|QVT OM|QVT R|
|---|---|---|---|
|First benchmark|v1: ?min|v1:|v1: ?min|