- 1 General
- 1.1 How do I load my model in a standalone Java application ?
- 1.2 How can I load my model in the EMF reflective model editor ?
- 2 Workflow / Generator
- 2.1 Why do I get warnings like "warning(200): InternalFoo.g:42:3: Decision can match ..." when running the generator ?
- 2.2 OK, but I didn't get these warnings in oAW Xtext !
- 2.3 Why are generated packages from an imported grammar A duplicated in dependent grammar B ?
- 2.4 How can I control the Xtext meta model inference ?
General
How do I load my model in a standalone Java application ?
Assuming you have the standard example grammar MyDsl.xtext the Java code to load a corresponding top-level Model object from a resource (here with URI platform:/resource/org.xtext.example.mydsl/src/example.mydsl) should look something like this:
new org.eclipse.emf.mwe.utils.StandaloneSetup().setPlatformUri("../");
Injector injector = new MyDslStandaloneSetup().createInjectorAndDoEMFRegistration();
XtextResourceSet resourceSet = injector.getInstance(XtextResourceSet.class);
resourceSet.addLoadOption(XtextResource.OPTION_RESOLVE_ALL, Boolean.TRUE);
Resource resource = resourceSet.getResource(
    URI.createURI("platform:/resource/org.xtext.example.mydsl/src/example.mydsl"), true);
Model model = (Model) resource.getContents().get(0);
Note that the argument in the first line is the path to your workspace root; it is only required if you use platform:/resource URIs (as in the example) to reference your model files. If you use file: URIs instead, you can omit this line.
You can also load a model from a String or an InputStream, but you still need a resource. As the resource's URI you can use any legal dummy URI (here we use dummy:/example.mydsl); just make sure it has the correct file extension, as that is required to look up your DSL's EMF ResourceFactory. Building on the previous example, you just need to replace the last two lines with the following:
Resource resource = resourceSet.createResource(URI.createURI("dummy:/example.mydsl"));
InputStream in = new ByteArrayInputStream("type foo type bar".getBytes());
resource.load(in, resourceSet.getLoadOptions());
Model model = (Model) resource.getContents().get(0);
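One detail worth noting in the snippet above: getBytes() without arguments uses the platform default charset, which can corrupt non-ASCII model text. A minimal plain-Java sketch (no Xtext required; class name and model text are just illustrations) of the explicit-charset variant:

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class DslStreamDemo {
    public static void main(String[] args) throws Exception {
        // Model text as it would later be passed to resource.load(...)
        String dsl = "type foo type bar";
        // Encode explicitly instead of relying on the platform default charset
        InputStream in = new ByteArrayInputStream(dsl.getBytes(StandardCharsets.UTF_8));
        // Round-trip check: decode the stream back into a String
        String decoded = new String(in.readAllBytes(), StandardCharsets.UTF_8);
        System.out.println(decoded);
    }
}
```

The same InputStream could then be handed to resource.load(in, resourceSet.getLoadOptions()) as shown above.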
How can I load my model in the EMF reflective model editor ?
Simply select your model and click the context action Open With > Sample Reflective Ecore Model Editor. This works because Xtext generates and registers a standard EMF resource factory for your DSL (see also previous question) and thus complies with the EMF resource API.
Workflow / Generator
Why do I get warnings like "warning(200): InternalFoo.g:42:3: Decision can match ..." when running the generator ?
Here's an example of the full error message:
warning(200): InternalFoo.g:42:3: Decision can match input such as "FOO" using multiple alternatives: 1, 2
As a result, alternative(s) 2 were disabled for that input
These warnings are generated by the ANTLR code generator: based on the rules in your grammar, the generated parser cannot decide unambiguously which alternative to apply for a given input. You should try to refactor your grammar (see example) or enable backtracking for your parser (see next question).
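To make the refactoring idea concrete, here is a hypothetical pair of grammar rules (not from any real example grammar) that would trigger such a warning, together with a left-factored version that avoids it:

```
// Ambiguous: both alternatives start with 'element' name=ID, so the
// parser cannot decide between them with limited lookahead
Element:
    'element' name=ID ';' |
    'element' name=ID '{' children+=Element* '}';

// Left-factored: the common prefix is parsed once, and the decision
// is reduced to a single token (';' vs. '{')
Element:
    'element' name=ID (';' | '{' children+=Element* '}');
```

Pulling the common prefix out of the alternatives is usually enough to silence warning(200) without resorting to backtracking.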
OK, but I didn't get these warnings in oAW Xtext !
Unlike oAW Xtext, the ANTLR grammar generated by TMF Xtext doesn't have backtracking enabled by default. To enable backtracking you have to add a nested element <options backtrack="true"/> to the ANTLR generator fragments in your Xtext project's MWE workflow. So replace:
<!-- Antlr Generator fragment -->
<fragment class="org.eclipse.xtext.generator.AntlrDelegatingFragment"/>

with:

<!-- Antlr Generator fragment -->
<fragment class="de.itemis.xtext.antlr.XtextAntlrGeneratorFragment">
    <options backtrack="true"/>
</fragment>
Further down in the workflow you will find another use of the AntlrDelegatingFragment (used for content assist), which you have to replace with:
<fragment class="de.itemis.xtext.antlr.XtextAntlrUiGeneratorFragment">
    <options backtrack="true"/>
</fragment>
Note that in both cases you can also specify memoize="true" as an additional option.
Why are generated packages from an imported grammar A duplicated in dependent grammar B ?
In addition to the import statement in B.xtext you must also configure your GenerateB.mwe workflow to let it know about the corresponding GenModels of grammar A. You do this by setting the genModels attribute of the EcoreGeneratorFragment:
<fragment class="org.eclipse.xtext.generator.ecore.EcoreGeneratorFragment"
    genModels="platform:/resource/my.a.project/src-gen/my/a/A.genmodel"/>
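For reference, the import statement in B.xtext that this workflow setting complements might look something like the following (project and file names are purely illustrative):

```
import 'platform:/resource/my.a.project/src-gen/my/a/A.ecore' as a
```

With both the import in the grammar and the genModels attribute in the workflow in place, the generator reuses A's packages instead of regenerating them.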
How can I control the Xtext meta model inference ?
The typical use case is to let Xtext automatically infer a meta model corresponding to the Xtext grammar. Quite often this meta model is exactly what you want. If, on the other hand, you want to make some small changes to the inferred meta model (e.g. set attribute default values, or add operations, features, or enumeration literals), you must implement a post processor. Please refer to the relevant documentation and the following example for more details.
The following example shows how to add an enumeration literal (here NULL) to an enumeration (here VisibilityModifier) and set it as the default value for all attributes of that type.
import ecore;

process(xtext::GeneratedMetamodel this) :
    ePackage.process()
;

process(EPackage this) :
    eClassifiers.process()
;

process(EClassifier this) :
    null
;

process(EClass this) :
    eStructuralFeatures.process()
;

process(EEnum this) :
    if name == 'VisibilityModifier' then
        eLiterals.add(newLiteral('null', 'NULL', eLiterals.size))
;

create EEnumLiteral newLiteral(String literal, String name, int value) :
    setLiteral(literal) -> setName(name) -> setValue(value)
;

process(EStructuralFeature this) :
    null
;

process(EAttribute this) :
    if eAttributeType.name == 'VisibilityModifier' then
        setDefaultValueLiteral('NULL')
;