JDT Core Programmer Guide/ECJ
A Hitchhiker's Guide to ECJ
What IS the Compiler / ECJ?
Strangely enough, this question does not have a single true answer.
The following locations contribute to the compiler:
Since the compiler does not directly correspond to any project / plug-in, the following measures are relevant:
- Classes in the source folders `compiler` and `batch` are not allowed to access classes in other source folders of org.eclipse.jdt.core. To avoid any violations, a secondary project has been created: org.eclipse.jdt.core.ecj.validation. This project should be imported into the workspace before working on the compiler. It contains only links to the two mentioned source folders and will signal errors if any class outside this scope is used. The project is not intended for editing.
- During production builds, class files from different projects need to be merged into the single ecj.jar (this jar file is created as org.eclipse.jdt.core-*-SNAPSHOT-batch-compiler.jar and renamed to ecj.jar afterwards). Search for "batch-compiler" in the pom files of the projects mentioned above to see how the compiler is assembled.
- Additionally, an Ant script exists, org.eclipse.jdt.core/scripts/export-ecj.xml, which allows manually creating ecj.jar from within Eclipse. This script is also executed when building org.eclipse.jdt.core using PDE/Build, which probably also happens when interactively exporting org.eclipse.jdt.core as a deployable plug-in using the export wizard.
The ant adapter
To use ecj with Ant, jdtCompilerAdapter.jar is created from org.eclipse.jdt.core/antadapter. The same class files are also added to ecj.jar.
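The adapter is typically wired in via Ant's `build.compiler` property. The following is a minimal sketch of a build.xml, assuming jdtCompilerAdapter.jar and ecj.jar are on Ant's classpath; the project layout, target names, and version numbers are illustrative, while the adapter class name is the real one:

```xml
<!-- Illustrative build.xml: delegate Ant's <javac> task to ecj -->
<project name="example" default="compile">
  <!-- Selects the JDT compiler adapter instead of javac -->
  <property name="build.compiler" value="org.eclipse.jdt.core.JDTCompilerAdapter"/>

  <target name="compile">
    <!-- jdtCompilerAdapter.jar and ecj.jar must be on Ant's classpath,
         e.g. via: ant -lib jdtCompilerAdapter.jar -lib ecj.jar -->
    <javac srcdir="src" destdir="bin" source="17" target="17"/>
  </target>
</project>
```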
Interfacing with other components
- Name Environments: To interface with its environment, the compiler needs an instance of `org.eclipse.jdt.internal.compiler.env.INameEnvironment`. During batch compilation, class `org.eclipse.jdt.internal.compiler.batch.FileSystem` is used, but by providing different implementations of this interface, other components like the builder can feed the required classes into the compiler.
- IBinaryType: Different use cases employ different implementations to represent existing .class files to which the Java sources being compiled can refer.
- ITypeRequestor: Whenever a new type is found from the name environment, it is first passed to methods of the type requestor. Normally, the `Compiler` itself acts as the type requestor and adds a representation of the discovered type to the compiler's internal data structures, but code assist, type hierarchy, search and indexing each have their own implementation of these hooks into the compiler.
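To illustrate the lookup pattern behind a name environment, here is a sketch of a custom implementation such as a builder-like component might provide. Note that the interface below is a drastically simplified stand-in written for this example: the real `INameEnvironment` works on `char[][]` compound names and answers `NameEnvironmentAnswer` objects, not plain strings and byte arrays.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for org.eclipse.jdt.internal.compiler.env.INameEnvironment,
// for illustration only (the real interface uses char[][] names and
// NameEnvironmentAnswer results).
interface SimpleNameEnvironment {
    /** Answer the class file bytes for a fully qualified type name, or null. */
    byte[] findType(String qualifiedName);
    /** Answer whether the given name denotes a package. */
    boolean isPackage(String qualifiedName);
    /** Release any cached state when compilation is done. */
    void cleanup();
}

/** A name environment backed by an in-memory map of class files. */
class MapNameEnvironment implements SimpleNameEnvironment {
    private final Map<String, byte[]> classes = new HashMap<>();

    void addClass(String qualifiedName, byte[] classFileBytes) {
        classes.put(qualifiedName, classFileBytes);
    }

    @Override public byte[] findType(String qualifiedName) {
        // null means: not found; the compiler would then report a missing type
        return classes.get(qualifiedName);
    }

    @Override public boolean isPackage(String qualifiedName) {
        // A name denotes a package if some known type lives below it.
        String prefix = qualifiedName + ".";
        return classes.keySet().stream().anyMatch(name -> name.startsWith(prefix));
    }

    @Override public void cleanup() {
        classes.clear();
    }
}

public class NameEnvironmentDemo {
    public static void main(String[] args) {
        MapNameEnvironment env = new MapNameEnvironment();
        env.addClass("com.example.Foo", new byte[] {(byte) 0xCA, (byte) 0xFE});
        System.out.println(env.findType("com.example.Foo") != null); // true
        System.out.println(env.isPackage("com.example"));            // true
        System.out.println(env.isPackage("com.missing"));            // false
    }
}
```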
Variants of the compiler
The central class is `org.eclipse.jdt.internal.compiler.Compiler`, which is used as-is in some use cases, but a few subclasses also exist: variants of the compiler with purposes other than generating .class files. In other use cases it is not the `Compiler` class but `org.eclipse.jdt.internal.compiler.parser.Parser` that is subclassed to achieve different functionality. The latter strategy is used notably for the code select and code complete functionality.
As is standard in compiler technology, ecj operates on Java files in several phases, which are roughly outlined as:
- Scan and parse, i.e., transform a character stream first into a stream of tokens, then into the abstract syntax tree (AST)
- Build and connect type bindings, i.e., overlay the syntactic tree structure (AST) with a semantic graph of bindings.
- Verify methods: analyse inheritance, overriding and overloading of methods
- Resolve: interpret identifiers and link them to the bindings which they represent
- Analyse: perform flow analysis in order to detect errors such as variables read before being assigned or final variables being re-assigned, and also to analyse (potential) null pointer dereferences and resource leaks. This phase may also detect a few more errors that require the AST to be fully resolved.
- Generate: Allocate positions to variables (as used in the load and store operations of the byte code), then generate the byte code, in the steps shown below. Note that some errors may still be detected and reported during code generation.
- Generate the general class file structure with relevant byte code attributes
- Generate the `Code` attributes containing the actual byte code instructions for methods, constructors and initializers.
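The ordering of the phases above can be sketched as a simple driver loop. This is a hypothetical, heavily simplified structure whose method names only loosely mirror ecj's; it is not the actual `Compiler` source:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the phase ordering described above; not actual ecj code.
class CompilationUnitSketch {
    final String source;
    final List<String> phaseLog = new ArrayList<>();

    CompilationUnitSketch(String source) { this.source = source; }

    void parse()        { phaseLog.add("parse"); }        // scan + build AST
    void resolve()      { phaseLog.add("resolve"); }      // link identifiers to bindings
    void analyseCode()  { phaseLog.add("analyseCode"); }  // flow analysis (definite assignment, null, ...)
    void generateCode() { phaseLog.add("generateCode"); } // allocate variable slots, emit byte code
}

public class PhasesDemo {
    /** Run all phases in order for one unit, as the batch compiler roughly does per unit. */
    static List<String> compile(CompilationUnitSketch unit) {
        unit.parse();
        unit.resolve();
        unit.analyseCode();
        unit.generateCode();
        return unit.phaseLog;
    }

    public static void main(String[] args) {
        System.out.println(compile(new CompilationUnitSketch("class A {}")));
        // [parse, resolve, analyseCode, generateCode]
    }
}
```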
Looking at class `Compiler`, the phases are written slightly differently:
- `completeTypeBindings`: here bindings are linked / connected with each other, which requires all bindings to already exist.
- `process` -- at this point a separate compilation thread may be spawned:
  - `getMethodBodies`: the initial parse may have skipped method bodies; parse them now, perhaps only selectively.
  - `faultInTypes`: ensure that all bindings are properly created and initialized.
- `finalizeProblems`: before errors and warnings are actually reported to the user, they are filtered by any `@SuppressWarnings` annotations found in the source.
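As a user-level illustration of what this filtering means (my own example, not from the guide): a problem that the compiler detects during analysis, such as an unchecked cast, is dropped in `finalizeProblems` when the enclosing element carries a matching `@SuppressWarnings`:

```java
import java.util.ArrayList;
import java.util.List;

public class SuppressDemo {
    // Without the annotation, ecj reports "Type safety: unchecked cast";
    // with it, the warning is filtered out before being reported.
    @SuppressWarnings("unchecked")
    static List<String> unsafeCast(Object o) {
        return (List<String>) o; // unchecked cast, warning suppressed
    }

    public static void main(String[] args) {
        List<String> list = unsafeCast(new ArrayList<String>());
        System.out.println(list.isEmpty()); // true
    }
}
```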
- Sequential phases vs. demand-driven computations
In addition to the sequential process outlined by the phases above, some computations will be triggered on demand.
Existing tricks to fine-tune the order of processing steps:
- In `ParameterizedQualifiedTypeReference.internalResolveLeafType` we normally perform a `boundCheck()`. However, in some situations, notably during `Scope.connectTypeVariables()`, we may not yet be ready to perform that check. If the argument `boundCheck` is true, the check is deferred by adding an element to the list `deferredBoundChecks`, which is processed at a later point.
- `MemberValuePair.resolveTypeExpecting(..)` may add runnables to the same list `deferredBoundChecks`. This accounts for the fact that resolving annotations may happen at particularly unexpected points in time.
- In `scanMethodForNullAnnotation()` we want to mark the generated enum methods `valueOf` and `values` as returning a nonnull type. To do so we want to add the configured nonnull annotation, but null annotations may not yet be initialized, in particular because `@NonNullByDefault` depends on an enum whose `valueOf` and `values` methods should be marked as returning nonnull. To cut this circular dependency, we check whether null annotations have been initialized, and if not we add the enum method to `deferredEnumMethods` for later processing.
- To handle a specific scoping issue of instanceof pattern variables, the current implementation admits possibly-duplicate variables during `LocalDeclaration.resolve()`, because at that point we don't yet have the necessary flow information. Only later, during `analyseCode`, is that deferred check processed.
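A user-level example of why flow information determines the scope of a pattern variable (my illustration, not from the guide) is the common early-return idiom, which requires Java 16+:

```java
public class PatternScopeDemo {
    // The scope of pattern variable 's' is flow-dependent: it is in scope
    // exactly where the instanceof test is known to have succeeded. Whether
    // a second declaration of 's' would collide can therefore only be decided
    // with flow information, which resolve() does not yet have.
    static String describe(Object o) {
        if (!(o instanceof String s)) {
            return "not a string";
        }
        // 's' is definitely assigned here: we only reach this point
        // when the instanceof test succeeded.
        return "string of length " + s.length();
    }

    public static void main(String[] args) {
        System.out.println(describe("hello")); // string of length 5
        System.out.println(describe(42));      // not a string
    }
}
```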
Note that, to be true to the spec, more situations need flow information already during resolving, so the entire design of separating the phases resolve and analyseCode is at stake; see bug 562824.