


JDT Core/Null Analysis/Brainstorming

Revision as of 14:33, 30 January 2011 by Stephan.cs.tu-berlin.de (Talk | contribs)


Purpose and Disclaimer

This page documents some preliminary pondering about null annotations.

Some of its considerations contributed motivation to the actual design of how null annotations are supported by the JDT; other considerations may be completely outdated by now.

Design Space for Inter-Procedural Null Analysis

This section discusses different options for how tools for inter-procedural analysis could be designed. This write-up was made in preparation for designing a concrete strategy for the JDT.

Degree of Annotating

A radical approach would demand that every reference type in the program explicitly exclude or include the value null. E.g., String would not be a legal type in any declaration (local variable, field, method signature); only @NonNull String and @MaybeNull String would be.

This radical approach has two problems:

  • it introduces vast effort for annotating every type reference in the program
  • it is very difficult to apply to intermediate variables within a method body with branches, loops etc.

E.g., the JDT compiler has no problem seeing that this is safe:

Foo foo2 = null;
if ((foo != null) && foo.isOK())  // foo: a possibly-null Foo from the enclosing context
  foo2 = foo;
else
  foo2 = new Foo();
foo2.bar();  // safe: foo2 is non-null on every path

In the radical approach at least two more variables would be necessary: at each point where the analysis finds that a value cannot be null, a new variable with a differently annotated type would be needed. By contrast, the compiler can manage these intermediate states implicitly.
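
To make the contrast concrete, here is a hypothetical sketch of the same flow under the radical approach. The @NonNull/@MaybeNull annotation types, the Foo class, and the extra re-typed variables are purely illustrative and carry no actual checking:

```java
// Hypothetical annotation types, for illustration only.
@interface NonNull {}
@interface MaybeNull {}

class Foo {
    boolean isOK() { return true; }
    void bar() { /* ... */ }
}

class RadicalExample {
    // Under the radical approach every proven-non-null value needs a
    // freshly typed variable; the compiler cannot track it implicitly.
    static @NonNull Foo ensureOK(@MaybeNull Foo foo) {
        if (foo != null && foo.isOK()) {
            @NonNull Foo checked = foo;   // re-typed copy of foo
            return checked;
        }
        @NonNull Foo fresh = new Foo();   // re-typed fresh value
        return fresh;
    }
}
```

Each branch introduces an extra variable solely to carry the stronger type, which is exactly the bookkeeping that flow analysis does for free.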

Thus it seems more feasible not to strive for full proofs of the absence of runtime errors, but to focus on gradually feeding more information into the analysis in order to detect more (instead of all) potential runtime errors already during compilation. The radical approach can be weakened in two ways:

  • make annotations optional
  • limit the program locations where annotations may occur (note how this relates to JSR 308).

In the vein of design by contract, the following locations are relevant:

  • method parameters (= method precondition)
  • method return value (= method postcondition)
  • fields (= invariant)
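
As a sketch, the three contract locations could look as follows; the annotation names are illustrative placeholders, not an existing API:

```java
// Hypothetical annotations; the names are illustrative.
@interface NonNull {}
@interface MaybeNull {}

class Account {
    private @NonNull String owner = "unknown";       // field: invariant

    // parameter: precondition; return value: postcondition
    @MaybeNull String rename(@NonNull String newOwner) {
        String old = this.owner;
        this.owner = newOwner;
        return old.equals(newOwner) ? null : old;    // null if the name was unchanged
    }
}
```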

To truly reflect design by contract one might want to weaken the rules to

  • exclude non-API methods from contracts
    but inter-procedural analysis is also relevant within a class, so annotating even private methods is useful, too.
  • accept inconsistent field states in the middle of a method body (only the first read and the last write within each method would be checked)
    but such intermediate inconsistent states might be observed by concurrent executions and would make the analysis unreliable.

Syntax

Generally annotations could happen in two ways:

Extended Javadoc, like:

/**
 * @param @nonnull input ...
 * @return @maybenull ...
 */
public String foo(String input) { ... }

Alternatively, Java 5 annotations can be used to say the same:

public @maybenull String foo(@nonnull String input) { ... }

This is where JSR 305 comes into focus, which covers the issue of standardizing "Annotations for Software Defect Detection".

After a first analysis I see two problems with JSR 305:

  • It has stalled: no official documents have been produced in four years, and I couldn't find evidence of any activity during the last two years. The JSR is marked as inactive and may soon be withdrawn.
  • It is far more generic than what we need for this specific issue: everything is built upon the meta-annotation @TypeQualifier, and it invests in supporting four states:
    • unspecified (no annotation)
    • @UnknownNullness (same interpretation as unspecified)
    • @Nonnull
    • @NullFeasible
The rationale for @UnknownNullness is to allow discarding an inherited specification. I wasn't convinced that any contract should be specializable to "no contract": when specializing an inherited contract you should state explicitly what the new contract is (which must conform to the inherited contract).
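
The objection can be illustrated with a hypothetical inheritance example (annotation types defined inline, mirroring the JSR 305 names, with no actual checking): a subclass that "specializes" an inherited contract to no contract silently breaks callers of the superclass:

```java
// Hypothetical annotations mirroring the JSR 305 names discussed above.
@interface Nonnull {}
@interface UnknownNullness {}

class Library {
    @Nonnull Object find(String key) { return new Object(); }
}

class Subclass extends Library {
    // Discarding the inherited @Nonnull contract: callers that rely on
    // Library.find() never returning null may now break at runtime.
    @Override
    @UnknownNullness Object find(String key) {
        return key.isEmpty() ? null : new Object();
    }
}
```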

See also these slides (May 2008) by William Pugh.

The main issue with both syntaxes is the lack of standardization.

Retention

Annotations have the advantage that a CLASS retention (or perhaps even RUNTIME) would support compiling against contracts in class files, which is not easily possible with the Javadoc based approach.
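
As a sketch of the difference, the two retention policies could be declared as follows (the annotation names are illustrative); only the RUNTIME-retained annotation remains visible to reflection, while both are recorded in the class file:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// CLASS retention: the annotation is recorded in the .class file, so a
// compiler reading binary dependencies can still see the contract, but
// it is not available via reflection at runtime.
@Retention(RetentionPolicy.CLASS)
@interface NonNull {}

// RUNTIME retention would additionally make the contract visible
// to reflective checkers.
@Retention(RetentionPolicy.RUNTIME)
@interface MaybeNull {}

class Sample {
    @MaybeNull String lookup(@NonNull String key) {
        return key.isEmpty() ? null : key;
    }
}
```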

Standard vs. Configuration

Once the JDT compiler officially supports any specific syntax this creates a de-facto standard which might conflict with existing and future standards (any code written against the de-facto standard might be incompatible with future tools).

Possible solutions:

  1. Wait for the standard annotations (which may be waiting forever)
  2. Make the concrete syntax configurable
    • select between Javadoc and annotation styles
    • select the exact annotation classes to use (cf. bug 186342#c12).
    Do not provide a default, so as not to define anything standard-like
  3. Provide implementation as a separate "use on your own risk" plug-in

(1) doesn't look attractive to me. (2) is kind of a workaround, carefully giving the message: we are not defining a standard; if you use this you should be prepared to change your annotations once a standard is created (automatic migration to a new standard shouldn't be hard, OTOH). (3) might be used to side-step the whole issue by saying: we're only doing technical exploration, but interested folks may still download and use this as early adopters. Technically, I would implement this using OT/Equinox :). This would be an intermediate solution (additional stuff still needs to be downloaded, but it works as an integral part of the incremental compiler) and shouldn't be difficult to migrate into JDT/Core once we know more about standardization.

My irrational hope is that once users find out how great this is, someone will step forward and declare a standard. It would be great if people had something to play with and could actually see the difference.

Semantic Details

I see these issues worth discussing:

  1. Do we need more than @Nonnull and @MaybeNull?
  2. How exactly do annotations interact with inheritance (should mainly just apply the rules of design by contract, actually)
  3. What are the defaults?
  4. How do field specifications interact with concurrency?

Regarding (3) I believe in "Non-null References by Default in Java: Alleviating the Nullity Annotation Burden". OTOH, for perfect freedom one could make the default configurable in a hierarchical way (see also Pugh pp. 43 ff):

  • per project, package, class, method
  • all public, protected, default members
  • all fields / method parameters / method return values (also for convenience: signatures (param|return) / all)
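
Such a hierarchical default could be sketched as follows; the @NonNullByDefault name and its applicable granularity are illustrative assumptions, not an existing API:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical default-setting annotation, applicable per package,
// class, or method; the name is illustrative.
@Retention(RetentionPolicy.CLASS)
@Target({ElementType.PACKAGE, ElementType.TYPE, ElementType.METHOD})
@interface NonNullByDefault {}

@interface MaybeNull {}

// With the default in force, only the exceptions need marking:
// every un-annotated reference in this class is treated as non-null.
@NonNullByDefault
class Repository {
    String require(String key) { return key; }       // implicitly non-null
    @MaybeNull String find(String key) {             // explicitly nullable
        return key.isEmpty() ? null : key;
    }
}
```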

bug 186342#c14 points out that finding a good default is difficult because different un-annotated library functions have different contracts (e.g., HashMap.get() -> @MaybeNull, JTable.getSelectedColumns() -> @NonNull). Thus either default would produce many false positives. To solve this, one might offer more fine-grained control over 3rd-party code as well, e.g., a nullity profile with externalized defaults and exceptions/contracts.

Actually, a nullity-profile could also be used to map existing annotations within 3rd-party code to a different annotation class.
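
An in-memory sketch of such a nullity profile; the class, the signature format, and the fallback behaviour are all illustrative assumptions:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of a "nullity profile": externalized contracts
// for un-annotated 3rd-party methods, keyed by method signature.
class NullityProfile {
    enum Nullness { NONNULL, MAYBENULL }

    private final Map<String, Nullness> returnContracts =
            new HashMap<String, Nullness>();
    private final Nullness defaultNullness;

    NullityProfile(Nullness defaultNullness) {
        this.defaultNullness = defaultNullness;
    }

    void put(String signature, Nullness n) {
        returnContracts.put(signature, n);
    }

    // Unlisted methods fall back to the configured default.
    Nullness returnNullness(String signature) {
        Nullness n = returnContracts.get(signature);
        return n != null ? n : defaultNullness;
    }
}
```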
