


JSDT/JSDT Code Analytics

Revision as of 02:20, 23 May 2017 by Psuzzi.gmail.com (Talk | contribs) (Improve Closure Compiler)


This document presents the status of the code analysis in JSDT and discusses possible improvements.

Please, feel free to edit this document and to contribute to the discussion.

Current Status

We're using Closure Compiler because of its good performance and architecture (see discussion #).

There are many resources about Closure Compiler: tutorials, FAQs, and design documents.

JSDT 2.0 is missing the content outline and content proposal functionality.

  • In JSDT 1.0 we parsed all the .js files with Rhino and stored the AST in memory. With the full AST in memory, it was easy to generate a content outline tree and then use an inference engine to provide content assist.
  • In JSDT 2.0 we introduced tolerant parsing, but we parse only the .js files that are open in the editor. We also do not keep the full AST in memory, so we cannot generate a full content outline tree, and the content assist is not good enough.

Although JSDT 2.0 is modern and fast, we should fix the content outline and content assist to make users happy (e.g. Bug 510677#c3).

Improve JSDT 2.0

How can we improve JSDT by restoring the content outline and the content proposal?

  • We need to parse all the .js sources with a fast parser and produce an output tree.
  • With the output tree, we should build the content outline: the JavaScript object hierarchy.
  • With the output tree and the current position in the code, we should feed an inference engine to produce content assist proposals.
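The steps above can be sketched with a minimal, hand-rolled content tree. Everything here is illustrative: the node shape and the helper names are hypothetical, not JSDT or Closure Compiler APIs.

```javascript
// Hypothetical content-tree node shape: { name, kind, start, end, children },
// as a fast parser might produce it. This sketches the whole pipeline:
// parse -> output tree -> content outline -> content assist proposals.
const tree = {
  name: '<root>', kind: 'file', start: 0, end: 120, children: [
    { name: 'Cart', kind: 'object', start: 0, end: 80, children: [
      { name: 'add', kind: 'function', start: 20, end: 50, children: [] },
      { name: 'total', kind: 'function', start: 51, end: 79, children: [] },
    ] },
  ],
};

// Content outline: flatten the tree into an indented list of symbols.
function outline(node, depth = 0, out = []) {
  if (node.name !== '<root>') out.push('  '.repeat(depth - 1) + node.name);
  for (const child of node.children) outline(child, depth + 1, out);
  return out;
}

// Content assist: given a caret offset, descend to the innermost node
// containing the offset and propose its children as completions.
function proposalsAt(node, offset) {
  const inner = node.children.find(c => c.start <= offset && offset <= c.end);
  return inner ? proposalsAt(inner, offset) : node.children.map(c => c.name);
}

console.log(outline(tree));         // [ 'Cart', '  add', '  total' ]
console.log(proposalsAt(tree, 10)); // [ 'add', 'total' ]
```

A real inference engine would of course use scoping and type information rather than plain containment, but the data flow (tree in, outline and proposals out) is the same.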


Ideas

Possible ideas to restore content outline and content assist:

  • Use Closure Compiler to parse all files and generate an in-memory content tree; then provide content assist with an inference engine.
  • Use Tern.js to read the whole source code, generate the content tree, and provide the content assist.
  • Write a Node.js program that loads all the .js files and generates a content tree in JSON format, to be stored as an Eclipse project file. Then provide content assist with an inference engine.

Below is a list of possible approaches that use a tolerant parser to produce a partial tree we could use both for the outline tree and for content assist.

Please, comment below if you think you can help!

Use Closure Compiler AST

Closure Compiler is a tolerant JavaScript-to-JavaScript compiler written in Java. The current version supports ES6, and we are already using it for JSDT parsing.

After CC parsing, we convert its ParseTree into a jsdt.dom.AST, with the class ClosureCompilerASTConverter.

We know there is at least one Eclipse-based IDE using CC to generate its content outline tree, and the result looks good (i.e. this, this).

We could use Closure Compiler's ParseTree to generate a tree that is reusable for the outline tree and can also serve as input for the inference engine.

If reusing the parse tree is not an option, we could check whether writing a compiler pass would be useful for generating a content tree.

Also, we could ask on the Closure Compiler forum which direction they suggest.

Improve with Tern

Tern is a code-analysis engine for JavaScript, written in JavaScript. It uses the Acorn parser and provides JavaScript type inference. It is a good model for the functionality we want to improve (demo), and it is currently included in JBossTools. Its downside is that it requires loading all the source files into an HTTP server, which makes loading slow.

Ideally, we could improve communication times by using file-based communication instead of HTTP. However, even in this case, we would still have the problem of exchanging data between Tern (JavaScript) and JSDT (Java).

Improve with Node.js Program

We could write a .js program that loads all the available source files and outputs a JSON file, stored on disk, with all the information needed to build a content tree. Then we could use the .json file to infer suggestions.
