
JSDT/JSDT Code Analytics

This document presents the current status of code analysis in JSDT and discusses possible improvements. Please feel free to edit this document and contribute to the discussion. The more participation we have, the better our chances of going in the right direction.

JSDT Code Analytics

JSDT 2.0 is missing the content outline and content proposal functionality.

  • In JSDT 1.0 we parsed all the .js files with Rhino and stored the AST in memory. With the full AST in memory, it was easy to generate a content outline tree and then to use an inference engine to provide content assist.
  • In JSDT 2.0 we introduced tolerant parsing, but we parse only the .js files that are open in the editor. We also do not load the full AST into memory, so we do not generate a full content outline tree and the content assist is not good enough.

Although JSDT 2.0 is modern and fast, we should fix the content outline and content assist to keep users happy (e.g. Bug 510677#c3).

Improve JSDT 2.0

How can we improve JSDT by restoring the content outline and the content proposal?

  • We need to parse all the .js sources with a fast parser and produce an output tree.
  • With the output tree, we should build the content outline: the JavaScript object hierarchy.
  • With the output tree and the current position in the code, we should feed an inference engine that produces content assist proposals (see the sketch just after this list).
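
Just to make the plan concrete, here is a minimal Java sketch of that pipeline. All of the interface and class names below are hypothetical and are not existing JSDT API; they only illustrate the three steps above.

  import java.nio.file.Path;
  import java.util.List;

  interface ContentTreeParser {
    // Step 1: parse every .js source quickly and produce a lightweight tree.
    ContentTree parse(List<Path> jsSources);
  }

  interface OutlineBuilder {
    // Step 2: derive the JavaScript object hierarchy shown in the Outline view.
    OutlineNode buildOutline(ContentTree tree);
  }

  interface InferenceEngine {
    // Step 3: combine the tree with the caret position to propose completions.
    List<String> propose(ContentTree tree, Path file, int offset);
  }

  class ContentTree { /* functions, objects and properties per file */ }
  class OutlineNode { /* label + children, mapped to the Outline view */ }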

Current Status

We're using Closure Compiler because of its good performance and architecture (see discussion #).

There are a lot of resources about CC: tutorials, FAQs, design documents.

We know there are IDEs using Closure Compiler to provide these features, and we could extend Closure Compiler for JSDT (see the Closure Compiler section below).

Ideas

Possible ideas to restore content outline and content assist:

  • use Closure Compiler to parse all files and generate an in-memory content tree; then provide content assist with an inference engine.
  • use Tern.js to read the whole source code, generate the content tree and provide the content assist.
  • write a Node.js program that loads all the .js files and generates a content tree in JSON format, to be stored as an Eclipse project file. Then, provide content assist with an inference engine.

Below are some ideas for using a tolerant parser to produce a partial tree that we could use both for the outline tree and for content assist.

Please remember that any comment will be useful!

Closure Compiler

Closure Compiler is a tolerant JavaScript-to-JavaScript compiler written in Java. The current version supports ES6, and we're using it for JSDT parsing.

After CC parsing, we convert its ParseTree into a jsdt.dom.AST, with the class ClosureCompilerASTConverter.

We know it is possible to generate a content tree with it (e.g. this, this).

I think we could tweak the CC parsing and add an extra pass that generates a tree we can use for the content outline, and as input to the inference engine to provide content assist.
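
As a rough illustration of such an extra pass, here is a sketch that parses a source with the high-level Compiler API and prints the function names it finds. This is an assumption about the API, not code from ClosureCompilerASTConverter: enum values such as ECMASCRIPT6 and methods such as getRoot() differ between Closure Compiler releases.

  import com.google.javascript.jscomp.Compiler;
  import com.google.javascript.jscomp.CompilerOptions;
  import com.google.javascript.jscomp.SourceFile;
  import com.google.javascript.rhino.Node;

  public class OutlinePassSketch {
    public static void main(String[] args) {
      Compiler compiler = new Compiler();
      CompilerOptions options = new CompilerOptions();
      options.setLanguageIn(CompilerOptions.LanguageMode.ECMASCRIPT6);

      // Parse a single source; we never ask for optimized output.
      SourceFile externs = SourceFile.fromCode("externs.js", "");
      SourceFile input = SourceFile.fromCode("app.js",
          "function hello(name) { return 'hi ' + name; }");
      compiler.compile(externs, input, options);

      // Walk the AST and print function names, as a stand-in for
      // building the content outline tree.
      printFunctions(compiler.getRoot());
    }

    private static void printFunctions(Node node) {
      if (node.isFunction()) {
        // The first child of a FUNCTION node is its NAME node
        // (the name is empty for anonymous functions).
        String name = node.getFirstChild().getString();
        System.out.println(name.isEmpty() ? "<anonymous>" : name);
      }
      for (Node child = node.getFirstChild(); child != null; child = child.getNext()) {
        printFunctions(child);
      }
    }
  }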


Tern

Tern.js is a code-analysis engine for JavaScript, written in JavaScript. It is a good model for the functionality we want to improve. As a downside, it requires loading all the source files, which must be sent via POST to an HTTP server.
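
For reference, a Tern request is just a JSON document POSTed to a running Tern server. The sketch below assumes a server is already listening on localhost:1234 and uses only JDK classes; the query shape follows the published Tern protocol, but the port and the file contents are made up for this example.

  import java.io.BufferedReader;
  import java.io.IOException;
  import java.io.InputStreamReader;
  import java.io.OutputStream;
  import java.net.HttpURLConnection;
  import java.net.URL;
  import java.nio.charset.StandardCharsets;

  public class TernClientSketch {
    public static void main(String[] args) throws IOException {
      // Whole file text plus a completions query at line 0, column 4 ("docu|").
      String body =
          "{ \"files\": [ { \"type\": \"full\", \"name\": \"main.js\", \"text\": \"docu\" } ],"
        + "  \"query\": { \"type\": \"completions\", \"file\": \"main.js\","
        + "               \"end\": { \"line\": 0, \"ch\": 4 } } }";

      HttpURLConnection conn =
          (HttpURLConnection) new URL("http://localhost:1234/").openConnection();
      conn.setRequestMethod("POST");
      conn.setDoOutput(true);
      conn.setRequestProperty("Content-Type", "application/json");
      try (OutputStream out = conn.getOutputStream()) {
        out.write(body.getBytes(StandardCharsets.UTF_8));
      }

      // The response is a JSON object containing a "completions" array.
      try (BufferedReader in = new BufferedReader(
          new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
        in.lines().forEach(System.out::println);
      }
    }
  }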

Js Program

We could write a .js program which loads all the available source files and then outputs a .json file, stored on disk, with all the information needed to build a content tree. Then, we could use the .json file to infer the suggestions.
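
To illustrate how the Eclipse side could consume such a file, here is a sketch that deserializes a hypothetical content-tree JSON with Gson and answers a prefix query. The file path, the node fields and the JSON layout are all invented for this example.

  import com.google.gson.Gson;
  import java.io.Reader;
  import java.nio.file.Files;
  import java.nio.file.Path;
  import java.util.ArrayList;
  import java.util.List;

  public class ContentTreeLookupSketch {
    // Hypothetical shape of one node in the generated .json content tree.
    static class ContentNode {
      String name;                 // e.g. "doWork"
      String kind;                 // "function", "object", "property", ...
      List<ContentNode> children = new ArrayList<>();
    }

    public static void main(String[] args) throws Exception {
      // The Node.js program would have written this file into the Eclipse project.
      try (Reader reader = Files.newBufferedReader(Path.of(".jsdt/content-tree.json"))) {
        ContentNode root = new Gson().fromJson(reader, ContentNode.class);
        // Crude stand-in for content assist on the typed prefix "doW".
        collect(root, "doW").forEach(System.out::println);
      }
    }

    // Collect every node whose name starts with the typed prefix.
    static List<String> collect(ContentNode node, String prefix) {
      List<String> hits = new ArrayList<>();
      if (node.name != null && node.name.startsWith(prefix)) {
        hits.add(node.name + " (" + node.kind + ")");
      }
      for (ContentNode child : node.children) {
        hits.addAll(collect(child, prefix));
      }
      return hits;
    }
  }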
