JSDT/JSDT Code Analytics
This document presents the status of the code analysis in JSDT and discusses possible improvements.
Please feel free to edit this document and contribute to the discussion.
JSDT 2.0 is missing the content outline and content proposal functionality.
- In JSDT 1.0 we parsed all the .js files with Rhino and stored the AST in memory. With the full AST in memory, it was easy to generate a content outline tree and then use an inference engine to provide content assist.
- In JSDT 2.0 we introduced tolerant parsing, but we parse only the .js files that are open in the editor. We also do not load the full AST in memory and do not generate a full content outline tree, so content assist is not good enough.
Although JSDT 2.0 is modern and fast, we should fix the content outline and content assist to make users happy (see Bug 510677#c3).
Improve JSDT 2.0
How can we improve JSDT by restoring the content outline and the content proposal?
- We need to parse all the .js sources with a fast parser and produce an output tree.
- With the output tree and the current position in the code, we should feed an inference engine to produce content assist proposals.
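The two steps above can be sketched in a few lines. This is a minimal illustration only: the naive scanner stands in for a real tolerant parser, and all names (`buildOutlineTree`, `proposeAt`) are hypothetical.

```javascript
// Step 1: "parse" a source string into a flat outline tree of declarations.
// A real implementation would use a tolerant parser instead of this regex scan.
function buildOutlineTree(source) {
  const nodes = [];
  const re = /\bfunction\s+([A-Za-z_$][\w$]*)|\bvar\s+([A-Za-z_$][\w$]*)/g;
  let m;
  while ((m = re.exec(source)) !== null) {
    nodes.push({
      kind: m[1] ? "function" : "variable",
      name: m[1] || m[2],
      offset: m.index
    });
  }
  return nodes;
}

// Step 2: a toy "inference engine" that proposes the declarations visible
// before the current cursor position, filtered by the typed prefix.
function proposeAt(tree, offset, prefix) {
  return tree
    .filter(n => n.offset < offset && n.name.startsWith(prefix))
    .map(n => n.name);
}

const src = "function render() {}\nvar count = 0;\nre";
const tree = buildOutlineTree(src);
console.log(proposeAt(tree, src.length, "re")); // [ 'render' ]
```

The same tree serves both consumers: the outline view walks it directly, while content assist queries it with an offset and a prefix.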
Possible ideas to restore content outline and content assist:
- use Closure Compiler to parse all files and generate an in-memory content tree; then provide content assist with an inference engine.
- use Tern.js to read the whole source code, generate the content tree and provide the content assist.
- write a Node.js program that loads all the .js files and generates a content tree in JSON format, to be stored as an Eclipse project file. Then provide content assist with an inference engine.
Below is a list of possible ideas for using a tolerant parser to produce a partial tree that we could use both for the outline tree and for content assist.
Please comment below if you think you can help!
Use Closure Compiler AST
After Closure Compiler parsing, we convert its ParseTree into a jsdt.dom.AST with the ClosureCompilerASTConverter class.
We could use the Closure Compiler's parse tree to generate a tree that is reusable by the outline view and that can be used as input for the inference engine.
If reusing the parse tree is not an option, we could check whether writing a custom compiler pass would be useful for generating a content tree.
We could also kindly ask on the Closure Compiler forum which direction they suggest.
Improve with Tern
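To flesh out this idea: Tern runs as a service that answers JSON requests, so JSDT would mainly need to keep files synchronized and translate completion queries. The sketch below builds a "completions" request following Tern's JSON protocol; the commented-out server wiring assumes the tern npm package and is untested here.

```javascript
// Build a Tern "completions" request document. The shape follows Tern's
// documented JSON protocol: a list of files plus a query with a cursor offset.
function buildCompletionRequest(fileName, text, offset) {
  return {
    files: [{ type: "full", name: fileName, text: text }],
    query: {
      type: "completions",
      file: fileName,
      end: offset,   // cursor position as a character offset
      types: true    // ask Tern to infer a type for each proposal
    }
  };
}

// Sketch of serving the request (requires the tern npm package):
// const tern = require("tern");
// const server = new tern.Server({});
// const src = "var greeting = 'hi';\ngree";
// server.request(buildCompletionRequest("app.js", src, src.length),
//                (err, res) => console.log(res.completions));
```

Because the protocol is plain JSON, the same request could be served by an embedded Tern server or by an external Node.js process that JSDT talks to.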
Improve with Node.js Program
We could write a Node.js program that loads all the available source files and outputs a JSON file with all the information needed to build a content tree, storing it on disk. We could then use the .json file on disk to infer the suggestions.