Doug, that’s a good start! I’ll add my comments using the same sections that are in your document.
The target platform currently defines the default binary parser – similar to a tool specifying its default error parser.
Error parsers can be invoked with the output produced by the tool invocation (preferred) or with the output from the entire build process, depending upon the capabilities of the Builder.
Another important distinction between builders is whether the builder generates the build file, or whether the build file is created/controlled explicitly by the user. If the build file is controlled by the user, then certain UI, such as Tool-chain editing, probably needs to be disabled.
I’d like to add this object to the model, rather than specifying library usage via project templates as in your proposal. As a separate object, a library could still be used in a project template, but it could also be added by a user to any existing project.
A library is a set of reusable components, defined by the header files that declare its API and the object files or shared objects (DLLs) that provide its implementation. Libraries can be used in project templates or added by a user to an existing configuration. The library provides information on the include paths and the linker paths to be added to the build environment. The library can also provide a mapping of header files to library file names. For example, including Foo.h in a source file tells the Tool-chain to add Foo.lib to the linker input.
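To make the header-to-library mapping concrete, here is a minimal sketch of what such a Library object might look like. The class, method, paths, and header names are all invented for illustration; this is not CDT API.

```python
# Hypothetical sketch of a Library model object: include/linker paths plus a
# header-to-library-file mapping. Names and paths are illustrative only.

class Library:
    def __init__(self, name, include_paths, linker_paths, header_to_lib):
        self.name = name
        self.include_paths = include_paths    # contributed to the compiler include path
        self.linker_paths = linker_paths      # contributed to the linker search path
        self.header_to_lib = header_to_lib    # e.g. {"Foo.h": "Foo.lib"}

    def libs_for_headers(self, included_headers):
        """Return the linker inputs implied by the headers a source file includes."""
        return sorted({self.header_to_lib[h]
                       for h in included_headers if h in self.header_to_lib})

foo = Library("foo",
              include_paths=["C:/libs/foo/include"],
              linker_paths=["C:/libs/foo/lib"],
              header_to_lib={"Foo.h": "Foo.lib", "FooUtil.h": "FooUtil.lib"})

# A source file that includes Foo.h causes Foo.lib to be added as linker input;
# headers the library does not know about (stdio.h) are simply ignored.
print(foo.libs_for_headers(["Foo.h", "stdio.h"]))  # -> ['Foo.lib']
```

The point of modeling this as a first-class object, rather than baking it into a project template, is that the same mapping can be applied whenever a user adds the library to an existing project.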
This puts Scanner Discovery in its proper place! I have one addition:
The decision whether to use Scanner Discovery is per Tool-chain, per Builder. For example, a Tool-chain may support both the Linux Managed Make Builder and the External Builder. When the Tool-chain is used with the Linux Managed Make Builder, it may be able to provide all of the scanner information itself (including settings that the user entered into its build system specific property pages). However, when it is used with the External Builder, it may need to use the Scanner Discovery service.
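A small sketch of that decision, keyed on the (Tool-chain, Builder) pair rather than on the Tool-chain alone. The names and the default are invented assumptions, not anything in the proposal.

```python
# Illustrative only: whether to use the Scanner Discovery service is looked up
# per (tool-chain, builder) pair. Keys and values here are invented examples.

SCANNER_DISCOVERY = {
    ("linux-gcc", "managed-make"): False,  # tool-chain supplies scanner info itself
    ("linux-gcc", "external"):     True,   # fall back to Scanner Discovery
}

def uses_scanner_discovery(tool_chain, builder):
    # Assumed default: when a pair is unknown, discover settings from build output.
    return SCANNER_DISCOVERY.get((tool_chain, builder), True)

print(uses_scanner_discovery("linux-gcc", "managed-make"))  # -> False
print(uses_scanner_discovery("linux-gcc", "external"))      # -> True
```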
Build Settings UI
Build settings (tool “options”) contribute a great deal to the complexity of the current MBS – both in volume and in the need for dynamic behavior. As you point out, build settings are only useful to a subset of builders, and so are probably better handled by a separate “model” that can be referenced from the other Build Model objects – e.g. Tools can have options and Tool-chains can have pseudo options that impact actions of the Tool-chain across multiple Tools.
There must be an “inheritance” model for Build settings. E.g. they can be specified at the Tool level, the Tool-chain level, the workspace level, the project level, the configuration level, the folder level, and the file level. The current MBS implementation handles most of this, but maybe not as simply as it could. I know the Wind River build system has a well-defined model that could be used as another example of the requirements for the inheritance model.
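The essence of such an inheritance model is that the most specific level that defines an option wins, with lookup falling back through the enclosing levels. Here is a minimal sketch; the level ordering and names are my assumption, not the MBS definition.

```python
# Assumed sketch of build-setting "inheritance": resolve an option by walking
# the levels from most specific to most general. The ordering is illustrative.

LEVELS = ["file", "folder", "configuration", "project", "workspace",
          "tool", "tool-chain"]

def resolve_option(option, settings_by_level):
    """settings_by_level maps a level name to an {option: value} dict."""
    for level in LEVELS:
        values = settings_by_level.get(level, {})
        if option in values:
            return values[option], level
    return None, None

settings = {
    "tool-chain":    {"optimization": "-O0", "debug": "-g"},
    "configuration": {"optimization": "-O2"},
    "file":          {"optimization": "-O3"},  # override for one hot-spot file
}

print(resolve_option("optimization", settings))  # -> ('-O3', 'file')
print(resolve_option("debug", settings))         # -> ('-g', 'tool-chain')
```

Note that even this toy version raises the questions a real model must answer: whether a more specific level replaces or appends to an inherited value (appending matters for list-valued options such as include paths), and where Tool/Tool-chain defaults sit relative to the project hierarchy.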
You haven’t discussed persistence of the configuration settings in the CDT project file. The current model made that fairly straightforward since the build definition schema and the project persistence schema were virtually the same.
There is a generic “philosophical” issue that you touch on that I’d like to discuss further, and that is how much of the model is statically defined (e.g. in an XML schema) vs. how much is created by code. I’ve been involved with GUI toolkits since before I was involved with IDEs. There is a continual going back and forth in GUI toolkit land regarding whether GUIs are specified in a static GUI-specific language (schema) vs. whether GUIs are created by code. Even when code is the choice, there is usually a GUI Builder that generates the code for you. I prefer the static approach, but of course there are times when you need dynamic behavior, and integrating the two can be tricky. The current MBS model probably went too far in trying to map inherently dynamic behavior into a static model - for example, by trying to handle too much automatically when upgrading from one version of a Tool-chain to another version. I hope we don’t swing the pendulum back too far.
o The current model and implementation are too complex, as evidenced by the fact that it has been difficult/impossible to add other builders. In the MBS model:
o Options dominate the definitions and clutter the rest of the model
o IDs used in the model are too long and clutter the rest of the model
o Extensibility for adding dynamic behavior is haphazard
However, the current MBS model supports some behavior (but not necessarily the specification or implementation) that we can’t afford to lose.
o The ability to define multiple versions of build model objects (e.g. Tools, Tool-chains, Builders, Libraries) and the ability to upgrade old projects to use new versions at the user’s request.
o The ability for a Tool-chain/Library(s) to automatically set up the build configuration environment.
Here’s a new one:
o The ability for a Tool-chain/Library(s) to automatically set up the debugger environment.
This is important for the same reason that setting up the build environment is. That is, it is currently possible to use CDT on a system with multiple Tool-chains and/or multiple versions of the same Tool-chain (e.g. gcc) installed. The Tool-chain is specified per Configuration, and so a single build environment (e.g. the system defined environment) may not be sufficient. A single debug environment is also not sufficient. Specifically, if I have built an application with one version of a Tool-chain/Library(s), then I want to ensure that I am using the shared objects that correspond to that version of the Tool-chain/Library(s), and not whatever happens to be set in the system environment.
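To illustrate, here is a sketch of a Tool-chain contributing its own shared-object directory to a debug launch environment so the debuggee resolves the matching runtime rather than whatever the system environment points at. The paths and function name are invented for the example; only the use of LD_LIBRARY_PATH as the Linux dynamic-loader search path is real.

```python
# Assumed sketch: build a per-launch debug environment in which the
# tool-chain's library directories take precedence over the system ones.
# All directory names here are invented.

import os

def debug_environment(base_env, toolchain_lib_dirs):
    env = dict(base_env)
    # Prepend the tool-chain's library directories so its shared objects
    # shadow any same-named libraries already on the search path.
    existing = env.get("LD_LIBRARY_PATH", "")
    parts = toolchain_lib_dirs + ([existing] if existing else [])
    env["LD_LIBRARY_PATH"] = os.pathsep.join(parts)
    return env

system_env = {"LD_LIBRARY_PATH": "/usr/lib"}
gcc34_env = debug_environment(system_env, ["/opt/gcc-3.4/lib"])
print(gcc34_env["LD_LIBRARY_PATH"])  # on Linux: '/opt/gcc-3.4/lib:/usr/lib'
```

The same mechanism would let each Configuration's debugger launch carry the environment appropriate to the Tool-chain/Library(s) version it was built with, rather than inheriting a single system-wide setting.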