
Open-Measured-Data-Management-WG/Quality Guide

Revision as of 07:47, 5 December 2014 by C.weyermann.peak-solution.de (First draft of the quality guide)


THIS IS A DRAFT. This guide collects the quality guidelines which apply to the MDM@WEB project. It is a living document and is updated regularly. The Quality Committee maintains this document; nobody else should make edits. In case something is missing, should be changed or should be removed, please file a bug via Bugzilla.


Code Quality

This chapter collects all rules regarding code quality.

Coding Conventions

The coding conventions from Eclipse apply: Coding_Conventions.

Code Documentation

The code has to be documented (JavaDoc and source code comments). This improves readability, transparency and stability of the code; for complex structures in particular, it greatly improves understandability.

  • Code has to be documented in English (understandability across the whole community)
  • Special care has to be taken with interface methods. Public methods, their usage and their behavior in case of an error have to be documented (reduces error proneness throughout the system)
  • Complex algorithms have to be documented in-code (comprehensibility)
  • The author has to be documented (responsibility)

Code Format

Eclipse currently lacks a common formatter for its technology projects. The following formatter is used: mdmweb_formatter.xml

Documentation

To assemble a software system from its components, the components have to be documented; otherwise a system integrator does not know what the system is doing. The system integrator must be able to find the necessary components easily. The documentation must cover all use cases and the configuration. Therefore DITA has been chosen as the documentation system. The documentation is placed in /src/main/doc.

Structure

The components are described by certain agreed topics:

Each topic is listed with its DITA topic type, its file organization, its file name and its header:

  • Titlepage (concept): one topic; file titlepage.con.en.dita; header: component name
  • Purpose and Context (concept): one topic; file purpose.con.en.dita; header: Purpose and Context
  • Requirements (concept): one topic; file requirements.con.en.dita; header: Requirements
  • Service Description and Parameters (reference): one file per service, with one topic for the service description plus one per parameter; file service.ref.en.dita; header: Service <service name>
  • Concept of Implementation (concept): one topic; file implementation.con.en.dita; header: Concept of Implementation
  • Dependencies (concept): one topic; file dependencies.con.en.dita; header: Dependencies
  • External Configuration Files and System Parameters (reference): multiple topics, one per parameter plus one for the overview; file configuration.ref.en.dita; header: External Configuration Files and System Parameters
  • Known Bugs (reference): one topic, one section per bug; file bugs.ref.en.dita; header: Known Bugs
  • Work Arounds for Known Bugs (task): one topic, one section per bug; file workarounds.tsk.en.dita; header: Work Arounds for Known Bugs
  • Organization (concept): one topic; file organization.con.en.dita; header: Organization
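As an illustration, a minimal topic file following this scheme might look like the sketch below. The component and its text are made up; only the DITA concept skeleton and the file naming are prescribed above.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE concept PUBLIC "-//OASIS//DTD DITA Concept//EN" "concept.dtd">
<!-- purpose.con.en.dita: "Purpose and Context" topic of a hypothetical component -->
<concept id="purpose" xml:lang="en-US">
  <title>Purpose and Context</title>
  <conbody>
    <p>Short overview of the component in complete sentences.</p>
  </conbody>
</concept>
```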

Chapters

The following chapters are mandatory.

Titlepage

  • Purpose: Identification of the component
  • Not-Purpose: -
  • Target Audience: Configurator, Developer
  • Structure:
    • Title
    • Version
    • Status
    • History
      • Version
      • Date
      • Author
      • Changes
  • Rules: -
  • DITA Topic-Type: concept

Purpose and Context

  • Purpose:
    • Identifies why the component has been developed.
    • Describes the system context and the boundary conditions
    • Lists the functional and nonfunctional requirements
  • Not-Purpose:
  • Target Audience: Configurator, Developer
  • Structure:
    • Title
    • Short overview of the component
    • Optional: Figures
    • Optional: Notes
  • Rules:
    • Describe the component in short and complete sentences
    • The following boundary conditions have to be listed:
      • Software
      • Used business objects
      • Platform for which the component has been developed
    • If applicable, changes to the application model
  • DITA Topic-Type: concept


Requirements

  • Purpose: List of all relevant requirements
  • Not-Purpose: Description of the technical realization of the single requirements
  • Target Audience: Configurator, Developer
  • Structure:
    • Title
    • Functional Requirements
    • Non Functional Requirements
  • Rules: Use the same names and identifiers as in the requirements management tool
  • DITA Topic-Type: concept


Service Description and Parameters

  • Purpose:
    • Describes the purpose and the function of the service
    • States the version of the service
    • Describes the purpose, the values and the category of the parameters for the service

Note: Events are treated as services

  • Not-Purpose:
  • Target Audience: Configurator, Developer
  • Structure:
    • Service name
    • Service version
    • Mime-Type
    • Purpose
    • Purpose description
    • Used Interfaces
      • Interface name
      • Interface version
    • Description of the function
    • Description of the parameters
  • Rules:
    • The purpose has to be described in one short, complete sentence
    • For complex functions intermediate headers have to be used
    • If there are multiple services, they are described in separate reference topics
    • Parameter description
      • Every parameter is tracked in a separate reference topic
      • It has to be stated whether the parameter is mandatory or optional
      • The purpose has to be described in one short, complete sentence
      • Default values have to be provided
      • Format in case the parameter requires special formatting (e.g. dates)
      • In case the parameter can be chosen from a list, all values have to be provided:
        • Every list entry has to have its own entry
        • The purpose has to be described in one short, complete sentence
  • DITA Topic-Type: reference


Concept of Implementation

  • Purpose:
    • Compares the requirements to the actual technical implementation
    • Gives a short overview of the technical implementation of the component
  • Not-Purpose:
    • The requirements from the requirements section should not be repeated
  • Target Audience: Developer
  • Structure:
    • Title
    • Per Requirement:
      • Reference to the use-case
      • Technical implementation
      • Description of the technical implementation
  • Rules:
    • One section per requirement
    • The requirement has to be in the title. Requirements which are tracked in a requirements tracking tool should be referenced. If that is not possible, apply the rules for topics
    • In the description technical aspects like programming language, target system, constraints and legal constraints should be taken into account
    • In case alternatives have been considered, they should be briefly explained, along with why they were rejected
    • Descriptions have to be in a useful order, e.g.:
      • Start condition
      • Good case
      • Error case
      • End condition
      • Used services
      • Used algorithms
      • Design decisions
      • Behavior under load
    • For assemblies only the configuration has to be described
  • DITA Topic-Type: concept

Dependencies

  • Purpose: Provide name and version of the used objects. This also includes libraries.
  • Not-Purpose:
    • Components which depend on this component should not be listed
    • Used services should not be listed; the topic "Service Description and Parameters" covers those
  • Target Audience: Configurator, Developer
  • Structure:
    • Title
    • Object type
    • Per object type
      • Name
      • Version
  • Rules:
    • Each object type should have its own section
    • Object type should be in the name of the section
    • Names of the objects have to be the same as in the documentation. No abbreviations or slang.
  • DITA Topic-Type: concept


External Configuration Files and System Parameters

  • Purpose:
    • Describes the structure of external configuration files
    • Provides a list of all configuration parameters and their purpose
  • Not-Purpose: -
  • Target Audience: Configurator, Developer
  • Structure:
    • Title
    • Path
    • File structure
    • Parameter description
  • Rules:
    • Path has to be relative to the component. The actual path has to be marked up with the filepath element.
    • File structure should be included either as a screenshot or by using codeblock markup
    • Parameter description:
      • Every parameter is tracked in a separate reference topic
      • It has to be stated whether the parameter is mandatory or optional
      • The purpose has to be described in one short, complete sentence
      • Default values have to be provided
      • Format in case the parameter requires special formatting (e.g. dates)
      • In case the parameter can be chosen from a list all values have to be provided:
        • Every list entry has to have its own entry
        • The purpose has to be described in one short, complete sentence
  • DITA Topic-Type: reference


Known Bugs

  • Purpose:
    • Provides a list of all known bugs and how they appear
    • Links to work arounds
  • Not-Purpose:
    • To provide an exact copy of the bug tracking system
  • Target Audience: Configurator, Developer
  • Structure:
    • Title
    • Per bug
      • Title
      • Bugzilla ID
      • Expected (SHOULD) behavior
      • Work around
  • Rules:
    • A short description of the actual behavior (IS behavior) should be provided. This also includes the boundary conditions.
    • Bug must be referenced to a Bugzilla bug via an ID
  • DITA Topic-Type: reference


Work Arounds for Known Bugs

  • Purpose: Step-by-step description how to avoid the known bugs
  • Not-Purpose:
  • Target Audience: Configurator, Developer
  • Structure:
    • Target of the work around
    • Name of the bug
    • Preconditions
    • Steps
    • Verifiable results
  • Rules:
    • The target of the workaround has to be provided in the format [known bug][in use-case]
    • Use the same name as in the known bugs section
    • Provide preconditions if applicable. Trivial conditions should be avoided
    • If the success of the work around can be verified easily the steps to do so should be provided
  • DITA Topic-Type: task


Organization

  • Purpose:
    • Makes it possible to find the responsible contact for further questions
    • Provides a link to the source code
    • Provides legal constraints which are applicable
  • Not-Purpose:
  • Target Audience: Configurator, Developer
  • Structure:
    • Company name
    • Department
    • Source code
      • Link to the source code
    • Legal constraints
      • Description of each constraint
  • Rules:
    • Responsible person should not be referenced by name, as people might change company and/or tasks.
    • Legal constraints also include licenses and therefore constraints about the usage
  • DITA Topic-Type: concept

Folder Structure

Every topic can be placed in a file of its own but does not have to be. The given file structure should be used. Every component has to be placed in its own folder.

  • The folder is named according to the naming conventions for components
  • The folder has the following sub folders
    • Subfolder common: This folder includes the conref links within a component. If no conref links are used, the folder can be omitted.
    • Subfolder resources: This folder includes all images (e.g. screen shots) which are used in the topic. If no images or other resources are used, the folder can be omitted.
    • Subfolder topics: This folder contains the topics of the component. It does not have any subfolders.
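A resulting component folder could look like this; the component name and topic files are hypothetical:

```
org.eclipse.openmdm.example/
├── common/        (conref targets; omit if unused)
├── resources/     (images, e.g. screenshots; omit if unused)
└── topics/
    ├── titlepage.con.en.dita
    ├── purpose.con.en.dita
    └── requirements.con.en.dita
```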

Multi-language support

The documentation has to be provided in English. Further languages can be added. All language variants are stored in the "topics" folder. The file names carry the suffix "en" for US English or "de" for German, e.g. "overview.con.en.dita". Language-dependent files in the resources or common folder also carry these suffixes. The xml:lang attribute has to be set correctly.
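The suffix scheme can be sketched in plain Java; the helper below is illustrative only and not part of the project's tooling:

```java
// Sketch: derives a localized variant of a DITA file name by swapping the
// language suffix in "<name>.<type>.<lang>.dita" (e.g. "en" -> "de").
public class DitaFileNames {

    /** Replaces the language part of a DITA topic file name. */
    public static String localize(String fileName, String language) {
        String[] parts = fileName.split("\\.");
        if (parts.length < 4 || !"dita".equals(parts[parts.length - 1])) {
            throw new IllegalArgumentException("not a DITA topic file name: " + fileName);
        }
        parts[parts.length - 2] = language;
        return String.join(".", parts);
    }

    public static void main(String[] args) {
        System.out.println(localize("overview.con.en.dita", "de")); // overview.con.de.dita
    }
}
```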


Tests

A central topic of software quality is testing the software automatically. A test is used to prove compliance with the requirements and the state of the quality. Testing should be done throughout the project: the later a bug is discovered, the more it costs to fix it.

Testability of bundles

Every bundle has to have automated tests for all its covered use cases. Test coverage should be as high as possible with a moderate amount of work. For every global interface automated tests have to be provided, preferably for every interface method. For all other components, correctness and stability should be proven by tests. Every component should be testable by design. Care has to be taken that components can handle all possible input values. This also covers undesired inputs (null, illegal combinations, misconfigurations etc.). UI tests are currently not in focus.

Mock-Implementations

Every bundle must provide a mock implementation for its interface. The format and behavior are not standardized. However, the developer should keep the use cases of this mock implementation in mind.

JUnit-Tests

Every bundle must provide unit tests for its internal implementation. The unit tests must be placed in "src/test/java". The name of the method under test must be contained in the test method. Test classes should be suffixed with "Test".
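A minimal sketch of these conventions, assuming JUnit 4; the class under test (PathUtil) and its method are made up for illustration, only the naming pattern is prescribed by the guide:

```java
// Sketch of the unit test naming rules, assuming JUnit 4.
import org.junit.Assert;
import org.junit.Test;

// Class under test; in the project this would live in src/main/java.
class PathUtil {
    static String normalize(String path) {
        return path.replace('\\', '/');
    }
}

// Placed in src/test/java, suffixed with "Test"; the name of the method under
// test ("normalize") is contained in each test method name.
public class PathUtilTest {

    @Test
    public void normalizeReplacesBackslashesWithSlashes() {
        Assert.assertEquals("a/b/c", PathUtil.normalize("a\\b\\c"));
    }
}
```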

Test Organization

Tests have to be organized in Test Suites. Every package should contain a Test Suite which executes all Tests in this package and its subpackages. On the top level package a Test Suite with the name "AllTests" should be defined, which executes all tests. Every Test Suite should be suffixed with "Tests".
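A sketch of the suite convention, again assuming JUnit 4. In the project, AllTests would be a top-level class in the top-level package and per-package suites would be suffixed with "Tests"; everything is nested into one file here only to keep the example self-contained:

```java
// Sketch of JUnit 4 test suites: a suite class carries no code of its own,
// the annotations drive which tests are executed.
import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Suite;

public class SuiteExample {

    // Stands in for one of the package's test classes.
    public static class ExampleTest {
        @Test
        public void examplePasses() {
            Assert.assertTrue(true);
        }
    }

    // Executes all listed tests; per-package "...Tests" suites would be listed here too.
    @RunWith(Suite.class)
    @Suite.SuiteClasses({ ExampleTest.class })
    public static class AllTests {
        // intentionally empty: the annotations drive the suite
    }
}
```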

Exceptions Tests

It should also be tested that exceptions are thrown in case of misuse of the component. This can be done with the @Test(expected = ...) syntax.
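For example (JUnit 4; the Validator class is hypothetical):

```java
// Sketch: verifying that misuse of a component throws the expected exception.
import org.junit.Test;

// Hypothetical class under test.
class Validator {
    static void checkNotEmpty(String value) {
        if (value == null || value.isEmpty()) {
            throw new IllegalArgumentException("value must not be empty");
        }
    }
}

public class ValidatorTest {

    // The test passes only if the declared exception is actually thrown.
    @Test(expected = IllegalArgumentException.class)
    public void checkNotEmptyRejectsNull() {
        Validator.checkNotEmpty(null);
    }
}
```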

Text at assert Methods

The failure message of assert methods should be chosen appropriately. A negative example is Assert.assertEquals("must be equal", a, b), whose message adds no information.
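A sketch of the difference, assuming JUnit 4 (note that the message is the first argument of Assert.assertEquals); the compared values are made up:

```java
// Sketch: a good failure message identifies WHAT is being compared,
// not THAT a comparison happens.
import org.junit.Assert;

public class AssertMessages {

    static void compare(String expected, String actual) {
        // Bad: the message merely restates the assertion and helps nobody:
        // Assert.assertEquals("must be equal", expected, actual);

        // Better: the message names the compared value:
        Assert.assertEquals("name of the imported test step", expected, actual);
    }
}
```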

Test Data

If a test needs the persistence layer, it should create its own data at startup and delete it after completion. This prevents the test from modifying data used by other tests.
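A sketch of this pattern, assuming JUnit 4; the repository class is an in-memory stand-in for the real persistence and entirely made up:

```java
// Sketch: each test class creates its own data and removes it again.
import org.junit.After;
import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;

import java.util.HashMap;
import java.util.Map;

// Stands in for the persistence layer.
class MeasurementRepository {
    private final Map<String, String> store = new HashMap<>();
    void save(String id, String value) { store.put(id, value); }
    String load(String id) { return store.get(id); }
    void delete(String id) { store.remove(id); }
}

public class MeasurementRepositoryTest {

    private MeasurementRepository repository;

    @Before
    public void createOwnTestData() {
        repository = new MeasurementRepository();
        repository.save("test-4711", "42.0"); // data owned by this test class only
    }

    @After
    public void deleteOwnTestData() {
        repository.delete("test-4711"); // leave no trace for other tests
    }

    @Test
    public void loadReturnsSavedValue() {
        Assert.assertEquals("42.0", repository.load("test-4711"));
    }
}
```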

Naming Conventions

The goal of defining naming conventions is improved readability: it should be easy to find the software component for a certain use case.

Bundles

Bundles should be prefixed by "org.eclipse.openmdm". The next level describes the functionality, e.g. api, worker, methodplanner. Then the implementation level is described, e.g. api.ods, worker.delete.

Configuration parameters

These parameters should be defined in CamelCase, e.g. ValidationRegex, DataSourceType, FileFilter

Instance names

Service names should be in the underscore notation, e.g. calc_min_max_worker, testorderaction_orderpath, actionrequest_sensors

System Parameters

To know which system parameter belongs to which bundle, the bundle name should be used as a prefix. This rule can only be dropped if the parameter is used by multiple bundles. In case the system parameter is bound to a certain instance, the instance name follows. Finally there should be a descriptive name. All parts are separated by a dot, e.g. domain.company.mdm.xyzimporter.lookup_path, temporary_path

Error Handling

Every error should be logged with an appropriate error description. A stack trace should be provided. If there is a known workaround, this should also be logged. In openMDM it is not guaranteed that a service is available all the time; it can appear and disappear at any moment. The component must be able to handle this.

The error handling should prevent any runtime errors from appearing on the UI. In case his workflow is impaired, the user should be shown a useful plain-text error message describing what happened. This error message should be translated if possible. Runtime-dependent or technical information which is of no worth to the user should be avoided. However, such details should be logged to increase reproducibility.
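A sketch of this separation using java.util.logging (the guide does not mandate a specific logging framework; class and message names are illustrative):

```java
// Sketch: technical details including the stack trace go to the log;
// the user only sees a plain-text message.
import java.util.logging.Level;
import java.util.logging.Logger;

public class ServiceCaller {

    private static final Logger LOG = Logger.getLogger(ServiceCaller.class.getName());

    /** Returns a user-readable message on failure, or null on success. */
    public static String callService(Runnable serviceCall) {
        try {
            serviceCall.run();
            return null; // no error to show
        } catch (RuntimeException e) {
            // Full technical context, including the stack trace, for reproducibility;
            // a known workaround is mentioned as well:
            LOG.log(Level.SEVERE, "Service call failed; workaround: retry after reconnect", e);
            // What the user sees: plain text, no stack trace, ideally translated:
            return "The requested operation could not be completed.";
        }
    }
}
```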

UI Conventions

To unify the look and feel of openMDM, common UI conventions exist. If every component implemented its own dedicated workflow, the user would have to get used to several usability concepts. There are two openMDM UIs, the rich client and the web client. A component should only implement the UI for one of them. Note that the component should work under several different configurations, not only the one it has been tested with.

Data Collection

The data collection in openMDM should be done with the ShoppingBasket interface. That means every component which generates data objects must have a connection to the ShoppingBasket in order to provide these objects.

Internationalization

All text which appears on the UI should be internationalized. For nomenclature which should be uniform across the client (e.g. names of the business objects) the I18N interface should be used; this is explained in detail in chapter 13. For component-specific texts a ResourceBundle-based approach should be used. The resource files are stored in "src/main/resources". In openMDM the properties format is used. The name of the package should be {bundle name}.locale, e.g. org.eclipse.mdm.api.ods.locale. The default locale should be English. For other languages the suffixes from the Java specification should be used, e.g. de for German.
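A self-contained sketch of the ResourceBundle approach. In the project the texts would live in .properties files under src/main/resources; ListResourceBundle classes are used here only so the example runs without extra files, and the key is made up:

```java
// Sketch: locale-dependent lookup via java.util.ResourceBundle.
import java.util.ListResourceBundle;
import java.util.Locale;
import java.util.ResourceBundle;

public class I18nDemo {

    // Default locale (English); in the project e.g. locale.properties.
    public static class Messages extends ListResourceBundle {
        protected Object[][] getContents() {
            return new Object[][] { { "delete.confirm", "Really delete?" } };
        }
    }

    // German variant; in the project e.g. locale_de.properties.
    public static class Messages_de extends ListResourceBundle {
        protected Object[][] getContents() {
            return new Object[][] { { "delete.confirm", "Wirklich löschen?" } };
        }
    }

    // No fallback to the JVM's default locale, so the example is deterministic.
    private static final ResourceBundle.Control NO_FALLBACK =
            ResourceBundle.Control.getNoFallbackControl(ResourceBundle.Control.FORMAT_DEFAULT);

    public static String text(String key, Locale locale) {
        return ResourceBundle.getBundle(Messages.class.getName(), locale, NO_FALLBACK).getString(key);
    }

    public static void main(String[] args) {
        System.out.println(text("delete.confirm", Locale.ENGLISH)); // Really delete?
        System.out.println(text("delete.confirm", Locale.GERMAN));  // Wirklich löschen?
    }
}
```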

Long Running Processes

Every task which can take more than 2 seconds should be visualized via a progress monitor. In the rich client this should be done with a ProgressMonitorDialog. The user should be informed about the current state. In case the UI must not be used during the operation, it has to be blocked.

Resolution

As every user has a different resolution and not everyone has a second display, the application should scale. It should be possible to configure the component at different places. The given space should be used completely, and the component should be usable in different scenarios.


Release Notes

The release_notes.txt documents all changes. It has to be written in English. The following format has to be used:

External Resources

The external_resources.txt documents all external libraries which are used. The following format has to be used:


System Configuration

The System Configuration is done in one Eclipse project. The structure of this project is congruent to other projects. All configuration files are put in "src/main/configuration".

Performance

Measurement data is usually mass data. Therefore CPU and memory consumption must be watched closely.

openMDM API

As the response time is crucial, special care should be taken when working with openMDM API business objects. Every call to a business object (besides the Structs) will result in a CORBA call, which might additionally lead to a database query. In the past, constructs like the following have been seen a lot:

    for (Test test : tests) {
        ...
        String testName = test.getName();
        ...
        for (TestStep testStep : test.getTestSteps("*")) {
            ...
            String testStepName = testStep.getName();
            ...
            String status = testStep.getStatus();
            ...
            String tplName = testStep.getTplTestStep().getName();
        }
    }

For simple queries, Structs (e.g. TestStruct instead of Test) have been introduced; they contain the whole business object as values. For complex data queries the openMDM Query Interface should be used.
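The effect of fetching values in bulk instead of per attribute can be illustrated in plain Java. RemoteOds below is a made-up stand-in that merely counts round trips; it is not the real ODS or Struct API:

```java
// Illustration: why one bulk call beats N fine-grained remote calls.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class BatchVsSingleCalls {

    // Stands in for a CORBA-backed service; counts round trips.
    static class RemoteOds {
        int roundTrips = 0;

        String name(int id) {                     // one remote call per item
            roundTrips++;
            return "TestStep-" + id;
        }

        List<String> names(List<Integer> ids) {   // one remote call for all items
            roundTrips++;
            List<String> result = new ArrayList<>();
            for (int id : ids) result.add("TestStep-" + id);
            return result;
        }
    }

    public static void main(String[] args) {
        List<Integer> ids = Arrays.asList(1, 2, 3, 4, 5);

        RemoteOds perItem = new RemoteOds();
        for (int id : ids) perItem.name(id);      // 5 round trips

        RemoteOds batched = new RemoteOds();
        batched.names(ids);                       // 1 round trip

        System.out.println(perItem.roundTrips + " vs " + batched.roundTrips); // 5 vs 1
    }
}
```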

Memory Performance

Memory leaks normally appear sneakily and therefore get little developer attention. As their analysis is usually very time-consuming, long-time tests are mandatory for long-running processes. It is important for a developer to release all resources. Java-internal resources are reclaimed by the garbage collector. SWT resources, which are often forgotten, must be released by calling "dispose()". CORBA objects must be released by calling "release()". The developer should read the appropriate documentation for the technologies used.

Load Characteristics

Functional tests are usually done with a minimal amount of data. However, the developer should keep in mind that real data is much bigger. Therefore, during requirements definition, the expected data volumes should be quantified and documented in order to have appropriate use cases.

OSGi usage rules

  • Import-Package is used instead of Require-Bundle. This increases maintainability and extensibility
  • No optional dependencies are used
  • Dynamic Import should only be used for remote and corba services
  • Exported packages should also be imported. This is an OSGi best practice: if the same package is exported by several bundles, the platform decides which one is actually wired. If the package is also imported, only one version will be published; if not, two may be, which can lead to conflicts
  • Configuration should be done via Annotations
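A MANIFEST.MF following these rules might look like this; bundle and package names are hypothetical:

```
Bundle-ManifestVersion: 2
Bundle-SymbolicName: org.eclipse.openmdm.api.ods
Bundle-Version: 1.0.0
Import-Package: org.eclipse.openmdm.api;version="[1.0,2.0)",
 org.eclipse.openmdm.api.ods;version="[1.0,1.1)"
Export-Package: org.eclipse.openmdm.api.ods;version="1.0.0"
```

Note that the bundle imports the package it exports (org.eclipse.openmdm.api.ods) and uses Import-Package rather than Require-Bundle.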

OSGi Best Practices

  • Semantic Versioning
  • Provide/Require should be used. In this case the capabilities are externally visible. This means a component won't start in case its required services are not available
  • EventAdmin should be used sparingly. Usage as a selection broker is not recommended; the whiteboard pattern seems better suited.

Various

Also mandatory are:

  • Checkstyle with the following configuration mdmweb_checkstyle.xml
  • FindBugs with minimum rank 15. Every concern with a higher severity must be justified.

Quality Assurance Process

This chapter describes how quality is handled. It covers the process itself and how the process is applied with the Eclipse infrastructure.

Common Build Infrastructure

The CBI from Eclipse is used. How this is done in particular will be added as soon as the project is set up. Upfront it is only defined that a Tycho manifest-first approach is used and that the signing mechanics are used.

Bugzilla Usage

Bugzilla is used. The statuses as described by Eclipse are used. Every task is tracked via Bugzilla. Furthermore every commit is linked to a bug.

Gerrit Usage

Gerrit is used. Every commit needs to go through a review via Gerrit. The Gerrit Hudson integration is also used. In case anybody, including Hudson, votes -1, the commit must not be merged. Commits bypassing Gerrit are prohibited.

GitHub Usage

GitHub should not be used; the Eclipse-internal Git is used.

Eclipse Development Process

The Eclipse Development Process is obeyed.

Retrospective

After every project a retrospective meeting should be held. The results should be given to the Quality Committee. It is agreed that nothing recorded there leads to further work or any other reprisals.


Test Management

This chapter describes how an MDM system should be tested.

Documentation of Use Cases

The use cases of a component should be documented in a way that they are testable.

Automatic Tests

MDM@WEB should be automatically testable. Tests are therefore classified into UI, service and unit tests. All of these should be implemented. As mentioned above, the unit tests are placed next to the business logic. For further information Martin Fowler's blog is recommended [1]

Graduation and Release Review Tests

Before the mentioned milestones, MDM has to be tested completely according to this guide and the appropriate Eclipse guidelines, i.e. at Development_Resources/HOWTO/Graduation_Reviews and at every Development_Resources/HOWTO/Release_Reviews.

Structure of Deliverables

To support the work of system configurators, there should be pre-built deliverables. Next to an integration client, this should be a list of all bundles in a uniform format which contains the binary, the documentation and all manual tests which should be and have been run. The structure is as follows:

Lessons Learned

In this chapter certain lessons learned are listed and explained. They are in no particular order. Some are best practices across the software development industry but are still listed here, as they seem even more important for MDM or have been violated in the past, which led to problems.

Industry standards should be used

In the past several home-grown solutions have been developed. This includes a custom build which tries to reimplement in ANT certain features that exist in other build tools. This leads to a hardly maintainable solution. Furthermore, those solutions grew while the people who developed and maintained them left. When a bug was discovered in these components, it was hard to fix properly. Finally, it is desired that more people program MDM@WEB.

Interfaces should be stable

Interfaces should not change. Most OEMs also develop internal components which are not published. In case an interface changes, these components must be adapted. During the time of the adaptation, the new interface and all its dependents cannot be used. In case critical bugs are fixed, the fix might be made by the OEM itself, leading to a branch which needs to be merged, etc. This results in a vicious circle, as the adaptation might take even longer.

Batch Updates/Inserts should be used

This has already been covered in the code quality part, but it is mentioned here again, as it was and is a common mistake. MDM is developed to handle mass data. As the ODS server works remotely and CORBA adds some overhead, as few calls as possible should be executed. This might mean using the ExtQuery API instead of the convenient OO API.

Quality rules must be written down explicitly (this document)

Many service providers develop MDM components; there might be even more than are currently active in the community. They employ programmers with different skills and experience. To allow every service provider to maintain the system, it is desired to have certain rules. In the past this has been done by agreeing on some rules in meetings or other oral communication. Those rules cannot be handled orally anymore, as the participants are no longer co-located.

Generic Components should be preferred

It is important that a system administrator can get a new system ready without much further development. This is supported by having certain generic components which can be used for new use cases and adapted easily.

Logic must be separated from the UI

In the past many components mixed logic and UI. On the one hand this is poor design and thus comes with decreased readability, maintainability etc. On the other hand it is hard to write unit tests for this kind of code.
