
CI best practices

Revision as of 12:38, 10 May 2017 by Frederic.gurr.eclipse-foundation.org


Continuous Integration - best practices

General tips

  • Keep the builds and tests fast!
    • True to the XP spirit, the turnaround time between check-in and feedback (build successful/failed) should be kept to a minimum.
    • Long-running builds and tests should be split up,
      • e.g. into staged tests: run a few basic tests first and, only if they succeed, run the more detailed ones.
  • A CI server is not a build artifact repository or storage server!
    • Build artifacts should generally be treated as ephemeral.
    • If a build artifact is considered a milestone, release candidate, release, etc., it should be copied to a storage server (e.g. downloads.eclipse.org).
    • Public announcements like "You can get the latest release from our Hudson/Jenkins instance" are considered bad practice.
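The staged-test idea above can be sketched in a few lines of Python. The stage names and the toy test callables are invented for illustration; real stages would invoke your actual test suites:

```python
# Run cheap smoke tests first; only run the expensive suites if they pass,
# so a broken build fails within seconds instead of after a full test run.

def run_stages(stages):
    """Run (name, test_callable) pairs in order; stop at the first failure.

    Returns (success, names_of_stages_run).
    """
    executed = []
    for name, test in stages:
        executed.append(name)
        if not test():
            return False, executed
    return True, executed


# Hypothetical stages: a fast sanity check gates the slower suites.
stages = [
    ("smoke", lambda: 1 + 1 == 2),        # seconds
    ("unit", lambda: sum([1, 2]) == 3),   # minutes
    ("integration", lambda: True),        # hours
]
ok, ran = run_stages(stages)
```

If the smoke stage fails, the expensive integration stage never runs, which is exactly the turnaround-time win described above.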

Hudson/Jenkins

Job configuration smells

  • build logic leaks to job
  • too many build steps
  • shell steps
  • not under SCM

Source: https://narkisr.github.io/jenkins-scaling/#configuration-smells

Deal with branches

  • Managing branches from the UI results in duplicated configuration; instead, create a template project and clone it for each branch via the programmatic API.
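A minimal sketch of the template-and-clone approach: one template, one generated configuration per branch. The XML snippet, job-name prefix, and branch names are invented; a real setup would push each generated config through the CI server's remote API:

```python
# Generate one job configuration per branch from a single template,
# instead of copy-pasting jobs in the UI.
from string import Template

JOB_TEMPLATE = Template(
    "<project>"
    "<description>Build of $branch</description>"
    "<branch>refs/heads/$branch</branch>"
    "</project>"
)


def configs_for(branches):
    """Return {job_name: config_xml} for every branch."""
    return {
        f"myproject-{branch}": JOB_TEMPLATE.substitute(branch=branch)
        for branch in branches
    }


jobs = configs_for(["master", "release-1.0"])
```

Adding or retiring a branch then becomes a one-line change instead of a hand-edited job copy.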

Disk usage

Builds and tests can consume large amounts of disk space while checking out Git repositories, downloading Maven artifacts, compiling multi-OS, multi-arch, multi-bitness artifacts, etc.
Disk space is cheap, but not free (yet). Therefore, disk usage should be kept reasonable.

Here is a rough guideline for HIPPs/JIPPs (Hudson/Jenkins instances per project):

  • Up to 50 GB of disk space is considered fine.
  • 50-100 GB is mostly considered OK, as long as it has no impact on other projects.
  • Anything upwards of 100 GB is considered excessive, and you should check your job structure/configuration.

As always, there are exceptions to this rule: if a project has a good reason to use a lot of disk space, this will be respected.
If not, you will be notified and ultimately blamed publicly.
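The guideline above is easy to check mechanically. A hedged sketch that measures a workspace directory and classifies it against the 50/100 GB thresholds stated above (the helper names are made up):

```python
import os

GB = 1024 ** 3


def disk_usage(path):
    """Total size in bytes of all files under path."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # file vanished mid-walk, e.g. deleted by a running build
    return total


def classify(size_bytes):
    """Map a size onto the rough HIPP/JIPP guideline above."""
    if size_bytes <= 50 * GB:
        return "fine"
    if size_bytes <= 100 * GB:
        return "ok if it has no impact on other projects"
    return "excessive - check your job structure/configuration"
```

Running something like this periodically over the workspace root makes it obvious which jobs are drifting toward the excessive range.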

Build history

  • The build history is a good indicator for tracking the health/stability/test coverage, etc. of your project.
  • Since the build history can take up considerable amounts of disk space, it should be kept in check.
    • It's nice to see the last 500 builds, but in many cases 10-20 builds are nearly as useful for spotting trends.
    • In most cases there is no benefit in keeping more than a few builds with their artifacts, especially if their size is in the 500 MB+ range.
    • Console logs are important, but if all debug levels are dialed up to 11, they can become quite big (hundreds of MB).
      • Hudson is not good at dealing with huge console logs; Jenkins handles them better.
      • Browsers in general don't like huge console logs either.
      • Try to strike a balance between huge logs and the time spent re-running a build with different logging levels after a failure (a log-level build parameter can help).
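Keeping the history in check boils down to a retention policy. A minimal sketch that computes which builds an N-build policy would delete (the keep count is an assumption; Hudson/Jenkins also offer this natively via the "Discard old builds" job option):

```python
def builds_to_delete(build_numbers, keep=20):
    """Given all build numbers of a job, return the ones an N-build
    retention policy would delete: everything but the newest `keep`."""
    newest_first = sorted(build_numbers, reverse=True)
    return sorted(newest_first[keep:])


# A job with builds 1..100 and keep=20 would drop builds 1..80.
doomed = builds_to_delete(range(1, 101), keep=20)
```

The same idea applies to artifacts: keep them only for the handful of builds you actually revisit.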

Old build jobs

  • Best-case scenario: all your build configurations are stored in an SCM, so you don't have to keep extra job configurations around.
  • If a build job is deprecated or archived, consider deleting its workspace to save disk space.
