
Jenkins



Jenkins is a continuous integration (CI) server. It is in use on Eclipse servers for Eclipse projects as part of the Common Build Infrastructure (CBI). This page is about the hosted service at Eclipse.org. For more information on the project itself, or to download Jenkins, please see the Jenkins project page.


General Information

Jenkins instances are maintained by the Eclipse Webmasters/Release Engineers.

Asking for Help

Requesting a JIPP instance for CBI

Please see CBI#Requesting_a_JIPP_instance

Jenkins configuration and tools (clustered infra)

Tools (and locations on the default JNLP agent container)

Apache Maven

  • apache-maven-latest /opt/tools/apache-maven/latest (= apache-maven-3.6.0)
  • apache-maven-3.6.1 /opt/tools/apache-maven/3.6.1 (Please note: Version 3.6.1 breaks Tycho's dependency resolution, therefore it's not recommended to use it.)
  • apache-maven-3.6.0 /opt/tools/apache-maven/3.6.0
  • apache-maven-3.5.4 /opt/tools/apache-maven/3.5.4
  • apache-maven-3.3.9 /opt/tools/apache-maven/3.3.9
  • apache-maven-3.2.5 /opt/tools/apache-maven/3.2.5

JDK

OpenJDK

The binaries listed below come from http://jdk.java.net. These are production-ready open-source builds of the Java Development Kit, an implementation of the Java SE Platform, licensed under the GNU General Public License, version 2, with the Classpath Exception. For the differences between these binaries and Oracle's builds for version 11 onward, see the blog post by Oracle's Director of Product Management.

  • openjdk-latest /opt/tools/java/openjdk/latest (= openjdk-jdk12-latest)
  • openjdk-jdk13ea-latest /opt/tools/java/openjdk/jdk-13ea/latest = 13ea+6
  • openjdk-jdk12-latest /opt/tools/java/openjdk/jdk-12/latest = 12 GA
  • openjdk-jdk11-latest /opt/tools/java/openjdk/jdk-11/latest = 11.0.2
  • openjdk-jdk10-latest /opt/tools/java/openjdk/jdk-10/latest = 10.0.2
  • openjdk-jdk9-latest /opt/tools/java/openjdk/jdk-9/latest = 9.0.4

AdoptOpenJDK

The binaries listed below come from https://adoptopenjdk.net. These OpenJDK binaries are built from a fully open source set of build scripts and infrastructure.

With HotSpot
  • adoptopenjdk-hotspot-latest /opt/tools/java/adoptopenjdk/hotspot-latest (= adoptopenjdk-hotspot-jdk12-latest)
  • adoptopenjdk-hotspot-jdk12-latest /opt/tools/java/adoptopenjdk/hotspot-jdk-12/latest = 12 GA
  • adoptopenjdk-hotspot-jdk11-latest /opt/tools/java/adoptopenjdk/hotspot-jdk-11/latest = 11.0.2
  • adoptopenjdk-hotspot-jdk10-latest /opt/tools/java/adoptopenjdk/hotspot-jdk-10/latest = 10.0.2
  • adoptopenjdk-hotspot-jdk9-latest /opt/tools/java/adoptopenjdk/hotspot-jdk-9/latest = 9.0.4
  • adoptopenjdk-hotspot-jdk8-latest /opt/tools/java/adoptopenjdk/hotspot-jdk-8/latest = 1.8.0u202
With OpenJ9

The binaries listed below replace the traditional HotSpot Java Virtual Machine with Eclipse OpenJ9, a high-performance, scalable JVM implementation that is fully compliant with the Java Virtual Machine Specification.

  • adoptopenjdk-openj9-latest /opt/tools/java/adoptopenjdk/openj9-latest (= adoptopenjdk-openj9-jdk12-latest)
  • adoptopenjdk-openj9-jdk12-latest /opt/tools/java/adoptopenjdk/openj9-jdk-12/latest = 12 GA
  • adoptopenjdk-openj9-jdk11-latest /opt/tools/java/adoptopenjdk/openj9-jdk-11/latest = 11.0.2
  • adoptopenjdk-openj9-jdk10-latest /opt/tools/java/adoptopenjdk/openj9-jdk-10/latest = 10.0.2
  • adoptopenjdk-openj9-jdk9-latest /opt/tools/java/adoptopenjdk/openj9-jdk-9/latest = 9.0.4
  • adoptopenjdk-openj9-jdk8-latest /opt/tools/java/adoptopenjdk/openj9-jdk-8/latest = 1.8.0u202

Oracle

The binaries listed below come from the Oracle Technology Network. Note that the Oracle JDK from version 11 onward is licensed under the new Oracle Technology Network (OTN) License Agreement for Oracle Java SE, which is substantially different from the licenses under which previous versions of the JDK were offered. Oracle JDK 10 and earlier versions were released under the Oracle Binary Code License (BCL) for Java SE.

As such, starting with JDK 11, the Eclipse Foundation will not provide any version of the Oracle JDK licensed under the commercial OTN terms. The previous versions listed below will stay available as is. For the cosmetic and packaging differences between Oracle's OpenJDK builds (GPL+CE; simply named OpenJDK above) and the Oracle JDK (OTN), see the blog post by Oracle's Director of Product Management.

  • oracle-latest /opt/tools/java/oracle/latest (= oracle-jdk10-latest)
  • oracle-jdk10-latest /opt/tools/java/oracle/jdk-10/latest = 10.0.2
  • oracle-jdk9-latest /opt/tools/java/oracle/jdk-9/latest = 9.0.4
  • oracle-jdk8-latest /opt/tools/java/oracle/jdk-8/latest = 1.8.0u202
  • oracle-jdk7-latest /opt/tools/java/oracle/jdk-7/latest = 1.7.0u80
  • oracle-jdk6-latest /opt/tools/java/oracle/jdk-6/latest = 1.6.0u45
  • oracle-jdk5-latest /opt/tools/java/oracle/jdk-5/latest = 1.5.0u22
  • oracle-jdk1.4-latest /opt/tools/java/oracle/jdk-1.4/latest = 1.4.2u19

IBM

The binaries listed below come from IBM SDK, Java Technology Edition.

  • ibm-latest /opt/tools/java/ibm/latest (= ibm-jdk8-latest)
  • ibm-jdk8-latest /opt/tools/java/ibm/jdk-8/latest = 8.0.5.27

Ant

  • apache-ant-latest (1.10.5, automatically installed from Apache server)

Default plugins - CJE (jenkins.eclipse.org/xxx)

  • ace-editor
  • analysis-core
  • ant
  • antisamy-markup-formatter
  • apache-httpcomponents-client-4-api
  • async-http-client
  • authentication-tokens
  • aws-credentials
  • aws-java-sdk
  • blueocean-commons
  • bouncycastle-api
  • branch-api
  • build-timeout
  • cloudbees-assurance
  • cloudbees-blueocean-default-theme
  • cloudbees-folder
  • cloudbees-folders-plus
  • cloudbees-groovy-view
  • cloudbees-jsync-archiver
  • cloudbees-license
  • cloudbees-monitoring
  • cloudbees-nodes-plus
  • cloudbees-ssh-slaves
  • cloudbees-support
  • cloudbees-template
  • cloudbees-uc-data-api
  • cloudbees-view-creation-filter
  • cloudbees-workflow-template
  • cloudbees-workflow-ui
  • command-launcher
  • conditional-buildstep
  • config-file-provider
  • credentials
  • credentials-binding
  • dashboard-view
  • disk-usage
  • display-url-api
  • docker-commons
  • docker-workflow
  • durable-task
  • email-ext
  • envinject
  • envinject-api
  • extended-read-permission
  • external-monitor-job
  • extra-columns
  • findbugs
  • form-element-path
  • gerrit-trigger
  • ghprb
  • git
  • git-client
  • git-parameter
  • git-server
  • github
  • github-api
  • github-branch-source
  • gradle
  • greenballs
  • handlebars
  • icon-shim
  • infradna-backup
  • jackson2-api
  • javadoc
  • jdk-tool
  • jobConfigHistory
  • jquery
  • jquery-detached
  • jsch
  • junit
  • kube-agent-management
  • kubernetes
  • kubernetes-credentials
  • ldap
  • mailer
  • mapdb-api
  • matrix-auth
  • matrix-project
  • maven-plugin
  • metrics
  • momentjs
  • nectar-license
  • nectar-rbac
  • node-iterator-api
  • operations-center-agent
  • operations-center-client
  • operations-center-cloud
  • operations-center-context
  • pam-auth
  • parameterized-trigger
  • pipeline-build-step
  • pipeline-graph-analysis
  • pipeline-input-step
  • pipeline-milestone-step
  • pipeline-model-api
  • pipeline-model-declarative-agent
  • pipeline-model-definition
  • pipeline-model-extensions
  • pipeline-rest-api
  • pipeline-stage-step
  • pipeline-stage-tags-metadata
  • pipeline-stage-view
  • plain-credentials
  • promoted-builds
  • rebuild
  • resource-disposer
  • run-condition
  • scm-api
  • script-security
  • ssh-agent
  • ssh-credentials
  • ssh-slaves
  • structs
  • support-core
  • timestamper
  • token-macro
  • unique-id
  • variant
  • wikitext
  • windows-slaves
  • workflow-aggregator
  • workflow-api
  • workflow-basic-steps
  • workflow-cps
  • workflow-cps-checkpoint
  • workflow-cps-global-lib
  • workflow-durable-task-step
  • workflow-job
  • workflow-multibranch
  • workflow-scm-step
  • workflow-step-api
  • workflow-support
  • ws-cleanup
  • xvnc

Default plugins - Jiro (ci-staging.eclipse.org/xxx and ci.eclipse.org/xxx)

  • ace-editor
  • analysis-core
  • ant
  • antisamy-markup-formatter
  • apache-httpcomponents-client-4-api
  • authentication-tokens
  • bouncycastle-api
  • branch-api
  • build-timeout
  • cloudbees-folder
  • command-launcher
  • conditional-buildstep
  • config-file-provider
  • configuration-as-code
  • configuration-as-code-support
  • credentials
  • credentials-binding
  • display-url-api
  • docker-commons
  • docker-workflow
  • durable-task
  • email-ext
  • extended-read-permission
  • extra-columns
  • findbugs
  • gerrit-trigger
  • ghprb
  • git
  • git-client
  • git-parameter
  • git-server
  • github
  • github-api
  • greenballs
  • handlebars
  • jackson2-api
  • javadoc
  • jdk-tool
  • jobConfigHistory
  • jquery
  • jquery-detached
  • jsch
  • junit
  • kubernetes
  • kubernetes-credentials
  • ldap
  • lockable-resources
  • mailer
  • matrix-auth
  • matrix-project
  • maven-plugin
  • momentjs
  • parameterized-trigger
  • pipeline-build-step
  • pipeline-graph-analysis
  • pipeline-input-step
  • pipeline-maven
  • pipeline-milestone-step
  • pipeline-model-api
  • pipeline-model-declarative-agent
  • pipeline-model-definition
  • pipeline-model-extensions
  • pipeline-rest-api
  • pipeline-stage-step
  • pipeline-stage-tags-metadata
  • pipeline-stage-view
  • plain-credentials
  • promoted-builds
  • rebuild
  • resource-disposer
  • run-condition
  • scm-api
  • script-security
  • simple-theme-plugin
  • sonar
  • ssh-agent
  • ssh-credentials
  • ssh-slaves
  • structs
  • timestamper
  • token-macro
  • variant
  • windows-slaves
  • workflow-aggregator
  • workflow-api
  • workflow-basic-steps
  • workflow-cps
  • workflow-cps-global-lib
  • workflow-durable-task-step
  • workflow-job
  • workflow-multibranch
  • workflow-scm-step
  • workflow-step-api
  • workflow-support
  • ws-cleanup
  • xvnc

FAQ

Important: The following FAQ only applies to Eclipse projects running their builds on the new cluster-based infrastructure. It does not apply to projects running on the old infrastructure.


Is my project's JIPP running on the old infra, CJE or Jiro?

If your JIPP is hosted on ci.eclipse.org/<name of your project>, you can check the list of JIPPs on https://ci.eclipse.org:

  • if it says <project-name> (hipp1-10), it's on the old infra
  • if it says <project-name> (openshift), it's on the new infra (Jiro)


If your JIPP is hosted on jenkins.eclipse.org/<name of your project>, it's hosted on the new infra (CJE).

If your JIPP is currently hosted on ci-staging.eclipse.org, it's in the migration process and will be hosted at ci.eclipse.org once the migration is done.
See also here: https://wiki.eclipse.org/CBI/Jenkins_Migration_FAQ#What.E2.80.99s_the_plan.3F.

All projects on CJE will eventually be migrated to Jiro.

For migration-specific questions, please also see https://wiki.eclipse.org/CBI/Jenkins_Migration_FAQ.

How do I run a Java build on the new infra?

The simplest way is to create a Jenkinsfile in your Git repository and create a multibranch pipeline job in your Jenkins instance. See https://jenkins.io/doc/pipeline/tour/hello-world/ for more information. A simple Jenkinsfile is shown below. Note that the full list of available tool names can be found in the tools section of this page.

pipeline {
    agent any
    tools {
        maven 'apache-maven-latest'
        jdk 'adoptopenjdk-hotspot-jdk8-latest'
    }
    stages {
        stage('Build') {
            steps {
                sh '''
                    java -version
                    mvn -v
                '''
            }
        }
    }
    post {
        // send a mail on unsuccessful and fixed builds
        unsuccessful { // means unstable || failure || aborted
            emailext subject: 'Build $BUILD_STATUS $PROJECT_NAME #$BUILD_NUMBER!', 
            body: '''Check console output at $BUILD_URL to view the results.''',
            recipientProviders: [culprits(), requestor()], 
            to: 'other.recipient@domain.org'
        }
        fixed { // back to normal
            emailext subject: 'Build $BUILD_STATUS $PROJECT_NAME #$BUILD_NUMBER!', 
            body: '''Check console output at $BUILD_URL to view the results.''',
            recipientProviders: [culprits(), requestor()], 
            to: 'other.recipient@domain.org'
        }
    }
}

How do I run UI-tests on the new infra?

For UI tests, there are two scenarios:

  • In general, you can use a custom docker image and Jenkins pipelines, see https://wiki.eclipse.org/Jenkins#How_do_I_run_my_build_in_a_custom_container.3F.
  • To ease the migration/transition, we provide two pod templates:
    • a basic UI-test pod template (label: "ui-test"). This image only provides a minimal UI test environment.
    • a migration pod template (label: "migration"). This docker image should be quite close to the environment on the old infrastructure.
  • If your project requires specific dependencies, in most cases, you will need to use the migration pod template or roll your own.


For freestyle jobs the label can be specified in the job configuration under "Restrict where this project can be run".


Example for pipeline jobs:

pipeline {
    agent {
      kubernetes {
        label 'ui-test'
      }
    }
    tools {
        maven 'apache-maven-latest'
        jdk 'adoptopenjdk-hotspot-jdk8-latest'
    }
    stages {
        stage('Build') {
            steps {
                wrap([$class: 'Xvnc', takeScreenshot: false, useXauthority: true]) {
                    sh 'mvn clean verify'
                }
            }
        }
    }
}


My build fails with: 'FATAL: Cannot run program "Xvnc"', what do I need to do?

You are most likely running UI tests that require a desktop environment and a VNC server. The default pod template does not provide such an environment. Therefore you will need to use a different pod template or a custom docker image.

The migration pod template (label: "migration") can be used for UI tests. This docker image should be quite close to the environment on the old infrastructure. If you don't have any other dependencies on the environment, you can also try the much smaller & leaner UI test pod template (label: ui-test).

See https://wiki.eclipse.org/Jenkins#How_do_I_run_UI-tests_on_the_new_infra.3F on how the pod template can be used with freestyle or pipeline jobs.

How do I run my build in a custom container?

You need to use a Jenkins pipeline to do so. Then you can specify a Kubernetes pod template. See an example below.

You can either use existing "official" Docker images, for example the maven:<version>-alpine images, or create your own custom Docker image. Docker images need to be hosted on https://hub.docker.com or another public container registry (e.g. https://quay.io).

Please note: currently, Docker images cannot be built (as in created) on our Jenkins CI instances.

pipeline {
  agent {
    kubernetes {
      label 'my-agent-pod'
      yaml """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: maven
    image: maven:alpine
    command:
    - cat
    tty: true
  - name: php
    image: php:7.2.10-alpine
    command:
    - cat
    tty: true
  - name: hugo
    image: eclipsecbi/hugo:0.42.1
    command:
    - cat
    tty: true
"""
    }
  }
  stages {
    stage('Run maven') {
      steps {
        container('maven') {
          sh 'mvn -version'
        }
        container('php') {
          sh 'php --version'
        }
        container('hugo') {
          sh 'hugo version'
        }
      }
    }
  }
}

See the Kubernetes Jenkins plugin for more documentation.

Why are there no build executors? Why are build executors offline/suspended and/or builds never start?

On our cluster-based infrastructure all build executors/agents/pods (except on dedicated agents) are dynamically spun up. This usually takes a little while. Therefore the build executor status panel might show something like

  • "(pending—Waiting for next available executor)" or
  • "my-agent-abcd123 is offline or suspended"

for a few seconds before the build starts. We've opened a ticket with the Jenkins project suggesting a way to speed up the provisioning process. Feel free to vote for this issue if it is important to you.

If such a message is shown for more than ~5 minutes, you can safely assume that something is wrong with the pod/container configuration, for example a Docker image that can't be found because of a typo in its name or tag.

What is killing my build? I'm using custom containers!

First, please familiarize yourself with how Kubernetes assigns memory resources to containers and pods.

Then, you should know that as soon as you run your build in a custom Kubernetes agent, Jenkins adds a container named "jnlp" that handles the connection between the pod agent and the master. The resources assigned to this "jnlp" container come from default values we set for you. Because we know the "jnlp" container does not need much CPU and memory, we set the default values for all containers to low values (about 512MiB and 0.25 vCPU). This way, we're sure that the "jnlp" container won't needlessly consume resources that are allocated to your project. The downside is that if you don't explicitly specify resource requests and limits in your pod template, your custom containers will also inherit those defaults, which are probably too low for you. To overcome this, specify those values in your pod template like:

pipeline {
  agent {
    kubernetes {
      label 'my-agent-pod'
      yaml """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: maven
    image: maven:alpine
    command:
    - cat
    tty: true
    resources:
      limits:
        memory: "2Gi"
        cpu: "1"
      requests:
        memory: "2Gi"
        cpu: "1"
"""
    }
  }
  stages {
    stage('Run maven') {
      steps {
        container('maven') {
          sh 'mvn -version'
        }
      }
    }
  }
}

Note that if you run multiple containers, you need to specify the limits for each.
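For illustration, a minimal sketch of the containers section with per-container resources, reusing the hypothetical maven and php containers from the earlier example (the sizes are placeholders to adjust to your build's needs):

  containers:
  - name: maven
    image: maven:alpine
    command:
    - cat
    tty: true
    resources:          # each container needs its own resources block
      limits:
        memory: "2Gi"
        cpu: "1"
      requests:
        memory: "2Gi"
        cpu: "1"
  - name: php
    image: php:7.2.10-alpine
    command:
    - cat
    tty: true
    resources:          # limits/requests are not inherited from other containers
      limits:
        memory: "1Gi"
        cpu: "1"
      requests:
        memory: "1Gi"
        cpu: "1"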

We plan to develop tooling to automatically inject sensible default values for your custom containers, depending on the resource quotas and the concurrency level (i.e. how many agents can run at once) assigned to your project; see GitHub issue #20.

How do I deploy artifacts to download.eclipse.org?

You cannot simply cp files to a folder; artifacts need to be deployed over SSH. The first thing to do is to request write access to the projects storage service for your Jenkins instance. This service provides access to the Eclipse Foundation file servers' storage:

  • /home/data/httpd/download.eclipse.org
  • /home/data/httpd/archive.eclipse.org
  • /home/data/httpd/download.polarsys.org
  • /home/data/httpd/download.locationtech.org

Then, we will configure your Jenkins instance with SSH credentials. Depending on how you run your build, the way you use them differs. See the different cases below.

Freestyle job

You need to activate the SSH Agent plugin in your job configuration and select the proper credentials genie.projectname (ssh://projects-storage.eclipse.org).

[Screenshot: Project-storage-ssh-agent.png]

Then you use ssh, scp and sftp commands to deploy artifacts to the server, e.g.,

scp target/my_artifact.jar genie.<projectname>@projects-storage.eclipse.org:/home/data/httpd/download.eclipse.org/<projectname>/
ssh genie.<projectname>@projects-storage.eclipse.org ls -al /home/data/httpd/download.eclipse.org/<projectname>/

Pipeline job without custom pod template

pipeline {
  agent any
 
  stages {
    stage('stage 1') {
      ...
    }
    stage('Deploy') {
      steps {
        sshagent ( ['projects-storage.eclipse.org-bot-ssh']) {
          sh '''
            ssh genie.projectname@projects-storage.eclipse.org rm -rf /home/data/httpd/download.eclipse.org/projectname/snapshots
            ssh genie.projectname@projects-storage.eclipse.org mkdir -p /home/data/httpd/download.eclipse.org/projectname/snapshots
            scp -r repository/target/repository/* genie.projectname@projects-storage.eclipse.org:/home/data/httpd/download.eclipse.org/projectname/snapshots
          '''
        }
      }
    }
  }
}

Pipeline job with custom pod template

Important (JNLP container): A 'jnlp' container is automatically added when a custom pod template is used, to ensure connectivity between the Jenkins master and the pod. If you want to deploy files to download.eclipse.org, you only need to specify the known-hosts volume for the jnlp container (as seen below) to avoid "host verification failed" errors.


pipeline {
  agent {
    kubernetes {
      label 'my-pod'
      yaml '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: maven
    image: maven:alpine
    command:
    - cat
    tty: true
  - name: jnlp
    volumeMounts:
    - name: volume-known-hosts
      mountPath: /home/jenkins/.ssh
  volumes:
  - name: volume-known-hosts
    configMap:
      name: known-hosts
'''
    }
  }
  stages {
    stage('Build') {
      steps {
        container('maven') {
            sh 'mvn clean verify'
        }
      }
    }
    stage('Deploy') {
      steps {
        container('jnlp') {
          sshagent ( ['projects-storage.eclipse.org-bot-ssh']) {
            sh '''
              ssh genie.projectname@projects-storage.eclipse.org rm -rf /home/data/httpd/download.eclipse.org/projectname/snapshots
              ssh genie.projectname@projects-storage.eclipse.org mkdir -p /home/data/httpd/download.eclipse.org/projectname/snapshots
              scp -r repository/target/repository/* genie.projectname@projects-storage.eclipse.org:/home/data/httpd/download.eclipse.org/projectname/snapshots
            '''
          }
        }
      }
    }
  }
}

My build fails with "No user exists for uid 1000100000", what's the issue?

First, you need to know that we run containers using an arbitrarily assigned user ID (1000100000) in our OpenShift cluster. This is for security reasons.

Unfortunately, most images you can find on DockerHub (including official images) do not support running as an arbitrary user; most of them expect to run as root, which is definitely a bad practice. See also the question below about how to run a container as root.

Moreover, some programs like ssh look for a mapping between the user ID (1000100000) and a user name on the system (here, a container). It's very rare for a container image to anticipate this need and actually create a user with ID 1000100000. To avoid this error, you need to customize the image. OpenShift publishes guidelines with best practices on how to create Docker images. More specifically, see the section about how to support running with an arbitrary user ID.

To make your image call the uid_entrypoint script described in the guidelines above, add it to the command directive in the pod template, e.g.:

pipeline {
  agent {
    kubernetes {
      label 'my-pod'
      yaml '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: custom-container
    image: 'custom/image'
    command: [ "/usr/local/bin/uid_entrypoint" ]
    args: [ "cat" ]
    tty: true
'''
    }
  }
  stages {
    stage('Stage 1') {
      steps {
        container('custom-container') {
          sh 'whoami'
        }
      }
    }
  }
}

If you want to see this in practice, have a look at some images we've defined to run in the cluster in this GitHub repository.
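For reference, the arbitrary-UID pattern from the OpenShift guidelines boils down to a small entrypoint script. The sketch below is an illustrative adaptation, not the exact script used in our images; the script name uid_entrypoint and the fallback user name "default" follow the guidelines but may differ in your image. The image must also make /etc/passwd group-writable at build time (e.g. RUN chmod g=u /etc/passwd in the Dockerfile), since containers run with group 0.

#!/bin/sh
# uid_entrypoint: add an /etc/passwd entry for the arbitrarily
# assigned user ID, then exec the container's actual command.
if ! whoami > /dev/null 2>&1; then
  if [ -w /etc/passwd ]; then
    echo "${USER_NAME:-default}:x:$(id -u):0:${USER_NAME:-default} user:${HOME}:/sbin/nologin" >> /etc/passwd
  fi
fi
exec "$@"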

How can I run my build in a container with root privileges?

Unfortunately, for security reasons, you cannot do that. We run an infrastructure open to the internet, which potentially runs untrusted code (e.g., from pull requests), so we need to follow a strict policy to protect the common good.

More specifically, we run containers using an arbitrarily assigned user ID (e.g. 1000100000) in our OpenShift cluster. The group ID is always root (0), though. The security context constraint we use for running projects' containers is "restricted". You cannot change this level from your podTemplate.

As noted above, most images on DockerHub (including official images) do not support running as an arbitrary user; most of them expect to run as root, which is definitely a bad practice.

OpenShift publishes guidelines with best practices about how to create Docker images. More specifically, see the section about how to support running with arbitrary user ID.

To test if an image is ready to be run with an arbitrarily assigned user ID, you can try to start it with the following command line:

  $ docker run -it --rm -u $((1000100000 + RANDOM % 100000)):0 image/name:tag

I want to build a custom Docker image (with docker build), but it does not work. What should I do?

You cannot currently build images on the cluster (i.e. docker build does not work). We plan to address this shortcoming shortly.

My build fails with "Host key verification failed", what should I do?

As long as you stay in the default jnlp Docker image (i.e. use a freestyle job or a pipeline job without a custom pod template), you'll benefit from our existing configuration, which mounts a known_hosts file into the ~/.ssh folder of all containers.

If you define a custom pod template, you need to add some configuration to mount this config map in your containers: the config map is named known-hosts and must be mounted at /home/jenkins/.ssh.

pipeline {
  agent {
    kubernetes {
      label 'my-agent-pod'
      yaml """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: maven
    image: maven:alpine
    command:
    - cat
    tty: true
    volumeMounts:
    - name: volume-known-hosts
      mountPath: /home/jenkins/.ssh
  volumes:
  - name: volume-known-hosts
    configMap:
      name: known-hosts
"""
    }
  }
  stages {
    ...
  }
}

Currently, the known_hosts file we provide contains the host keys for the following sites:

  • git.eclipse.org:22
  • git.eclipse.org:29418
  • build.eclipse.org
  • github.com

If you need any other site to be added, feel free to open a request.

How do I use the local Nexus server as proxy for Maven Central (artifact caching)?

Every JIPP on the new infra already has a Maven settings file set up that specifies our local Nexus instance as cache for Maven Central.

Important (Jiro): In Jiro this works out of the box for the default pod templates (labels: jnlp, migration, ui-test). No additional configuration is required for freestyle and pipeline jobs. For custom containers, see below: Custom container on Jiro.


Unfortunately, for instances on CJE (CloudBees Core) we have not found a way to make this the default Maven settings file for all Maven builds, so it needs to be specified manually.

The name of the settings file has the format settings-<projectshortname>.xml, e.g. settings-metro.xml for the Eclipse Metro project.
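As a rough sketch, such a settings file typically declares a mirror that redirects Maven Central requests to the local Nexus. The exact content is managed by the webmasters; the URL below is an assumption for illustration only:

<settings>
  <mirrors>
    <mirror>
      <!-- assumed proxy repository URL; the real file is maintained by the webmasters -->
      <id>eclipse.maven.central.mirror</id>
      <name>Eclipse Central Proxy</name>
      <url>https://repo.eclipse.org/content/repositories/maven_central/</url>
      <mirrorOf>central</mirrorOf>
    </mirror>
  </mirrors>
</settings>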

Freestyle job

In a freestyle job the settings file needs to be injected. It can either be used with a variable (e.g. MVNSETTINGS), which you can reference later with mvn -s $MVNSETTINGS clean verify [...] in a shell build step.

[Screenshot: ProvideConfigFiles variable.png]
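With the variable approach, the corresponding shell build step would then look like this (assuming the variable was named MVNSETTINGS):

mvn -s "$MVNSETTINGS" clean verify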

Or you can set the target to /home/jenkins/.m2/settings.xml. The settings.xml will then be picked up automatically by Maven builds.

[Screenshot: ProvideConfigFiles target.png]

Pipeline job

In a pipeline job you have to specify the fileId of the Maven settings file as a UUID. You can find the correct fileId by using the pipeline syntax generator and choosing "configFileProvider: Provide Configuration files" as the sample step.

[...]
configFileProvider([configFile(fileId: '5f77ec66-dc5e-4d29-999f-311501789ba0', variable: 'MVN_SETTINGS')]) {
  sh "${mvnHome}/bin/mvn -s $MVN_SETTINGS clean verify"
}
[...]

Custom container on Jiro

You need to add the settings-xml volume, as shown below. Please note, the m2-repo volume is required as well, otherwise /home/jenkins/.m2/repository is not writable.

pipeline {
  agent {
    kubernetes {
      label 'my-agent-pod'
      yaml """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: maven
    image: maven:alpine
    tty: true
    command:
    - cat
    volumeMounts:
    - name: settings-xml
      mountPath: /home/jenkins/.m2/settings.xml
      subPath: settings.xml
      readOnly: true
    - name: m2-repo
      mountPath: /home/jenkins/.m2/repository
  volumes:
  - name: settings-xml
    configMap: 
      name: m2-dir
      items:
      - key: settings.xml
        path: settings.xml
  - name: m2-repo
    emptyDir: {}
"""
    }
  }
  stages {
    stage('Run maven') {
      steps {
        container('maven') {
          sh 'mvn -version'
        }
      }
    }
  }
}

How can artifacts be deployed to Nexus OSS (repo.eclipse.org)?

If your project does not have its own repository on Nexus yet, simply file a bug and specify which project you'd like a Nexus repo for.

If your project does have its own repository on Nexus already, you can use Maven (or Gradle) to deploy your artifacts. This is also described here: https://wiki.eclipse.org/Services/Nexus#Deploying_artifacts_to_repo.eclipse.org.
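As a minimal sketch, a Maven deployment is driven by a distributionManagement section in your pom.xml followed by mvn clean deploy. The IDs and URLs below are placeholders; the real values for your project's repositories are documented on the Nexus page linked above:

<distributionManagement>
  <repository>
    <!-- placeholder id/URL; use the values provided for your project -->
    <id>repo.eclipse.org</id>
    <name>Project Releases</name>
    <url>https://repo.eclipse.org/content/repositories/PROJECT-releases/</url>
  </repository>
  <snapshotRepository>
    <id>repo.eclipse.org</id>
    <name>Project Snapshots</name>
    <url>https://repo.eclipse.org/content/repositories/PROJECT-snapshots/</url>
  </snapshotRepository>
</distributionManagement>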

Note (no separate Maven settings file required): On our new infra (Jiro), you don't need to specify a separate Maven settings file for deployment to Nexus. All information is contained in the default Maven settings file located at /home/jenkins/.m2/settings.xml.


How can artifacts be deployed to OSSRH / Maven Central?

Deploying artifacts to OSSRH (OSS Repository Hosting, provided by Sonatype) requires an account at OSSRH, and all artifacts must be signed with GPG. The Eclipse IT team will set this up for the project; please file a bug for this first.

Required steps for a freestyle job

Note: The following instructions only work on the new infra (Jiro!). They do not work on CJE (jenkins.eclipse.org). Instructions for CJE can be found here: EE4J_Build.


1. Insert secret-subkeys.asc as a secret file in the job configuration. [Screenshot: InjectSecretFile2.png]
2. Import GPG keyring with --batch and trust the keys non-interactively in a shell build step (before the Maven call)
# import the keyring non-interactively
gpg --batch --import "${KEYRING}"
# list all imported key fingerprints and set their trust level to ultimate (5)
for fpr in $(gpg --list-keys --with-colons | awk -F: '/fpr:/ {print $10}' | sort -u);
do
  echo -e "5\ny\n" | gpg --batch --command-fd 0 --expert --edit-key $fpr trust;
done

[Screenshot: GpgImport.png]
3. Since a newer GPG version (2.1 or later) is used on the new infra, it's required to add --pinentry-mode loopback as a gpg argument in the pom.xml:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-gpg-plugin</artifactId>
  <version>1.6</version>
  <executions>
    <execution>
      <id>sign-artifacts</id>
      <phase>verify</phase>
      <goals>
        <goal>sign</goal>
      </goals>
      <configuration>
        <gpgArguments>
          <arg>--pinentry-mode</arg>
          <arg>loopback</arg>
        </gpgArguments>
      </configuration>
    </execution>
  </executions>
</plugin>

Required steps for a pipeline job

This is a simple pipeline job that allows testing the GPG signing.

Note: The following script only works on the new infra (Jiro!). It does not work on CJE (jenkins.eclipse.org). Instructions for CJE can be found here: EE4J_Build.


pipeline {
    agent any
    tools {
        maven 'apache-maven-latest'
        jdk 'adoptopenjdk-hotspot-jdk8-latest'
    }
    stages {
        stage('Build') {
            steps {
                sh "mvn -B -U archetype:generate -DgroupId=com.mycompany.app -DartifactId=my-app -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false"
                sh '''cat >my-app/pom.xml <<EOL
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.mycompany.app</groupId>
  <artifactId>my-app</artifactId>
  <packaging>jar</packaging>
  <version>1.0-SNAPSHOT</version>
  <name>my-app</name>
  <url>http://maven.apache.org</url>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-gpg-plugin</artifactId>
        <version>1.6</version>
        <executions>
          <execution>
            <id>sign-artifacts</id>
            <phase>verify</phase>
            <goals>
              <goal>sign</goal>
            </goals>
            <configuration>
              <gpgArguments>
                <arg>--pinentry-mode</arg>
                <arg>loopback</arg>
              </gpgArguments>
            </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>
EOL'''
                withCredentials([file(credentialsId: 'secret-subkeys.asc', variable: 'KEYRING')]) {
                    sh 'gpg --batch --import "${KEYRING}"'
                    sh 'for fpr in $(gpg --list-keys --with-colons  | awk -F: \'/fpr:/ {print $10}\' | sort -u); do echo -e "5\ny\n" |  gpg --batch --command-fd 0 --expert --edit-key ${fpr} trust; done'
                    sh "mvn -B -f my-app/pom.xml clean verify"
                }
                sh 'gpg --verify my-app/target/my-app-1.0-SNAPSHOT.jar.asc'
            }
        }
    }
}

Custom container on Jiro

When you are using a custom container on Jiro, you will need to add the settings-xml and settings-security-xml volumes, as shown below. Please note that, as before, the m2-repo volume is required as well, otherwise /home/jenkins/.m2/repository is not writable.

pipeline {
  agent {
    kubernetes {
      label 'my-agent-pod'
      yaml """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: maven
    image: maven:alpine
    tty: true
    command:
    - cat
    volumeMounts:
    - name: settings-xml
      mountPath: /home/jenkins/.m2/settings.xml
      subPath: settings.xml
      readOnly: true
    - name: toolchains-xml
      mountPath: /home/jenkins/.m2/toolchains.xml
      subPath: toolchains.xml
      readOnly: true
    - name: settings-security-xml
      mountPath: /home/jenkins/.m2/settings-security.xml
      subPath: settings-security.xml
      readOnly: true
    - name: m2-repo
      mountPath: /home/jenkins/.m2/repository
  volumes:
  - name: settings-xml
    secret:
      secretName: m2-secret-dir
      items:
      - key: settings.xml
        path: settings.xml
  - name: toolchains-xml
    configMap:
      name: m2-dir
      items:
      - key: toolchains.xml
        path: toolchains.xml
  - name: settings-security-xml
    secret:
      secretName: m2-secret-dir
      items:
      - key: settings-security.xml
        path: settings-security.xml
  - name: m2-repo
    emptyDir: {}
"""
    }
  }
  stages {
    stage('Run maven') {
      steps {
        container('maven') {
          sh 'mvn -version'
        }
      }
    }
  }
}

How do I use /opt/tools in a custom container?

You need to specify the tools persistent volume, as shown below.

pipeline {
  agent {
    kubernetes {
      label 'my-agent-pod'
      yaml """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: custom-name
    image: my-custom-image:latest
    tty: true
    command:
    - cat
    volumeMounts:
    - name: tools
      mountPath: /opt/tools
  volumes:
  - name: tools
    persistentVolumeClaim:
      claimName: tools-claim
"""
    }
  }
  stages {
    stage('Run maven') {
      steps {
        container('custom-name') {
          sh '/opt/tools/apache-maven/latest/bin/mvn -version'
        }
      }
    }
  }
}
Important (tools claim): On CJE (jenkins.eclipse.org) the claimName of the persistentVolumeClaim is "tools-claim"; on Jiro the claimName is tools-claim-jiro-<project_shortname> (e.g. tools-claim-jiro-cbi for the CBI project).


Can projects get admin access on their Jiro JIPP?

In general, we want to avoid handing out admin rights in the future. At the same time, in the spirit of configuration as code, we are planning to allow projects to submit pull requests to our Jiro repo to change the configuration of their CI instance (e.g. adding plugins). This will allow better tracking of configuration changes and rollback in case of issues.

We understand that some projects heavily rely on their admin permissions. We will make sure to find an amicable solution in those cases.

How can a new plugin be added to a Jiro JIPP?

The preferred way is to open a pull request in the Jiro GitHub repo. For example, to add a new plugin to the CBI instance, one would need to edit https://github.com/eclipse-cbi/jiro/blob/master/instances/technology.cbi/jenkins/plugins-list and add the ID of the plugin. If the file ../instances/<project_name>/jenkins/plugins-list does not exist yet, it needs to be created first.

Only additional plugins that are not listed in https://wiki.eclipse.org/Jenkins#Default_plugins_-_Jiro_.28ci-staging.eclipse.org.2Fxxx_and_ci.eclipse.org.2Fxxx.29 need to be added to the plugins-list file.

The ID of a Jenkins plugin can be found here: https://plugins.jenkins.io/
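The plugins-list file itself is plain text with one plugin ID per line, for example (hypothetical content):

warnings-ng
xvfb
sonar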

If this sounds too complicated, you can also open a bug on Bugzilla.

Jenkins configuration and tools (bare-metal infra)

Check the CI best practices page for general recommendations on how to set up Jenkins.

Tools (and locations)

Build tools like JDK, Maven, Ant and Gradle are already configured in every Jenkins instance.

  • JDK

Tools labeled jdk are the Oracle JDK licensed under the Oracle Binary Code License (BCL) for Java SE. Tools labeled openjdk are Oracle builds of OpenJDK under the GPL+CE license. Starting with JDK 11, the Eclipse Foundation won't provide any version of the JDK from Oracle licensed under the commercial Oracle Technology Network (OTN) terms. See the OpenJDK and Oracle JDK sections above to learn more.

    • openjdk-jdk12-latest (/shared/common/java/openjdk/jdk-12_x64-latest)
    • openjdk-jdk11-latest (/shared/common/java/openjdk/jdk-11_x64-latest)
    • jdk10-latest (/shared/common/java/oracle/jdk-10_x64-latest)
    • jdk9-latest (/shared/common/jdk-9_x64-latest)
    • jdk1.8.0-latest (/shared/common/jdk1.8.0_x64-latest)
    • jdk1.7.0-latest (/shared/common/jdk1.7.0-latest)
    • jdk1.6.0-latest (/shared/common/jdk1.6.0-latest)
    • jdk1.5.0-latest (/shared/common/jdk1.5.0-latest)
  • Maven
    • apache-maven-latest (/shared/common/apache-maven-latest)
    • apache-maven-3.0.5 (/shared/common/apache-maven-3.0.5)
  • Ant
    • apache-ant-1.9.6 (/shared/common/apache-ant-1.9.6)
  • Gradle
    • gradle-latest (/shared/common/gradle-latest)
    • gradle-3.1 (/shared/common/gradle-3.1)

More generally, all tools listed on http://build.eclipse.org/common/ are available from /shared/common/.

If you need tools that are not installed as general-purpose tools, project members can install them in your project's home directory, for example under ~/buildtools (a shell example follows below). See the email on cbi-dev.
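For example, a one-time shell build step could unpack a tool into the project's home directory like this (the URL and version are placeholders, not a real download location):

mkdir -p ~/buildtools
cd ~/buildtools
# download and unpack the tool; replace URL/version with the real ones
wget -q https://example.org/downloads/sometool-1.2.3.tar.gz
tar xzf sometool-1.2.3.tar.gz
~/buildtools/sometool-1.2.3/bin/sometool --version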

Default plugins

The following plugins are installed by default. Additional plugins can be installed on request.

  • ace-editor
  • analysis-core
  • ant
  • antisamy-markup-formatter
  • authentication-tokens
  • bouncycastle-api
  • branch-api
  • build-timeout
  • cloudbees-folder
  • conditional-buildstep
  • credentials
  • credentials-binding
  • dashboard-view
  • disk-usage
  • display-url-api
  • docker-commons
  • docker-workflow
  • durable-task
  • external-monitor-job
  • extra-columns
  • findbugs
  • gerrit-trigger
  • git
  • git-client
  • git-parameter
  • git-server
  • gradle
  • greenballs
  • handlebars
  • icon-shim
  • javadoc
  • jobConfigHistory
  • jquery
  • jquery-detached
  • junit
  • ldap
  • mailer
  • matrix-auth
  • matrix-project
  • maven-plugin
  • momentjs
  • pam-auth
  • parameterized-trigger
  • pipeline-build-step
  • pipeline-graph-analysis
  • pipeline-input-step
  • pipeline-milestone-step
  • pipeline-model-api
  • pipeline-model-declarative-agent
  • pipeline-model-definition
  • pipeline-model-extensions
  • pipeline-rest-api
  • pipeline-stage-step
  • pipeline-stage-tags-metadata
  • pipeline-stage-view
  • plain-credentials
  • promoted-builds
  • rebuild
  • resource-disposer
  • scm-api
  • script-security
  • sonar
  • ssh-credentials
  • ssh-slaves
  • structs
  • timestamper
  • token-macro
  • windows-slaves
  • workflow-aggregator
  • workflow-api
  • workflow-basic-steps
  • workflow-cps
  • workflow-cps-global-lib
  • workflow-durable-task-step
  • workflow-job
  • workflow-multibranch
  • workflow-scm-step
  • workflow-step-api
  • workflow-support
  • ws-cleanup
  • xvnc

Setup for specific plugins

GitHub Pull Request Builder Plugin

The GitHub Pull Request Builder Plugin (GHPRB) allows building/testing pull requests and provides immediate feedback in the pull request on GitHub.

To set this up, please open a Bugzilla issue against the CI-Jenkins component (Product: Community) and request this feature.

Here are some details about what happens during the setup process:

  • The GHPRB plugin is installed in the JIPP.
  • Webmaster creates a GitHub bot user and adds it to the respective team on GitHub.
  • The credentials of the GitHub bot user are added to the JIPP (with user name and password, because SSH keys are not recommended/supported by the plugin).
  • The GHPRB plugin's main config is set up.

Once the ticket is resolved you should be able to configure and use the GHPRB plugin in your jobs.

Instructions on how to set up the GHPRB plugin in jobs can be found here: https://github.com/jenkinsci/ghprb-plugin/blob/master/README.md

Please note: the 'Use github hooks for build triggering' option has to be disabled, since it requires admin permissions for the GitHub bot user, which we don't allow. With this option turned off, Jenkins polls GitHub instead, which should work just fine in most cases.

Jenkins Pipeline (aka configuration in code)

An example of how Eclipse plugins can be built with Tycho using a Jenkins pipeline can be found here (thanks to Mickael Istria!):
https://github.com/eclipse/aCute/blob/master/Jenkinsfile

More info about Jenkins Pipeline can be found here:
https://jenkins.io/doc/book/pipeline/
https://jenkins.io/doc/book/pipeline/shared-libraries/

Gerrit Trigger Plugin

You may use the Jenkins Gerrit Trigger Plugin in order to run a Jenkins job to verify each new patchset uploaded to Gerrit for code review. Jenkins (named "CI Bot") will then also vote on these changes using the "Verify" voting category.

[Screenshot: Jgit.gerrit-reviewer.png]

[Screenshot: Jgit.gerrit-vote.png]

Below, the configuration sections for the Git plugin and the Gerrit trigger plugin of the verification job used by the EGit project may serve as an example.

General configuration settings

  1. Check This project is parameterized. Click the Add button and select String Parameter. Set the parameter Name to GERRIT_REFSPEC and Default Value to refs/heads/master.

[Screenshot: Gerrit-refspec-param.png]

Configuration of Source Code Management

  1. Under Source Code Management select Git.
  2. Under Repositories, click on Advanced and change the Refspec to ${GERRIT_REFSPEC}.
  3. Under Additional Behaviours, add Strategy for choosing what to build and select Gerrit Trigger as the strategy.

[Screenshot: Jgit.gerrit-git-config.png]

Note that the section Branches to build won't be used and may be deleted.

Configuration of Build Triggers

  1. Under Build Triggers, select Gerrit event.

[Screenshot: Jgit.gerrit-gerrit-config.png]

  2. Under Trigger on, click on Add and select at least Patchset Created. This will configure the job to run on each new patchset. You can also add additional triggers, like Comment Added Contains Regular Expression. In the example below, a build will be triggered for the latest patch set if the comment is exactly CI Bot, run a build please.

[Screenshot: Gerrit-trigger-events.png]

  3. Finally, configure at least one Gerrit Project. The pattern is the name of the project (i.e. if your repository is git.eclipse.org/<xx>/<yy>.git, fill in the pattern xx/yy). The Branches section is the list of branches to listen to for events as configured above. Generally, you want one, named master to build patches submitted for the master branch, or ** to build patches submitted to any branch. Set the type to Path.

[Screenshot: Gerrit-trigger-project.png]

Configuration of the build action

Under "Build" click the "Add a build step" button, and select the appropriate action. The actual action depends on what you want Hudson to do. A typical example, for projects build with Maven is to select "Invoke Maven 3" and set "Maven 3" to "apache-maven-latest" and "Goals" to "clean verify".

Howto

How to build my project's website with Jenkins?

The preferred static website generator for building Eclipse project websites is Hugo. You should first put your Hugo sources in a dedicated Git repository, either at GitHub if your source code is already hosted there, or at git.eclipse.org. If you don't have such a repository already, feel free to open a request and the Eclipse IT team will create one for you.

Note that each and every Eclipse project automatically gets a Git repository at git.eclipse.org/www.eclipse.org/<project_name> (see this repository index for the complete list). This is not where you want to push your Hugo sources. This repository contains the webpages that are automatically and regularly pulled and published on the www.eclipse.org HTTP server. All the content from the master branch will eventually be available at the URL https://www.eclipse.org/<project_name>/.

Once your Hugo sources are in the proper repository, create a file named Jenkinsfile at the root of the repository with the following content (don't forget to specify the proper value for PROJECT_NAME and PROJECT_BOT_NAME environment variable):

pipeline {
 
  agent {
    kubernetes {
      label 'hugo-agent'
      yaml """
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: hugo
  name: hugo-pod
spec:
  containers:
    - name: jnlp
      volumeMounts:
      - mountPath: /home/jenkins/.ssh
        name: volume-known-hosts
    - name: hugo
      image: eclipsecbi/hugo:0.42.1
      command:
      - cat
      tty: true
  volumes:
  - configMap:
      name: known-hosts
    name: volume-known-hosts
"""
    }
  }
 
  environment {
    PROJECT_NAME = "<project_name>" // must be all lowercase.
    PROJECT_BOT_NAME = "<Project_name> Bot" // Capitalize the name
  }
 
  triggers { pollSCM('H/10 * * * *') }
 
  options {
    buildDiscarder(logRotator(numToKeepStr: '5'))
    checkoutToSubdirectory('hugo')
  }
 
  stages {
    stage('Checkout www repo') {
      steps {
        dir('www') {
            sshagent(['git.eclipse.org-bot-ssh']) {
                sh '''
                    git clone ssh://genie.${PROJECT_NAME}@git.eclipse.org:29418/www.eclipse.org/${PROJECT_NAME}.git .
                    git checkout ${BRANCH_NAME}
                '''
            }
        }
      }
    }
    stage('Build website (master) with Hugo') {
      when {
        branch 'master'
      }
      steps {
        container('hugo') {
            dir('hugo') {
                sh 'hugo -b https://www.eclipse.org/${PROJECT_NAME}/'
            }
        }
      }
    }
    stage('Build website (staging) with Hugo') {
      when {
        branch 'staging'
      }
      steps {
        container('hugo') {
            dir('hugo') {
                sh 'hugo -b https://staging.eclipse.org/${PROJECT_NAME}/'
            }
        }
      }
    }
    stage("Push to ${env.BRANCH_NAME} branch") {
      when {
        anyOf {
          branch "master"
          branch "staging"
        }
      }
      steps {
        sh 'rm -rf www/* && cp -Rvf hugo/public/* www/'
        dir('www') {
            sshagent(['git.eclipse.org-bot-ssh']) {
                sh '''
                git add -A
                if ! git diff --cached --exit-code; then
                  echo "Changes have been detected, publishing to repo 'www.eclipse.org/${PROJECT_NAME}'"
                  git config --global user.email "${PROJECT_NAME}-bot@eclipse.org"
                  git config --global user.name "${PROJECT_BOT_NAME}"
                  git commit -m "Website build ${JOB_NAME}-${BUILD_NUMBER}"
                  git log --graph --abbrev-commit --date=relative -n 5
                  git push origin HEAD:${BRANCH_NAME}
                else
                  echo "No change have been detected since last build, nothing to publish"
                fi
                '''
            }
        }
      }
    }
  }
}


Finally, you can create a multibranch pipeline job on your project's Jenkins instance. It will automatically be triggered on every new push to your Hugo source repository, build the website and push it to the git.eclipse.org/www.eclipse.org/<project_name>.git repository. As mentioned above, the Eclipse Foundation website's infrastructure will eventually pull the content of the latter and your website will be published and available on http://www.eclipse.org/<project_name>.

If you don't have a Jenkins instance already, ask for one. If you need assistance with the process, open a ticket.

How to connect a JIPP to an external slave?

The Jenkins (JIPP) instances hosted by the Eclipse Foundation are located on a private, non-routable network that can only establish HTTP/S connections to the Internet. To connect your JIPP to an external slave, you'll need to perform some SSH wizardry:

*NIX and SSH slaves
  • On your JIPP, create a job that will only run this shell script once. Edit as required.
ssh-keygen -t rsa -b 4096 -C "CBI genie" -f ~/.ssh/id_rsa.cbi -N ""
chmod 600 ~/.ssh/id_rsa.cbi.pub
echo "Adding pub key to authorized_keys file to log into build.eclipse.org"
cat ~/.ssh/id_rsa.cbi.pub >> ~/.ssh/authorized_keys
echo "Copy this key to your remote server's ~/.ssh/authorized_keys file"
cat ~/.ssh/id_rsa.cbi.pub
echo "Creating config"
echo "" >> ~/.ssh/config
echo "Host build.eclipse.org" >> ~/.ssh/config
echo "StrictHostKeyChecking no" >> ~/.ssh/config
echo "Host external-slave" >> ~/.ssh/config
echo "StrictHostKeyChecking no" >> ~/.ssh/config
echo "IdentityFile ~/.ssh/id_rsa.cbi"  >> ~/.ssh/config
echo "PubkeyAuthentication yes"  >> ~/.ssh/config
echo "ServerAliveInterval 60"  >> ~/.ssh/config
echo "ProxyCommand ssh -o StrictHostKeyChecking=no -i ~/.ssh/id_rsa.cbi build.eclipse.org netcat %h 22"  >> ~/.ssh/config
chmod 600 ~/.ssh/config
echo 
echo "Finished setting up SSH config"
cat ~/.ssh/config
  • Copy the public key that appears in the console to the authorized_keys file of the remote server, as instructed.
  • In Jenkins' management, create a new slave node. Choose the option to "Launch slave via execution of command on Master". If you don't have administration rights on your JIPP, please open a ticket.
  • Use this command, replacing (projectname) with the name of your project and username with the external username. Sadly, bash ~ or $HOME cannot be used here. To make sure the slave.jar always stays in sync with your Jenkins version, use the slave.jar available from your JIPP instance.
ssh -C -i /opt/public/hipp/homes/genie.(projectname)/.ssh/id_rsa.cbi username@external-slave "wget -qO slave.jar https://ci.eclipse.org/cbi/jnlpJars/slave.jar ; java -jar slave.jar"
Windows via jnlp

To be determined, see bug 490701. SSH also works if OpenSSH/Cygwin is installed on the Windows host.

HIPP to JIPP migration (HIPP2JIPP)

As of March 1st, 2018, all Hudson masters have been migrated to Jenkins. See https://ci.eclipse.org

  • The following tool was used to convert the configuration XMLs to a format that Jenkins understands: https://github.com/eclipse/hipp2jipp
  • Some known issues are listed here: https://github.com/eclipse/hipp2jipp#known-issues
  • Also some custom scripts (specific to the Eclipse Foundation's build infrastructure) have been used.
  • In most cases the migration was straightforward and configurations were converted without any problems. Backups were created and can be made available on request if configurations are lost.

Cluster migration

See CBI/Jenkins_Migration_FAQ
