Concourse

3 Install

Concourse is distributed as a single concourse binary, which contains the logic for running both a web node and a worker node. The binary is fairly self-contained, making it ideal for tossing onto a VM by hand or orchestrating it with Docker, Kubernetes, or other ops tooling.

For the sake of brevity and clarity, this document will focus solely on the concourse binary. Documentation for other platforms lives in their respective GitHub repositories, as linked from the Download page.

Note: this document is not an exhaustive reference for the concourse CLI! Consult the --help output if you're looking for a knob to turn.

3.1 Prerequisites

  • Grab the appropriate binary for your platform from the downloads section.

  • On Linux you'll need kernel v3.19 or later, with user namespace support enabled. Windows and Darwin don't really need anything special.

  • A PostgreSQL 9.5+ server running somewhere with an empty database created. If you're going to run a server yourself, refer to your platform or Linux distribution's installation instructions; we can't feasibly maintain the docs for this subject ourselves. A minimal example of creating the database follows this list.
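
For example, on a Debian/Ubuntu-style PostgreSQL install (where administrative commands run as the postgres system user), creating an empty database owned by the user who will run Concourse might look like the following sketch; the atc name matches the Quick Start default, and you should adjust the commands for your platform:

# assumes Debian/Ubuntu-style packaging; 'atc' matches the Quick Start default
sudo -u postgres createdb -O "$(whoami)" atc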

3.2 Quick Start

Before you spend time getting a cluster up and running, you might want to just kick the tires a bit and run everything at once. This can be achieved with the quickstart command:

concourse quickstart \
  --basic-auth-username myuser \
  --basic-auth-password mypass \
  --external-url http://my-ci.example.com \
  --worker-work-dir /opt/concourse/worker

This command is shorthand for running a single web node and worker node on the same machine, auto-wired to trust each other. We've also configured some silly basic auth for the main team - you may want to change that (see Configuring Auth).

So far we've assumed that you have a local PostgreSQL server running on the default port (5432) with an atc database, accessible by the current UNIX user. If your database lives elsewhere, just specify the --postgres-* flags (consult concourse quickstart --help for more information).
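
For instance, pointing quickstart at a database on another host might look like the following; the host, credentials, and database name are placeholders:

concourse quickstart \
  --basic-auth-username myuser \
  --basic-auth-password mypass \
  --postgres-host 10.0.32.0 \
  --postgres-user user \
  --postgres-password pass \
  --postgres-database concourse \
  --worker-work-dir /opt/concourse/worker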

The --external-url flag is not technically necessary, so it's safe to omit if you're just testing things. Concourse uses it as a base when generating URLs to itself, so you won't want those to point at the default http://127.0.0.1:8080 when you're accessing Concourse from a machine other than the server.

3.3 Multi-node Cluster

Beyond quickstart, the Concourse binary includes separate commands for running a multi-node cluster. This is necessary for high availability, or simply to be able to run things across more than one worker.

3.3.1 Generating Keys

First, you'll need to generate 3 private keys (well, 2, plus 1 for each worker):

session_signing_key (currently must be RSA)

Used for signing user session tokens, and by the TSA to sign its own tokens in the requests it makes to the ATC.

tsa_host_key

Used for the TSA's SSH server. This is the key whose fingerprint the ssh command shows when it warns you about connecting to a host it hasn't seen before.

worker_key (one per worker)

Used for authorizing worker registration. There can actually be an arbitrary number of these keys; they are just listed to authorize worker SSH access.

To generate these keys, run:

ssh-keygen -t rsa -f tsa_host_key -N ''
ssh-keygen -t rsa -f worker_key -N ''
ssh-keygen -t rsa -f session_signing_key -N ''

...and we'll also start on an authorized_keys file, currently listing this initial worker key:

cp worker_key.pub authorized_worker_keys
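
If you later add more workers, each with its own key, generate them the same way and append their public keys to this file; the other_worker_key name below is just an example:

ssh-keygen -t rsa -f other_worker_key -N ''
cat other_worker_key.pub >> authorized_worker_keys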

3.3.2 Running web Nodes

The concourse binary can run a web node via the web subcommand, like so:

concourse web \
  --basic-auth-username myuser \
  --basic-auth-password mypass \
  --session-signing-key session_signing_key \
  --tsa-host-key tsa_host_key \
  --tsa-authorized-keys authorized_worker_keys \
  --external-url http://my-ci.example.com

Just as with Quick Start, this example is configuring basic auth for the main team, and assumes a local PostgreSQL server. You'll want to consult concourse web --help for more configuration options.

Web nodes can be scaled up for high availability, and they'll also roughly share their scheduling workloads, using the database to synchronize. This is done by just running more web commands on different machines, and optionally putting them behind a load balancer.

To run a cluster of web nodes, you'll just need to pass the following flags:

  • The --postgres-* flags must all be set to the same database.

  • The --peer-url flag must be a URL that other web nodes can use to reach this individual web node, so it just has to be reachable within their private network, e.g. a 10.x.x.x address.

  • The --external-url should be the URL used to reach any ATC, i.e. the URL pointing to your load balancer.

For example:

Node 0:

concourse web \
  --basic-auth-username myuser \
  --basic-auth-password mypass \
  --session-signing-key session_signing_key \
  --tsa-host-key tsa_host_key \
  --tsa-authorized-keys authorized_worker_keys \
  --postgres-host 10.0.32.0 \
  --postgres-user user \
  --postgres-password pass \
  --postgres-database concourse \
  --external-url https://ci.example.com \
  --peer-url http://10.0.16.10:8080

Node 1 (only difference is --peer-url):

concourse web \
  --basic-auth-username myuser \
  --basic-auth-password mypass \
  --session-signing-key session_signing_key \
  --tsa-host-key tsa_host_key \
  --tsa-authorized-keys authorized_worker_keys \
  --postgres-host 10.0.32.0 \
  --postgres-user user \
  --postgres-password pass \
  --postgres-database concourse \
  --external-url https://ci.example.com \
  --peer-url http://10.0.16.11:8080

3.3.3 Running worker Nodes

The concourse binary can run a worker node via the worker subcommand, like so:

sudo concourse worker \
  --work-dir /opt/concourse/worker \
  --tsa-host 127.0.0.1:2222 \
  --tsa-public-key tsa_host_key.pub \
  --tsa-worker-private-key worker_key

Note that the worker must be run as root, as it orchestrates containers.

You may want a few workers, depending on the resource usage of your pipeline. There should be one per machine; running multiple on one box doesn't really make sense, as each worker runs as many containers as Concourse requests of it.

The --work-dir flag specifies where container data should be placed. Make sure it has plenty of disk space available, as it's where all the disk usage across your builds and resources will end up.

The --tsa-host refers to wherever the TSA on your web node is listening. This may be an address to a load balancer if you're running multiple web nodes, or just a local address like 127.0.0.1:2222 if you're running everything on one box.

The --tsa-public-key flag is used to ensure we're connecting to the TSA we should be connecting to, and is used like known_hosts with the ssh command. Refer to Generating Keys if you're not sure what this means.

The --tsa-worker-private-key flag specifies the key to use when authenticating to the TSA. Refer to Generating Keys if you're not sure what this means.
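
Putting these flags together for a worker joining a multi-node cluster and registering through a load balancer in front of the TSA, the command might look like this; the ci-lb.example.com address is a placeholder:

sudo concourse worker \
  --work-dir /opt/concourse/worker \
  --tsa-host ci-lb.example.com:2222 \
  --tsa-public-key tsa_host_key.pub \
  --tsa-worker-private-key worker_key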

Workers have a statically configured platform and a set of tags, both of which determine where steps in a Build Plan are scheduled.
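
As a rough sketch, tags are typically assigned when the worker is started, for example via the --tag flag; consult concourse worker --help to confirm the exact flag names for your version:

# the --tag flag below is an example; confirm it with 'concourse worker --help'
sudo concourse worker \
  --work-dir /opt/concourse/worker \
  --tsa-host 127.0.0.1:2222 \
  --tsa-public-key tsa_host_key.pub \
  --tsa-worker-private-key worker_key \
  --tag gpu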

The Linux concourse binary comes with a set of core resource types baked in. If you are planning to use them, you'll need at least one Linux worker.