Concourse is distributed as a single
concourse binary, which contains the logic for running both a
web node and a
worker node. The binary is fairly self-contained, making it ideal for tossing onto a VM by hand or orchestrating it with Docker, Kubernetes, or other ops tooling.
For the sake of brevity and clarity, this document will focus solely on the
concourse binary. Documentation for other platforms lives in their respective GitHub repositories, as linked from the Downloads page.
Note: this document is not an exhaustive reference for the
concourse CLI! Consult the
--help output if you're looking for a knob to turn.
Grab the appropriate binary for your platform from the downloads section.
On Linux you'll need kernel v3.19 or later, with user namespace support enabled. Windows and Darwin don't really need anything special.
A PostgreSQL 9.5+ server running somewhere, with an empty database created. If you're going to run the server yourself, refer to your platform's or Linux distribution's installation instructions; we can't feasibly maintain the docs for this subject ourselves.
Before you spend time getting a cluster up and running, you might want to just kick the tires a bit and run everything at once. This can be achieved with the quickstart command:
concourse quickstart \
  --basic-auth-username myuser \
  --basic-auth-password mypass \
  --external-url http://my-ci.example.com \
  --worker-work-dir /opt/concourse/worker
This command is shorthand for running a single
web node and
worker node on the same machine, auto-wired to trust each other. We've also configured some silly basic auth for the
main team - you may want to change that (see Configuring Auth).
So far we've assumed that you have a local PostgreSQL server running on the default port (
5432) with an
atc database, accessible by the current UNIX user. If your database lives elsewhere, just specify the
--postgres-* flags (consult
concourse quickstart --help for more information).
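For example, if your database lives on another host, the invocation might look like this (the host, credentials, and database name below are placeholders — substitute your own):

```shell
concourse quickstart \
  --basic-auth-username myuser \
  --basic-auth-password mypass \
  --external-url http://my-ci.example.com \
  --worker-work-dir /opt/concourse/worker \
  --postgres-host 10.0.32.0 \
  --postgres-user user \
  --postgres-password pass \
  --postgres-database atc
```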
The
--external-url flag is not technically necessary, so if you're just testing things it's safe to omit it. Concourse uses it as the base when generating URLs to itself, so you won't want those to point at the default
http://127.0.0.1:8080 when you're browsing from a different machine than the one the server runs on.
In addition to
quickstart, the concourse binary includes separate commands for running each node of a multi-node cluster. This is necessary for high availability, or simply for running builds across more than one worker.
First, you'll need to generate 3 private keys (well, 2, plus 1 for each worker):
session_signing_key (currently must be RSA)
Used for signing user session tokens, and by the TSA to sign its own tokens in the requests it makes to the ATC.
tsa_host_key
Used for the TSA's SSH server. This is the key whose fingerprint you see when the
ssh command warns you about connecting to a host it hasn't seen before.
worker_key (one per worker)
Used for authorizing worker registration. There can actually be an arbitrary number of these keys; they are just listed to authorize worker SSH access.
To generate these keys, run:
ssh-keygen -t rsa -f tsa_host_key -N ''
ssh-keygen -t rsa -f worker_key -N ''
ssh-keygen -t rsa -f session_signing_key -N ''
...and we'll also start on an
authorized_keys file, currently listing this initial worker key:
cp worker_key.pub authorized_worker_keys
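Later, to authorize an additional worker, you can generate another key pair and append its public half to the same file (a sketch; worker2_key is an assumed filename, not one used elsewhere in this document):

```shell
# Generate a key pair for a second worker...
ssh-keygen -t rsa -f worker2_key -N ''
# ...and append its public key to the file the web node is given
# via --tsa-authorized-keys.
cat worker2_key.pub >> authorized_worker_keys
```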
The
concourse binary can run a
web node via the
web subcommand, like so:
concourse web \
  --basic-auth-username myuser \
  --basic-auth-password mypass \
  --session-signing-key session_signing_key \
  --tsa-host-key tsa_host_key \
  --tsa-authorized-keys authorized_worker_keys \
  --external-url http://my-ci.example.com
The
web node can be scaled up for high availability, and the instances will also roughly share their scheduling workloads, using the database to synchronize. This is done by just running more
web commands on different machines, and optionally putting them behind a load balancer.
To run a cluster of
web nodes, you'll just need to pass the following flags:
The --postgres-* flags must all point to the same database.
The --peer-url flag must be specified as a URL used to reach the individual
web node from the other
web nodes, so it just has to be a URL reachable within their private network, e.g. an internal address like http://10.0.16.10:8080.
The --external-url flag should be the URL used to reach any ATC, i.e. the URL pointing to your load balancer.
Node 0:
concourse web \
  --basic-auth-username myuser \
  --basic-auth-password mypass \
  --session-signing-key session_signing_key \
  --tsa-host-key tsa_host_key \
  --tsa-authorized-keys authorized_worker_keys \
  --postgres-host 10.0.32.0 \
  --postgres-user user \
  --postgres-password pass \
  --postgres-database concourse \
  --external-url https://ci.example.com \
  --peer-url http://10.0.16.10:8080
Node 1 (the only difference is
--peer-url):
concourse web \
  --basic-auth-username myuser \
  --basic-auth-password mypass \
  --session-signing-key session_signing_key \
  --tsa-host-key tsa_host_key \
  --tsa-authorized-keys authorized_worker_keys \
  --postgres-host 10.0.32.0 \
  --postgres-user user \
  --postgres-password pass \
  --postgres-database concourse \
  --external-url https://ci.example.com \
  --peer-url http://10.0.16.11:8080
The
concourse binary can run a
worker node via the
worker subcommand, like so:
sudo concourse worker \
  --work-dir /opt/concourse/worker \
  --tsa-host 127.0.0.1:2222 \
  --tsa-public-key tsa_host_key.pub \
  --tsa-worker-private-key worker_key
Note that the worker must be run as
root, as it orchestrates containers.
You may want a few workers, depending on the resource usage of your pipeline. There should be one per machine; running multiple on one box doesn't really make sense, as each worker runs as many containers as Concourse requests of it.
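Once a worker is up, you can verify that it registered by listing workers with the fly CLI (a sketch; it assumes you've already logged in to a fly target named ci, which is a placeholder name):

```shell
# List the workers registered with the cluster; the output includes
# each worker's name, container count, platform, and tags.
fly -t ci workers
```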
The --work-dir flag specifies where container data should be placed. Make sure it has plenty of disk space available, as it's where all the disk usage across your builds and resources will end up.
The --tsa-host flag refers to wherever the TSA on your
web node is listening. This may be an address to a load balancer if you're running multiple
web nodes, or just a local address like
127.0.0.1:2222 if you're running everything on one box.
The --tsa-public-key flag is used to ensure we're connecting to the TSA we should be connecting to, and works like
known_hosts with the
ssh command. Refer to Generating Keys if you're not sure what this means.
The --tsa-worker-private-key flag specifies the key to use when authenticating to the TSA. Refer to Generating Keys if you're not sure what this means.
Workers have a statically configured
platform and a set of
tags, both of which determine where steps in a Build Plan are scheduled.
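For example, the worker subcommand accepts a repeatable --tag flag; steps in a Build Plan that specify matching tags will then only be scheduled on workers carrying them. A sketch, reusing the flags from the command above (the gpu tag is an arbitrary example):

```shell
# Register this worker with a "gpu" tag so that only steps
# tagged "gpu" in their build plans are scheduled onto it.
sudo concourse worker \
  --work-dir /opt/concourse/worker \
  --tsa-host 127.0.0.1:2222 \
  --tsa-public-key tsa_host_key.pub \
  --tsa-worker-private-key worker_key \
  --tag gpu
```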
The
concourse binary comes with a set of core resource types baked in. If you plan to use them, you'll need at least one Linux worker.