At some point you may want to start putting Concourse on to real hardware. A binary distribution is available in the downloads section.
The binary is fairly self-contained, making it ideal for tossing onto a VM by hand or orchestrating it with Docker, Chef, or other ops tooling.
Grab the appropriate binary for your platform from the downloads section.
For Linux you'll need kernel v3.19 or later, with user namespace support enabled. Windows and Darwin don't really need anything special.
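Before installing on Linux, it can save debugging time to confirm the kernel requirement up front. A minimal sketch (the version parsing is illustrative; it assumes a POSIX shell and `uname`):

```shell
#!/bin/sh
# Sketch: check that the running kernel is at least v3.19.
ver=$(uname -r | cut -d- -f1)        # e.g. "5.15.0-91-generic" -> "5.15.0"
major=$(echo "$ver" | cut -d. -f1)
minor=$(echo "$ver" | cut -d. -f2)
if [ "$major" -gt 3 ] || { [ "$major" -eq 3 ] && [ "$minor" -ge 19 ]; }; then
  echo "kernel $ver: new enough for Concourse workers"
else
  echo "kernel $ver: too old; upgrade to v3.19 or later"
fi
```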
To run Concourse securely you'll need to generate 3 private keys (well, 2, plus 1 for each worker):
session_signing_key (currently must be RSA)
Used for signing user session tokens, and by the TSA to sign its own tokens in the requests it makes to the ATC.
tsa_host_key
Used for the TSA's SSH server. This is the key whose fingerprint you see when the ssh command warns you when connecting to a host it hasn't seen before.
worker_key (one per worker)
Used for authorizing worker registration. There can actually be an arbitrary number of these keys; they are just listed to authorize worker SSH access.
To generate these keys, run:
ssh-keygen -t rsa -f tsa_host_key -N ''
ssh-keygen -t rsa -f worker_key -N ''
ssh-keygen -t rsa -f session_signing_key -N ''
...and we'll also start on an authorized_keys file, currently listing this initial worker key:
cp worker_key.pub authorized_worker_keys
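Since the authorized keys file is just one public key per line, authorizing another worker later is a matter of appending its .pub file. A sketch using placeholder key material (a real deployment would append an actual worker_key.pub generated with ssh-keygen):

```shell
#!/bin/sh
set -e
# Sketch: the echoed strings below stand in for real public key files.
workdir=$(mktemp -d)
cd "$workdir"
echo 'ssh-rsa AAAA...one worker-one' > worker_key.pub
echo 'ssh-rsa AAAA...two worker2_key.pub' > worker2_key.pub

# Start the file from the first worker's key, as above:
cp worker_key.pub authorized_worker_keys
# Later, authorize a second worker by appending its public key:
cat worker2_key.pub >> authorized_worker_keys

wc -l < authorized_worker_keys   # one line per authorized worker
```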
The ATC is the component responsible for scheduling builds, and also serves as the web UI and API.
The TSA provides an SSH interface for securely registering workers, even if they live in their own private network.
The following command will spin up the ATC, listening on port 8080, with some basic auth configured, and a TSA listening on port 2222:

concourse web \
  --basic-auth-username myuser \
  --basic-auth-password mypass \
  --session-signing-key session_signing_key \
  --tsa-host-key tsa_host_key \
  --tsa-authorized-keys authorized_worker_keys \
  --external-url http://my-ci.example.com
This assumes you have a local Postgres server running on the default port (5432) with an atc database, accessible by the current user. If your database lives elsewhere, just specify the --postgres-data-source flag, which is also demonstrated below.
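The --postgres-data-source flag takes a standard PostgreSQL connection URI. As a sketch of its general shape (every component here is a placeholder):

```
postgres://USERNAME:PASSWORD@HOST:PORT/DATABASE
```

With a local server on the default port and an atc database owned by the current user, the flag can be omitted entirely, as in the command above.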
Be sure to replace the --external-url flag with the URI you expect to use to reach your Concourse server.
The ATC can be scaled up for high availability; multiple ATCs will also roughly share their scheduling workloads, using the database to synchronize.
The TSA can also be scaled up, and requires no database as there's no state to synchronize (it just talks to the ATC).
A typical configuration with multiple ATC+TSA nodes would have them sitting behind a load balancer, forwarding port 80 to 8080, port 443 to 4443 (if you've enabled TLS), and port 2222 to 2222.
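As a sketch of that load-balancer layer, here is one possible nginx stream configuration fronting two web nodes. The 10.0.16.x addresses match the node examples below; nginx is just one load balancer choice, and the TSA's default port of 2222 is assumed:

```nginx
stream {
    # HTTP traffic to the ATCs (forward 443 to 4443 similarly if using TLS)
    upstream atc {
        server 10.0.16.10:8080;
        server 10.0.16.11:8080;
    }
    # SSH traffic from workers to the TSAs
    upstream tsa {
        server 10.0.16.10:2222;
        server 10.0.16.11:2222;
    }
    server {
        listen 80;
        proxy_pass atc;
    }
    server {
        listen 2222;
        proxy_pass tsa;
    }
}
```

Because the TSA speaks SSH, its traffic must be balanced at the TCP level (hence the stream block) rather than as HTTP.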
To run multiple web nodes, you'll need to pass the following flags:
--postgres-data-source should all refer to the same database
--peer-url should be a URL used to reach the individual ATC, from other ATCs, i.e. a URL usable within their private network
--external-url should be the URL used to reach any ATC, i.e. the URL to your load balancer
Node 0:

concourse web \
  --basic-auth-username myuser \
  --basic-auth-password mypass \
  --session-signing-key session_signing_key \
  --tsa-host-key tsa_host_key \
  --tsa-authorized-keys authorized_worker_keys \
  --postgres-data-source postgres://user:pass@example.com/atc \
  --external-url https://ci.example.com \
  --peer-url http://10.0.16.10:8080
Node 1 (only difference is --peer-url):
concourse web \
  --basic-auth-username myuser \
  --basic-auth-password mypass \
  --session-signing-key session_signing_key \
  --tsa-host-key tsa_host_key \
  --tsa-authorized-keys authorized_worker_keys \
  --postgres-data-source postgres://user:pass@example.com/atc \
  --external-url https://ci.example.com \
  --peer-url http://10.0.16.11:8080
Workers are Garden servers, continuously heartbeating their presence to the Concourse API. Workers have a statically configured platform and a set of tags, both of which determine where steps in a Build Plan are scheduled.
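To make the platform/tags relationship concrete, here is a sketch of a pipeline job pinned to particular workers. The gpu tag and the busybox image are illustrative, not part of the text above:

```yaml
jobs:
- name: run-tests
  plan:
  - task: tests
    tags: [gpu]           # only workers registered with this tag are eligible
    config:
      platform: linux     # must match a worker's configured platform
      image_resource:
        type: docker-image
        source: {repository: busybox}
      run: {path: echo, args: ["hello from a tagged worker"]}
```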
Linux workers come with a set of base resource types. If you plan to use them, you'll need at least one Linux worker.
You may want a few workers, depending on the resource usage of your pipeline. There should be one per machine; running multiple on one box doesn't really make sense, as each worker runs as many containers as Concourse requests of it.
To spin up a worker and register it with your Concourse cluster running locally, run:
sudo concourse worker \
  --work-dir /opt/concourse/worker \
  --tsa-host 127.0.0.1 \
  --tsa-public-key tsa_host_key.pub \
  --tsa-worker-private-key worker_key
Note that the worker must be run as root, as it orchestrates containers.
The --work-dir flag specifies where container data should be placed; make sure it has plenty of disk space available, as it's where all the disk usage across your builds and resources will end up.
The --tsa-host flag refers to wherever your TSA node is listening, by default on port 2222 (or --tsa-port if you've configured it differently). This may be an address to a load balancer if you're running multiple web nodes, or just an IP, perhaps 127.0.0.1 if you're running everything on one box.
The --tsa-public-key flag is used to ensure we're connecting to the TSA we should be connecting to, and is used like known_hosts with the ssh command. Refer to Generating Keys if you're not sure what this means.
The --tsa-worker-private-key flag specifies the key to use when authenticating to the TSA. Refer to Generating Keys if you're not sure what this means.