1.7 Vars
Concourse supports value substitution in YAML configuration by way of ((vars)).
Automation entails the use of all kinds of credentials. It's important to keep these values separate from the rest of your configuration by using vars instead of hardcoding values. This allows your configuration to be placed under source control and allows credentials to be tucked safely away into a secure credential manager like Vault instead of the Concourse database.
Aside from credentials, vars may also be used for generic parameterization of pipeline configuration templates, allowing a single pipeline config file to be configured multiple times with different parameters - e.g. ((branch_name)).
- 1.7.1 ((var)) syntax
- 1.7.2 The "." var source
- 1.7.3 Interpolation
- 1.7.4 Static vars
- 1.7.5 Dynamic vars
  - 1.7.5.1 Across Step & Dynamic Vars
  - 1.7.5.2 Var sources (experimental)
  - 1.7.5.3 The cluster-wide credential manager
((var)) syntax
The full syntax for vars is ((source-name:secret-path.secret-field)).
The optional source-name identifies the var source from which the value will be read. If omitted (along with the : delimiter), the cluster-wide credential manager will be used, or the value may be provided statically. The special name . refers to the local var source, while any other name refers to a var source.
The required secret-path identifies the location of the credential. The interpretation of this value depends on the var source type. For example, with Vault this may be a path like path/to/cred. For the Kubernetes secret manager this may just be the name of a secret. For credential managers which support path-based lookup, a secret-path without a leading / may be queried relative to a predefined set of path prefixes. This is how the Vault credential manager currently works; foo will be queried under /concourse/(team name)/(pipeline name)/foo.
The optional secret-field specifies a field on the fetched secret to read. If omitted, the credential manager may choose to read a 'default field' from the fetched credential if the field exists. For example, the Vault credential manager will return the value of the value field if present. This is useful for simple single-value credentials where typing ((foo.value)) would feel verbose.
The secret-path and secret-field may be surrounded by double quotes "..." if they contain special characters like . and :. For instance, ((source:"my.secret"."field:1")) will set the secret-path to my.secret and the secret-field to field:1.
The ".
" var source
The special var source name .
refers to a "local var source."
The precise scope for these "local vars" depends on where they're being used. Currently the only mechanism that uses the local var source is the load_var
step, which sets a var in a local var source provided to all steps executed in the build.
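For example, here is a minimal sketch of a build plan that loads a value produced by an earlier step and reads it back through the local var source (the resource and file names are hypothetical):

plan:
- get: version
- load_var: image-tag        # sets the local var "image-tag" for the rest of the build
  file: version/number       # hypothetical file fetched by the "version" resource
- put: image
  params:
    version: ((.:image-tag)) # read back from the "." (local) var source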
Interpolation
Values for vars are substituted structurally. That is, if you have foo: ((bar)), whatever value ((bar)) resolves to will become the value of the foo field in the object. This can be a value of any type and structure: a boolean, a simple string, a multiline credential like a certificate, or a complicated data structure like an array of objects.
This differs from text-based substitution in that it's impossible for a value to result in broken YAML syntax, and it relieves the template author from having to worry about things like whitespace alignment.
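As an illustration, suppose ((bar)) resolves to a map rather than a string (a hypothetical value); the substitution keeps the structure intact:

# template
foo: ((bar))

# if ((bar)) resolves to the map {user: admin, port: 8080},
# the interpolated config is structurally equivalent to:
foo:
  user: admin
  port: 8080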
When a ((var)) appears adjacent to additional string content, e.g. foo: hello-((bar))-goodbye, its value will be concatenated with the surrounding content. If the ((var)) resolves to a non-string value, an error will be raised.
If you are using the YAML merge operator <<, you will need to wrap it in double quotes, like so: "<<": ((foobars)), to avoid a cryptic error message such as "error: yaml: map merge requires map or sequence of maps as the value". This allows you to merge in values from various vars. See the YAML merge specification for more information on how this normally works.
Static vars
Var values may also be specified statically via the vars field on the set_pipeline step and the task step.
When running the equivalent fly CLI commands (fly set-pipeline and fly execute), var values may be provided using the following flags:
- -v or --var NAME=VALUE sets the string VALUE as the value for the var NAME.
- -y or --yaml-var NAME=VALUE parses VALUE as YAML and sets it as the value for the var NAME.
- -i or --instance-var NAME=VALUE parses VALUE as YAML and sets it as the value for the instance var NAME. See Grouping Pipelines to learn more about instance vars.
- -l or --load-vars-from FILE loads FILE, a YAML document mapping var names to values, and sets them all.
When used in combination with -l, the -y and -v flags take precedence. This way a vars file may be re-used, overriding individual values by hand.
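For example, assuming a vars.yml that already defines branch, a single value can still be overridden on the command line (the target and pipeline names here are hypothetical):

$ fly -t example set-pipeline -p my-pipeline \
    -c pipeline.yml \
    -l vars.yml \
    -v branch=feature-x    # takes precedence over the branch value in vars.yml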
Let's say we have a task config like so:
platform: linux

image_resource:
  type: registry-image
  source:
    repository: golang
    tag: ((tag))

inputs:
- name: booklit

run:
  path: booklit/ci/unit
We could use vars to run this task against different versions of Go:
jobs:
- name: unit
  plan:
  - get: booklit
    trigger: true
  - task: unit-1.13
    file: booklit/ci/unit.yml
    vars: {tag: 1.13}
  - task: unit-1.8
    file: booklit/ci/unit.yml
    vars: {tag: 1.8}
With a pipeline template like so:
resources:
- name: booklit
  type: booklit
  source:
    uri: https://github.com/concourse/booklit
    branch: ((branch))
    private_key: (("github.com".private_key))

jobs:
- name: unit
  plan:
  - get: booklit
    trigger: ((trigger))
  - task: unit
    file: booklit/ci/unit.yml
Let's say we have a private key in a file called private_key.
The fly validate-pipeline command may be used to test how interpolation is applied, by passing the --output flag.
$ fly validate-pipeline \
-c pipeline.yml \
-y trigger=true \
-v \"github.com\".private_key="$(cat private_key)" \
-v branch=master \
--output
The above incantation should print the following:
jobs:
- name: unit
  plan:
  - get: booklit
    trigger: true
  - file: booklit/ci/unit.yml
    task: unit
resources:
- name: booklit
  type: booklit
  source:
    branch: master
    private_key: |
      -----BEGIN RSA PRIVATE KEY-----
      # ... snipped ...
      -----END RSA PRIVATE KEY-----
    uri: https://github.com/concourse/booklit
Note that we had to use -y so that trigger: true ends up with a boolean value instead of the string "true".
With a pipeline template like so:
resources:
- name: booklit
  type: booklit
  source:
    uri: https://github.com/concourse/booklit
    branch: ((branch))
    private_key: (("github.com".private_key))

jobs:
- name: unit
  plan:
  - get: booklit
    trigger: ((trigger))
  - task: unit
    file: booklit/ci/unit.yml
Let's say I've put the private_key var in a file called vars.yml, since it's quite large and hard to pass through flags:
github.com:
  private_key: |
    -----BEGIN RSA PRIVATE KEY-----
    # ... snipped ...
    -----END RSA PRIVATE KEY-----
The fly validate-pipeline command may be used to test how interpolation is applied, by passing the --output flag.
$ fly validate-pipeline \
-c pipeline.yml \
-l vars.yml \
-y trigger=true \
-v branch=master \
--output
The above incantation should print the following:
jobs:
- name: unit
  plan:
  - get: booklit
    trigger: true
  - task: unit
    file: booklit/ci/unit.yml
resources:
- name: booklit
  type: booklit
  source:
    branch: master
    private_key: |
      -----BEGIN RSA PRIVATE KEY-----
      # ... snipped ...
      -----END RSA PRIVATE KEY-----
    uri: https://github.com/concourse/booklit
Note that we had to use -y so that trigger: true ends up with a boolean value instead of the string "true".
Dynamic vars
Concourse can read values from "var sources" - typically credential managers like Vault - at runtime. This keeps them out of your configuration and prevents sensitive values from being stored in your database. Values will be read from the var source and optionally cached to reduce load on the var source.
The following attributes can be parameterized through a var source:
- task step params on a task step in a pipeline
- task configurations in their entirety - whether from the task step file field or the task step config field in a pipeline, or a config executed with fly execute
Concourse will fetch values for vars as late as possible - i.e. when a step using them is about to execute. This allows the credentials to have limited lifetime and rapid rotation policies.
Across Step & Dynamic Vars
For the across step, more fields can be dynamically interpolated during runtime (see the sketch after this list):

- set_pipeline step identifier and file field
- task step identifier, input_mapping, and output_mapping, in addition to all the other fields mentioned above for the task step
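For instance, here is a sketch of an across step that interpolates a local var into the set_pipeline step identifier and its file field (the repository layout and pipeline names are hypothetical):

jobs:
- name: set-feature-pipelines
  plan:
  - get: examples
  - across:
    - var: name
      values: [foo, bar]
    set_pipeline: ((.:name))                  # step identifier interpolated at runtime
    file: examples/pipelines/((.:name)).yml   # file field interpolated at runtime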
Var sources (experimental)
var_sources was introduced in Concourse v5.8.0. It is considered an experimental feature until its associated RFC is resolved.
Var sources can be configured for a pipeline via the var_sources field of the pipeline config.
Each var source has a name which is then referenced as the source-name in var syntax, e.g. ((my-vault:test-user.username)) to fetch the test-user var from the my-vault var source. See ((var)) syntax for a detailed explanation of this syntax.
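A minimal sketch of what that configuration might look like, assuming a Vault server at a hypothetical address (auth options omitted; see the vault_config schema below):

var_sources:
- name: my-vault
  type: vault
  config:
    url: https://vault.example.com:8200   # hypothetical Vault address

jobs:
- name: use-secret
  plan:
  - task: whoami
    config:
      platform: linux
      image_resource:
        type: registry-image
        source: {repository: busybox}
      params:
        USERNAME: ((my-vault:test-user.username))  # fetched from the my-vault var source at runtime
      run: {path: env}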
Currently, only these types are supported:

- vault
- dummy
- ssm
- secretmanager (since v7.7.0)
In the future we want to make use of something like the Prototypes (RFC #37) so that third-party credential managers can be used just like resource types.
var_source schema

name: The name of the ((var)) source. This should be short and simple. This name will be referenced in ((var)) syntax throughout the config.
The vault type supports configuring a Vault server as a ((var)) source.
Configuration for the Vault server has the following schema:
vault_config schema

- url: The URL of the Vault API.
- ca_cert: The PEM encoded contents of a CA certificate to use when connecting to the API.
- path_prefix: Default /concourse. A prefix under which to look for all credential values. See Changing the path prefix for more information.
- lookup_templates: Default ["/{{.Team}}/{{.Pipeline}}/{{.Secret}}", "/{{.Team}}/{{.Secret}}"]. A list of path templates to be expanded in a team and pipeline context, subject to the path_prefix and namespace. See Changing the path templates for more information.
- shared_path: An additional path under which credentials will be looked up. See Configuring a shared path for more information.
- namespace: A Vault namespace to operate under.
- client_cert: A PEM encoded client certificate, for use with TLS based auth. See Using the cert auth backend for more information.
- client_key: A PEM encoded client key, for use with TLS based auth. See Using the cert auth backend for more information.
- server_name: The expected name of the server when connecting through TLS.
- insecure_skip_verify: Skip TLS validation. Not recommended. Don't do it. No really, don't.
- client_token: Authenticate via a periodic client token. See Using a periodic token for more information.
- auth_backend: Authenticate using an auth backend, e.g. cert or approle. See Using the approle auth backend or Using the cert auth backend for more information.
- auth_params: A key-value map of parameters to pass during authentication. See Using the approle auth backend for more information.
- auth_max_ttl: Maximum duration to elapse before forcing the client to log in again.
- auth_retry_max: When failing to authenticate, give up after this amount of time.
- auth_retry_initial: When retrying during authentication, start with this retry interval. The interval will increase exponentially until auth_retry_max is reached.
The ssm type supports configuring an AWS Secrets Manager in a single region as a ((var)) source.
The dummy type supports configuring a static map of vars to values. This is really only useful if you have no better alternative for credential management but still have sensitive values that you would like to redact from build output.
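For example, a sketch of a dummy var source with one static value (the names here are illustrative):

var_sources:
- name: stub-creds
  type: dummy
  config:
    vars:
      webhook_token: not-a-real-secret   # static value, but still redacted from build output

The value could then be referenced elsewhere in the pipeline as ((stub-creds:webhook_token)).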
The cluster-wide credential manager
Concourse can be configured with a single cluster-wide credential manager, which acts as a source for any vars which do not specify a source name.
See Credential Management for more information.
In the future we would like to introduce support for multiple cluster-wide var sources, configured using the var_source schema, and begin deprecating the cluster-wide credential manager.