Vars
Concourse supports value substitution in YAML configuration by way of ((vars)).
Automation entails the use of all kinds of credentials. It's important to keep these values separate from the rest of your configuration by using vars instead of hardcoding values. This allows your configuration to be placed under source control and allows credentials to be tucked safely away into a secure credential manager like Vault instead of the Concourse database.
Aside from credentials, vars may also be used for generic parameterization of pipeline configuration templates, allowing
a single pipeline config file to be configured multiple times with different parameters - e.g. ((branch_name)).
((var)) syntax
The full syntax for vars is ((source-name:secret-path.secret-field)).
The optional source-name identifies the var source from which the value will be read. If
omitted (along with the : delimiter), the cluster-wide credential manager will
be used, or the value may be provided statically. The special name . refers to
the local var source, while any other name refers to a named var source configured on the pipeline.
The required secret-path identifies the location of the credential. The interpretation of this value depends on the
var source type. For example, with Vault this may be a path like path/to/cred. For the Kubernetes secret manager this
may just be the name of a secret. For credential managers which support path-based lookup, a secret-path without a
leading / may be queried relative to a predefined set of path prefixes. This is how the Vault credential manager
currently works; foo will be queried under /concourse/(team name)/(pipeline name)/foo.
The optional secret-field specifies a field on the fetched secret to read. If omitted, the credential manager may
choose to read a 'default field' from the fetched credential if the field exists. For example, the Vault credential
manager will return the value of the value field if present. This is useful for simple single-value credentials where
typing ((foo.value)) would feel verbose.
The secret-path and secret-field may be surrounded by double quotes "..." if they contain special characters
like . and :. For instance, ((source:"my.secret"."field:1")) will set the secret-path to my.secret and the
secret-field to field:1.
The "." var source
The special var source name . refers to a "local var source."
The precise scope for these "local vars" depends on where they're being used. Currently, the only mechanism that uses
the local var source is the load_var step, which sets a var in a local var source provided to all
steps executed in the build.
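For example, a build might load a value from a file produced by an earlier step and reference it later via the . source. A minimal sketch; the resource name and file path here are hypothetical:

```yaml
plan:
- get: repo
- load_var: version        # reads the file's contents into the local var "version"
  file: repo/version.txt
- put: release
  params:
    version: ((.:version)) # the "." source refers to vars set within this build
```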
Interpolation
Values for vars are substituted structurally. That is, if you have foo: ((bar)), whatever value ((bar)) resolves to
will become the value of the foo field in the object. This can be a value of any type and structure: a boolean, a
simple string, a multiline credential like a certificate, or a complicated data structure like an array of objects.
This differs from text-based substitution in that it's impossible for a value to result in broken YAML syntax, and it relieves the template author from having to worry about things like whitespace alignment.
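For instance, given a template field foo: ((bar)), a vars file could supply an entire structure. A hypothetical sketch:

```yaml
# vars.yml
bar:
  enabled: true
  hosts:
  - a.example.com
  - b.example.com

# template.yml
foo: ((bar))

# after interpolation, foo holds the whole structure:
foo:
  enabled: true
  hosts:
  - a.example.com
  - b.example.com
```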
When a ((var)) appears adjacent to additional string content, e.g. foo: hello-((bar))-goodbye, its value will be
concatenated with the surrounding content. If the ((var)) resolves to a non-string value, an error will be raised.
If you are using the YAML merge key <<, you will need to wrap it in double quotes, like
so: "<<": ((foobars)), to avoid a cryptic error message such as "error: yaml: map merge requires map or sequence of
maps as the value". This allows you to merge in values from vars.
See YAML merge specification for more information on how this normally works.
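For example, assuming ((defaults)) resolves to a map such as {retries: 3, timeout: 60}, a merge might look like this sketch:

```yaml
params:
  "<<": ((defaults))  # merges the keys of the resolved map into params
  timeout: 120        # explicitly-set keys override merged-in values
```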
Static vars
Var values may also be specified statically using the set_pipeline step
and task step.
When running the fly CLI equivalent
commands (fly set-pipeline
and fly execute), var values may be provided using the following flags:
- -v or --var NAME=VALUE sets the string VALUE as the value for the var NAME.
- -y or --yaml-var NAME=VALUE parses VALUE as YAML and sets it as the value for the var NAME.
- -i or --instance-var NAME=VALUE parses VALUE as YAML and sets it as the value for the instance var NAME. See Grouping Pipelines to learn more about instance vars.
- -l or --load-vars-from FILE loads FILE, a YAML document containing a mapping of var names to values, and sets them all.
When used in combination with -l, the -y and -v flags take precedence. This way a vars file may be re-used,
overriding individual values by hand.
Setting values with the task step
Let's say we have a task config like so:
platform: linux

image_resource:
  type: registry-image
  source:
    repository: golang
    tag: ((tag))

inputs:
- name: booklit

run:
  path: booklit/ci/unit
We could use vars to run this task against different versions of Go:
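For example, assuming the config above is saved as task.yml and fly is logged in to a target named example (both names are hypothetical):

```shell
fly -t example execute -c task.yml -v tag=1.21
fly -t example execute -c task.yml -v tag=latest
```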
Setting values with -v and -y
With a pipeline template like so:
resources:
- name: booklit
  type: booklit
  source:
    uri: https://github.com/concourse/booklit
    branch: ((branch))
    private_key: (("github.com".private_key))

jobs:
- name: unit
  plan:
  - get: booklit
    trigger: ((trigger))
  - task: unit
    file: booklit/ci/unit.yml
Let's say we have a private key in a file called private_key.
The fly validate-pipeline command may be used to test how
interpolation is applied, by passing the --output flag.
fly validate-pipeline \
  -c pipeline.yml \
  -y trigger=true \
  -v \"github.com\".private_key="$(cat private_key)" \
  -v branch=master \
  --output
The above incantation should print the following:
jobs:
- name: unit
  plan:
  - get: booklit
    trigger: true
  - file: booklit/ci/unit.yml
    task: unit
resources:
- name: booklit
  type: booklit
  source:
    branch: master
    private_key: |
      -----BEGIN RSA PRIVATE KEY-----
      # ... snipped ...
      -----END RSA PRIVATE KEY-----
    uri: https://github.com/concourse/booklit
Note that we had to use -y so that the trigger: true ends up with a boolean value instead of the
string "true".
Loading values from files with -l
With a pipeline template like so:
resources:
- name: booklit
  type: booklit
  source:
    uri: https://github.com/concourse/booklit
    branch: ((branch))
    private_key: (("github.com".private_key))

jobs:
- name: unit
  plan:
  - get: booklit
    trigger: ((trigger))
  - task: unit
    file: booklit/ci/unit.yml
Let's say we've put the private_key var in a file called vars.yml, since it's quite large and awkward to pass
through flags:
github.com:
  private_key: |
    -----BEGIN RSA PRIVATE KEY-----
    # ... snipped ...
    -----END RSA PRIVATE KEY-----
The fly validate-pipeline command may be used to test how
interpolation is applied, by passing the --output flag.
fly validate-pipeline \
  -c pipeline.yml \
  -l vars.yml \
  -y trigger=true \
  -v branch=master \
  --output
The above incantation should print the following:
jobs:
- name: unit
  plan:
  - get: booklit
    trigger: true
  - task: unit
    file: booklit/ci/unit.yml
resources:
- name: booklit
  type: booklit
  source:
    branch: master
    private_key: |
      -----BEGIN RSA PRIVATE KEY-----
      # ... snipped ...
      -----END RSA PRIVATE KEY-----
    uri: https://github.com/concourse/booklit
Note that we had to use -y so that the trigger: true ends up with a boolean value instead of the
string "true".
Dynamic vars
Concourse can read values from "var sources" - typically credential managers like Vault - at runtime. This keeps them out of your configuration and prevents sensitive values from being stored in your database. Values will be read from the var source and optionally cached to reduce load on the var source.
The following attributes can be parameterized through a var source:
- resource.source under pipeline.resources
- resource_type.source under pipeline.resources
- resource.webhook_token under pipeline.resources
- task step params on a task step in a pipeline
- task configs in their entirety - whether from a task step's file or config in a pipeline, or
a config executed with
fly execute
Concourse will fetch values for vars as late as possible - i.e. when a step using them is about to execute. This allows the credentials to have limited lifetime and rapid rotation policies.
Across Step & Dynamic Vars
For the across step, more fields can be dynamically interpolated during runtime:
- set_pipeline step identifier and file field
- task step identifier, input_mapping, and output_mapping, in addition to all the other fields mentioned above for the task step
Var sources (experimental)
Experimental Feature
var_sources was introduced in Concourse v5.8.0. It is considered an experimental feature until its associated
RFC is resolved.
Var sources can be configured for a pipeline via pipeline.var_sources.
Each var source has a name which is then referenced as the source-name in var syntax,
e.g. ((my-vault:test-user.username)) to fetch the test-user var from the my-vault var source.
See ((var)) syntax for a detailed explanation of this syntax.
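As an illustration, a pipeline could declare a Vault var source like the following sketch (the name, URL, and token are hypothetical; a real deployment would more likely use an auth_backend such as approle):

```yaml
var_sources:
- name: my-vault
  type: vault
  config:
    url: https://vault.example.com:8200
    client_token: some-periodic-token  # hypothetical; see "Using a periodic token"
```

Vars like ((my-vault:test-user.username)) would then be resolved against this server.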
Currently, only these types are supported:
- vault
- dummy
- ssm
- secretsmanager (since v7.7.0)
- idtoken (since v7.14.0)
In the future we want to make use of something like the Prototypes (RFC #37) so that third-party credential managers can be used just like resource types.
var_source schema
name: string
The name of the ((var)) source. This should be short and simple. This name will be referenced
in ((var)) syntax throughout the config.
one of ...
type: vault
The vault type supports configuring a Vault server as a ((var)) source.
config: vault_config
vault_config schema
url: string
The URL of the Vault API.
ca_cert: string
The PEM encoded contents of a CA certificate to use when connecting to the API.
path_prefix: string
Default /concourse. A prefix under which to look for all credential values.
See Changing the path prefix for more information.
lookup_templates: [string]
Default ["/{{.Team}}/{{.Pipeline}}/{{.Secret}}", "/{{.Team}}/{{.Secret}}"].
A list of path templates to be expanded in a team and pipeline context subject to the path_prefix and
namespace.
See Changing the path templates for more information.
shared_path: string
An additional path under which credentials will be looked up.
See Configuring a shared path for more information.
namespace: string
A Vault namespace to operate under.
client_cert: string
A PEM encoded client certificate, for use with TLS based auth.
See Using the cert auth backend for more
information.
client_key: string
A PEM encoded client key, for use with TLS based auth.
See Using the cert auth backend for more
information.
server_name: string
The expected name of the server when connecting through TLS.
insecure_skip_verify: boolean
Skip TLS validation. Not recommended. Don't do it. No really, don't.
client_token: string
Authenticate via a periodic client token.
See Using a periodic token for more information.
auth_backend: string
Authenticate using an auth backend, e.g. cert or approle.
See Using the approle auth backend or
Using the cert auth backend for more
information.
auth_params: env-vars
A key-value map of parameters to pass during authentication.
See Using the approle auth backend for more
information.
auth_max_ttl: duration
Maximum duration to elapse before forcing the client to log in again.
auth_retry_max: duration
When failing to authenticate, give up after this amount of time.
auth_retry_initial: duration
When retrying during authentication, start with this retry interval. The interval will increase
exponentially until auth_retry_max is reached.
type: dummy
The dummy type supports configuring a static map of vars to values.
This is really only useful if you have no better alternative for credential management but still have sensitive values that you would like redacted from build output.
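A dummy var source might be declared like the following sketch (the source name and values are hypothetical):

```yaml
var_sources:
- name: stub
  type: dummy
  config:
    vars:
      some-secret: hunter2   # referenced as ((stub:some-secret))
```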
type: ssm
The ssm type supports configuring AWS Systems Manager Parameter Store
in a single region as a ((var)) source.
type: secretsmanager
The secretsmanager type supports configuring an AWS Secrets
Manager in a single region as a ((var)) source.
config: secretsmanager_config
secretsmanager_config schema
region: string
The AWS region to read secrets from.
type: idtoken
The idtoken type issues JWTs which are signed by Concourse and contain information about the currently
running pipeline/job.
These JWTs can be used to authenticate with external services.
config: idtoken_config
idtoken_config schema
audience: [string]
A list of audience values to place into the token's aud claim.
subject_scope: team | pipeline | instance | job | string
Default pipeline.
Determines what is put into the token's sub claim. See Subject Scope for a detailed explanation.
The cluster-wide credential manager
Concourse can be configured with a single cluster-wide credential manager, which acts as a source for any vars which do not specify a source name.
See Credential Management for more information.
Note
In the future we would like to introduce support for multiple cluster-wide var sources, configured using the
var_source schema, and begin deprecating the cluster-wide credential
manager.