Jobs
Jobs determine the actions of your pipeline. They determine how resources progress through it, and how the pipeline is visualized.
The most important attribute of a job is its build plan, configured as job.plan. This determines the sequence of Steps to execute in any builds of the job.
A pipeline's jobs are listed under pipeline.jobs with the following schema:
job schema
name: identifier (required)
The name of the job. This should be short; it will show up in URLs. If you want to rename a job, use job.old_name.
old_name: identifier
The old name of the job. If configured, the build history of the old job will be inherited by the new one. Once the pipeline has been set, this field can be removed, as the builds will have been transferred.
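For instance, renaming a job while preserving its build history could look like the following sketch (the job names are hypothetical):

```yaml
jobs:
- name: integration-tests      # the new name
  old_name: integration-suite  # hypothetical previous name; its builds carry over
  plan:
  - task: run-tests
    # ...
```

After the pipeline has been set once, old_name can be deleted from the configuration.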
serial: boolean
Default false. If set to true, builds will queue up and execute one-by-one, rather than executing in parallel.
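A minimal sketch of a serialized job (job and task names are hypothetical):

```yaml
jobs:
- name: deploy-prod
  serial: true  # builds queue up and run one at a time
  plan:
  - task: deploy
    # ...
```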
serial_groups: [identifier]
Default []. When set to an array of arbitrary tag-like strings, builds of this job and other jobs referencing
the same tags will be serialized.
Limiting parallelism
This can be used to ensure that certain jobs do not run at the same time, like so:
jobs:
- name: job-a
  serial_groups:
  - some-tag
- name: job-b
  serial_groups:
  - some-tag
  - some-other-tag
- name: job-c
  serial_groups:
  - some-other-tag
In this example, job-a and job-c can run concurrently, but neither job can run builds at the same time as
job-b.
The builds are executed in their order of creation, across all jobs with common tags.
max_in_flight: number
If set, specifies a maximum number of builds to run at a time. If serial or serial_groups are set, they take
precedence and force this value to be 1.
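As a sketch, a job capped at three concurrent builds (job and task names are hypothetical):

```yaml
jobs:
- name: load-tests
  max_in_flight: 3  # at most 3 builds of this job run at once
  plan:
  - task: run-load-tests
    # ...
```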
build_log_retention: build_log_retention_policy
Configures the retention policy for build logs. This is useful if you have a job that runs often but after some amount of time the logs aren't worth keeping around.
Builds which are not retained by the configured policy will have their logs reaped. If this configuration is omitted, logs are kept forever (unless Build log retention is configured globally).
A complicated example
The following example will keep logs for any builds that have completed in the last 2 days, while also keeping the last 1000 builds and at least 1 succeeded build.
jobs:
- name: smoke-tests
  build_log_retention:
    days: 2
    builds: 1000
    minimum_succeeded_builds: 1
  plan:
  - get: 10m
  - task: smoke-tests
    # ...
If more than 1000 builds finish in the past 2 days, all of them will be retained thanks to the days
configuration. Similarly, if there are 1000 builds spanning more than 2 days, they will also be kept thanks to
the builds configuration. And if they all happened to have failed, the minimum_succeeded_builds will keep
around at least one successful build. All policies operate independently.
build_log_retention_policy schema
days: number
Keep logs for builds which have finished within the specified number of days.
builds: number
Keep logs for the last specified number of builds.
minimum_succeeded_builds: number
Keep a minimum number of successful build logs that would normally be reaped.
Requires builds to be set to an integer higher than 0 in order to work. For example, if builds is set to 5,
and this attribute to 1, say a job has the following build history: 7(f), 6(f), 5(f), 4(f), 3(f), 2(f), 1(s),
where f means failed and s means succeeded, then builds 2 and 3 will be reaped, because it retains 5 build logs,
and at least 1 succeeded build log. Default is 0.
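The policy described in the example above could be configured like so (the job name is hypothetical):

```yaml
jobs:
- name: flaky-tests
  build_log_retention:
    builds: 5                   # keep logs for 5 builds in total...
    minimum_succeeded_builds: 1 # ...at least 1 of which succeeded
  plan:
  - task: run-tests
    # ...
```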
public: boolean
Default false. If set to true, the build log of this job will be viewable by unauthenticated users.
Unauthenticated users will always be able to see the inputs, outputs, and build status history of a job. This is
useful if you would like to expose your pipeline publicly without showing sensitive information in the build log.
Note
When this is set to true, any get and put steps will show the metadata for their resource versions, regardless of whether the resource itself has set resource.public to true.
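A sketch of a publicly viewable job (resource and task names are hypothetical):

```yaml
jobs:
- name: smoke-tests
  public: true  # build logs visible to unauthenticated users
  plan:
  - get: repo
  - task: run-smoke-tests
    # ...
```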
disable_manual_trigger: boolean
Default false. If set to true, manual triggering of the job (via the web UI or
fly trigger-job) will be disabled.
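For example, a job that should only ever run from its configured triggers, never by hand (the resource name is hypothetical):

```yaml
jobs:
- name: nightly-release
  disable_manual_trigger: true  # no + button in the web UI, no fly trigger-job
  plan:
  - get: nightly
    trigger: true
  - task: release
    # ...
```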
interruptible: boolean
Default false. Normally, when a worker is shutting down it will wait for builds with containers running on that
worker to finish before exiting. If this value is set to true, the worker will not wait on the builds of this job.
You may want this if you have a self-deploying Concourse or long-running-but-low-importance jobs.
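A self-deploying Concourse job might be marked interruptible like so (resource and task names are hypothetical):

```yaml
jobs:
- name: deploy-concourse
  interruptible: true  # workers won't wait for this job's builds when shutting down
  plan:
  - get: concourse-config
    trigger: true
  - task: deploy
    # ...
```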
on_success: step
Step to execute when the job succeeds. Equivalent to the on_success
hook.
on_failure: step
Step to execute when the job fails. Equivalent to the on_failure
hook.
ensure: step
Step to execute regardless of whether the job succeeds, fails, errors, or aborts. Equivalent to the
ensure hook.
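The three hooks above can be combined on a single job; a sketch with hypothetical step and resource names:

```yaml
jobs:
- name: run-tests
  plan:
  - task: tests
    # ...
  on_success:
    put: success-notification  # hypothetical notification resource
  on_failure:
    put: failure-notification
  ensure:
    task: cleanup  # runs whether the build succeeds, fails, errors, or aborts
    # ...
```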
Managing Jobs
fly jobs
To list the jobs configured in a pipeline, run:

fly -t example jobs -p my-pipeline
fly trigger-job
To immediately queue a new build of a job, run:

fly -t example trigger-job --job my-pipeline/my-job
This will enqueue a new build of the my-job job in the my-pipeline pipeline.
To start watching the newly created build, append the --watch flag like so:

fly -t example trigger-job --job my-pipeline/my-job --watch
You can also queue new builds by clicking the + button on the job or build pages in the web UI.
fly rerun-build
To queue a new build of a job with exactly the same inputs as a given build of the same job, run:

fly -t example rerun-build --job my-pipeline/my-job --build 4
This will enqueue a new build of the my-job job in the my-pipeline pipeline, using the same input versions as build
number 4.
To start watching the newly created build, append the --watch flag like so:

fly -t example rerun-build --job my-pipeline/my-job --build 4 --watch
You can also rerun builds by visiting the build page for the build in question in the web UI and clicking the rerun button.
fly pause-job
To prevent scheduling and running builds of a job, run:

fly -t example pause-job --job my-pipeline/my-job
This will prevent pending builds of the job from being scheduled, though builds that are in-flight will still run, and pending builds will still be created as normal.
fly unpause-job
To resume scheduling of a job, run:

fly -t example unpause-job --job my-pipeline/my-job
This will resume scheduling of builds queued for the job.
fly clear-task-cache
If you've got a task cache that you need to clear out for whatever reason, this can be done like so:

fly -t example clear-task-cache --job my-pipeline/my-job --step my-step-name
This will immediately invalidate the caches - they'll be garbage collected asynchronously and subsequent builds will run with empty caches.
You can also clear out a particular path for the given step's cache, using --cache-path:
fly -t example clear-task-cache \
--job my-pipeline/my-job \
--step my-step-name \
--cache-path go/pkg
If --cache-path is not specified, all caches for the given step will be cleared.