1.9 Jobs

Jobs determine the actions of your pipeline. They determine how resources progress through it, and how the pipeline is visualized.

The most important attribute of a job is its build plan, configured as job.plan. This determines the sequence of Steps to execute in any builds of the job.

A pipeline's jobs are listed under pipeline.jobs with the following schema:

job schema

The name of the job. This should be short; it will show up in URLs.

The sequence of steps to execute.

The old name of the job. If configured, the build history of the old job will be inherited by the new one. Once the pipeline has been set, this field can be removed, as the builds will have been transferred.

This can be used to rename a job without losing its history, like so:

jobs:
- name: new-name
  old_name: current-name
  plan: [get: 10m]

After the pipeline has been set and the builds have been inherited, the field can be removed:

jobs:
- name: new-name
  plan: [get: 10m]

Default false. If set to true, builds will queue up and execute one-by-one, rather than executing in parallel.
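
This is the job's serial attribute (see also serial_groups below); a minimal sketch, with illustrative job and resource names:

jobs:
- name: deploy
  serial: true   # builds queue up and run one at a time
  plan:
  - get: my-repo
    trigger: true
  - task: deploy
    file: my-repo/ci/deploy.yml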

Configures the retention policy for build logs. This is useful if you have a job that runs often but after some amount of time the logs aren't worth keeping around.

Builds which are not retained by the configured policy will have their logs reaped. If this configuration is omitted, logs are kept forever.

The following example will keep logs for any builds that have completed in the last 2 days, while also keeping the last 1000 builds and at least 1 succeeded build.

jobs:
- name: smoke-tests
  build_log_retention:
    days: 2
    builds: 1000
    minimum_succeeded_builds: 1
  plan:
  - get: 10m
  - task: smoke-tests
    # ...

If more than 1000 builds finish in the past 2 days, all of them will be retained thanks to the days configuration. Similarly, if there are 1000 builds spanning more than 2 days, they will also be kept thanks to the builds configuration. And if they all happened to have failed, the minimum_succeeded_builds will keep around at least one successful build. All policies operate independently.

build_log_retention_policy schema

Keep logs for builds which have finished within the specified number of days.

Keep logs for the last specified number of builds.

Keep a minimum number of successful build logs that would normally be reaped.

Requires builds to be set to an integer higher than 0 in order to work. For example, if builds is set to 5, and this attribute to 1, say a job has the following build history: 7(f), 6(f), 5(f), 4(f), 3(f), 2(f), 1(s), where f means failed and s means succeeded, then builds 2 and 3 will be reaped, because it retains 5 build logs, and at least 1 succeeded build log. Default is 0.
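
Expressed as configuration, the example above might look like the following sketch (job name illustrative). With the build history described, logs for builds 7, 6, 5, 4 and 1 would be kept:

jobs:
- name: flaky-suite
  build_log_retention:
    builds: 5
    minimum_succeeded_builds: 1
  plan: # ...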

Deprecated. Equivalent to setting job.build_log_retention.builds.

Default []. When set to an array of arbitrary tag-like strings, builds of this job and other jobs referencing the same tags will be serialized.

This can be used to ensure that certain jobs do not run at the same time, like so:

jobs:
- name: job-a
  serial_groups: [some-tag]
- name: job-b
  serial_groups: [some-tag, some-other-tag]
- name: job-c
  serial_groups: [some-other-tag]

In this example, job-a and job-c can run concurrently, but neither job can run builds at the same time as job-b.

The builds are executed in their order of creation, across all jobs with common tags.

If set, specifies a maximum number of builds to run at a time. If serial or serial_groups are set, they take precedence and force this value to be 1.
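
A minimal sketch, assuming this attribute is the job's max_in_flight field (job name illustrative):

jobs:
- name: load-tests
  max_in_flight: 3   # at most 3 builds of this job run at a time
  plan: # ...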

Default false. If set to true, the build log of this job will be viewable by unauthenticated users. Unauthenticated users will always be able to see the inputs, outputs, and build status history of a job. This is useful if you would like to expose your pipeline publicly without showing sensitive information in the build log.

Note: when this is set to true, any get and put steps will show the metadata for their resource versions, regardless of whether the resource itself has resource.public set to true.
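
A minimal sketch, assuming this attribute is the job's public field (job name illustrative):

jobs:
- name: open-source-tests
  public: true   # build logs viewable by unauthenticated users
  plan: # ...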

Default false. If set to true, manual triggering of the job (via the web UI or fly trigger-job) will be disabled.

Default false. Normally, when a worker is shutting down it will wait for builds with containers running on that worker to finish before exiting. If this value is set to true, the worker will not wait on the builds of this job. You may want this if e.g. you have a self-deploying Concourse or long-running-but-low-importance jobs.
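
A minimal sketch combining the two attributes above, assuming they are the job's disable_manual_trigger and interruptible fields (job name illustrative):

jobs:
- name: self-deploy
  disable_manual_trigger: true   # no manual triggering via the web UI or fly trigger-job
  interruptible: true            # workers won't wait on this job's builds when shutting down
  plan: # ...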

Step to execute when the job succeeds. Equivalent to the step.on_success hook.

Step to execute when the job fails. Equivalent to the step.on_failure hook.

Step to execute when the job errors. Equivalent to the step.on_error hook.

Step to execute when the job aborts. Equivalent to the step.on_abort hook.

Step to execute regardless of whether the job succeeds, fails, errors, or aborts. Equivalent to the step.ensure hook.
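
Taken together, these job-level hooks sit alongside the build plan, as in the following sketch (the alert and cleanup tasks are illustrative):

jobs:
- name: integration
  plan:
  - get: my-repo
    trigger: true
  - task: integration
    file: my-repo/ci/integration.yml
  on_failure:
    task: alert
    file: my-repo/ci/alert.yml
  ensure:
    task: cleanup
    file: my-repo/ci/cleanup.yml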

Table of contents:
  1. 1.9.1 Steps
  2. 1.9.2 Managing Jobs
    1. 1.9.2.1 fly jobs
    2. 1.9.2.2 fly trigger-job
    3. 1.9.2.3 fly rerun-build
    4. 1.9.2.4 fly pause-job
    5. 1.9.2.5 fly unpause-job
    6. 1.9.2.6 fly clear-task-cache

Steps

Each job has a single build plan configured as job.plan. A build plan is a recipe for what to run when a build of the job is created.

A build plan is a sequence of steps to execute.

When a new version is available for a get step with trigger: true configured, a new build of the job will be created from the build plan.

When viewing the job in the pipeline, resources that are used as get steps appear as inputs, and resources that are used in put steps appear as outputs. Jobs are rendered downstream of any jobs they reference in passed constraints, connected by the resource.

If any step in the build plan fails, the build will fail and subsequent steps will not be executed. Additional steps may be configured to run after failure by configuring step.on_failure or step.ensure (or the job equivalents, job.on_failure and job.ensure).

step schema

one of...

Fetches a version of a resource.

The fetched bits will be registered in the build's artifact namespace under the given identifier. Subsequent task steps and put steps that list the identifier as an input will have a copy of the bits in their working directory.

Almost every simple unit test job will look something like this: fetch my code with a get step and run its tests with a task step.

plan:
- get: my-repo
  trigger: true
- task: unit
  file: my-repo/ci/unit.yml

Defaults to the value of get. The resource to fetch, as configured in pipeline.resources.

Use this attribute to rename a resource from the overall pipeline context into the job-specific context.
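
For example, the following sketch fetches the my-repo resource under the shorter job-local name repo (names illustrative):

plan:
- get: repo           # job-local identifier used by later steps
  resource: my-repo   # the resource name from pipeline.resources
- task: unit
  file: repo/ci/unit.yml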

When specified, only the versions of the resource that made it through the given list of jobs (AND-ed together) will be considered when triggering and fetching.

If multiple gets are configured with passed constraints, all of the mentioned jobs are correlated. That is, with the following set of inputs:

plan:
- get: a
  passed: [a-unit, integration]
- get: b
  passed: [b-unit, integration]
- get: x
  passed: [integration]

This means "give me the versions of a, b, and x that have passed the same build of integration, with the same version of a passing a-unit and the same version of b passing b-unit."

This is crucial to being able to implement safe "fan-in" semantics as things progress through a pipeline.

Arbitrary configuration to pass to the resource. Refer to the resource type's documentation to see what it supports.

The following plan fetches a version number via the semver resource, bumps it to the next release candidate, and puts it back.

plan:
- get: version
  params:
    bump: minor
    pre: rc
- put: version
  params: {file: version/number}

Default false. If set to true, new builds of the job will be automatically created when a new version for this input becomes available.

Note: if none of a job's get steps have trigger: true set, the job can only be triggered manually.

Default latest. The version of the resource to fetch.

If set to latest, scheduling will just find the latest available version of a resource and use it, allowing versions to be skipped. This is usually what you want, e.g. if someone pushes 100 git commits.

If set to every, builds will walk through all available versions of the resource. Note that if passed is also configured, it will only step through the versions satisfying the constraints.

If set to a specific version (e.g. {ref: abcdef123}), only that version will be used. Note that the version must be available and detected by the resource, otherwise the input will never be satisfied. You may want to use fly check-resource to force detection of resource versions, if you need to use an older one that was never detected (as all newly configured resources start from the latest version).
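
A sketch contrasting the two non-default settings (resource names illustrative; the pinned version uses the {ref: ...} form shown above):

plan:
- get: release-notes
  version: every              # builds step through each detected version
- get: my-repo
  version: {ref: abcdef123}   # pin to a specific version; it must have been detected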

Pushes to the given resource.

When the step succeeds, the version produced by the step will be immediately fetched via an additional implicit get step. This is so that later steps in your plan can use the artifact that was produced. The artifact will be available under the identifier the put specifies.

The following plan fetches a repo using get and pushes it to another repo (assuming repo-develop and repo-master are defined as git resources):

plan:
- get: repo-develop
- put: repo-master
  params:
    repository: repo-develop

If the logical name (whatever put specifies) differs from the concrete resource, you would specify resource as well, like so:

plan:
- put: resource-image
  resource: registry-image-resource

Additionally, you can control the settings of the implicit get step by setting get_params. For example, if you did not want a put step utilizing the docker-image resource type to download the image, you would implement your put step as such:

plan:
- put: docker-build
  params: {build: git-resource}
  get_params: {skip_download: true}

Defaults to the value of put. The resource to update, as configured in pipeline.resources.

Default all. When not set, or set to all, all artifacts will be provided. This can result in slow performance if the prior steps in the build plan register a bunch of large artifacts before this step, so you may want to consider being explicit.

If configured as a list of identifiers, only the listed artifacts will be provided to the container.

If set to detect, the artifacts are detected based on the configured put step params by looking for all string values and using the first path segment as an identifier. (This may become the default in the future.)
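
Building on the push example above, the following sketch limits the put to the one artifact it actually needs (inputs: detect would infer the same list from the params):

plan:
- get: repo-develop
- put: repo-master
  inputs: [repo-develop]   # only this artifact is streamed to the put's container
  params:
    repository: repo-develop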

Arbitrary configuration to pass to the resource. Refer to the resource type's documentation to see what it supports.

Arbitrary configuration to get to the resource during the implicit get step. Refer to the resource type's documentation to see what it supports.

Executes a task.

When a task completes, the artifacts specified by task.outputs will be registered in the build's artifact namespace. This allows subsequent task steps and put steps to access the result of a task.

The identifier value is just a name - short and sweet. The value is shown in the web UI but otherwise has no effect on anything. This may change in the future; RFC #32 proposes that the name be used to reference a file within the project.

The following plan pulls down a repo, makes a commit to it, and pushes the commit to another repo (the task must have an output called repo-with-commit):

plan:
- get: my-repo
- task: commit
  file: my-repo/commit.yml
- put: other-repo
  params:
    repository: repo-with-commit

The following plan fetches a single repository and executes multiple tasks, using the in_parallel step, in a build matrix style configuration:

plan:
- get: my-repo
- in_parallel:
  - task: go-1.3
    file: my-repo/go-1.3.yml
  - task: go-1.4
    file: my-repo/ci/go-1.4.yml

Only if both tasks succeed will the overall step succeed. See also in_parallel step.

The task config to execute.

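As a sketch, a task config can also be embedded directly in the plan via this attribute, rather than loaded from a file (task name and image are illustrative):

plan:
- task: say-hello
  config:
    platform: linux
    image_resource:
      type: registry-image
      source: {repository: busybox}
    run:
      path: echo
      args: ["Hello, world!"]
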
A dynamic alternative to task step config.

file points at a .yml file containing the task config, which allows this to be tracked with your resources.

The first segment in the path should refer to another artifact from the plan, and the rest of the path is relative to that artifact.

The content of the config file may contain template ((vars)), which will be filled in using task step vars or a configured credential manager.

Specifies an artifact source containing an image to use for the task. This overrides any task.image_resource configuration present in the task configuration.

This is very useful when part of your pipeline involves building an image, possibly with dependencies pre-baked. You can then propagate that image through the rest of your pipeline, guaranteeing that the correct version (and thus a consistent set of dependencies) is used throughout your pipeline.

This can be used to explicitly keep track of dependent images:

resources:
- name: my-image
  type: registry-image
  source: {repository: golang, tag: "1.13"}

- name: my-repo
  type: git
  source: # ...

jobs:
- name: use-image
  plan:
  - get: my-image
  - get: my-repo
  - task: unit
    file: my-repo/ci/unit.yml
    image: my-image

Here's a pipeline which builds an image in one job and then propagates it to the next:

resources:
- name: my-project
  type: git
  source: {uri: https://github.com/my-user/my-project}

- name: my-task-image
  type: registry-image
  source: {repository: my-user/my-repo}

jobs:
- name: build-task-image
  plan:
  - get: my-project
  - put: my-task-image
    params: {build: my-project/ci/images/my-task}

- name: use-task-image
  plan:
  - get: my-task-image
    passed: [build-task-image]
  - get: my-project
    passed: [build-task-image]
  - task: use-task-image
    image: my-task-image
    file: my-project/ci/tasks/my-task.yml

Default false. If set to true, the task will run with escalated capabilities available on the task's platform.

Setting privileged: true is a gaping security hole; use wisely and only if necessary. This is not part of the task configuration in order to prevent privilege escalation via pull requests changing the task file.

For the linux platform, this determines whether or not the container will run in a separate user namespace. When set to true, the container's root user is actual root, i.e. not in a user namespace. This is not recommended, and should never be used with code you do not trust - e.g. pull requests.
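
A minimal sketch (task name and file path illustrative):

plan:
- get: my-repo
- task: build-image
  privileged: true   # escalated capabilities; use only if necessary
  file: my-repo/ci/build-image.yml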

A map of template variables to pass to an external task. Not to be confused with task.params, which provides environment variables to the task.

This is to be used with external tasks defined in task step file.

A var may be statically passed like so:

plan:
- get: my-repo
- task: integration
  file: my-repo/ci/task.yml
  vars:
    text: "Hello World!"

This is often used in combination with Vars in the pipeline (note the replacement of the string literal with the ((text)) pipeline var):

plan:
- get: my-repo
- task: integration
  file: my-repo/ci/task.yml
  vars:
    text: ((text))

When run with the following task.yml:

---
platform: linux

image_resource:
  type: registry-image
  source:
    repository: my.local.registry:8080/my/image
    username: ((myuser))
    password: ((mypass))

run:
  path: echo
  args: ["((text))"]

...this will resolve "((text))" to "Hello World!", while ((myuser)) and ((mypass)) will be resolved at runtime via a credential manager, if one has been configured.

A map of task environment variable parameters to set, overriding those configured in the task's config or file.

The difference between params and vars is that vars lets you interpolate any template variable in an external task, while params specifically overrides task parameters (i.e. environment variables). Also, params can have default values declared in the task config.

Let's say we have a task config in integration.yml like so:

platform: linux
image_resource: # ...
params:
  REMOTE_SERVER: https://example.com
  USERNAME:
  PASSWORD:

This indicates that there are three params which can be set: REMOTE_SERVER, which has a default, and USERNAME and PASSWORD.

A pipeline could run the task with credentials passed in like so:

plan:
- get: my-repo
- task: integration
  file: my-repo/ci/integration.yml
  params:
    USERNAME: my-user
    PASSWORD: my-pass

To override the default REMOTE_SERVER and have the credentials resolved via ((vars)) (e.g. from a configured credential manager) instead, the params can reference vars:

plan:
- get: my-repo
- task: integration
  file: my-repo/ci/integration.yml
  params:
    REMOTE_SERVER: 10.20.30.40:8080
    USERNAME: ((integration-username))
    PASSWORD: ((integration-password))

A map from task input names to concrete names in the build plan. This allows a task with generic input names to be used multiple times in the same plan, mapping its inputs to specific resources within the plan.

The following example demonstrates a task with a generic release-repo input being mapped to more specific artifact names:

plan:
- get: diego-release
- get: cf-release
- get: ci-scripts
- task: audit-diego-release
  file: ci-scripts/audit-release.yml
  input_mapping: {release-repo: diego-release}
- task: audit-cf-release
  file: ci-scripts/audit-release.yml
  input_mapping: {release-repo: cf-release}

A map from task output names to concrete names to register in the build plan. This allows a task with generic output names to be used multiple times in the same plan.

This is often used together with task step input_mapping:

plan:
- get: diego-release
- get: cf-release
- get: ci-scripts
- task: create-diego-release
  file: ci-scripts/create-release.yml
  input_mapping: {release-repo: diego-release}
  output_mapping: {release-tarball: diego-release-tarball}
- task: create-cf-release
  file: ci-scripts/create-release.yml
  input_mapping: {release-repo: cf-release}
  output_mapping: {release-tarball: cf-release-tarball}

Configures a pipeline.

The identifier specifies the name of the pipeline to configure. Unless set_pipeline step team is set, it will be configured within the current team and be created unpaused. If set to self, the current pipeline will update its own config.

set_pipeline: self was introduced in Concourse v6.5.0. It is considered an experimental feature and may be removed at any time. Contribute to the associated discussion with feedback.

Pipelines configured with the set_pipeline step are tied to the job that configured them, and will be automatically archived in the following scenarios:

  • When the job runs a successful build which did not configure the pipeline (i.e. the set_pipeline step was removed).

  • When the job is removed from its pipeline configuration (see job.old_name for renaming instead of removing).

  • When the job's pipeline is archived or destroyed.

This means any job that uses set_pipeline should set all still-desired pipelines in each build, rather than setting them one-by-one through many builds.

See fly archive-pipeline for what happens when a pipeline is archived.

This is a way to ensure a pipeline stays up to date with its definition in a source code repository, eliminating the need to manually run fly set-pipeline.

resources:
- name: booklit
  type: git
  source: {uri: https://github.com/vito/booklit}
jobs:
- name: reconfigure
  plan:
  - get: booklit
    trigger: true
  - set_pipeline: booklit
    file: booklit/ci/pipeline.yml

The path to the pipeline's configuration file.

file points at a .yml file containing the pipeline configuration, which allows this to be tracked with your resources or generated by a task step.

The first segment in the path should refer to another artifact from the plan, and the rest of the path is relative to that artifact.

The get step can be used to fetch your configuration from a git repo and auto-configure it using a set_pipeline step:

- get: ci
- set_pipeline: my-pipeline
  file: ci/pipelines/my-pipeline.yml

A map of template variables to pass to the pipeline config.

Note that variables set with this field will not propagate to tasks configured via task step file. If you want those variables to be determined at the time the pipeline is set, use task step vars as well.

A var may be statically passed like so:

plan:
- get: my-repo
- set_pipeline: configure-the-pipeline
  file: my-repo/ci/pipeline.yml
  vars:
    text: "Hello World!"

Any Vars in the pipeline config will be filled in statically using this field.

For example, if my-repo/ci/pipeline.yml looks like...:

resources:
- name: task-image
  type: registry-image
  source:
    repository: my.local.registry:8080/my/image
    username: ((myuser))
    password: ((mypass))
jobs:
- name: job
  plan:
  - get: task-image
  - task: do-stuff
    image: task-image
    config:
      platform: linux
      run:
        path: echo
        args: ["((text))"]

...this will resolve "((text))" to "Hello World!", while ((myuser)) and ((mypass)) will be left in the pipeline to be fetched at runtime.

A list of paths to .yml files that will be passed to the pipeline config in the same manner as the --load-vars-from flag to fly set-pipeline. This means that if a variable appears in multiple files, the value from a file that is passed later in the list will override the values from files earlier in the list.
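
A sketch building on the set_pipeline examples above (file paths illustrative); values in production.yml would override any overlapping values from common.yml:

plan:
- get: ci
- set_pipeline: my-pipeline
  file: ci/pipelines/my-pipeline.yml
  var_files:
  - ci/vars/common.yml
  - ci/vars/production.yml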

By default, the set_pipeline step sets the pipeline for the same team that is running the build.

The team attribute can be used to specify another team.

Only the main team is allowed to set another team's pipeline. Any team other than the main team using the team attribute will error, unless they reference their own team.

The team attribute was introduced in Concourse v6.4.0. It is considered an experimental feature and may be removed at any time. Contribute to the associated discussion with feedback.
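
A sketch, assuming the build runs in the main team and a team named other-team exists:

plan:
- get: ci
- set_pipeline: their-pipeline
  file: ci/pipelines/their-pipeline.yml
  team: other-team   # only allowed when the build runs in the main team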

The load_var step was introduced in Concourse v6.0.0. It is considered an experimental feature until its associated RFC is resolved.

Load the value for a var at runtime, making it available to subsequent steps as a build-local var named after the given identifier.

The following build plan uses a version produced by the semver resource as a tag:

plan:
- get: version
- load_var: version-tag
  file: version/version
- put: image
  params: {tag: ((.:version-tag))}

The path to a file whose content shall be read and used as the var's value.

The format of the file's content.

If unset, Concourse will try to detect the format from the file extension. If the file format cannot be determined, Concourse will fall back to trim.

If set to json, yaml, or yml, the file content will be parsed accordingly and the resulting structure will be the value of the var.

If set to trim, the var will be set to the content of the file with any trailing and leading whitespace removed.

If set to raw, the var will be set to the content of the file without modification (i.e. with any existing whitespace).

Let's say we have a task, generate-creds, which produces a generated-user output containing a user.json file like so:

{
  "username": "some-user",
  "password": "some-password"
}

We could pass these credentials to subsequent steps by loading it into a var with load_var, which will detect that it is in JSON format based on the file extension:

plan:
- task: generate-creds
- load_var: user
  file: generated-user/user.json
- task: use-creds
  params:
    USERNAME: ((.:user.username))
    PASSWORD: ((.:user.password))

If the use-creds task were to print these values, they would be automatically redacted unless reveal: true is set.

Default false. If set to true, allow the var's content to be printed in the build output even with secret redaction enabled.
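
A sketch building on the version-tag example above; with reveal: true, the var's value may appear unredacted in build output:

plan:
- get: version
- load_var: version-tag
  file: version/version
  reveal: true   # allow ((.:version-tag)) to be printed without redaction
- put: image
  params: {tag: ((.:version-tag))}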

Performs the given steps in parallel.

If any of the sub-steps in an in_parallel step result in a failure or error, the in_parallel step as a whole is considered to have failed or errored.

Steps are configured either as an array or within the in_parallel_config schema.

Using the in_parallel step where possible is the easiest way to speed up a build.

It is often used to fetch all dependent resources together at the start of a build plan:

plan:
- in_parallel:
  - get: component-a
  - get: component-b
  - get: integration-suite
- task: integration
  file: integration-suite/task.yml

If any step in the in_parallel fails, the build will fail, making it useful for build matrices:

plan:
- get: some-repo
- in_parallel:
  - task: unit-windows
    file: some-repo/ci/windows.yml
  - task: unit-linux
    file: some-repo/ci/linux.yml
  - task: unit-darwin
    file: some-repo/ci/darwin.yml

Using limit is useful for performing parallel execution of a growing number of tasks without overloading your workers. In the example below, at most two tasks will run at a time, in order, until all steps have been executed:

plan:
- get: some-repo
- in_parallel:
    limit: 2
    fail_fast: false
    steps:
      - task: unit-windows
        file: some-repo/ci/windows.yml
      - task: unit-linux
        file: some-repo/ci/linux.yml
      - task: unit-darwin
        file: some-repo/ci/darwin.yml

in_parallel_config schema

The steps to perform in parallel.

Default unlimited. A semaphore which limits the parallelism when executing the steps in an in_parallel step. When set, the number of running steps will not exceed the limit.

When not specified, in_parallel will execute all steps immediately, making the default behavior identical to aggregate.

Default false. When enabled, the in_parallel step will fail fast by returning as soon as any sub-step fails. This means that running steps will be interrupted and pending steps will no longer be scheduled.

Deprecated. Use in_parallel step instead.

Simply performs the given steps serially, with the same semantics as if they were at the top level of the step listing.

This can be used to perform multiple steps serially in a step.on_failure hook:

plan:
- get: my-repo
- task: unit
  file: my-repo/ci/unit.yml
  on_failure:
    do:
    - put: alert
    - put: email

Performs the given step, ignoring any failure and masking it with success.

This can be used when you want to perform some side-effect, but you don't really want the whole build to fail if it doesn't work.

When emitting logs somewhere for analyzing later, if the destination flakes out it may not really be critical, so we may want to just swallow the error:

plan:
- task: run-tests
  config: # ...
  on_success:
    try:
      put: test-logs
      params:
        from: run-tests/*.log
- task: do-something-else
  config: # ...

The amount of time to limit the step's execution to, e.g. 30m for 30 minutes.

When exceeded, the step will be interrupted, with the same semantics as aborting the build (except the build will be failed, not aborted, to distinguish between human intervention and timeouts being enforced).

The following will run the unit task and cancel it if it takes longer than 1 hour and 30 minutes:

plan:
- get: foo
- task: unit
  file: foo/unit.yml
  timeout: 1h30m

The total number of times a step should be tried before failing, e.g. 5 will run the step up to 5 times before giving up.

Attempts will retry on a Concourse error as well as on build failure. When the number of attempts is reached and the step has still not succeeded, the step will fail.

The following will run the task and retry it up to 9 times (for a total of 10 attempts) if it fails:

plan:
- get: foo
- task: unit
  file: foo/unit.yml
  attempts: 10

When used in combination with timeout, the timeout applies to each step.

This semi-arbitrary decision was made because often things either succeed in a reasonable amount of time or fail due to hanging/flakiness. In that case it seems more useful to give each attempt the allotted timeout rather than have one very long attempt prevent further attempts.

plan:
- get: flake
- task: flaky-tests
  file: flake/integration.yml
  timeout: 10m
  attempts: 3

Default []. The tags by which to match workers.

The step will be placed within a pool of workers that match all of the given tags.

For example, if [a, b] is specified, only workers advertising the a and b tags (in addition to any others) will be used for running the step.

You may have a private cluster only reachable by special workers running on-premises. To run steps against those workers, just provide a matching tag:

plan:
- get: my-repo
- put: my-site
  tags: [private]
  params: {path: my-repo}
- task: acceptance-tests
  tags: [private]
  file: my-repo/ci/acceptance.yml

A hook step to execute if the parent step succeeds.

The following will perform the second task only if the first one succeeds:

plan:
- get: foo
- task: unit
  file: foo/unit.yml
  on_success:
    task: alert
    file: foo/alert.yml

Note that this is semantically equivalent to the following:

plan:
- get: foo
- task: unit
  file: foo/unit.yml
- task: alert
  file: foo/alert.yml

The on_success hook is provided mainly for cases where there is an equivalent step.on_failure, and having them next to each other is clearer.

A hook step to execute if the parent step fails.

This does not "recover" the failure - it will still fail even if the hook step succeeds.

The following will perform the alert task only if the unit task fails:

plan:
- get: foo
- task: unit
  file: foo/unit.yml
  on_failure:
    task: alert
    file: foo/alert.yml

A hook step to execute if the build is aborted and the parent step is terminated.

The following will perform the cleanup task only if the build is aborted while the unit task was running:

plan:
- get: foo
- task: unit
  file: foo/unit.yml
  on_abort:
    task: cleanup
    file: foo/cleanup.yml

A hook step to execute after the parent step if the parent step terminates abnormally in any way other than those handled by the step.on_abort or step.on_failure. This covers scenarios as broad as configuration mistakes, temporary network issues with the workers, or running longer than a step.timeout.

Until notifications become first-class (RFC #28), this step can be used to notify folks if their builds errored out:

plan:
- do:
  - get: ci
  - task: unit
    file: ci/unit.yml
  on_error:
    put: slack

A hook step to execute after the parent step regardless of whether the parent step succeeds, fails, or errors. The step will also be executed if the build was aborted and its parent step was interrupted.

If the parent step succeeds and the ensured step fails, the overall step fails.

The following build plan acquires a lock and then ensures that the lock is released.

plan:
- put: some-lock
  params: {acquire: true}
- task: integration
  file: foo/integration.yml
  ensure:
    put: some-lock
    params: {release: some-lock}

Managing Jobs

fly jobs

To list the jobs configured in a pipeline, run:

$ fly -t example jobs -p my-pipeline

fly trigger-job

To immediately queue a new build of a job, run:

$ fly -t example trigger-job --job my-pipeline/my-job

This will enqueue a new build of the my-job job in the my-pipeline pipeline.

To start watching the newly created build, append the --watch flag like so:

$ fly -t example trigger-job --job my-pipeline/my-job --watch

You can also queue new builds by clicking the + button on the job or build pages in the web UI.

fly rerun-build

To queue a new build of a job with exactly the same inputs as a given build of the same job, run:

$ fly -t example rerun-build --job my-pipeline/my-job --build 4

This will enqueue a new build of the my-job job in the my-pipeline pipeline, using the same input versions as build number 4.

To start watching the newly created build, append the --watch flag like so:

$ fly -t example rerun-build --job my-pipeline/my-job --build 4 --watch

You can also rerun builds by visiting the build page for the build in question in the web UI and clicking the rerun button.

fly pause-job

To prevent scheduling and running builds of a job, run:

$ fly -t example pause-job --job my-pipeline/my-job

This will prevent pending builds of the job from being scheduled, though builds that are in-flight will still run, and pending builds will still be created as normal.

fly unpause-job

To resume scheduling of a job, run:

$ fly -t example unpause-job --job my-pipeline/my-job

This will resume scheduling of builds queued for the job.

fly clear-task-cache

If you've got a task cache that you need to clear out for whatever reason, this can be done like so:

$ fly -t example clear-task-cache --job my-pipeline/my-job --step my-step-name

This will immediately invalidate the caches - they'll be garbage collected asynchronously and subsequent builds will run with empty caches.

You can also clear out a particular path for the given step's cache, using --cache-path:

$ fly -t example clear-task-cache \
    --job my-pipeline/my-job \
    --step my-step-name \
    --cache-path go/pkg

If --cache-path is not specified, all caches for the given step will be cleared.