Concourse Note

On 2025-10-07, I wanted to modify CI to build a new docker image out of lana-bank.

I teamed up with Justin, who gave me a nice overview of what's happening in lana-bank's concourse CI. It was good both for learning about concourse in general and about lana-bank's specifics.

There's a lot to unpack, so I'm writing some notes before all the knowledge flies away:

  • Hierarchy

    • Pipelines (such as lana-bank) contain...
      • Groups (such as lana-bank and nix-cache) are dumb namespaces to group...
        • Jobs (the colored boxes in the UI, which get run)
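
A minimal sketch of how this hierarchy shows up in a pipeline config (the job names and task file paths here are made up for illustration, not lana-bank's real ones):

```yaml
# Groups only affect how jobs are displayed in the UI; jobs are what actually runs.
groups:
  - name: lana-bank
    jobs: [build-image]
  - name: nix-cache
    jobs: [populate-nix-cache]

jobs:
  - name: build-image
    plan:
      - get: repo                              # a resource, see below
        trigger: true
      - task: build
        file: repo/ci/tasks/build.yml          # hypothetical task file
  - name: populate-nix-cache
    plan:
      - get: repo
      - task: populate
        file: repo/ci/tasks/populate-cache.yml # hypothetical task file
```
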
  • Each job gets defined through a single YAML file. This YAML file gets generated dynamically by the repipe scripts under ci. These repipe scripts use ytt, a templating tool. repipe:

    • Composes the final job output
    • Applies it to the pipeline to define what runs on concourse
    • Note: you can actually "test in production" by running the repipe script while your local fly CLI is pointing to our concourse production instance. It will modify the actual production job definition there.
  • Jobs use resources, which represent pieces of external state (e.g. a git repo or a docker registry)

  • State can either be an input (get) or an output (put): gets only fetch the state, puts mutate it.

    • Resources have a type, which specifies what it really means to get or put them. Many types ship with concourse, but you can also build your own custom ones if needed (see the sketch below).
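
For illustration, a resource declaration using the standard git resource type might look like this (the URI is a placeholder):

```yaml
resources:
  - name: repo
    type: git                                            # the type defines what get/put mean:
    source:                                              # for git, get = clone/fetch, put = push
      uri: https://github.com/example-org/lana-bank.git  # placeholder, not the real repo
      branch: main
```
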
  • Some stuff on ytt

    • The special character sequence that marks values (and ytt directives in general) is #@.
    • You can keep values in a separate file to stay tidy and then reference them from the target file (we set values in values.yml, then reference them in pipeline.yml)
    • ytt not only provides simple value templating but also more sophisticated logic, such as functions written in Starlark, a Python-like language (see the sketch below).
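
A small ytt sketch under assumed names (docker_registry and image_resource are invented for illustration): values live in values.yml, and pipeline.yml loads them and calls a Starlark function to build a resource definition.

```yaml
#! values.yml
#@data/values
---
docker_registry: registry.example.com

#! pipeline.yml
#@ load("@ytt:data", "data")

#! a Starlark function returning a YAML fragment
#@ def image_resource(name):
name: #@ name
type: registry-image
source:
  repository: #@ data.values.docker_registry + "/" + name
#@ end

resources:
  - #@ image_resource("lana-bank")
```
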
  • On resources in a job:

    • On get resources, specifying trigger: true means that any new version of that resource triggers a job run.
    • Also on get resources, specifying passed with a list of other jobs signals that only resource versions that have already gone through those jobs successfully will be fetched. This chains jobs and prevents downstream jobs from running while upstream is failing.
    • Even if you declare resources via get and put in a job, you still need to list them again as inputs and outputs within the task entry so they are available (technically, mounted into the task container).
    • This is because the get, task and put parts are all separate runnable steps: a get fetches the state, but doesn't make it available to the task by default. That's why you need inputs (see the sketch below).
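
Putting these together, a hedged sketch of a single job (names, paths and the task command are hypothetical): the get uses trigger and passed, the task re-declares the resource as an input and exposes an output, and the put pushes the result.

```yaml
jobs:
  - name: build-image
    plan:
      - get: repo
        trigger: true              # any new version of repo triggers this job
        passed: [run-tests]        # only versions that already went through run-tests
      - task: build
        config:
          platform: linux
          image_resource:
            type: registry-image
            source: { repository: busybox }
          inputs:
            - name: repo           # mounts the fetched repo into the task container
          outputs:
            - name: image          # directory handed to the following put step
          run:
            path: sh
            args: ["-c", "echo 'build an OCI tarball into image/'"]
      - put: app-image             # a registry-image resource defined elsewhere
        params:
          image: image/image.tar   # path produced by the task above
```
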
  • On secrets

    • Many of the values that we interpolate with ytt are actually references to our HashiCorp Vault. They can be spotted because they use double parentheses: ((some_secret)). These get replaced at runtime by concourse. It's fine to hardcode non-sensitive values in values.yml in the repo, but secrets must go into the Vault (see the sketch below).
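
A short sketch of how a Vault-backed secret shows up in the final pipeline config (resource and variable names are hypothetical):

```yaml
resources:
  - name: app-image
    type: registry-image
    source:
      repository: registry.example.com/lana-bank  # non-sensitive, fine to hardcode
      username: ((docker_registry_user))          # looked up in Vault by concourse at runtime
      password: ((docker_registry_password))      # never committed to the repo
```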