Merged PR 4889: Adds CI docs and artifacts
## Description

This PR adds documentation and artifacts for our CI pipelines. The docs explain how to set things up. The artifacts are various scripts, templates, and similar files that are needed either for the one-off initial setup or on a recurring basis.
Commit: 9677ab23b3

9 changed files with 190 additions and 6 deletions

```diff
@@ -136,7 +136,11 @@ This goes beyond the scope of this project: to understand how you can serve thes
 
 ## CI
 
-TBD.
+CI can be set up to review PRs and make the developer experience more solid and less error prone.
+
+You can find more details on the topic in the `ci` folder.
+
+Note that this is an optional part of the project: you can happily work without CI if needed.
 
 ## Stuff that we haven't done but we would like to
```

```diff
@@ -33,11 +33,11 @@ steps:
 - script: |
     set -a && source .env && set +a
 
-    psql -h $POSTGRES_HOST -U $POSTGRES_USER -d prd-pointer -c "select refresh_foreign_schemas(ARRAY[$PRD_SCHEMAS_TO_SYNC]);"
+    psql -h $POSTGRES_HOST -U $POSTGRES_USER -d prd_pointer -c "select refresh_foreign_schemas(ARRAY[$PRD_SCHEMAS_TO_SYNC]);"
   displayName: 'Sync Foreign Data Wrappers schemas'
 
 - script: |
     cd ~/dbt-ci
     cd ci
     /bin/bash build-master-artifacts.sh
   displayName: 'Build master artifacts'
```

```diff
@@ -83,7 +83,7 @@ steps:
 - script: |
     set -a && source .env && set +a
 
-    psql -h $POSTGRES_HOST -U $POSTGRES_USER -d prd-pointer -c "DROP SCHEMA IF EXISTS $CI_SCHEMA_NAME CASCADE;"
+    psql -h $POSTGRES_HOST -U $POSTGRES_USER -d prd_pointer -c "DROP SCHEMA IF EXISTS $CI_SCHEMA_NAME CASCADE;"
 
   displayName: "Preemptive DROP SCHEMA"
```

```diff
@@ -125,7 +125,7 @@ steps:
 
 - script: |
     set -a && source .env && set +a
-    psql -h $POSTGRES_HOST -U $POSTGRES_USER -d prd-pointer -c "DROP SCHEMA IF EXISTS $CI_SCHEMA_NAME CASCADE;"
+    psql -h $POSTGRES_HOST -U $POSTGRES_USER -d prd_pointer -c "DROP SCHEMA IF EXISTS $CI_SCHEMA_NAME CASCADE;"
 
   condition: always()
   displayName: 'Delete PR schema'
```

ci/README.md (55 additions)

# CI
This folder contains things we use for Continuous Integration.

You can set up CI pipelines for the project if you want. This enables performing certain checks on PRs and master commits, which is useful to minimize errors and ensure certain quality levels are met.

The details here are specific to Azure DevOps. If you need to set things up in a different Git/CI environment, you'll have to adapt them accordingly.
## CI VM Setup
### Requirements
These instructions assume that:

- You have a VM ready to be set up as the CI server.
- You can SSH into it.
- The VM has Docker and Docker Compose installed and ready to run.
- The VM has `psql` installed.
- The VM has the Azure CI agent installed.
- You have cloned this repository in the home folder of the user you use on that VM.
- The DWH production instance has a dedicated CI user that can read from all sync schemas as well as `staging`, `intermediate` and `reporting`, and you have its credentials.

If any of this is missing, you probably need to review our Infrastructure repository, where we describe how to set up a VM with all of the above.
### Setting things up
- SSH into the CI VM.
- Create a folder in the user's home directory named `dbt-ci`.
- Create a copy of the `ci/ci.env` file there, naming it `.env` (assuming you're in the repo root dir: `cp ci/ci.env ~/dbt-ci/.env`), and fill it with values of your choice.
- Copy the `docker-compose.yml` file into `dbt-ci`. Modify your copy with values for the Postgres server parameters. Which values to set depends on your hardware. If you can't or don't want to decide on values for these parameters, you can just comment out those lines.
- Enter the `ci` folder and execute the script named `ci-vm-setup.sh` with the `.env` file you just filled in sourced (you can run: `(set -a && source ~/dbt-ci/.env && set +a && bash ci-vm-setup.sh)`). This script takes care of most of the setup that needs to be executed, including:
  - Preparing the Postgres database.
  - Setting up the dockerized Postgres with the right database, FDWs, etc.
  - Preparing the `profiles.yml` file.

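The `(set -a && source … && set +a)` idiom used when invoking `ci-vm-setup.sh` exports every variable the `.env` file defines, but only inside the subshell. A minimal, self-contained sketch of the pattern (throwaway file and demo values, not the real `.env`):

```shell
# Write a throwaway .env-style file (demo values only).
cat > /tmp/demo.env <<'EOF'
POSTGRES_HOST=localhost
POSTGRES_PORT=5432
EOF

# set -a marks every variable defined afterwards for export; set +a turns
# that off again. The surrounding ( ) keeps the variables out of your shell.
(
  set -a && source /tmp/demo.env && set +a
  # Child processes now see the variables:
  sh -c 'echo "host=$POSTGRES_HOST port=$POSTGRES_PORT"'
)
# prints: host=localhost port=5432
```

Because the variables are exported, they reach not just the script itself but everything it spawns (`docker compose`, `psql`, `envsubst`).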
### Testing
- If the infra was set up correctly and you followed the previous steps, you should be ready to roll.
- You might want to activate pipeline executions in DevOps if you had them off while preparing everything.
- Once that's done:
  - Create a branch in this repository.
  - Add some silly change to any dbt model.
  - Open a PR in DevOps from the branch.
  - If everything is fine, you should see in DevOps the pipeline getting triggered automatically and walking through all the steps described in `.azure-pipelines.master.yml`.
  - Once you commit to `master` or merge a PR into `master`, you should also see the pipeline defined in `.azure-pipelines.master.yml` getting triggered automatically.

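The branch-and-silly-change smoke test can be sketched as a few git commands. The snippet below runs against a scratch repo so it is self-contained; in practice you'd branch, commit and push in your clone of this repository, and the branch and file names here are just examples:

```shell
# Scratch repo so the sketch is reproducible end to end.
repo="$(mktemp -d)"
cd "$repo"
git init -q
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "init"

# Branch with a silly change to (what stands in for) a dbt model.
git checkout -q -b ci-smoke-test
echo "-- silly change" >> some_model.sql
git add some_model.sql
git -c user.email=ci@example.com -c user.name=ci commit -q -m "chore: trigger CI"

# From here: push the branch and open a PR in DevOps to trigger the pipeline.
git branch --show-current   # → ci-smoke-test
```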
### What the hell are these files
A small inventory of the funky files here:

- `ci-vm-setup.sh`: executes some setup steps that are needed the first time you prepare the CI VM.
- `ci.env`: template for the `.env` file that needs to be placed in the CI VM.
- `ci.profiles.yml`: template for the dbt `profiles.yml` file that needs to be placed in the CI VM.
- `ci-requirements.txt`: CI-specific Python packages that need to be installed in CI runs (but not for running or developing on this project).
- `docker-compose.yml`: the Docker Compose file that defines the Postgres that runs in the CI VM.
- `postgres-initial-setup.sql`: a SQL file that completes the setup steps required in the CI Postgres during the one-off initial setup.
- `sqlfluff-check.sh`: a script that checks a folder's SQL files and validates them. Fails if any SQL is not parseable.
- `.sqlfluff`: some config for sqlfluff.
- `build-master-artifacts.sh`: a script that generates the `manifest.json` for the master branch and places it in a target folder.
- `.azure-pipelines.blablabla.yml`: the actual pipeline definitions for Azure.


ci/build-master-artifacts.sh (new file, 24 additions)

```shell
#!/bin/bash


cd ~/data-dwh-dbt-project

git checkout master
git pull

rm -rf venv
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
dbt deps

rm .env
cp ~/dbt-ci/.env .env
set -a && source .env && set +a

rm -rf target/

dbt compile

mkdir -p ~/dbt-ci/master-artifacts/
cp target/manifest.json ~/dbt-ci/master-artifacts/manifest.json
```

ci/ci-vm-setup.sh (new file, 10 additions)

```shell
# Start container
docker compose -f ~/dbt-ci/docker-compose.yml --env-file ~/dbt-ci/.env up -d

# Run script to set things up in Postgres (DB, FDWs, etc)

envsubst < postgres-initial-setup.sql | psql -h $POSTGRES_HOST -U $POSTGRES_USER -d postgres

# Copy profiles file
mkdir -p ~/.dbt
cp ci.profiles.yml ~/.dbt/profiles.yml
```

ci/ci.env (new file, 10 additions)

```shell
POSTGRES_HOST=localhost
POSTGRES_USER=place a user here
PGPASSWORD=place a password here
POSTGRES_PORT=5432
PRD_SCHEMAS_TO_SYNC="'sync_xero_superhog_limited','sync_xedotcom_currency_rates','sync_stripe_us','sync_stripe_uk','sync_hubspot','sync_guest_product','sync_default','sync_core','sync_cdb_screening','sync_cdb_screen_and_protect','sync_cdb_resolutions','sync_cdb_edeposit','sync_cdb_check_in_hero','sync_cdb_athena','staging','reporting','intermediate'"
PRD_CI_USER='ci_reader'
PRD_CI_PASSWORD=
PRD_HOST=the host
PRD_DB=the database
PRD_PORT=the port
```

ci/ci.profiles.yml (new file, 13 additions)

```yaml
dwh_dbt:
  outputs:
    prd_pointer:
      dbname: prd_pointer
      host: "{{ env_var('POSTGRES_HOST') }}"
      port: "{{ env_var('POSTGRES_PORT') | as_number }}"
      schema: public
      user: "{{ env_var('POSTGRES_USER') }}"
      pass: "{{ env_var('PGPASSWORD') }}"
      type: postgres
      threads: 1

  target: prd_pointer
```

ci/docker-compose.yml (new file, 35 additions)

```yaml
services:
  postgres:
    image: postgres:16
    container_name: postgres_db
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${PGPASSWORD}
      POSTGRES_DB: postgres
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data
    # Note that some of the values below are very HW specific. You should
    # absolutely adjust them to the available hardware where this will be
    # running. This might help if you feel lost: https://pgtune.leopard.in.ua/
    command: [
      "-c", "max_connections=XX",
      "-c", "shared_buffers=XGB",
      "-c", "effective_cache_size=XXXGB",
      "-c", "maintenance_work_mem=XXXMB",
      "-c", "checkpoint_completion_target=0.9",
      "-c", "wal_buffers=XXXMB",
      "-c", "default_statistics_target=XXX",
      "-c", "random_page_cost=1.1",
      "-c", "effective_io_concurrency=XXX",
      "-c", "work_mem=XXXkB",
      "-c", "huge_pages=off",
      "-c", "min_wal_size=XXXGB",
      "-c", "max_wal_size=XXXGB"
    ]
    restart: unless-stopped

volumes:
  postgres_data:
    driver: local
```

ci/postgres-initial-setup.sql (new file, 35 additions)

```sql
CREATE DATABASE prd_pointer;
\c prd_pointer

CREATE EXTENSION postgres_fdw;

CREATE SERVER dwh_prd
FOREIGN DATA WRAPPER postgres_fdw
OPTIONS (host '$PRD_HOST', dbname '$PRD_DB', port '$PRD_PORT');

ALTER SERVER dwh_prd OPTIONS (fetch_size '100000');

CREATE USER MAPPING FOR current_user
SERVER dwh_prd
OPTIONS (user '$PRD_CI_USER', password '$PRD_CI_PASSWORD');

CREATE OR REPLACE FUNCTION refresh_foreign_schemas(schema_list TEXT[]) RETURNS void AS $$
DECLARE
    schema_name TEXT;
BEGIN
    -- Loop through each schema in the provided list
    FOREACH schema_name IN ARRAY schema_list LOOP

        -- Drop and recreate the schema to avoid conflicts
        EXECUTE format('DROP SCHEMA IF EXISTS %I CASCADE', schema_name);
        EXECUTE format('CREATE SCHEMA %I', schema_name);

        -- Import all tables from the foreign server
        EXECUTE format(
            'IMPORT FOREIGN SCHEMA %I FROM SERVER dwh_prd INTO %I',
            schema_name, schema_name
        );

    END LOOP;
END;
$$ LANGUAGE plpgsql;
```
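As an illustration of how this function is used: the pipeline's sync step calls it via `psql` with the schema list from `.env` (`$PRD_SCHEMAS_TO_SYNC`). Invoked manually inside `prd_pointer` it looks like this (the schema names below are just examples, any subset of the synced schemas works):

```sql
-- Drop, recreate and re-import these schemas from the dwh_prd foreign server.
SELECT refresh_foreign_schemas(ARRAY['staging', 'intermediate', 'reporting']);
```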