more stuff in readme

Pablo Martin 2025-04-02 15:54:41 +02:00
These instructions assume that:
- You have cloned this repository in the home folder of the user you use on that VM.
- The DWH production instance has a dedicated CI user that can read from all sync schemas as well as `staging`, `intermediate` and `reporting`, and you have its credentials.

If you don't have any of this, you probably need to review our Infrastructure repository, where we describe how to set up a VM with all of it.
### Setting things up
- SSH into the CI VM.
- Create a folder in the user home directory named `dbt-ci`.
- Create a copy of the `ci.env` file there naming it `.env` (`cp ci.env ~/dbt-ci/.env`) and fill it with values of your choice.
- Execute the script named `ci-vm-setup.sh` in this folder. This script takes care of most of the setup that needs to be performed, including:
- Setting up the dockerized postgres with the right database, FDW, etc.
- Preparing the `profiles.yml` file.
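The "Setting things up" steps above can be sketched as shell commands. This is a self-contained illustration only: a scratch directory stands in for the CI user's home, and the `.env` keys and values are placeholders, not the real contents of `ci.env`.

```shell
# Stand-in for the CI user's home on the CI VM (illustrative only).
HOME_DIR="${HOME_DIR:-$(mktemp -d)}"

# On the real VM, ci.env comes from this repository's clone;
# here we fabricate one with placeholder keys.
printf 'DBT_USER=ci\nDBT_PASSWORD=changeme\n' > "$HOME_DIR/ci.env"

# Create the dbt-ci folder and copy ci.env there as .env,
# then edit the values to your liking.
mkdir -p "$HOME_DIR/dbt-ci"
cp "$HOME_DIR/ci.env" "$HOME_DIR/dbt-ci/.env"

# On the real VM you would now run the setup script:
#   ./ci-vm-setup.sh
echo "env file ready: $HOME_DIR/dbt-ci/.env"
```

On the actual VM, `ci-vm-setup.sh` then handles the rest (dockerized postgres, FDW, `profiles.yml`), so the manual part is only the folder and the `.env` file.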
### Connecting to Devops
- TBD
### Testing
- If the infra was set correctly and you followed the previous steps, you should be ready to roll.
- You might want to re-enable pipeline executions in Devops if you turned them off while preparing everything.
- Once that's done:
  - Create a branch in this repository.
  - Add some silly change to any dbt model.
  - Open a PR in Devops from the branch.
  - If everything is fine, you should see the pipeline get triggered automatically in Devops and walk through all the steps described in `.azure-pipelines.master.yml`.
- Once you push a commit to `master` or merge a PR into `master`, you should also see pipelines getting triggered automatically from `.azure-pipelines.master.yml`.
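The smoke-test flow above can be sketched with git. Everything here is illustrative: the branch name, model path, and commit identity are assumptions, a scratch repository stands in for your clone, and the push/PR commands are shown but commented out since they need the real Devops remote.

```shell
# Scratch repo standing in for your clone of this repository.
REPO_DIR="$(mktemp -d)"
git -C "$REPO_DIR" init -q -b master
mkdir -p "$REPO_DIR/models"
echo "select 1 as id" > "$REPO_DIR/models/example.sql"
git -C "$REPO_DIR" add -A
git -C "$REPO_DIR" -c user.email=ci@example.com -c user.name=ci commit -qm "init"

# Create a branch and add some silly change to any dbt model.
git -C "$REPO_DIR" checkout -qb ci-smoke-test
echo "-- trivial change to trigger CI" >> "$REPO_DIR/models/example.sql"
git -C "$REPO_DIR" -c user.email=ci@example.com -c user.name=ci commit -qam "Trigger CI"

# In your real clone you would now push and open the PR in Devops
# (the az call assumes the Azure DevOps CLI extension is installed):
#   git push -u origin ci-smoke-test
#   az repos pr create --source-branch ci-smoke-test --target-branch master
git -C "$REPO_DIR" branch --show-current
```

If the PR pipeline triggers on this branch, the CI wiring is working end to end.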