Separate watchtower from vipy

counterweight 2025-07-21 09:39:36 +02:00
parent 2c9a70f0fd
commit 13537aa984
Signed by: counterweight
GPG key ID: 883EDBAA726BD96C
7 changed files with 15 additions and 11 deletions

@@ -18,22 +18,23 @@ This describes how to prepare each machine before deploying services on them.
 * Getting and configuring the domain is outside the scope of this repo. Whenever a service needs you to set up a subdomain, it will be mentioned explicitly.
 * You should add the domain to the var `root_domain` in `ansible/infra_vars.yml`.
-## Prepare the VPS (Vipy)
+## Prepare the VPSs (vipy and watchtower)
-### Source the VPS
+### Source the VPSs
 * The guide is agnostic to which provider you pick, but it has been tested with VMs from https://99stack.com and contains some operations that are specifically relevant to their VPSs.
 * The expectation is that each VPS ticks the following boxes:
   + Runs Debian 12 bookworm.
   + Has a public IPv4 address and starts out with SSH listening on port 22.
   + Boots with one of your SSH keys already authorized. If this is not the case, you'll have to manually drop the pubkey there before using the playbooks.
-* Move on once your VPS is running and satisfies the prerequisites.
+* You will need two VPSs: one to host most services, and a second, tiny one for uptime monitoring. We use two so the monitoring service doesn't go down together with the main machine.
+* Move on once your VPSs are running and satisfy the prerequisites.
 ### Prepare Ansible vars
-* You have an example `ansible/example.inventory.ini`. Copy it with `cp ansible/example.inventory.ini ansible/inventory.ini` and fill it in with the values for your VPS.
+* You have an example `ansible/example.inventory.ini`. Copy it with `cp ansible/example.inventory.ini ansible/inventory.ini` and fill it in with the values for your VPSs. `[vipy]` is the services VPS; `[watchtower]` is the monitoring VPS.
 * A few notes:
-  * The guides assume you'll only have one VPS in the `[Vipy]` group. Stuff will break if you have multiple, so avoid that.
+  * The guides assume you'll only have one VPS in the `[vipy]` group. Stuff will break if you have multiple, so avoid that.
 ### Create user and secure VPS access
@@ -42,4 +43,4 @@ This describes how to prepare each machine before deploying services on them.
 * Run `ansible-playbook -i inventory.ini infra/01_user_and_access_setup_playbook.yml -e 'ansible_user="your root user here"'`
 * Then, configure firewall access, fail2ban and auditd with `ansible-playbook -i inventory.ini infra/02_firewall_and_fail2ban_playbook.yml`. Since the user we will use is now present, there is no need to specify the user anymore.
-Note that, by applying this playbooks, both the root user and the `counterweight` user will use the same SSH pubkey for auth.
+Note that, by applying these playbooks, both the root user and the `counterweight` user will use the same SSH pubkey for auth.

@@ -1,6 +1,9 @@
 [vipy]
 your.vps.ip.here ansible_user=counterweight ansible_port=22 ansible_ssh_private_key_file=~/.ssh/your-key
+[watchtower]
+your.vps.ip.here ansible_user=counterweight ansible_port=22 ansible_ssh_private_key_file=~/.ssh/your-key
 # Local connection to laptop: this assumes you're running ansible commands from your personal laptop
 # Make sure to adjust the username
 [lapy]
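
For orientation, a filled-in inventory might look like the sketch below. The IPs are RFC 5737 documentation addresses and the key filename is a placeholder, not values from this repo; remember the guide's note to keep exactly one host per group:

```ini
; Hypothetical filled-in inventory.ini (illustrative values only)
[vipy]
203.0.113.10 ansible_user=counterweight ansible_port=22 ansible_ssh_private_key_file=~/.ssh/id_ed25519

[watchtower]
203.0.113.20 ansible_user=counterweight ansible_port=22 ansible_ssh_private_key_file=~/.ssh/id_ed25519
```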

@@ -1,5 +1,5 @@
 - name: Secure Debian VPS
-  hosts: vipy
+  hosts: vipy,watchtower
   vars_files:
     - ../infra_vars.yml
   become: true
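
The `hosts: vipy,watchtower` pattern targets the union of the two inventory groups, so the play now runs on both machines. A minimal sketch of the same pattern (this play is illustrative, not part of the repo):

```yaml
# Illustrative play: runs once per host in either group
- name: Confirm the play targets both VPSs
  hosts: vipy,watchtower
  gather_facts: false
  tasks:
    - name: Print which host the play reached
      ansible.builtin.debug:
        msg: "Running on {{ inventory_hostname }}"
```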

@@ -1,5 +1,5 @@
 - name: Secure Debian VPS
-  hosts: vipy
+  hosts: vipy,watchtower
   vars_files:
     - ../infra_vars.yml
   become: true

@@ -1,5 +1,5 @@
 - name: Install and configure Caddy on Debian 12
-  hosts: vipy
+  hosts: vipy,watchtower
   become: yes
   tasks:

@@ -1,5 +1,5 @@
 - name: Deploy Uptime Kuma with Docker Compose and configure Caddy reverse proxy
-  hosts: vipy
+  hosts: watchtower
   become: yes
   vars_files:
     - ../../infra_vars.yml

@@ -8,7 +8,7 @@ caddy_sites_dir: /etc/caddy/sites-enabled
 uptime_kuma_subdomain: uptime
 # Remote access
-remote_host: "{{ groups['vipy'][0] }}"
+remote_host: "{{ groups['watchtower'][0] }}"
 remote_user: "{{ hostvars[remote_host]['ansible_user'] }}"
 remote_key_file: "{{ hostvars[remote_host]['ansible_ssh_private_key_file'] | default('') }}"
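
The lookup chain here works in two steps: `groups['watchtower'][0]` resolves to the single host in the `[watchtower]` inventory group, and `hostvars[...]` then reads that host's connection variables. An annotated sketch (the resolved values in the comments are assumptions based on `example.inventory.ini`):

```yaml
# Step 1: groups['watchtower'] is the list of hosts in the [watchtower]
#         group; with one host, [0] resolves to e.g. 'your.vps.ip.here'
remote_host: "{{ groups['watchtower'][0] }}"
# Step 2: hostvars[remote_host] exposes that host's inventory vars,
#         e.g. ansible_user=counterweight from its inventory line
remote_user: "{{ hostvars[remote_host]['ansible_user'] }}"
# default('') yields an empty string instead of an undefined-variable
# error when no key file is set for the host
remote_key_file: "{{ hostvars[remote_host]['ansible_ssh_private_key_file'] | default('') }}"
```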