Compare commits

...

10 commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
|  | 8766af831c | a few things | 2025-07-09 00:32:51 +02:00 |
| Pablo Martin | 04fce4fcae | forgejo work in progress | 2025-07-04 16:52:08 +02:00 |
| Pablo Martin | 2097a39663 | update vaultwarden docs | 2025-07-04 15:53:44 +02:00 |
| Pablo Martin | 14075fe1cc | add playbook to disable registration | 2025-07-04 15:53:35 +02:00 |
| Pablo Martin | f3030f9d6d | allow http, so caddy can redirect | 2025-07-04 15:53:27 +02:00 |
| Pablo Martin | 3d3d65575b | lots of stuff | 2025-07-03 17:21:31 +02:00 |
| Pablo Martin | dac4a98f79 | uptime kuma backups work | 2025-07-02 17:17:56 +02:00 |
| Pablo Martin | eddde5e53a | uptime kuma works | 2025-07-01 17:02:28 +02:00 |
| Pablo Martin | 97ff4b40e3 | docker playbook | 2025-07-01 16:50:58 +02:00 |
| Pablo Martin | 3343de2dc0 | thingies | 2025-07-01 16:14:44 +02:00 |
22 changed files with 863 additions and 65 deletions

.gitignore

@ -1 +1,2 @@
inventory.ini
venv/*


@ -1,28 +1,45 @@
# 01. Infra Setup
# 01 Infra Setup
This describes how to prepare each machine before deploying services on it.
## 01.01 First steps
## First steps
* Create an SSH key or pick an existing one. We'll refer to it as the `personal_ssh_key`.
* The guide assumes the laptop (Lapy) has `ansible` installed. If not, install it with `sudo apt install -y ansible` and verify with `ansible --version`.
* Deploy ansible on the laptop (Lapy), which will act as the ansible control node. To do so:
* Create a `venv`: `python3 -m venv venv`
* Activate it: `source venv/bin/activate`
* Install the listed ansible requirements with `pip install -r requirements.txt`
* Keep in mind you should activate this `venv` from now on when running `ansible` commands.
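For convenience, here is the control-node setup above collected into one run (a sketch; it assumes you're in the repo root where `requirements.txt` lives):
```
# One-time setup of the Ansible control node (Lapy)
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
ansible --version   # should report the pinned ansible-core version
```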
## 01.02 Prepare the VPS (Vipy)
## Domain
### 01.02.01 Source the VPS
* Some services are designed to be reachable from the WAN through a friendly URL.
* You'll need a domain where you can set DNS records and create subdomains, as the guide assumes each service gets its own subdomain.
* Getting and configuring the domain is outside the scope of this repo. Whenever a service needs you to set up a subdomain, it will be mentioned explicitly.
* You should add the domain to the var `root_domain` in `ansible/infra_vars.yml`.
* The guide is agnostic to which provider you pick, but has been tested with VMs from https://lnvps.net.
## Prepare the VPS (Vipy)
### Source the VPS
* The guide is agnostic to which provider you pick, but has been tested with VMs from https://99stack.com and contains some operations that are specifically relevant to their VPSs.
* The expectations are that the VPS ticks the following boxes:
+ Runs Debian 12 bookworm.
+ Has a public IPv4 address and starts out with SSH listening on port 22.
+ Boots with one of your SSH keys already authorized.
* Move on once your VPS is running.
+ Boots with one of your SSH keys already authorized. If this is not the case, you'll have to manually drop the pubkey there before using the playbooks.
* Move on once your VPS is running and satisfies the prerequisites.
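Before continuing, a quick sanity check can confirm the box matches the expectations above (placeholder IP and key path; adjust to your VPS and `personal_ssh_key`):
```
# Key-based root login should work, and the OS should be Debian 12 (bookworm)
ssh -i ~/.ssh/your-key root@your.vps.ip.here 'head -n 2 /etc/os-release'
```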
### 01.02.02 Prepare Ansible vars
### Prepare Ansible vars
* You have an example `infra/example.inventory.ini`. Copy it with `cp example.inventory.ini inventory.ini` and fill in with the vars for your VPS.
* You have an example `ansible/example.inventory.ini`. Copy it with `cp ansible/example.inventory.ini ansible/inventory.ini` and fill in with the values for your VPS.
* A few notes:
* The guides assume you'll only have one VPS in the `[vipy]` group. Stuff will break if you have multiple, so avoid that.
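Once `inventory.ini` is filled in, an ad-hoc `ping` is a quick way to confirm the entries are usable. Note that `vipy` will only answer as `counterweight` after that user is created in the next step; until then you can override the user with `-e` (a sketch, using the group names from the example inventory):
```
# Lapy should answer straight away
ansible -i inventory.ini lapy -m ping
# The VPS only has the provider's root user at this point
ansible -i inventory.ini vipy -m ping -e ansible_user=root
```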
### 01.02.03 First steps with Ansible
### Create user and secure VPS access
* cd into `infra`
* Run `ansible-playbook playbook.yml`
* The first playbook, `01_user_and_access_setup_playbook.yml`, creates the user that will get used regularly. But since this user doesn't exist yet, you obviously need to run this playbook as some other user. We assume your VPS provider has given you a root user, which is what you pass as the running user in the next command.
* cd into `ansible`
* Run `ansible-playbook -i inventory.ini infra/01_user_and_access_setup_playbook.yml -e 'ansible_user="your root user here"'`
* Then, configure firewall access, fail2ban and auditd with `ansible-playbook -i inventory.ini infra/02_firewall_and_fail2ban_playbook.yml`. Since the user we will use is now present, there is no need to specify the user anymore.
Note that, by applying these playbooks, both the root user and the `counterweight` user will use the same SSH pubkey for auth.
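As a minimal check that the setup worked (assuming the defaults from `infra_vars.yml`: user `counterweight`, SSH on port 22, and the key from your inventory):
```
# Key-based login as the new user should work, with passwordless sudo
ssh -i ~/.ssh/your-key counterweight@your.vps.ip.here 'sudo whoami'
# Root login and password auth should now be refused
ssh -o PubkeyAuthentication=no root@your.vps.ip.here
```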


@ -0,0 +1,104 @@
# 02 VPS Core Services Setup
Now that Vipy is ready, we need to deploy some basic services which are foundational for the apps we're actually interested in.
This assumes you've completed the markdown `01`.
## General tools
This repo contains some rather general tools that you may or may not need, depending on what services you want to deploy and what device you're working on. These tools can be installed with the `900` group of playbooks sitting at `ansible/infra`.
By default, these playbooks are configured for `hosts: all`. If you want to limit which hosts they run on, use the `--limit groupname` flag when running the playbook.
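For example, to run one of the `900` playbooks only against the VPS group (using the group name from the example inventory):
```
# Restrict a hosts: all playbook to the vipy group
ansible-playbook -i inventory.ini infra/900_install_rsync.yml --limit vipy
```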
Below you have notes on adding each specific tool to a device.
### rsync
Simply run the playbook:
```
ansible-playbook -i inventory.ini infra/900_install_rsync.yml
```
### docker and compose
Simply run the playbook:
```
ansible-playbook -i inventory.ini infra/910_docker_playbook.yml
```
## Deploy Caddy
* Use Ansible to run the caddy playbook:
```
cd ansible
ansible-playbook -i inventory.ini services/caddy_playbook.yml
```
* The starting config will be empty. Adding endpoints to the Caddy config as services are deployed is covered by each service's instructions.
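When debugging, you can validate and reload Caddy by hand after a service drops its site file into `/etc/caddy/sites-enabled/` (standard Caddy and systemd commands, not part of the playbooks):
```
# Check the config parses, then apply it without downtime
sudo caddy validate --config /etc/caddy/Caddyfile
sudo systemctl reload caddy
```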
## Uptime Kuma
Uptime Kuma is used to monitor the availability of services, keep track of their uptime, and send notifications when something goes down.
### Deploy
* Decide what subdomain you want to serve Uptime Kuma on and set it as `uptime_kuma_subdomain` in `services/uptime_kuma/uptime_kuma_vars.yml`.
* Note that you will have to add a DNS entry pointing that subdomain to the VPS public IP.
* Make sure docker is available on the host.
* Run the deployment playbook: `ansible-playbook -i inventory.ini services/uptime_kuma/deploy_uptime_kuma_playbook.yml`.
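To confirm the DNS entry is in place before expecting TLS to work, something like the following helps (placeholder names; substitute your actual subdomain and `root_domain`):
```
# Should print the VPS public IP once the A record has propagated
dig +short uptime.your-domain.tld
```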
### Set up backups to Lapy
* Make sure rsync is available on the host and on Lapy.
* Run the backup playbook: `ansible-playbook -i inventory.ini services/uptime_kuma/setup_backup_uptime_kuma_to_lapy.yml`.
* A first backup runs immediately, and a cronjob is set up to refresh backups periodically.
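To verify the backup machinery on Lapy afterwards (names and paths taken from the Uptime Kuma vars and backup playbook):
```
# The cron module adds an "#Ansible: Uptime Kuma backup" marker above the job line
crontab -l | grep -A1 "Uptime Kuma backup"
# Dated backup folders should start appearing here
ls ~/uptime-kuma-backups/
```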
### Configure
* On first start, Uptime Kuma will let you create a user. Do that and store the creds safely.
* From that point on, you can configure it through the Web UI.
### Restoring to a previous state
* Stop Uptime Kuma.
* Overwrite the data folder with one of the backups.
* Start it up again.
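A sketch of those steps on the VPS, assuming the default `/opt/uptime-kuma` location and a dated backup already copied back over from Lapy (the backup path below is a placeholder):
```
# Run on the VPS
cd /opt/uptime-kuma
docker compose down
sudo rsync -a --delete /path/to/restored-backup/ ./data/
docker compose up -d
```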
## Vaultwarden
Vaultwarden is a credentials manager.
### Deploy
* Decide what subdomain you want to serve Vaultwarden on and set it as `vaultwarden_subdomain` in `services/vaultwarden/vaultwarden_vars.yml`.
* Note that you will have to add a DNS entry pointing that subdomain to the VPS public IP.
* Make sure docker is available on the host.
* Run the deployment playbook: `ansible-playbook -i inventory.ini services/vaultwarden/deploy_vaultwarden_playbook.yml`.
### Configure
* On first start, Vaultwarden will let you create a user. Do that and store the creds safely.
* From that point on, you can configure it through the Web UI.
### Disable registration
* You probably don't want anyone to just be able to register without permission.
* To prevent that, you can run the playbook `disable_vaultwarden_sign_ups_playbook.yml` after creating the first user.
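The invocation follows the same pattern as the other Vaultwarden playbooks (assuming the playbook lives alongside them under `services/vaultwarden/`):
```
ansible-playbook -i inventory.ini services/vaultwarden/disable_vaultwarden_sign_ups_playbook.yml
```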
### Set up backups to Lapy
* Make sure rsync is available on the host and on Lapy.
* Run the backup playbook: `ansible-playbook -i inventory.ini services/vaultwarden/setup_backup_vaultwarden_to_lapy.yml`.
* A first backup runs immediately, and a cronjob is set up to refresh backups periodically.
### Restoring to a previous state
* Stop Vaultwarden.
* Overwrite the data folder with one of the backups.
* Start it up again.


@ -2,6 +2,10 @@
My repo documenting my personal infra, along with artifacts, scripts, etc.
## How to use
Go through the numbered markdowns in the repo root, in order, to work through the different parts.
## Overview
### Services


@ -0,0 +1,7 @@
[vipy]
your.vps.ip.here ansible_user=counterweight ansible_port=22 ansible_ssh_private_key_file=~/.ssh/your-key
# Local connection to laptop: this assumes you're running ansible commands from your personal laptop
# Make sure to adjust the username
[lapy]
localhost ansible_connection=local ansible_user=your laptop user


@ -0,0 +1,72 @@
- name: Secure Debian VPS
hosts: vipy
vars_files:
- ../infra_vars.yml
become: true
tasks:
- name: Update and upgrade apt packages
apt:
update_cache: yes
upgrade: full
autoremove: yes
- name: Create new user
user:
name: "{{ new_user }}"
groups: sudo
shell: /bin/bash
state: present
create_home: yes
- name: Set up SSH directory for new user
file:
path: "/home/{{ new_user }}/.ssh"
state: directory
mode: "0700"
owner: "{{ new_user }}"
group: "{{ new_user }}"
- name: Copy current user's authorized_keys to new user
copy:
src: "{{ (ansible_user == 'root') | ternary('/root/.ssh/authorized_keys', '/home/' + ansible_user + '/.ssh/authorized_keys') }}"
dest: "/home/{{ new_user }}/.ssh/authorized_keys"
owner: "{{ new_user }}"
group: "{{ new_user }}"
mode: "0600"
remote_src: true
- name: Allow new user to run sudo without password
copy:
dest: "/etc/sudoers.d/{{ new_user }}"
content: "{{ new_user }} ALL=(ALL) NOPASSWD:ALL"
owner: root
group: root
mode: "0440"
- name: Disable root login
lineinfile:
path: /etc/ssh/sshd_config
regexp: "{{ item.regexp }}"
line: "{{ item.line }}"
state: present
backrefs: yes
loop:
- { regexp: "^#?PermitRootLogin .*", line: "PermitRootLogin no" }
- {
regexp: "^#?PasswordAuthentication .*",
line: "PasswordAuthentication no",
}
- name: Ensure PasswordAuthentication is set to no in cloud-init config
lineinfile:
path: /etc/ssh/sshd_config.d/50-cloud-init.conf
regexp: "^PasswordAuthentication"
line: "PasswordAuthentication no"
create: yes
backup: yes
- name: Restart SSH
service:
name: ssh
state: restarted


@ -1,56 +1,10 @@
- name: Secure Debian VPS
hosts: vipy
vars_files:
- vars.yml
- ../infra_vars.yml
become: true
tasks:
- name: Update and upgrade apt packages
apt:
update_cache: yes
upgrade: full
autoremove: yes
- name: Create new user
user:
name: "{{ new_user }}"
groups: sudo
shell: /bin/bash
state: present
create_home: yes
- name: Set up SSH directory for new user
file:
path: "/home/{{ new_user }}/.ssh"
state: directory
mode: "0700"
owner: "{{ new_user }}"
group: "{{ new_user }}"
- name: Change SSH port and disable root login
lineinfile:
path: /etc/ssh/sshd_config
regexp: "{{ item.regexp }}"
line: "{{ item.line }}"
state: present
backrefs: yes
loop:
- { regexp: "^#?Port .*", line: "Port {{ ssh_port }}" }
- { regexp: "^#?PermitRootLogin .*", line: "PermitRootLogin no" }
- {
regexp: "^#?PasswordAuthentication .*",
line: "PasswordAuthentication no",
}
- name: Restart SSH
service:
name: ssh
state: restarted
- name: Set SSH port to new port
set_fact:
ansible_port: "{{ ssh_port }}"
- name: Install UFW
apt:
name: ufw
@ -68,11 +22,12 @@
- name: Allow outgoing traffic
ufw:
rule: allow
direction: outgoing
direction: out
- name: Allow SSH port through UFW
ufw:
rule: allow
direction: in
port: "{{ ssh_port }}"
proto: tcp
from_ip: "{{ allow_ssh_from if allow_ssh_from != 'any' else omit }}"


@ -0,0 +1,11 @@
- name: Install rsync
hosts: all
vars_files:
- ../infra_vars.yml
become: true
tasks:
- name: Install rsync
apt:
name: rsync
state: present


@ -0,0 +1,79 @@
- name: Install Docker and Docker Compose on Debian 12
hosts: all
become: yes
tasks:
- name: Remove old Docker-related packages
apt:
name:
- docker.io
- docker-doc
- docker-compose
- podman-docker
- containerd
- runc
state: absent
purge: yes
autoremove: yes
- name: Update apt cache
apt:
update_cache: yes
- name: Install prerequisites
apt:
name:
- ca-certificates
- curl
state: present
- name: Create directory for Docker GPG key
file:
path: /etc/apt/keyrings
state: directory
mode: '0755'
- name: Download Docker GPG key
get_url:
url: https://download.docker.com/linux/debian/gpg
dest: /etc/apt/keyrings/docker.asc
mode: '0644'
- name: Get Debian architecture
command: dpkg --print-architecture
register: deb_arch
- name: Add Docker repository
apt_repository:
repo: "deb [arch={{ deb_arch.stdout }} signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian {{ ansible_lsb.codename }} stable"
filename: docker
state: present
update_cache: yes
- name: Update apt cache
apt:
update_cache: yes
- name: Install Docker packages
apt:
name:
- docker-ce
- docker-ce-cli
- containerd.io
- docker-buildx-plugin
- docker-compose-plugin
state: present
update_cache: yes
- name: Ensure Docker is started and enabled
systemd:
name: docker
enabled: yes
state: started
- name: Add user to docker group
user:
name: "{{ ansible_user }}"
groups: docker
append: yes


@ -1,3 +1,4 @@
new_user: counterweight
ssh_port: 2222
ssh_port: 22
allow_ssh_from: "any"
root_domain: contrapeso.xyz


@ -0,0 +1,67 @@
- name: Install and configure Caddy on Debian 12
hosts: vipy
become: yes
tasks:
- name: Install required packages
apt:
name:
- debian-keyring
- debian-archive-keyring
- apt-transport-https
- curl
state: present
update_cache: yes
- name: Download Caddy GPG armored key
ansible.builtin.get_url:
url: https://dl.cloudsmith.io/public/caddy/stable/gpg.key
dest: /tmp/caddy-stable-archive-keyring.asc
mode: '0644'
- name: Convert ASCII armored key to binary keyring
ansible.builtin.command:
cmd: gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg /tmp/caddy-stable-archive-keyring.asc
args:
creates: /usr/share/keyrings/caddy-stable-archive-keyring.gpg
- name: Ensure permissions on keyring file
ansible.builtin.file:
path: /usr/share/keyrings/caddy-stable-archive-keyring.gpg
owner: root
group: root
mode: '0644'
- name: Add Caddy repository list file
ansible.builtin.get_url:
url: https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt
dest: /etc/apt/sources.list.d/caddy-stable.list
mode: '0644'
validate_certs: yes
- name: Update apt cache after adding repo
apt:
update_cache: yes
- name: Install Caddy
apt:
name: caddy
state: present
- name: Ensure Caddy service is enabled and started
systemd:
name: caddy
enabled: yes
state: started
- name: Allow HTTP through UFW
ufw:
rule: allow
port: '80'
proto: tcp
- name: Allow HTTPS through UFW
ufw:
rule: allow
port: '443'
proto: tcp


@ -0,0 +1,94 @@
- name: Install Forgejo on Debian 12 with Caddy reverse proxy
hosts: vipy
become: yes
vars:
forgejo_domain: "{{ forgejo_subdomain }}.{{ root_domain }}"
tasks:
- name: Ensure required packages are installed
apt:
name:
- git
- git-lfs
- wget
state: present
update_cache: true
- name: Download Forgejo binary
get_url:
url: "{{ forgejo_url }}"
dest: "/tmp/forgejo"
mode: '0755'
- name: Move Forgejo binary to /usr/local/bin
copy:
src: "/tmp/forgejo"
dest: "{{ forgejo_bin_path }}"
remote_src: yes
mode: '0755'
- name: Create git system user
user:
name: "{{ forgejo_user }}"
system: yes
shell: /bin/bash
home: "/home/{{ forgejo_user }}"
create_home: yes
comment: 'Git Version Control'
- name: Create Forgejo data directory
file:
path: "{{ forgejo_data_dir }}"
state: directory
owner: "{{ forgejo_user }}"
group: "{{ forgejo_user }}"
mode: '0750'
- name: Create Forgejo config directory
file:
path: "{{ forgejo_config_dir }}"
state: directory
owner: "root"
group: "{{ forgejo_user }}"
mode: '0770'
- name: Download Forgejo systemd service file
get_url:
url: "{{ forgejo_service_url }}"
dest: "/etc/systemd/system/forgejo.service"
mode: '0644'
- name: Reload systemd
systemd:
daemon_reload: yes
- name: Enable and start Forgejo service
systemd:
name: forgejo
enabled: yes
state: started
- name: Create Caddy reverse proxy configuration for Forgejo
  copy:
    dest: "{{ caddy_sites_dir }}/forgejo.conf"
    owner: root
    group: root
    mode: '0644'
    content: |
      {{ forgejo_domain }} {
          reverse_proxy localhost:3000
      }
- name: Reload Caddy to apply new config
service:
name: caddy
state: reloaded


@ -0,0 +1,23 @@
# General
forgejo_data_dir: "/var/lib/forgejo"
forgejo_config_dir: "/etc/forgejo"
forgejo_port: 7657
forgejo_service_url: "https://codeberg.org/forgejo/forgejo/raw/branch/forgejo/contrib/systemd/forgejo.service"
forgejo_version: "11.0.2"
forgejo_arch: "linux-amd64"
forgejo_url: "https://codeberg.org/forgejo/forgejo/releases/download/v{{ forgejo_version }}/forgejo-{{ forgejo_version }}-{{ forgejo_arch }}"
forgejo_bin_path: "/usr/local/bin/forgejo"
forgejo_user: "git"
# Caddy
caddy_sites_dir: /etc/caddy/sites-enabled
forgejo_subdomain: forgejo
# Remote access
remote_host: "{{ groups['vipy'][0] }}"
remote_user: "{{ hostvars[remote_host]['ansible_user'] }}"
remote_key_file: "{{ hostvars[remote_host]['ansible_ssh_private_key_file'] | default('') }}"
# Local backup
local_backup_dir: "{{ lookup('env', 'HOME') }}/forgejo-backups"
backup_script_path: "{{ lookup('env', 'HOME') }}/.local/bin/forgejo_backup.sh"


@ -0,0 +1,67 @@
- name: Deploy Uptime Kuma with Docker Compose and configure Caddy reverse proxy
hosts: vipy
become: yes
vars_files:
- ../../infra_vars.yml
- ./uptime_kuma_vars.yml
vars:
uptime_kuma_domain: "{{ uptime_kuma_subdomain }}.{{ root_domain }}"
tasks:
- name: Create uptime kuma directory
file:
path: "{{ uptime_kuma_dir }}"
state: directory
owner: "{{ ansible_user }}"
group: "{{ ansible_user }}"
mode: '0755'
- name: Create docker-compose.yml for uptime kuma
copy:
dest: "{{ uptime_kuma_dir }}/docker-compose.yml"
content: |
version: "3"
services:
uptime-kuma:
image: louislam/uptime-kuma:latest
container_name: uptime-kuma
restart: unless-stopped
ports:
- "{{ uptime_kuma_port }}:3001"
volumes:
- ./data:/app/data
- name: Deploy uptime kuma container with docker compose
command: docker compose up -d
args:
chdir: "{{ uptime_kuma_dir }}"
- name: Ensure Caddy sites-enabled directory exists
file:
path: /etc/caddy/sites-enabled
state: directory
owner: root
group: root
mode: '0755'
- name: Ensure Caddyfile includes import directive for sites-enabled
lineinfile:
path: /etc/caddy/Caddyfile
line: 'import sites-enabled/*'
insertafter: EOF
state: present
backup: yes
- name: Create Caddy reverse proxy configuration for uptime kuma
copy:
dest: "{{ caddy_sites_dir }}/uptime-kuma.conf"
content: |
{{ uptime_kuma_domain }} {
reverse_proxy localhost:{{ uptime_kuma_port }}
}
owner: root
group: root
mode: '0644'
- name: Reload Caddy to apply new config
command: systemctl reload caddy


@ -0,0 +1,65 @@
- name: Configure local backup for Uptime Kuma from remote
hosts: lapy
gather_facts: no
vars_files:
- ../../infra_vars.yml
- ./uptime_kuma_vars.yml
vars:
remote_data_path: "{{ uptime_kuma_data_dir }}"
local_backup_dir: "{{ lookup('env', 'HOME') }}/uptime-kuma-backups"
backup_script_path: "{{ lookup('env', 'HOME') }}/.local/bin/uptime_kuma_backup.sh"
tasks:
- name: Debug remote backup vars
debug:
msg:
- "remote_host={{ remote_host }}"
- "remote_user={{ remote_user }}"
- "remote_data_path='{{ remote_data_path }}'"
- "local_backup_dir={{ local_backup_dir }}"
- name: Ensure local backup directory exists
file:
path: "{{ local_backup_dir }}"
state: directory
mode: '0755'
- name: Ensure ~/.local/bin exists
file:
path: "{{ lookup('env', 'HOME') }}/.local/bin"
state: directory
mode: '0755'
- name: Create backup script
copy:
dest: "{{ backup_script_path }}"
mode: '0750'
content: |
#!/bin/bash
set -euo pipefail
TIMESTAMP=$(date +'%Y-%m-%d')
BACKUP_DIR="{{ local_backup_dir }}/$TIMESTAMP"
mkdir -p "$BACKUP_DIR"
{% if remote_key_file %}
SSH_CMD="ssh -i {{ remote_key_file }} -p {{ hostvars[remote_host]['ansible_port'] | default(22) }}"
{% else %}
SSH_CMD="ssh -p {{ hostvars[remote_host]['ansible_port'] | default(22) }}"
{% endif %}
rsync -az -e "$SSH_CMD" --delete {{ remote_user }}@{{ remote_host }}:{{ remote_data_path }}/ "$BACKUP_DIR/"
# Rotate old backups (keep 14 days)
find "{{ local_backup_dir }}" -maxdepth 1 -type d -name '20*' -mtime +13 -exec rm -rf {} \;
- name: Ensure cronjob for backup exists
cron:
name: "Uptime Kuma backup"
user: "{{ lookup('env', 'USER') }}"
job: "{{ backup_script_path }}"
minute: 0
hour: "9,12,15,18"
- name: Run the backup script to make the first backup
command: "{{ backup_script_path }}"


@ -0,0 +1,17 @@
# General
uptime_kuma_dir: /opt/uptime-kuma
uptime_kuma_data_dir: "{{ uptime_kuma_dir }}/data"
uptime_kuma_port: 3001
# Caddy
caddy_sites_dir: /etc/caddy/sites-enabled
uptime_kuma_subdomain: uptime
# Remote access
remote_host: "{{ groups['vipy'][0] }}"
remote_user: "{{ hostvars[remote_host]['ansible_user'] }}"
remote_key_file: "{{ hostvars[remote_host]['ansible_ssh_private_key_file'] | default('') }}"
# Local backup
local_backup_dir: "{{ lookup('env', 'HOME') }}/uptime-kuma-backups"
backup_script_path: "{{ lookup('env', 'HOME') }}/.local/bin/uptime_kuma_backup.sh"


@ -0,0 +1,108 @@
- name: Deploy Vaultwarden with Docker Compose and configure Caddy reverse proxy
hosts: vipy
become: yes
vars_files:
- ../../infra_vars.yml
- ./vaultwarden_vars.yml
vars:
vaultwarden_domain: "{{ vaultwarden_subdomain }}.{{ root_domain }}"
tasks:
- name: Create vaultwarden directory
file:
path: "{{ vaultwarden_dir }}"
state: directory
owner: "{{ ansible_user }}"
group: "{{ ansible_user }}"
mode: '0755'
- name: Create docker-compose.yml for vaultwarden
copy:
dest: "{{ vaultwarden_dir }}/docker-compose.yml"
content: |
version: "3"
services:
vaultwarden:
image: vaultwarden/server:latest
container_name: vaultwarden
restart: unless-stopped
ports:
- "{{ vaultwarden_port }}:80"
volumes:
- ./data:/data
environment:
WEBSOCKET_ENABLED: 'true'
DOMAIN: "https://{{ vaultwarden_domain }}"
SIGNUPS_ALLOWED: 'true'
LOG_FILE: /data/vaultwarden.log
- name: Deploy vaultwarden container with docker compose
command: docker compose up -d
args:
chdir: "{{ vaultwarden_dir }}"
- name: Create Fail2Ban filter for Vaultwarden
copy:
dest: /etc/fail2ban/filter.d/vaultwarden.local
owner: root
group: root
mode: '0644'
content: |
[INCLUDES]
before = common.conf
[Definition]
failregex = ^.*?Username or password is incorrect\. Try again\. IP: <ADDR>\. Username:.*$
ignoreregex =
- name: Create Fail2Ban jail for Vaultwarden
copy:
dest: /etc/fail2ban/jail.d/vaultwarden.local
owner: root
group: root
mode: '0644'
content: |
[vaultwarden]
enabled = true
port = http,https
filter = vaultwarden
logpath = {{ vaultwarden_data_dir }}/vaultwarden.log
maxretry = 10
findtime = 10m
bantime = 1h
- name: Restart fail2ban to apply changes
systemd:
name: fail2ban
state: restarted
- name: Ensure Caddy sites-enabled directory exists
file:
path: "{{ caddy_sites_dir }}"
state: directory
owner: root
group: root
mode: '0755'
- name: Ensure Caddyfile includes import directive for sites-enabled
lineinfile:
path: /etc/caddy/Caddyfile
line: 'import sites-enabled/*'
insertafter: EOF
state: present
backup: yes
- name: Create Caddy reverse proxy configuration for vaultwarden
copy:
dest: "{{ caddy_sites_dir }}/vaultwarden.conf"
content: |
{{ vaultwarden_domain }} {
reverse_proxy localhost:{{ vaultwarden_port }}
}
owner: root
group: root
mode: '0644'
- name: Reload Caddy to apply new config
command: systemctl reload caddy


@ -0,0 +1,18 @@
- name: Disable Vaultwarden Signups
hosts: vipy
become: yes
vars_files:
- ../../infra_vars.yml
- ./vaultwarden_vars.yml
tasks:
- name: Disable signups in docker-compose.yml
replace:
path: "{{ vaultwarden_dir }}/docker-compose.yml"
regexp: 'SIGNUPS_ALLOWED:.*'
replace: "SIGNUPS_ALLOWED: 'false'"
- name: Re-deploy Vaultwarden with signups disabled
command: docker compose up -d
args:
chdir: "{{ vaultwarden_dir }}"


@ -0,0 +1,63 @@
- name: Configure local backup for Vaultwarden from remote
hosts: lapy
gather_facts: no
vars_files:
- ../../infra_vars.yml
- ./vaultwarden_vars.yml
vars:
remote_data_path: "{{ vaultwarden_data_dir }}"
tasks:
- name: Debug remote backup vars
debug:
msg:
- "remote_host={{ remote_host }}"
- "remote_user={{ remote_user }}"
- "remote_data_path='{{ remote_data_path }}'"
- "local_backup_dir={{ local_backup_dir }}"
- name: Ensure local backup directory exists
file:
path: "{{ local_backup_dir }}"
state: directory
mode: '0755'
- name: Ensure ~/.local/bin exists
file:
path: "{{ lookup('env', 'HOME') }}/.local/bin"
state: directory
mode: '0755'
- name: Create backup script
copy:
dest: "{{ backup_script_path }}"
mode: '0750'
content: |
#!/bin/bash
set -euo pipefail
TIMESTAMP=$(date +'%Y-%m-%d')
BACKUP_DIR="{{ local_backup_dir }}/$TIMESTAMP"
mkdir -p "$BACKUP_DIR"
{% if remote_key_file %}
SSH_CMD="ssh -i {{ remote_key_file }} -p {{ hostvars[remote_host]['ansible_port'] | default(22) }}"
{% else %}
SSH_CMD="ssh -p {{ hostvars[remote_host]['ansible_port'] | default(22) }}"
{% endif %}
rsync -az -e "$SSH_CMD" --delete {{ remote_user }}@{{ remote_host }}:{{ remote_data_path }}/ "$BACKUP_DIR/"
# Rotate old backups (keep 14 days)
find "{{ local_backup_dir }}" -maxdepth 1 -type d -name '20*' -mtime +13 -exec rm -rf {} \;
- name: Ensure cronjob for backup exists
cron:
name: "Vaultwarden backup"
user: "{{ lookup('env', 'USER') }}"
job: "{{ backup_script_path }}"
minute: 5
hour: "9,12,15,18"
- name: Run the backup script to make the first backup
command: "{{ backup_script_path }}"


@ -0,0 +1,17 @@
# General
vaultwarden_dir: /opt/vaultwarden
vaultwarden_data_dir: "{{ vaultwarden_dir }}/data"
vaultwarden_port: 8222
# Caddy
caddy_sites_dir: /etc/caddy/sites-enabled
vaultwarden_subdomain: vault
# Remote access
remote_host: "{{ groups['vipy'][0] }}"
remote_user: "{{ hostvars[remote_host]['ansible_user'] }}"
remote_key_file: "{{ hostvars[remote_host]['ansible_ssh_private_key_file'] | default('') }}"
# Local backup
local_backup_dir: "{{ lookup('env', 'HOME') }}/vaultwarden-backups"
backup_script_path: "{{ lookup('env', 'HOME') }}/.local/bin/vaultwarden_backup.sh"


@ -1,2 +0,0 @@
[vipy]
your.vps.ip.here ansible_user=debian ansible_port=22

requirements.txt

@ -0,0 +1,10 @@
ansible==10.7.0
ansible-core==2.17.12
cffi==1.17.1
cryptography==45.0.4
Jinja2==3.1.6
MarkupSafe==3.0.2
packaging==25.0
pycparser==2.22
PyYAML==6.0.2
resolvelib==1.0.1