Commit 3d3d65575b ("lots of stuff", parent dac4a98f79): 11 changed files with 296 additions and 17 deletions.
@@ -1,8 +1,8 @@

# 01 Infra Setup

This describes how to prepare each machine before deploying services on them.

## First steps

* Create an ssh key or pick an existing one. We'll refer to it as the `personal_ssh_key`.
* Deploy ansible on the laptop (Lapy), which will act as the ansible control node. To do so:
@@ -11,26 +11,35 @@ This describes how to prepare each machine before deploying services on them.

* Install the listed ansible requirements with `pip install -r requirements.txt`
* Keep in mind you should activate this `venv` from now on when running `ansible` commands.
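The control-node bootstrap these bullets assume can be sketched as follows (the `.venv` location is an assumption, not from the repo):

```shell
# Minimal control-node bootstrap sketch; the .venv path is an assumption.
python3 -m venv .venv
# Activate the venv; repeat this in every new shell before running ansible.
. .venv/bin/activate
# With the repo checked out, you would now install the pinned requirements:
#   pip install -r requirements.txt
# Sanity check that python now resolves from inside the venv:
python -c 'import sys; print(sys.prefix)'
```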
## Domain

* Some services are designed to be accessible through WAN through a friendly URL.
* You'll need a domain where you can set DNS records and create different subdomains, as the guide assumes each service will get its own subdomain.
* Getting and configuring the domain is outside the scope of this repo. Whenever a service needs you to set up a subdomain, it will be mentioned explicitly.
* You should add the domain to the var `root_domain` in `ansible/infra_vars.yml`.

## Prepare the VPS (Vipy)

### Source the VPS

* The guide is agnostic to which provider you pick, but has been tested with VMs from https://lnvps.net.
* The expectations are that the VPS ticks the following boxes:
  + Runs Debian 12 bookworm.
  + Has a public IPv4 and starts out with SSH listening on port 22.
  + Boots with one of your SSH keys already authorized. If this is not the case, you'll have to manually drop the pubkey there before using the playbooks.
* Move on once your VPS is running and satisfies the prerequisites.
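"Dropping the pubkey" amounts to appending it to the remote user's `authorized_keys`. A local illustration (the key material and key name are placeholders; on a real VPS you would target `/root/.ssh`, or simply use `ssh-copy-id`):

```shell
# Illustration only: append a placeholder pubkey to an authorized_keys file.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
echo "ssh-ed25519 AAAAC3...EXAMPLE personal_ssh_key" >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```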

### Prepare Ansible vars

* You have an example `ansible/example.inventory.ini`. Copy it with `cp ansible/example.inventory.ini ansible/inventory.ini` and fill it in with the values for your VPS.
* A few notes:
  * The guides assume you'll only have one VPS in the `[vipy]` group. Stuff will break if you have multiple, so avoid that.

### Create user and secure VPS access

* Ansible will create a user via the first playbook, `01_user_and_access_setup_playbook.yml`. This is the user that will get used regularly. But, since this user doesn't exist yet, you obviously need to run this first playbook as some other user. We assume your VPS provider has given you a root user, which is what you need to define as the running user in the next command.
* cd into `ansible`
* Run `ansible-playbook -i inventory.ini infra/01_user_and_access_setup_playbook.yml -e 'ansible_user="your root user here"'`
* Then, configure firewall access, fail2ban and auditd with `ansible-playbook -i inventory.ini infra/02_firewall_and_fail2ban_playbook.yml`. Since the user we will use is now present, there is no need to specify the user anymore.

Note that, by applying these playbooks, both the root user and the `counterweight` user will use the same SSH pubkey for auth.

@@ -1,10 +1,35 @@

# 02 VPS Core Services Setup

Now that Vipy is ready, we need to deploy some basic services which are foundational for the apps we're actually interested in.

This assumes you've completed the markdown `01`.
## General tools

This repo contains some rather general tools that you may or may not need depending on what services you want to deploy and what device you're working on. These tools can be installed with the `900` group of playbooks sitting at `ansible/infra`.

By default, these playbooks are configured for `hosts: all`. If you want to limit which hosts a run targets, use the `--limit groupname` flag when running the playbook.

Below you have notes on adding each specific tool to a device.

### rsync

Simply run the playbook:

```
ansible-playbook -i inventory.ini infra/900_install_rsync.yml
```

### docker and compose

Simply run the playbook:

```
ansible-playbook -i inventory.ini infra/910_docker_playbook.yml
```

## Deploy Caddy

* Use Ansible to run the caddy playbook:

@@ -15,4 +40,58 @@ This assumes you've completed the markdown `01`.

* Starting config will be empty. Modifying the caddy config file to add endpoints as we add services is covered by the instructions of each service.
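For reference, the per-service config dropped into Caddy later in this guide follows this shape (the domain and port here are placeholders; each service's playbook writes its own file under `sites-enabled`):

```
service.example.com {
    reverse_proxy localhost:8080
}
```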

## Uptime Kuma

Uptime Kuma gets used to monitor the availability of services, keep track of their uptime and notify about issues.

### Deploy

* Decide what subdomain you want to serve Uptime Kuma on and add it to `services/uptime_kuma/uptime_kuma_vars.yml` under the `uptime_kuma_subdomain` var.
* Make sure docker is available on the host.
* Run the deployment playbook: `ansible-playbook -i inventory.ini services/uptime_kuma/deploy_uptime_kuma_playbook.yml`.

### Set up backups to Lapy

* Make sure rsync is available on the host and on Lapy.
* Run the backup playbook: `ansible-playbook -i inventory.ini services/uptime_kuma/setup_backup_uptime_kuma_to_lapy.yml`.
* A first backup process gets executed and then a cronjob is set up to refresh backups periodically.

### Configure

* Uptime Kuma will be available for you to create a user on first start. Do that and store the creds safely.
* From that point on, you can configure it through the Web UI.

### Restoring to a previous state

* Stop Uptime Kuma.
* Overwrite the data folder with one of the backups.
* Start it up again.
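The restore steps can be sketched as plain filesystem operations (all paths, the backup date, and the placeholder file name are illustration-only assumptions based on the backup layout, not commands from the repo):

```shell
# Stand-in dirs for the sketch; the real ones live on the host and under
# ~/uptime-kuma-backups on Lapy.
mkdir -p /tmp/uptime-kuma/data /tmp/uptime-kuma-backups/2025-01-01
echo 'db-from-backup' > /tmp/uptime-kuma-backups/2025-01-01/kuma.db  # placeholder file
# 1. Stop Uptime Kuma (on the host):  docker compose down
# 2. Overwrite the data folder with the chosen backup:
rm -rf /tmp/uptime-kuma/data
cp -a /tmp/uptime-kuma-backups/2025-01-01 /tmp/uptime-kuma/data
# 3. Start it up again:               docker compose up -d
```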

## Vaultwarden

Vaultwarden is a credentials manager.

### Deploy

* Decide what subdomain you want to serve Vaultwarden on and add it to `services/vaultwarden/vaultwarden_vars.yml` under the `vaultwarden_subdomain` var.
* Make sure docker is available on the host.
* Run the deployment playbook: `ansible-playbook -i inventory.ini services/vaultwarden/deploy_vaultwarden_playbook.yml`.

### Set up backups to Lapy

* Make sure rsync is available on the host and on Lapy.
* Run the backup playbook: `ansible-playbook -i inventory.ini services/vaultwarden/setup_backup_vaultwarden_to_lapy.yml`.
* A first backup process gets executed and then a cronjob is set up to refresh backups periodically.

### Configure

* Vaultwarden will be available for you to create a user on first start. Do that and store the creds safely.
* From that point on, you can configure it through the Web UI.

### Restoring to a previous state

* Stop Vaultwarden.
* Overwrite the data folder with one of the backups.
* Start it up again.

@@ -2,6 +2,10 @@

My repo documenting my personal infra, along with artifacts, scripts, etc.

## How to use

Go through the different numbered markdowns in the repo root to do the different parts.

## Overview

### Services

@@ -1,5 +1,7 @@

[vipy]
your.vps.ip.here ansible_user=counterweight ansible_port=22 ansible_ssh_private_key_file=~/.ssh/your-key

# Local connection to laptop: this assumes you're running ansible commands from your personal laptop
# Make sure to adjust the username
[lapy]
localhost ansible_connection=local ansible_user=your_laptop_user

@@ -15,6 +15,3 @@ remote_key_file: "{{ hostvars[remote_host]['ansible_ssh_private_key_file'] | default('') }}"

# Local backup
local_backup_dir: "{{ lookup('env', 'HOME') }}/uptime-kuma-backups"
backup_script_path: "{{ lookup('env', 'HOME') }}/.local/bin/uptime_kuma_backup.sh"
-
-# Encryption
-pgp_recipient: "your-gpg-id@example.com" # Replace this with your actual GPG email or ID

ansible/services/vaultwarden/deploy_vaultwarden_playbook.yml (new file, 108 lines)

@@ -0,0 +1,108 @@
- name: Deploy Vaultwarden with Docker Compose and configure Caddy reverse proxy
  hosts: vipy
  become: yes
  vars_files:
    - ../../infra_vars.yml
    - ./vaultwarden_vars.yml
  vars:
    vaultwarden_domain: "{{ vaultwarden_subdomain }}.{{ root_domain }}"

  tasks:
    - name: Create vaultwarden directory
      file:
        path: "{{ vaultwarden_dir }}"
        state: directory
        owner: "{{ ansible_user }}"
        group: "{{ ansible_user }}"
        mode: '0755'

    - name: Create docker-compose.yml for vaultwarden
      copy:
        dest: "{{ vaultwarden_dir }}/docker-compose.yml"
        content: |
          version: "3"
          services:
            vaultwarden:
              image: vaultwarden/server:latest
              container_name: vaultwarden
              restart: unless-stopped
              ports:
                - "{{ vaultwarden_port }}:80"
              volumes:
                - ./data:/data
              environment:
                WEBSOCKET_ENABLED: 'true'
                DOMAIN: "https://{{ vaultwarden_domain }}"
                SIGNUPS_ALLOWED: 'true'
                LOG_FILE: /data/vaultwarden.log

    - name: Deploy vaultwarden container with docker compose
      command: docker compose up -d
      args:
        chdir: "{{ vaultwarden_dir }}"

    - name: Create Fail2Ban filter for Vaultwarden
      copy:
        dest: /etc/fail2ban/filter.d/vaultwarden.local
        owner: root
        group: root
        mode: '0644'
        content: |
          [INCLUDES]
          before = common.conf

          [Definition]
          failregex = ^.*?Username or password is incorrect\. Try again\. IP: <ADDR>\. Username:.*$
          ignoreregex =

    - name: Create Fail2Ban jail for Vaultwarden
      copy:
        dest: /etc/fail2ban/jail.d/vaultwarden.local
        owner: root
        group: root
        mode: '0644'
        content: |
          [vaultwarden]
          enabled = true
          port = http,https
          filter = vaultwarden
          logpath = {{ vaultwarden_data_dir }}/vaultwarden.log
          maxretry = 10
          findtime = 10m
          bantime = 1h

    - name: Restart fail2ban to apply changes
      systemd:
        name: fail2ban
        state: restarted

    - name: Ensure Caddy sites-enabled directory exists
      file:
        path: "{{ caddy_sites_dir }}"
        state: directory
        owner: root
        group: root
        mode: '0755'

    - name: Ensure Caddyfile includes import directive for sites-enabled
      lineinfile:
        path: /etc/caddy/Caddyfile
        line: 'import sites-enabled/*'
        insertafter: EOF
        state: present
        backup: yes

    - name: Create Caddy reverse proxy configuration for vaultwarden
      copy:
        dest: "{{ caddy_sites_dir }}/vaultwarden.conf"
        content: |
          {{ vaultwarden_domain }} {
              reverse_proxy localhost:{{ vaultwarden_port }}
          }
        owner: root
        group: root
        mode: '0644'

    - name: Reload Caddy to apply new config
      command: systemctl reload caddy
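The `failregex` in the fail2ban filter above can be sanity-checked without fail2ban by grepping a sample line. The log line below is a made-up example of the failed-login message, and the fail2ban-specific `<ADDR>` placeholder is swapped for an IP character class:

```shell
# Hypothetical failed-login log line matching the message in the failregex:
LINE='[vaultwarden::api::identity][ERROR] Username or password is incorrect. Try again. IP: 203.0.113.7. Username: user@example.com.'
# <ADDR> replaced by [0-9.]+ for plain grep:
echo "$LINE" | grep -Eq 'Username or password is incorrect\. Try again\. IP: [0-9.]+\. Username:' \
  && echo 'regex matches'
```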

ansible/services/vaultwarden/setup_backup_vaultwarden_to_lapy.yml (new file)

@@ -0,0 +1,63 @@
- name: Configure local backup for Vaultwarden from remote
  hosts: lapy
  gather_facts: no
  vars_files:
    - ../../infra_vars.yml
    - ./vaultwarden_vars.yml
  vars:
    remote_data_path: "{{ vaultwarden_data_dir }}"

  tasks:
    - name: Debug remote backup vars
      debug:
        msg:
          - "remote_host={{ remote_host }}"
          - "remote_user={{ remote_user }}"
          - "remote_data_path='{{ remote_data_path }}'"
          - "local_backup_dir={{ local_backup_dir }}"

    - name: Ensure local backup directory exists
      file:
        path: "{{ local_backup_dir }}"
        state: directory
        mode: '0755'

    - name: Ensure ~/.local/bin exists
      file:
        path: "{{ lookup('env', 'HOME') }}/.local/bin"
        state: directory
        mode: '0755'

    - name: Create backup script
      copy:
        dest: "{{ backup_script_path }}"
        mode: '0750'
        content: |
          #!/bin/bash
          set -euo pipefail

          TIMESTAMP=$(date +'%Y-%m-%d')
          BACKUP_DIR="{{ local_backup_dir }}/$TIMESTAMP"
          mkdir -p "$BACKUP_DIR"

          {% if remote_key_file %}
          SSH_CMD="ssh -i {{ remote_key_file }} -p {{ hostvars[remote_host]['ansible_port'] | default(22) }}"
          {% else %}
          SSH_CMD="ssh -p {{ hostvars[remote_host]['ansible_port'] | default(22) }}"
          {% endif %}

          rsync -az -e "$SSH_CMD" --delete {{ remote_user }}@{{ remote_host }}:{{ remote_data_path }}/ "$BACKUP_DIR/"

          # Rotate old backups (keep 14 days)
          find "{{ local_backup_dir }}" -maxdepth 1 -type d -name '20*' -mtime +13 -exec rm -rf {} \;

    - name: Ensure cronjob for backup exists
      cron:
        name: "Vaultwarden backup"
        user: "{{ lookup('env', 'USER') }}"
        job: "{{ backup_script_path }}"
        minute: 5
        hour: "9,12,15,18"

    - name: Run the backup script to make the first backup
      command: "{{ backup_script_path }}"
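The rotation scheme in the backup script (date-stamped daily snapshot dirs, keep 14 days) can be exercised locally. The demo directory and the GNU `touch -d` backdating are illustration-only assumptions:

```shell
BACKUP_ROOT=/tmp/vw-backups-demo
mkdir -p "$BACKUP_ROOT/$(date +'%Y-%m-%d')"   # today's snapshot
mkdir -p "$BACKUP_ROOT/2020-01-01"            # a stale snapshot
touch -d '20 days ago' "$BACKUP_ROOT/2020-01-01"
# Same rotation as the script: drop date-named dirs older than 14 days.
find "$BACKUP_ROOT" -maxdepth 1 -type d -name '20*' -mtime +13 -exec rm -rf {} \;
```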

ansible/services/vaultwarden/vaultwarden_vars.yml (new file, 17 lines)

@@ -0,0 +1,17 @@
# General
vaultwarden_dir: /opt/vaultwarden
vaultwarden_data_dir: "{{ vaultwarden_dir }}/data"
vaultwarden_port: 8222

# Caddy
caddy_sites_dir: /etc/caddy/sites-enabled
vaultwarden_subdomain: vault

# Remote access
remote_host: "{{ groups['vipy'][0] }}"
remote_user: "{{ hostvars[remote_host]['ansible_user'] }}"
remote_key_file: "{{ hostvars[remote_host]['ansible_ssh_private_key_file'] | default('') }}"

# Local backup
local_backup_dir: "{{ lookup('env', 'HOME') }}/vaultwarden-backups"
backup_script_path: "{{ lookup('env', 'HOME') }}/.local/bin/vaultwarden_backup.sh"