Commit 83fa331ae4 (parent 47baa9d238)
Author: counterweight, 2025-12-06 23:44:17 +01:00
Signed by: counterweight (GPG key ID: 883EDBAA726BD96C)
8 changed files with 359 additions and 7 deletions


@@ -117,6 +117,7 @@ Checklist
Checklist:
- [ ] You can see both the system healthcheck and disk usage check for all VPSs in the uptime kuma UI.
- [ ] The checks are all green after ~30 min.

## Vaultwarden
@@ -150,6 +151,12 @@ Vaultwarden is a credentials manager.
* Stop Vaultwarden.
* Overwrite the data folder with one of the backups.
* Start it up again.
* Be careful! Restoring a backup does not carry over the signup settings. If you deployed a fresh instance and then restored a backup, you still need to disable sign-ups manually, as described above. A minimal sketch of the restore sequence follows.
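A sketch of that sequence, assuming a systemd unit named `vaultwarden` and a data directory at `/var/lib/vaultwarden` (both hypothetical; substitute whatever your deployment actually uses):

```sh
# Unit name, data dir, and backup path are assumptions; adjust to your setup.
sudo systemctl stop vaultwarden
sudo rsync -a --delete /path/to/backups/vaultwarden-2025-12-01/ /var/lib/vaultwarden/
sudo systemctl start vaultwarden
# Remember to re-apply the signup-disabling step afterwards.
```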
Checklist:
- [ ] The service is reachable at the URL
- [ ] You have stored the admin creds properly
- [ ] You can't create another user at the /signup path

## Forgejo
@@ -168,6 +175,28 @@ Forgejo is a git server.
* You can tweak more settings from that point on.
* SSH cloning should work out of the box (after you've set up your SSH pub key in Forgejo, that is).
### Set up backups to Lapy
* Make sure rsync is available on the host and on Lapy.
* Ensure GPG is configured with a recipient in your inventory (the backup script requires `gpg_recipient` to be set).
* Run the backup playbook: `ansible-playbook -i inventory.ini services/forgejo/setup_backup_forgejo_to_lapy.yml`.
* A first backup runs immediately, and a cron job is then set up to refresh backups periodically. The script backs up both the data and config directories, and the backups are GPG-encrypted (hence the `gpg_recipient` requirement). Note that the Forgejo service is stopped during the backup to ensure consistency. A sketch of one run follows.
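In outline, each run looks roughly like this. Only the stop/start-over-SSH pattern, the `gpg_recipient` requirement, and the `forgejo-backup-YYYY-MM-DD.tar.gz.gpg` naming come from the script itself; the host alias, directories, and destination path are illustrative assumptions:

```sh
# Illustrative sketch, not the actual template; paths and host alias are assumptions.
ssh vipy "sudo systemctl stop forgejo"     # stop for consistency
ssh vipy "sudo tar -czf /tmp/forgejo-backup.tar.gz /var/lib/forgejo /etc/forgejo"
ssh vipy "sudo systemctl start forgejo"    # bring the service back up
rsync vipy:/tmp/forgejo-backup.tar.gz /tmp/   # pull the archive to Lapy
gpg --encrypt --recipient "$GPG_RECIPIENT" \
    --output ~/backups/forgejo/forgejo-backup-$(date +%F).tar.gz.gpg \
    /tmp/forgejo-backup.tar.gz
```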
### Restoring to a previous state
* Stop Forgejo.
* Decrypt the backup: `gpg --decrypt forgejo-backup-YYYY-MM-DD.tar.gz.gpg | tar -xzf -`
* Overwrite the data and config directories with the restored backup.
* Ensure that files in `/var/lib/forgejo/` are owned by the right user.
* Start Forgejo again.
* You may need to refresh the SSH public keys so your old SSH-based git remotes keep working: go to Site Administration, Dashboard, and run the task `Update the ".ssh/authorized_keys" file with Forgejo SSH keys.`. A consolidated sketch of the restore follows.
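Putting the steps together (the unit name `forgejo` and the `forgejo:forgejo` owner are assumptions; the decrypt command and `/var/lib/forgejo/` path come from the steps above):

```sh
sudo systemctl stop forgejo
gpg --decrypt forgejo-backup-2025-12-01.tar.gz.gpg | tar -xzf -
# Assumes the archive unpacks to ./var/lib/forgejo; adjust the source path if not.
sudo rsync -a --delete ./var/lib/forgejo/ /var/lib/forgejo/
sudo chown -R forgejo:forgejo /var/lib/forgejo
sudo systemctl start forgejo
```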
Checklist:
- [ ] Forgejo is accessible at the FQDN
- [ ] You have stored the admin credentials properly
- [ ] The backup script runs fine
- [ ] SSH cloning works after setting up your SSH pub key

## LNBits
@@ -175,7 +204,7 @@ LNBits is a Lightning Network wallet and accounts system.
### Deploy
* Decide what subdomain you want to serve LNBits on and add it to `ansible/services_config.yml` under `lnbits` (see the snippet after this list).
* Note that you will have to add a DNS entry to point to the VPS public IP.
* Run the deployment playbook: `ansible-playbook -i inventory.ini services/lnbits/deploy_lnbits_playbook.yml`.
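For reference, the mapping lives in the `subdomains` block of `ansible/services_config.yml`; after this commit (see the last file in this diff) it reads:

```yaml
subdomains:
  lnbits: wallet
```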
@@ -265,3 +294,31 @@ Headscale is a self-hosted Tailscale control server that allows you to create yo
* View users: `headscale users list`
* Generate new pre-auth keys: `headscale preauthkeys create --user counter-net --reusable`
* Remove a device: `headscale nodes delete --identifier <node-id>`
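To enroll a device with one of those pre-auth keys, point the stock Tailscale client at your Headscale server (the URL below is a placeholder):

```sh
# Standard tailscale client flags; substitute your actual Headscale URL and key.
tailscale up --login-server https://headscale.example.com --authkey <pre-auth-key>
```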
## Personal Blog
Personal Blog is a static site served by Caddy's file server.
### Deploy
* Decide what subdomain you want to serve the personal blog on and add it to `ansible/services_config.yml` under `personal_blog`.
* Note that you will have to add a DNS entry to point to the VPS public IP.
* Run the deployment playbook: `ansible-playbook -i inventory.ini services/personal-blog/deploy_personal_blog_playbook.yml`.
* The playbook will:
* Create the web root directory at `/var/www/pablohere.contrapeso.xyz` (or your configured domain)
* Set up the Caddy configuration with the `file_server` directive (shown below)
* Create an Uptime Kuma monitor for the site
* Configure proper permissions for deployment
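The generated site config (shown in full in the playbook diff below) boils down to:

```
pablohere.contrapeso.xyz {
    root * /var/www/pablohere.contrapeso.xyz
    file_server
}
```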
### Set up deployment alias on Lapy
* Run the deployment alias setup playbook: `ansible-playbook -i inventory.ini services/personal-blog/setup_deploy_alias_lapy.yml`.
* This creates a `deploy-personal-blog` alias in your `.bashrc` that allows you to deploy your static site from `~/pablohere/public/` to the server.
* Source your `.bashrc` or open a new terminal to use the alias: `source ~/.bashrc`
* Deploy your site by running: `deploy-personal-blog`
* The alias copies the files via scp to a temporary location, then moves them into the web root with sudo and fixes ownership and permissions; the expansion is shown below.
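Expanded with the default inventory values from `personal_blog_vars.yml` (user `counterweight`, host `vipy`), the alias runs roughly:

```sh
ssh counterweight@vipy "mkdir -p /tmp/blog-deploy"
scp -r ~/pablohere/public/* counterweight@vipy:/tmp/blog-deploy/
ssh counterweight@vipy "sudo rm -rf /var/www/pablohere.contrapeso.xyz/* \
  && sudo cp -r /tmp/blog-deploy/* /var/www/pablohere.contrapeso.xyz/ \
  && sudo rm -rf /tmp/blog-deploy \
  && sudo chown -R counterweight:www-data /var/www/pablohere.contrapeso.xyz \
  && sudo find /var/www/pablohere.contrapeso.xyz -type d -exec chmod 2775 {} \; \
  && sudo find /var/www/pablohere.contrapeso.xyz -type f -exec chmod 664 {} \;"
```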
Checklist:
- [ ] Personal blog is accessible at the FQDN
- [ ] Uptime Kuma monitor for the blog is showing as healthy
- [ ] Deployment alias is working and you can successfully deploy files


@@ -55,3 +55,5 @@ This document describes which playbooks each setup script applies to which machines

services/forgejo/setup_backup_forgejo_to_lapy.yml

@@ -68,8 +68,18 @@
echo "Starting Forgejo service..."
$SSH_CMD {{ remote_user }}@{{ remote_host }} "sudo systemctl start {{ forgejo_service_name }}"
echo "Rotating old backups..." # Rotate old backups (keep 3 days)
find "{{ local_backup_dir }}" -name "forgejo-backup-*.tar.gz.gpg" -mtime +13 -delete # Calculate cutoff date (3 days ago) and delete backups older than that
CUTOFF_DATE=$(date -d '3 days ago' +'%Y-%m-%d')
for backup_file in "{{ local_backup_dir }}"/forgejo-backup-*.tar.gz.gpg; do
if [ -f "$backup_file" ]; then
# Extract date from filename: forgejo-backup-YYYY-MM-DD.tar.gz.gpg
file_date=$(basename "$backup_file" | sed -n 's/forgejo-backup-\([0-9]\{4\}-[0-9]\{2\}-[0-9]\{2\}\)\.tar\.gz\.gpg/\1/p')
if [ -n "$file_date" ] && [ "$file_date" != "$TIMESTAMP" ] && [ "$file_date" \< "$CUTOFF_DATE" ]; then
rm -f "$backup_file"
fi
fi
done
echo "Backup completed successfully" echo "Backup completed successfully"
@@ -84,3 +94,29 @@
- name: Run Forgejo backup script to create initial backup
  ansible.builtin.command: "{{ backup_script_path }}"

- name: Verify backup was created
  block:
    - name: Get today's date
      command: date +'%Y-%m-%d'
      register: today_date
      changed_when: false

    - name: Check if backup file exists
      stat:
        path: "{{ local_backup_dir }}/forgejo-backup-{{ today_date.stdout }}.tar.gz.gpg"
      register: backup_file_stat

    - name: Verify backup file exists
      assert:
        that:
          - backup_file_stat.stat.exists
          - backup_file_stat.stat.isreg
        fail_msg: "Backup file {{ local_backup_dir }}/forgejo-backup-{{ today_date.stdout }}.tar.gz.gpg was not created"
        success_msg: "Backup file {{ local_backup_dir }}/forgejo-backup-{{ today_date.stdout }}.tar.gz.gpg exists"

    - name: Verify backup file is not empty
      assert:
        that:
          - backup_file_stat.stat.size > 0
        fail_msg: "Backup file {{ local_backup_dir }}/forgejo-backup-{{ today_date.stdout }}.tar.gz.gpg exists but is empty"
        success_msg: "Backup file size is {{ backup_file_stat.stat.size }} bytes"


@@ -68,9 +68,27 @@
echo "Starting LNBits service..."
$SSH_CMD {{ remote_user }}@{{ remote_host }} "sudo systemctl start lnbits.service"
# Rotate old backups (keep 14 days)
# Calculate cutoff date (14 days ago) and delete backups older than that
CUTOFF_DATE=$(date -d '14 days ago' +'%Y-%m-%d')
for backup_file in "{{ local_backup_dir }}"/lnbits-backup-*.tar.gz.gpg; do
  if [ -f "$backup_file" ]; then
    # Extract date from filename: lnbits-backup-YYYY-MM-DD.tar.gz.gpg
    file_date=$(basename "$backup_file" | sed -n 's/lnbits-backup-\([0-9]\{4\}-[0-9]\{2\}-[0-9]\{2\}\)\.tar\.gz\.gpg/\1/p')
    if [ -n "$file_date" ] && [ "$file_date" != "$TIMESTAMP" ] && [ "$file_date" \< "$CUTOFF_DATE" ]; then
      rm -f "$backup_file"
    fi
  fi
done

for env_file in "{{ local_backup_dir }}"/lnbits-env-*.gpg; do
  if [ -f "$env_file" ]; then
    # Extract date from filename: lnbits-env-YYYY-MM-DD.gpg
    file_date=$(basename "$env_file" | sed -n 's/lnbits-env-\([0-9]\{4\}-[0-9]\{2\}-[0-9]\{2\}\)\.gpg/\1/p')
    if [ -n "$file_date" ] && [ "$file_date" != "$TIMESTAMP" ] && [ "$file_date" \< "$CUTOFF_DATE" ]; then
      rm -f "$env_file"
    fi
  fi
done

echo "Backup completed successfully"

services/personal-blog/deploy_personal_blog_playbook.yml

@@ -0,0 +1,189 @@
- name: Deploy personal blog static site with Caddy file server
  hosts: vipy
  become: yes
  vars_files:
    - ../../infra_vars.yml
    - ../../services_config.yml
    - ../../infra_secrets.yml
    - ./personal_blog_vars.yml
  vars:
    personal_blog_subdomain: "{{ subdomains.personal_blog }}"
    personal_blog_domain: "{{ personal_blog_subdomain }}.{{ root_domain }}"
    uptime_kuma_api_url: "https://{{ subdomains.uptime_kuma }}.{{ root_domain }}"

  tasks:
    - name: Ensure user is in www-data group
      user:
        name: "{{ ansible_user }}"
        groups: www-data
        append: yes

    - name: Create web root directory for personal blog
      file:
        path: "{{ personal_blog_web_root }}"
        state: directory
        owner: "{{ ansible_user }}"
        group: www-data
        mode: '2775'

    - name: Fix ownership and permissions on web root directory
      shell: |
        chown -R {{ ansible_user }}:www-data {{ personal_blog_web_root }}
        find {{ personal_blog_web_root }} -type d -exec chmod 2775 {} \;
        find {{ personal_blog_web_root }} -type f -exec chmod 664 {} \;

    - name: Create placeholder index.html
      copy:
        dest: "{{ personal_blog_web_root }}/index.html"
        content: |
          <!DOCTYPE html>
          <html>
          <head>
            <title>Personal Blog</title>
          </head>
          <body>
            <h1>Personal Blog</h1>
            <p>Site is ready. Deploy your static files here.</p>
          </body>
          </html>
        owner: "{{ ansible_user }}"
        group: www-data
        mode: '0664'

    - name: Ensure Caddy sites-enabled directory exists
      file:
        path: "{{ caddy_sites_dir }}"
        state: directory
        owner: root
        group: root
        mode: '0755'

    - name: Ensure Caddyfile includes import directive for sites-enabled
      lineinfile:
        path: /etc/caddy/Caddyfile
        line: 'import sites-enabled/*'
        insertafter: EOF
        state: present
        backup: yes

    - name: Create Caddy file server configuration for personal blog
      copy:
        dest: "{{ caddy_sites_dir }}/personal-blog.conf"
        content: |
          {{ personal_blog_domain }} {
              root * {{ personal_blog_web_root }}
              file_server
          }
        owner: root
        group: root
        mode: '0644'

    - name: Reload Caddy to apply new config
      command: systemctl reload caddy
    - name: Create Uptime Kuma monitor setup script for Personal Blog
      delegate_to: localhost
      become: no
      copy:
        dest: /tmp/setup_personal_blog_monitor.py
        content: |
          #!/usr/bin/env python3
          import sys
          import yaml
          from uptime_kuma_api import UptimeKumaApi, MonitorType

          try:
              with open('/tmp/ansible_config.yml', 'r') as f:
                  config = yaml.safe_load(f)

              url = config['uptime_kuma_url']
              username = config['username']
              password = config['password']
              monitor_url = config['monitor_url']
              monitor_name = config['monitor_name']

              api = UptimeKumaApi(url, timeout=30)
              api.login(username, password)

              # Get all monitors
              monitors = api.get_monitors()

              # Find or create "services" group
              group = next((m for m in monitors if m.get('name') == 'services' and m.get('type') == 'group'), None)
              if not group:
                  api.add_monitor(type=MonitorType.GROUP, name='services')
                  # Refresh to get the group with id
                  monitors = api.get_monitors()
                  group = next((m for m in monitors if m.get('name') == 'services' and m.get('type') == 'group'), None)

              # Check if monitor already exists
              existing_monitor = None
              for monitor in monitors:
                  if monitor.get('name') == monitor_name:
                      existing_monitor = monitor
                      break

              # Get ntfy notification ID
              notifications = api.get_notifications()
              ntfy_notification_id = None
              for notif in notifications:
                  if notif.get('type') == 'ntfy':
                      ntfy_notification_id = notif.get('id')
                      break

              if existing_monitor:
                  print(f"Monitor '{monitor_name}' already exists (ID: {existing_monitor['id']})")
                  print("Skipping - monitor already configured")
              else:
                  print(f"Creating monitor '{monitor_name}'...")
                  api.add_monitor(
                      type=MonitorType.HTTP,
                      name=monitor_name,
                      url=monitor_url,
                      parent=group['id'],
                      interval=60,
                      maxretries=3,
                      retryInterval=60,
                      notificationIDList={ntfy_notification_id: True} if ntfy_notification_id else {}
                  )

              api.disconnect()
              print("SUCCESS")
          except Exception as e:
              print(f"ERROR: {str(e)}", file=sys.stderr)
              sys.exit(1)
        mode: '0755'
    - name: Create temporary config for monitor setup
      delegate_to: localhost
      become: no
      copy:
        dest: /tmp/ansible_config.yml
        content: |
          uptime_kuma_url: "{{ uptime_kuma_api_url }}"
          username: "{{ uptime_kuma_username }}"
          password: "{{ uptime_kuma_password }}"
          monitor_url: "https://{{ personal_blog_domain }}"
          monitor_name: "Personal Blog"
        mode: '0644'

    - name: Run Uptime Kuma monitor setup
      command: python3 /tmp/setup_personal_blog_monitor.py
      delegate_to: localhost
      become: no
      register: monitor_setup
      changed_when: "'SUCCESS' in monitor_setup.stdout"
      ignore_errors: yes

    - name: Clean up temporary files
      delegate_to: localhost
      become: no
      file:
        path: "{{ item }}"
        state: absent
      loop:
        - /tmp/setup_personal_blog_monitor.py
        - /tmp/ansible_config.yml

services/personal-blog/personal_blog_vars.yml

@@ -0,0 +1,16 @@
# Personal Blog Configuration

# Web root directory on server
personal_blog_web_root: "/var/www/pablohere.contrapeso.xyz"

# Remote access for deployment
remote_host_name: "vipy"
remote_host: "{{ hostvars.get(remote_host_name, {}).get('ansible_host', remote_host_name) }}"
remote_user: "{{ hostvars.get(remote_host_name, {}).get('ansible_user', 'counterweight') }}"
remote_key_file: "{{ hostvars.get(remote_host_name, {}).get('ansible_ssh_private_key_file', '') }}"
remote_port: "{{ hostvars.get(remote_host_name, {}).get('ansible_port', 22) }}"

# Local deployment paths
local_source_dir: "{{ lookup('env', 'HOME') }}/pablohere/public"
deploy_alias_name: "deploy-personal-blog"

services/personal-blog/setup_deploy_alias_lapy.yml

@@ -0,0 +1,33 @@
- name: Configure deployment alias for personal blog in lapy .bashrc
  hosts: lapy
  gather_facts: no
  vars_files:
    - ../../infra_vars.yml
    - ./personal_blog_vars.yml
  vars:
    bashrc_path: "{{ lookup('env', 'HOME') }}/.bashrc"
    # The leading mkdir ensures the staging dir exists (the deploy itself removes it at the end).
    alias_line: "alias {{ deploy_alias_name }}='ssh {{ remote_user }}@{{ remote_host }} \"mkdir -p /tmp/blog-deploy\" && scp -r {{ local_source_dir }}/* {{ remote_user }}@{{ remote_host }}:/tmp/blog-deploy/ && ssh {{ remote_user }}@{{ remote_host }} \"sudo rm -rf {{ personal_blog_web_root }}/* && sudo cp -r /tmp/blog-deploy/* {{ personal_blog_web_root }}/ && sudo rm -rf /tmp/blog-deploy && sudo chown -R {{ remote_user }}:www-data {{ personal_blog_web_root }} && sudo find {{ personal_blog_web_root }} -type d -exec chmod 2775 {} \\; && sudo find {{ personal_blog_web_root }} -type f -exec chmod 664 {} \\;\"'"
  tasks:
    - name: Remove any existing deployment alias from .bashrc (to avoid duplicates)
      lineinfile:
        path: "{{ bashrc_path }}"
        regexp: "^alias {{ deploy_alias_name }}="
        state: absent
        backup: yes

    - name: Add or update deployment alias in .bashrc
      lineinfile:
        path: "{{ bashrc_path }}"
        line: "{{ alias_line }}"
        backup: yes
        insertafter: EOF

    - name: Display deployment alias information
      debug:
        msg:
          - "Deployment alias '{{ deploy_alias_name }}' has been configured in {{ bashrc_path }}"
          - "Usage: {{ deploy_alias_name }}"
          - "This will scp {{ local_source_dir }}/* to {{ remote_user }}@{{ remote_host }}:{{ personal_blog_web_root }}/"
          - "Note: You may need to run 'source ~/.bashrc' or open a new terminal to use the alias"

ansible/services_config.yml

@@ -13,10 +13,11 @@ subdomains:
  # Core Services (on vipy)
  vaultwarden: vault
  forgejo: forgejo
  lnbits: wallet
  # Secondary Services (on vipy)
  ntfy_emergency_app: emergency
  personal_blog: pablohere
  # Memos (on memos-box)
  memos: memos