# 02 VPS Core Services Setup
Now that the VPSs are ready, we need to deploy some basic services that the apps we're actually interested in depend on.

This assumes you've completed the steps in document 01.
## General tools
This repo contains some rather general tools that you may or may not need, depending on which services you want to deploy and which device you're working on. These tools can be installed with the 900 group of playbooks sitting at `ansible/infra`.

By default, these playbooks are configured for `hosts: all`. If you want to limit a run to a subset of hosts, use the `--limit groupname` flag when running the playbook, as in the example below.
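For example, assuming your `inventory.ini` defines a `vps` group (as the monitoring playbooks later in this document expect), a limited run looks like this:

```bash
# Install rsync only on hosts that belong to the "vps" inventory group
ansible-playbook -i inventory.ini infra/900_install_rsync.yml --limit vps
```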
Below are notes on adding each specific tool to a device.
### rsync

Simply run the playbook:

```bash
ansible-playbook -i inventory.ini infra/900_install_rsync.yml
```
### docker and compose

Simply run the playbook:

```bash
ansible-playbook -i inventory.ini infra/910_docker_playbook.yml
```

Checklist:

- All 3 VPSs respond to `docker version`
- All 3 VPSs respond to `docker compose version`
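If you prefer to run that checklist from lapy instead of SSHing into each VPS, an Ansible ad-hoc command works as well; this is just a convenience sketch, assuming the same `inventory.ini` and a `vps` group:

```bash
# Ask every VPS to report its Docker and Compose versions in one go
# (add --become if your remote user is not in the docker group)
ansible vps -i inventory.ini -m command -a "docker version"
ansible vps -i inventory.ini -m command -a "docker compose version"
```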
## Deploy Caddy

- Use Ansible to run the Caddy playbook:

  ```bash
  cd ansible
  ansible-playbook -i inventory.ini services/caddy_playbook.yml
  ```

- The starting config will be empty. Modifying the Caddy config file to add endpoints as we add services is covered by the instructions of each service.
Checklist:
- All 3 VPSs have Caddy up and running
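A quick way to confirm Caddy is actually up on each VPS, assuming the playbook runs Caddy as a Docker container named `caddy` (adjust if your setup installs it differently):

```bash
# List the Caddy container and peek at its recent logs
docker ps --filter "name=caddy"
docker logs --tail 20 caddy
```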
## Uptime Kuma

Uptime Kuma is used to monitor the availability of services, keep track of their uptime, and send notifications when something goes wrong.

### Deploy

- Decide what subdomain you want to serve Uptime Kuma on and add it to `services/services_config.yml` under the `uptime_kuma` entry.
  - Note that you will have to add a DNS entry pointing that subdomain to the VPS public IP; you can verify it with the check below.
- Run the deployment playbook:

  ```bash
  ansible-playbook -i inventory.ini services/uptime_kuma/deploy_uptime_kuma_playbook.yml
  ```
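Before (or right after) running the playbook, you can confirm the DNS entry resolves to the VPS from lapy. The subdomain below is a placeholder, and the same check applies to every other service in this document:

```bash
# Should print the public IP of the VPS that will serve Uptime Kuma
dig +short uptime.example.com
```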
### Set up backups to Lapy

- Make sure rsync is available on the host and on Lapy.
- Run the backup playbook:

  ```bash
  ansible-playbook -i inventory.ini services/uptime_kuma/setup_backup_uptime_kuma_to_lapy.yml
  ```

- A first backup is executed right away, and then a cron job is set up to refresh backups periodically; you can verify both as sketched below.
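To double-check the result, you can inspect the cron entry on the VPS and the backup destination on Lapy. The grep pattern and paths below are hypothetical; use whatever the playbook actually configures:

```bash
# On the VPS: look for the backup job (it may also live in root's crontab or /etc/cron.d)
crontab -l | grep -i uptime

# On Lapy: the backup directory should contain a fresh copy of the data folder
ls -lh ~/backups/uptime_kuma/
```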
### Configure

- Uptime Kuma will prompt you to create a user on first start. Do that and store the credentials safely.
- From that point on, you can configure through the Web UI.
### Restoring to a previous state
- Stop Uptime Kuma.
- Overwrite the data folder with one of the backups.
- Start it up again.
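As a concrete sketch of those three steps, assuming Uptime Kuma runs under Docker Compose and using hypothetical paths and hostnames (the same pattern applies to the Vaultwarden and LNBits restores later in this document):

```bash
# On the VPS: stop Uptime Kuma (hypothetical compose project directory)
cd /opt/uptime_kuma && docker compose down

# From Lapy: push one of the backups over the data folder (hypothetical paths and host alias)
rsync -a --delete ~/backups/uptime_kuma/data/ myvps:/opt/uptime_kuma/data/

# On the VPS: start it up again
cd /opt/uptime_kuma && docker compose up -d
```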
Checklist:
- Uptime Kuma is accessible at the FQDN
- The backup script runs fine
- You have stored the credentials of the Uptime Kuma admin user
## ntfy

ntfy is a notification server.
### Deploy

- Decide what subdomain you want to serve ntfy on and add it to `services/ntfy/ntfy_vars.yml` under `ntfy_subdomain`.
  - Note that you will have to add a DNS entry pointing that subdomain to the VPS public IP.
- Ensure the admin user credentials are set in `ansible/infra_secrets.yml` under `ntfy_username` and `ntfy_password`. This user is the only one authorised to send and read messages from topics.
- Run the deployment playbook:

  ```bash
  ansible-playbook -i inventory.ini services/ntfy/deploy_ntfy_playbook.yml
  ```

- Run this playbook to create a notification entry in Uptime Kuma that points to your freshly deployed ntfy instance:

  ```bash
  ansible-playbook -i inventory.ini services/ntfy/setup_ntfy_uptime_kuma_notification.yml
  ```
### Configure

- You can visit the ntfy web UI at the FQDN you configured.
- You can start using ntfy to send alerts from Uptime Kuma by visiting the Uptime Kuma UI and using the credentials of the ntfy admin user; a quick command-line test is sketched below.
- To receive alerts on your phone, install the official ntfy app: https://github.com/binwiederhier/ntfy-android.
- You can also subscribe from the web UI on your laptop.
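Independently of Uptime Kuma, you can check that publishing works with a plain HTTP request, since ntfy exposes each topic over HTTPS. The FQDN and topic below are placeholders; use your own domain and the admin credentials from `infra_secrets.yml`:

```bash
# Publish a test message to a "test" topic as the ntfy admin user
curl -u "<ntfy_username>:<ntfy_password>" \
  -d "Hello from the CLI" \
  https://ntfy.example.com/test
```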
### Backups
Given that ntfy is almost stateless, no backups are made. If it blows up, simply set it up again.
Checklist:

- The ntfy UI is reachable
- You can see the notification in Uptime Kuma and test it successfully
## VPS monitoring scripts

### Deploy

- Run the playbooks:

  ```bash
  ansible-playbook -i inventory.ini infra/410_disk_usage_alerts.yml --limit vps
  ansible-playbook -i inventory.ini infra/420_system_healthcheck.yml --limit vps
  ```
Checklist:
- You can see both the system healthcheck and the disk usage check for all VPSs in the Uptime Kuma UI.
## Vaultwarden
Vaultwarden is a credentials manager.
### Deploy

- Decide what subdomain you want to serve Vaultwarden on and add it to `services/vaultwarden/vaultwarden_vars.yml` under `vaultwarden_subdomain`.
  - Note that you will have to add a DNS entry pointing that subdomain to the VPS public IP.
- Make sure docker is available on the host.
- Run the deployment playbook:

  ```bash
  ansible-playbook -i inventory.ini services/vaultwarden/deploy_vaultwarden_playbook.yml
  ```
### Configure

- Vaultwarden will prompt you to create a user on first start. Do that and store the credentials safely.
- From that point on, you can configure through the Web UI.
### Disable registration

- You probably don't want anyone to be able to register without permission.
- To prevent that, run the `disable_vaultwarden_sign_ups_playbook.yml` playbook after creating the first user, as shown below.
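A minimal invocation, assuming the playbook lives alongside the other Vaultwarden playbooks under `services/vaultwarden/`:

```bash
# Disable public sign-ups once your own user exists
ansible-playbook -i inventory.ini services/vaultwarden/disable_vaultwarden_sign_ups_playbook.yml
```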
### Set up backups to Lapy

- Make sure rsync is available on the host and on Lapy.
- Run the backup playbook:

  ```bash
  ansible-playbook -i inventory.ini services/vaultwarden/setup_backup_vaultwarden_to_lapy.yml
  ```

- A first backup is executed right away, and then a cron job is set up to refresh backups periodically.
### Restoring to a previous state
- Stop Vaultwarden.
- Overwrite the data folder with one of the backups.
- Start it up again.
## Forgejo
Forgejo is a git server.
### Deploy

- Decide what subdomain you want to serve Forgejo on and add it to `services/forgejo/forgejo_vars.yml` under `forgejo_subdomain`.
  - Note that you will have to add a DNS entry pointing that subdomain to the VPS public IP.
- Run the deployment playbook:

  ```bash
  ansible-playbook -i inventory.ini services/forgejo/deploy_forgejo_playbook.yml
  ```
### Configure

- Forgejo will prompt you to create a user on first start. Do that and store the credentials safely.
- The default behaviour after that is to not allow further registrations.
- You can tweak more settings from that point on.
- SSH cloning should work out of the box (after you've added your SSH public key in Forgejo, that is); a quick check is sketched below.
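A quick way to confirm SSH access works from lapy; `git.example.com` is a placeholder for your Forgejo subdomain and the repo path is hypothetical (Forgejo, like Gitea, answers the first command with a short greeting instead of a shell):

```bash
# Should reply with something like "Hi there! You've successfully authenticated..."
ssh -T git@git.example.com

# Clone a repo over SSH (hypothetical owner/repo)
git clone git@git.example.com:youruser/yourrepo.git
```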
## LNBits
LNBits is a Lightning Network wallet and accounts system.
### Deploy

- Decide what subdomain you want to serve LNBits on and add it to `services/lnbits/lnbits_vars.yml` under `lnbits_subdomain`.
  - Note that you will have to add a DNS entry pointing that subdomain to the VPS public IP.
- Run the deployment playbook:

  ```bash
  ansible-playbook -i inventory.ini services/lnbits/deploy_lnbits_playbook.yml
  ```
### Configure

- LNBits will prompt you to create a superuser on first start. Do that and store the credentials safely.
- From that point on, you can configure through the Web UI.
- Some advice around LNBits specifics:
  - The default setup uses a FakeWallet backend for testing. Configure a real Lightning backend as needed by modifying the `.env` file or using the superuser UI.
  - For security, disable new user registration.
### Set up backups to Lapy

- Make sure rsync is available on the host and on Lapy.
- Run the backup playbook:

  ```bash
  ansible-playbook -i inventory.ini services/lnbits/setup_backup_lnbits_to_lapy.yml
  ```

- A first backup is executed right away, and then a cron job is set up to refresh backups periodically. The script backs up both the `.env` file and the SQLite database. Backups are GPG encrypted for safety; see the decryption sketch below.
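When you need to read a backup, decrypt it first. The file names below are hypothetical, since they depend on what the backup script produces, and how the files were encrypted (passphrase vs. your GPG key) determines what gpg will prompt for:

```bash
# On Lapy: decrypt the database and .env backups (hypothetical file names)
gpg --output sqlite3.db --decrypt sqlite3.db.gpg
gpg --output .env --decrypt env.gpg
```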
### Restoring to a previous state
- Stop LNBits.
- Overwrite the data folder with one of the backups (decrypting it first, as sketched above).
- Start it up again.
## ntfy-emergency-app
ntfy-emergency-app is a simple web application that allows trusted people to send emergency messages via ntfy notifications. Perfect for situations where you need to be alerted immediately but don't want to enable notifications on your regular messaging apps.
### Deploy

- Decide what subdomain you want to serve the emergency app on and update `ansible/services_config.yml` under `ntfy_emergency_app`.
  - Note that you will have to add a DNS entry pointing that subdomain to the VPS public IP.
- Configure the ntfy settings in `ntfy_emergency_app_vars.yml`:
  - `ntfy_emergency_app_topic`: the ntfy topic to send messages to (default: "emergency")
  - `ntfy_emergency_app_ui_message`: custom message displayed in the web interface
- Ensure `infra_secrets.yml` contains `ntfy_username` and `ntfy_password` with the credentials the app should use.
- Make sure docker is available on the host.
- Run the deployment playbook:

  ```bash
  ansible-playbook -i inventory.ini services/ntfy-emergency-app/deploy_ntfy_emergency_app_playbook.yml
  ```
## Headscale
Headscale is a self-hosted Tailscale control server that allows you to create your own Tailscale network.
### Deploy

- Decide what subdomain you want to serve Headscale on and add it to `services/headscale/headscale_vars.yml` under `headscale_subdomain`.
  - Note that you will have to add a DNS entry pointing that subdomain to the VPS public IP.
- Run the deployment playbook:

  ```bash
  ansible-playbook -i inventory.ini services/headscale/deploy_headscale_playbook.yml
  ```
### Configure

- Network security: the network starts with a deny-all policy. No devices can communicate with each other until you explicitly configure ACL rules in `/etc/headscale/acl.json`.
- After deployment, the namespace specified in `services/headscale/headscale_vars.yml` is created automatically.
### Connect devices

#### Automated method (for servers reachable via SSH from lapy)

- Use the Ansible playbook to automatically join machines to the mesh:

  ```bash
  ansible-playbook -i inventory.ini infra/920_join_headscale_mesh.yml --limit <target-host>
  ```

- The playbook will:
  - Generate an ephemeral pre-auth key (expires in 1 minute) by SSHing from lapy to the headscale server
  - Install Tailscale on the target machine
  - Configure Tailscale to connect to your headscale server
  - Enable MagicDNS so devices can talk to each other by hostname
#### Manual method (for mobile apps, desktop clients, etc.)

- Install Tailscale on your devices (mobile apps, desktop clients, etc.).
- Generate a pre-auth key by SSHing into your headscale server:

  ```bash
  ssh <headscale-server>
  sudo headscale preauthkeys create --user counter-net --reusable
  ```

- Instead of using the default Tailscale login, point the client at your headscale server:
  - Server URL: `https://headscale.contrapeso.xyz` (or your configured domain)
  - Use the pre-auth key you generated above
  - Full command:

    ```bash
    tailscale up --login-server <YOUR_HEADSCALE_URL> --authkey <YOUR_AUTH_KEY>
    ```

- Your devices will now be part of your private Tailscale network.
### Management

- List connected devices: `headscale nodes list`
- View users: `headscale users list`
- Generate new pre-auth keys: `headscale preauthkeys create --user counter-net --reusable`
- Remove a device: `headscale nodes delete --identifier <node-id>`
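Once a few devices are joined, you can sanity-check the mesh from any connected client with the standard Tailscale CLI; the hostname below is a placeholder for one of your nodes:

```bash
# Show all peers this device can currently see in the mesh
tailscale status

# Check connectivity to another node, e.g. one of the VPSs
tailscale ping <some-node-hostname>
```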