This repo contains some rather general tools that you may or may not need, depending on which services you want to deploy and which device you're working on. These tools can be installed with the `900` group of playbooks sitting at `ansible/infra`.
By default, these playbooks are configured for `hosts: all`. If you want to limit a run to specific hosts, use the `--limit groupname` flag when running the playbook.
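For example, to run one of these playbooks against only the hosts in a group named `vps` (the playbook filename here is illustrative, not an actual file in this repo):

```sh
# run a 900-group playbook against the vps group only
ansible-playbook -i inventory.ini infra/900_example_playbook.yml --limit vps
```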
Below are notes on adding each specific tool to a device.
- [ ] You have stored the credentials of the Uptime Kuma admin user
## ntfy
ntfy is a notification server.
### Deploy
* Decide what subdomain you want to serve ntfy on and add it to `services/ntfy/ntfy_vars.yml` under the `ntfy_subdomain` variable (see the sketch after this list).
* Note that you will have to add a DNS entry to point to the VPS public IP.
* Ensure the admin user credentials are set in `ansible/infra_secrets.yml` under `ntfy_username` and `ntfy_password`. This user is the only one authorised to send and read messages from topics.
* Run the deployment playbook: `ansible-playbook -i inventory.ini services/ntfy/deploy_ntfy_playbook.yml`.
* Run this playbook to create a notification entry in Uptime Kuma that points to your freshly deployed ntfy instance: `ansible-playbook -i inventory.ini services/ntfy/setup_ntfy_uptime_kuma_notification.yml`.
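For reference, a sketch of the variables involved; the variable names come from this doc, while the values are placeholders:

```yaml
# services/ntfy/ntfy_vars.yml
ntfy_subdomain: ntfy            # ntfy will be served at ntfy.<your-domain>

# ansible/infra_secrets.yml (keep this file encrypted, e.g. with ansible-vault)
ntfy_username: admin            # placeholder
ntfy_password: a-strong-secret  # placeholder
```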
### Configure
* You can visit the ntfy web UI at the FQDN you configured.
* You can start using ntfy to send alerts from Uptime Kuma by visiting the Uptime Kuma UI and entering the credentials for the ntfy admin user.
* To receive alerts on your phone, install the official ntfy app: https://github.com/binwiederhier/ntfy-android.
* You can also subscribe on the web UI on your laptop.
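To sanity-check the deployment, you can publish a test message from any machine; ntfy accepts a plain POST to a topic URL, and basic auth works with the admin user (the FQDN, topic name, and credentials below are placeholders):

```sh
# publish a test notification to a topic on your instance
curl -u 'admin:your-password' -d "Hello from the VPS" https://ntfy.example.org/test-topic
```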
### Backups
Given that ntfy is almost stateless, no backups are made. If it blows up, simply set it up again.
Checklist:
- [ ] ntfy UI is reachable
- [ ] You can see the notification in Uptime Kuma and test it successfully
* Be careful! Restoring a backup does not restore the sign-up setting. If you deployed a new instance and restored a backup, you still need to manually disable sign-ups again, as described above.
Checklist:
- [ ] The service is reachable at the URL
- [ ] You have stored the admin creds properly
- [ ] You can't create another user at the /signup path
## Forgejo

### Set up backups to Lapy
* Make sure rsync is available on the host and on Lapy.
* Ensure GPG is configured with a recipient in your inventory (the backup script requires `gpg_recipient` to be set; see the snippet after this list).
* Run the backup playbook: `ansible-playbook -i inventory.ini services/forgejo/setup_backup_forgejo_to_lapy.yml`.
* A first backup runs immediately, and a cron job is then set up to refresh backups periodically. The script backs up both the data and config directories, and backups are GPG-encrypted for safety. Note that the Forgejo service is stopped during backup to ensure consistency.
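A hypothetical inventory snippet showing where `gpg_recipient` could live (group name and host are placeholders):

```ini
# inventory.ini
[vps]
203.0.113.10

[vps:vars]
gpg_recipient=you@example.org
```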
### Restoring to a previous state
* Stop Forgejo (a consolidated sketch of these steps follows the list).
* Decrypt the backup: `gpg --decrypt forgejo-backup-YYYY-MM-DD.tar.gz.gpg | tar -xzf -`
* Overwrite the data and config directories with the restored backup.
* Ensure that files in `/var/lib/forgejo/` are owned by the right user.
* Start Forgejo again.
* You may need to refresh the SSH public keys file so your old SSH-driven git remotes keep working. Go to Site Administration > Dashboard and run the task `Update the ".ssh/authorized_keys" file with Forgejo SSH keys.`.
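A consolidated sketch of the restore, assuming a systemd unit named `forgejo`, data in `/var/lib/forgejo/`, config in `/etc/forgejo/`, and a service user named `forgejo` (all assumptions; match them to your deployment):

```sh
sudo systemctl stop forgejo                    # assumption: systemd-managed service

# decrypt and unpack a backup (filename pattern from the backup script)
gpg --decrypt forgejo-backup-YYYY-MM-DD.tar.gz.gpg | tar -xzf -

# overwrite live data and config with the restored copies
# (extracted directory names are assumptions)
sudo rsync -a data/ /var/lib/forgejo/
sudo rsync -a config/ /etc/forgejo/

sudo chown -R forgejo:forgejo /var/lib/forgejo /etc/forgejo
sudo systemctl start forgejo
```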
Checklist:
- [ ] Forgejo is accessible at the FQDN
- [ ] You have stored the admin credentials properly
- [ ] The backup script runs fine
- [ ] SSH cloning works after setting up your SSH pub key (a test command follows)
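To check the last item, a hypothetical test clone (host, user, and repo are placeholders):

```sh
git clone git@git.example.org:youruser/yourrepo.git
```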
## LNbits

LNbits is a Lightning Network wallet and account system.

### Deploy
* Note that you will have to add a DNS entry to point to the VPS public IP.
* Run the deployment playbook: `ansible-playbook -i inventory.ini services/lnbits/deploy_lnbits_playbook.yml`.
### Configure
* On first start, LNbits lets you create a superuser. Do that and store the creds safely.
* From that point on, you can configure everything through the web UI.
* Some advice around specifics of LNbits:
* The default setup uses a FakeWallet backend for testing. Configure a real Lightning backend as needed by modifying the `.env` file or using the superuser UI (see the sketch after this list).
* For security, disable new user registration.
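As an illustration, swapping the FakeWallet for an LND REST backend might look like the following in `.env`; the variable names are taken from LNbits' example configuration and may differ between versions, so check the `.env.example` shipped with your instance:

```sh
# .env: replace the FakeWallet test backend with a real Lightning backend
LNBITS_BACKEND_WALLET_CLASS=LndRestWallet
LND_REST_ENDPOINT=https://127.0.0.1:8080/
LND_REST_CERT="/path/to/tls.cert"
LND_REST_MACAROON="/path/to/admin.macaroon"
```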
### Set up backups to Lapy
* Make sure rsync is available on the host and on Lapy.
* Run the backup playbook: `ansible-playbook -i inventory.ini services/lnbits/setup_backup_lnbits_to_lapy.yml`.
* A first backup runs immediately, and a cron job is then set up to refresh backups periodically. The script backs up both the `.env` file and the SQLite database. Backups are GPG-encrypted for safety.
### Restoring to a previous state
* Stop LNbits.
* Decrypt one of the backups with `gpg --decrypt` and overwrite the data folder with it.
* Start LNbits again.
## ntfy-emergency-app

ntfy-emergency-app is a simple web application that allows trusted people to send emergency messages via ntfy notifications. It is perfect for situations where you need to be alerted immediately but don't want to enable notifications on your regular messaging apps.
## Headscale

Headscale is a self-hosted Tailscale control server that allows you to create your own Tailscale network.
### Deploy
* Decide what subdomain you want to serve Headscale on and add it to `services/headscale/headscale_vars.yml` under the `headscale_subdomain` variable.
* Note that you will have to add a DNS entry to point to the VPS public IP.
* Run the deployment playbook: `ansible-playbook -i inventory.ini services/headscale/deploy_headscale_playbook.yml`.
### Configure
* **Network Security**: The network starts with a deny-all policy: no devices can communicate with each other until you explicitly configure ACL rules in `/etc/headscale/acl.json`.
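As a minimal illustration, the canonical allow-everything rule (Headscale follows Tailscale's ACL policy format and accepts HuJSON, so comments are allowed) looks like this; replace it with rules scoped to your own users and tags:

```json
{
  // allow every node to reach every other node on any port
  "acls": [
    { "action": "accept", "src": ["*"], "dst": ["*:*"] }
  ]
}
```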