diff --git a/.gitignore b/.gitignore deleted file mode 100644 index 8f24f11..0000000 --- a/.gitignore +++ /dev/null @@ -1,14 +0,0 @@ -# Deployment configuration (contains sensitive server details) -deploy.config - -# OS files -.DS_Store -Thumbs.db - -# Editor files -.vscode/ -.idea/ -*.swp -*.swo -*~ - diff --git a/README.md b/README.md index 32ee0c7..b448f02 100644 --- a/README.md +++ b/README.md @@ -8,6 +8,4 @@ The `index.html` is ready in the `public` folder. ## How to deploy -1. Copy `deploy.config.example` to `deploy.config` -2. Fill in your server details in `deploy.config` (host, user, remote path) -3. Run `./deploy.sh` to sync the `public` folder to your remote webserver +Somehow get the `public` folder behind a webserver manually and sort out DNS. diff --git a/deploy.config.example b/deploy.config.example deleted file mode 100644 index 1c84f53..0000000 --- a/deploy.config.example +++ /dev/null @@ -1,21 +0,0 @@ -# Deployment Configuration -# Copy this file to deploy.config and fill in your server details -# deploy.config is gitignored to keep your credentials safe - -# Remote server hostname or IP address -REMOTE_HOST="example.com" - -# SSH username for the remote server -REMOTE_USER="username" - -# Remote path where the website should be deployed -# This should be the directory served by your webserver (e.g., /var/www/html, /home/username/public_html) -REMOTE_PATH="/var/www/html" - -# Optional: Path to SSH private key (if not using default ~/.ssh/id_rsa) -# Leave empty to use default SSH key -SSH_KEY="" - -# Optional: SSH port (defaults to 22 if not specified) -# SSH_PORT="22" - diff --git a/deploy.sh b/deploy.sh deleted file mode 100755 index 5cf9692..0000000 --- a/deploy.sh +++ /dev/null @@ -1,34 +0,0 @@ -#!/bin/bash - -# Deployment script for pablohere website -# This script syncs the public folder to a remote webserver - -set -e # Exit on error - -# Load deployment configuration -if [ ! 
-f "deploy.config" ]; then - echo "Error: deploy.config file not found!" - echo "Please copy deploy.config.example to deploy.config and fill in your server details." - exit 1 -fi - -source deploy.config - -# Validate required variables -if [ -z "$REMOTE_HOST" ] || [ -z "$REMOTE_USER" ] || [ -z "$REMOTE_PATH" ]; then - echo "Error: Required variables not set in deploy.config" - echo "Please ensure REMOTE_HOST, REMOTE_USER, and REMOTE_PATH are set." - exit 1 -fi - -# Use rsync to sync files -echo "Deploying public folder to $REMOTE_USER@$REMOTE_HOST:$REMOTE_PATH" -rsync -avz --delete \ - --exclude='.git' \ - --exclude='.DS_Store' \ - $SSH_OPTS \ - public/ \ - $REMOTE_USER@$REMOTE_HOST:$REMOTE_PATH - -echo "Deployment complete!" - diff --git a/public/index.html b/public/index.html index 1c18d0d..fe03b71 100644 --- a/public/index.html +++ b/public/index.html @@ -1,230 +1,154 @@ - + - Pablo here - - - + Pablo here + + + -
-

Hi, Pablo here

-

- Welcome to my website. Here I discuss thoughts and ideas. This is mostly - professional. -

-
-

What you'll find here:

- -
-
-

About me

-

A few facts you might care about:

- -
-
-
-

Contact

-

You can contact me on:

- -

- If you are looking for my CV, no need to reach out, - you can fetch it yourself here. -

-

Good reasons to reach out include:

- -

Bad reasons to reach out include:

- -
-
-
-

My projects

-

Some of the projects I've shared publicly:

- -

- There are also some other projects that I generally keep private but - might disclose under the right circumstances. Some notable hints: -

- -
-
-
-

Writings

-

Sometimes I like to jot down ideas and drop them here.

- -
-
- +
+

+ Hi, Pablo here +

+

+ Welcome to my website. Here I discuss thoughts and ideas. This is mostly professional. +

+
+

What you'll find here:

+ +
+
+

About me

+

A few facts you might care about:

+ +
+
+
+

Contact

+

You can contact me on:

+ +

If you are looking for my CV, no need to reach out, you can fetch it + yourself here.

+

Good reasons to reach out include:

+ +

Bad reasons to reach out include:

+ +
+
+
+

My projects

+

Some of the projects I've shared publicly:

+ +

There are also some other projects that I generally keep private but might disclose under the right + circumstances. Some notable hints:

+ +
+
+
+

Writings

+

Sometimes I like to jot down ideas and drop them here.

+ +
+
+ \ No newline at end of file diff --git a/public/keybase.txt b/public/keybase.txt deleted file mode 100644 index 43ffead..0000000 --- a/public/keybase.txt +++ /dev/null @@ -1,56 +0,0 @@ -================================================================== -https://keybase.io/pablomartincalvo --------------------------------------------------------------------- - -I hereby claim: - - * I am an admin of https://pablohere.contrapeso.xyz - * I am pablomartincalvo (https://keybase.io/pablomartincalvo) on keybase. - * I have a public key ASDgHxztDlU_R4hjxbkO21-rS4Iv1gABa3BPb_Aff7aNAgo - -To do so, I am signing this object: - -{ - "body": { - "key": { - "eldest_kid": "0120d9bde13d9012e681cef2edd668d70426f1f6ef69ce7dfae20b404096eca5b06f0a", - "host": "keybase.io", - "kid": "0120e01f1ced0e553f478863c5b90edb5fab4b822fd600016b704f6ff01f7fb68d020a", - "uid": "8e71277fbc0fb1fea28d60308f495d19", - "username": "pablomartincalvo" - }, - "merkle_root": { - "ctime": 1755635067, - "hash": "4f91af0b9c674e0f1d74a7cfad7abd15a7065cded92b96ac8a6abeb5c8553318599aa1bf7b065a3312e303506256b729b8b60b3a5dd06b68694423f4341a6a14", - "hash_meta": "6472dbf2ed33341fb30b6a0c5c5c7fb39c219dd0ffd03c6e08b68c788e0de60a", - "seqno": 27031070 - }, - "service": { - "entropy": "LEFJJ4FMmlJQWPPFEO4xHE5y", - "hostname": "pablohere.contrapeso.xyz", - "protocol": "https:" - }, - "type": "web_service_binding", - "version": 2 - }, - "client": { - "name": "keybase.io go client", - "version": "6.5.1" - }, - "ctime": 1755635082, - "expire_in": 504576000, - "prev": "37f12270050ab037897ccf6ef9451b1911cb505eca7c3842993b0b8925bc79b8", - "seqno": 31, - "tag": "signature" -} - -which yields the signature: - 
-hKRib2R5hqhkZXRhY2hlZMOpaGFzaF90eXBlCqNrZXnEIwEg4B8c7Q5VP0eIY8W5Dttfq0uCL9YAAWtwT2/wH3+2jQIKp3BheWxvYWTESpcCH8QgN/EicAUKsDeJfM9u+UUbGRHLUF7KfDhCmTsLiSW8ebjEIAnIWTmufZ017e9WLdI1LhKBPaZ3HzmTrgyASDvY3PwoAgHCo3NpZ8RA9a3xgkSTU6Ht7M7DCsy4ClMmoWFtDEqzX9/dqskeoH2DrJUZYVymBQE1nyB0p1GuXiZA1cP5WY5SDURWZ5bBC6hzaWdfdHlwZSCkaGFzaIKkdHlwZQildmFsdWXEIEJZ4g4HC5qXcqbFf6sJ8XuZyMtoppazFqr1zPu0LH5co3RhZ80CAqd2ZXJzaW9uAQ== - -And finally, I am proving ownership of this host by posting or -appending to this document. - -View my publicly-auditable identity here: https://keybase.io/pablomartincalvo - -================================================================== \ No newline at end of file diff --git a/public/my_cv.pdf b/public/my_cv.pdf index f3422a0..27038a6 100644 Binary files a/public/my_cv.pdf and b/public/my_cv.pdf differ diff --git a/public/static/computers.png b/public/static/computers.png deleted file mode 100644 index 955951a..0000000 Binary files a/public/static/computers.png and /dev/null differ diff --git a/public/static/homophobic-socialist-drug-dealer.png b/public/static/homophobic-socialist-drug-dealer.png deleted file mode 100644 index f013ebc..0000000 Binary files a/public/static/homophobic-socialist-drug-dealer.png and /dev/null differ diff --git a/public/static/hospitals-inside.png b/public/static/hospitals-inside.png deleted file mode 100644 index 515541e..0000000 Binary files a/public/static/hospitals-inside.png and /dev/null differ diff --git a/public/static/hospitals-outside.png b/public/static/hospitals-outside.png deleted file mode 100644 index 33c1cd0..0000000 Binary files a/public/static/hospitals-outside.png and /dev/null differ diff --git a/public/static/plumbings.png b/public/static/plumbings.png deleted file mode 100644 index 68d61d0..0000000 Binary files a/public/static/plumbings.png and /dev/null differ diff --git a/public/static/stations.png b/public/static/stations.png deleted file mode 100644 index 54ad4ef..0000000 Binary files 
a/public/static/stations.png and /dev/null differ diff --git a/public/static/streetlamps.png b/public/static/streetlamps.png deleted file mode 100644 index 84a42d0..0000000 Binary files a/public/static/streetlamps.png and /dev/null differ diff --git a/public/styles.css b/public/styles.css index e7908b3..1b99426 100644 --- a/public/styles.css +++ b/public/styles.css @@ -7,8 +7,7 @@ body { h1, h2, -h3, -h4 { +h3 { text-align: center; } @@ -22,8 +21,7 @@ img { display: block; } -figcaption { +figcaption a { font-style: italic; font-size: small; - text-align: center; } \ No newline at end of file diff --git a/public/writings/a-degraded-pool-with-a-healthy-disk.html b/public/writings/a-degraded-pool-with-a-healthy-disk.html deleted file mode 100644 index 521b7cc..0000000 --- a/public/writings/a-degraded-pool-with-a-healthy-disk.html +++ /dev/null @@ -1,133 +0,0 @@ - - - - - Pablo here - - - - - - - -
-

- Hi, Pablo here -

-

back to home

-
-

A degraded pool with a healthy disk

-

Part 2 of 3 in my "First ZFS Degradation" series. See also Part 1: The Setup and Part 3: The Fix.

-

The "Oh Shit" Moment

-

I wasn't even looking for trouble. I was clicking around the Proxmox web UI, exploring some storage views I hadn't noticed before, when I saw it: my ZFS pool was in DEGRADED state.

-

I opened the details. One of my two mirrored drives was listed as FAULTED.

-

I was very surprised. The box and disks were brand new, with less than three months of runtime on them. I was not expecting hardware issues to come at me that fast. I SSH'd into the server and ran the command that would become my best friend over the next 24 hours:

-
zpool status -v proxmox-tank-1
-

No glitch. The pool was degraded. The drive had racked up over 100 read errors, 600+ write errors, and 129 checksum errors. ZFS had given up on it.

-
  NAME                                 STATE     READ WRITE CKSUM
-  proxmox-tank-1                       DEGRADED     0     0     0
-    mirror-0                           DEGRADED     0     0     0
-      ata-ST4000NT001-3M2101_WX11TN0Z  FAULTED    108   639   129  too many errors
-      ata-ST4000NT001-3M2101_WX11TN2P  ONLINE       0     0     0
-

The good news: errors: No known data errors. ZFS was serving all my data from the healthy drive. Nothing was lost yet.

-

The bad news: I was running on a single point of failure. If AGAPITO2 decided to have a bad day too, I'd be in real trouble.

-

I tried the classic IT move: rebooting. The system came back up and ZFS immediately started trying to resilver (rebuild) the degraded drive. But within minutes, the errors started piling up again and the resilver stalled.

-

Time to actually figure out what was wrong.

-

The Diagnostic Toolbox

-

When a ZFS drive acts up, you have two main sources of truth: what the kernel sees happening at the hardware level, and what the drive itself reports about its health. This can be looked up with dmesg and smartctl.

-

dmesg: The Kernel's Diary

-

The Linux kernel maintains a ring buffer of messages about hardware events, driver activities, and system operations. The dmesg command lets you read it. For disk issues, you want to grep for SATA-related keywords:

-
dmesg -T | egrep -i 'ata[0-9]|sata|reset|link|i/o error' | tail -100
-

The -T flag gives you human-readable timestamps instead of seconds-since-boot.

-

What I saw was... weird. Here's an excerpt:

-
[Fri Jan  2 22:25:13 2026] ata4.00: exception Emask 0x50 SAct 0x70220001 SErr 0xe0802 action 0x6 frozen
-[Fri Jan  2 22:25:13 2026] ata4.00: irq_stat 0x08000000, interface fatal error
-[Fri Jan  2 22:25:13 2026] ata4.00: failed command: READ FPDMA QUEUED
-[Fri Jan  2 22:25:13 2026] ata4: hard resetting link
-[Fri Jan  2 22:25:14 2026] ata4: SATA link down (SStatus 0 SControl 300)
-

Let me translate: the kernel tried to read from the drive on ata4, got a "fatal error," and responded by doing a hard reset of the SATA link. Then the link went down entirely. The drive just... disappeared.

-

But it didn't stay gone. A few seconds later:

-
[Fri Jan  2 22:25:24 2026] ata4: link is slow to respond, please be patient (ready=0)
-[Fri Jan  2 22:25:24 2026] ata4: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
-

The drive came back! At full speed! But then...

-
[Fri Jan  2 22:25:29 2026] ata4.00: qc timeout after 5000 msecs (cmd 0xec)
-[Fri Jan  2 22:25:29 2026] ata4.00: failed to IDENTIFY (I/O error, err_mask=0x4)
-[Fri Jan  2 22:25:29 2026] ata4: limiting SATA link speed to 3.0 Gbps
-

It failed again. The kernel, trying to be helpful, dropped the link speed from 6.0 Gbps to 3.0 Gbps. Maybe a slower speed would be more stable?

-

It wasn't. The pattern repeated: connect, fail, reset, reconnect at a slower speed. 6.0 Gbps, then 3.0 Gbps, then 1.5 Gbps. Eventually:

-
[Fri Jan  2 22:27:06 2026] ata4.00: disable device
-

The kernel gave up entirely.

-

This wasn't the behavior of a dying drive. A dying drive throws read errors on specific bad sectors. This drive was connecting and disconnecting as if someone were jiggling the cable. The kernel called it an "interface fatal error", emphasis on interface.

-

smartctl: Asking the Drive Directly

-

Every modern hard drive has S.M.A.R.T. (Self-Monitoring, Analysis, and Reporting Technology) — basically a built-in health monitor. The smartctl command lets you get info out of it.

-

First, the overall health check:

-
smartctl -H /dev/sdb
-
SMART overall-health self-assessment test result: PASSED
-

Okay, that looks great. But if the disk is healthy, what the hell is going on, and where are all those errors that ZFS was spotting coming from?

-

Let's dig deeper with the extended info:

-
smartctl -x /dev/sdb
-

The key attributes I was looking for:

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
AttributeValueWhat it means
Reallocated_Sector_Ct0Bad sectors the drive has swapped out. Zero is good.
Current_Pending_Sector0Sectors waiting to be checked. Zero is good.
UDMA_CRC_Error_Count0Data corruption during transfer. Zero is good.
Number of Hardware Resets39Times the connection has been reset. Uh...
-

All the sector-level health metrics looked perfect. No bad blocks, no pending errors, no CRC errors. The drive's magnetic platters and read/write heads were fine.

-

But 39 hardware resets? That's not normal. That's the drive (or its connection) getting reset nearly 40 times.
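Those attributes can be pulled straight out of the smartctl attribute dump with a little awk. A sketch, using a hypothetical, trimmed three-line dump in place of real `smartctl -A /dev/sdb` output:

```shell
# Hypothetical, trimmed attribute dump; in practice this text comes from:
#   smartctl -A /dev/sdb
sample='  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0'

# Print each attribute name with its raw value (the last column).
printf '%s\n' "$sample" |
  awk '/Reallocated_Sector_Ct|Current_Pending_Sector|UDMA_CRC_Error_Count/ {
    print $2 ": " $NF
  }'
```

On a healthy drive all three raw values should read 0, matching the table above.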

-

I ran the short self-test to be sure:

-
smartctl -t short /dev/sdb
-# Wait a minute...
-smartctl -l selftest /dev/sdb
-
# 1  Short offline       Completed without error       00%
-

The drive passed its own self-test. The platters spin, the heads move, the firmware works, and it can read its own data just fine.

-

Hypothesis

-

At this point, the evidence was pointing clearly away from "the drive is dying" and toward "something is wrong with the connection."

-

What the kernel logs told me: the drive keeps connecting and disconnecting. Each time it reconnects, the kernel tries slower speeds. Eventually it gives up entirely. This is what you see with an unstable physical connection.

-

What SMART told me: the drive itself is healthy. No bad sectors, no media errors, no signs of wear. But there have been dozens of hardware resets — the connection keeps getting interrupted.

-

The suspects, in order of likelihood:

-
    -
  1. SATA data cable: the most common culprit for intermittent connection issues. Cables go bad, or weren't seated properly in the first place.
  2. Power connection: if the drive isn't getting stable power, it might brown out intermittently.
  3. SATA port on the motherboard: less likely, but possible.
  4. PSU: power supply issues could affect the power rail feeding the drive. Unlikely, since both disks were fed from the same cable run, but still an option.
-

Given that I had just built this server a few weeks earlier, and a good part of that happened after midnight... I was beginning to suspect that perhaps I simply might not have plugged in the disk properly.

-

The Verdict

-

I was pretty confident now: the drive was fine, but the connection was bad. Most likely the SATA data cable, and most probably simply not connected properly.

-

The fix would require shutting down the server, opening the case, and reseating (or replacing) cables. Before doing that, I wanted to take the drive offline cleanly and document everything.

-

In Part 3, I'll walk through exactly how I fixed it: the ZFS commands, the physical work, and the validation to make sure everything was actually okay afterward.

-

Continue to Part 3: The Fix

-

back to home

-
-
- - - - - diff --git a/public/writings/a-note-for-the-future-the-tax-bleeding-in-2025.html b/public/writings/a-note-for-the-future-the-tax-bleeding-in-2025.html deleted file mode 100644 index abefd1d..0000000 --- a/public/writings/a-note-for-the-future-the-tax-bleeding-in-2025.html +++ /dev/null @@ -1,188 +0,0 @@ - - - - Pablo here - - - - - - -
-

Hi, Pablo here

-

back to home

-
-
-

A note for the future: the tax bleeding in 2025

-

 - I hate taxes deeply. I fell through the rabbit hole of libertarian and anarcho-capitalist ideas some years ago, and taxes have been repulsive to me ever since. I go to great lengths to not pay them, and feel deeply hurt every time they sting my wallet against my will.

-

 - I know life goes by fast, and what today is vivid in your memory fades away bit by bit until it's gone. I'm truly hoping that, some day in the future, the world will have changed for the better and people won't be paying as much tax as we do today in the West. Since in that bright, utopian future I'm dreaming of I might have forgotten how bad things were on this matter in 2025, I've decided to make a little entry here estimating how much tax I'm theoretically bleeding on a yearly basis right now. So that we can someday look back in time and wonder: "how the fuck did we tolerate that pillaging".

-

Inventory

-

- Before going hard into the number crunching let's list all the tax - items I'm aware of being subject to: -

- -

- There may be some other small, less frequent taxes that I'm not - considering. These are the ones that will hit most people in my - country. -

-

The numbers

-

 - Okay, let's compute the hideous bill. I'll use a hypothetical profile that's roughly close to mine, with a few assumptions along the way.

- -

With those clear, let's see the actual figures:

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Tax€/year
Income Tax (IRPF)22,401 €
Social Security (worker + employer) - 25,375 € - (worker 4,445 € + employer 20,930 €) -
VAT (blended basket)5,250 €
Real Estate Tax (IBI)1,000 €
Vehicle Tax (1.5 vehicles)225 €
Wealth Transfer (10% home, spread 50y)1,000 €
Inheritance (7% of 250k, spread 50y)350 €
Total55,602 €
-

- So there you go. A peaceful existence as a tech professional living a - normal life leads to bleeding at least 55K€ per year, all while - getting an 85K€ salary. The tax rate sits at a wonderful 64%. How far - away is this from hardcore USSR-grade communism? -
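For the record, the table's bottom line can be re-derived from the per-item figures. A quick sketch (the individual items are rounded, so a strict sum lands one euro under the 55,602 € shown in the table):

```shell
# Sum the per-item yearly figures from the table (EUR/year).
# Rounding in the individual items makes this come out at 55,601,
# one euro under the table's 55,602 total.
printf '%s\n' 22401 25375 5250 1000 225 1000 350 |
  awk '{ total += $1 } END { printf "total: %d EUR/year\n", total }'
```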

-

- And this is generous, since I didn't model (1) what gets stolen - through inflation diluting savings and (2) any capital gains that this - profile might end up paying for whatever investments he is doing with - his savings. -

-

 - Then you'll see mainstream media puppets discussing why young people don't have children. As if it were some kind of mystery. They're being robbed of their children's bread left and right, while getting hypnotized into believing that protecting themselves against this outrageous robbery is somehow morally despicable.

-

Motherfuckers.

-
-

back to home

-
-
- - diff --git a/public/writings/dont-hide-it-make-it-beautiful.html b/public/writings/dont-hide-it-make-it-beautiful.html deleted file mode 100644 index fea806a..0000000 --- a/public/writings/dont-hide-it-make-it-beautiful.html +++ /dev/null @@ -1,125 +0,0 @@ - - - - - Pablo here - - - - - - - -
-

- Hi, Pablo here -

-

back to home

-
-
-

Don't hide it, make it beautiful

-

I'm currently living in a flat, and my internet connection physically comes in through my living room. That's where my home router is placed. However, my main workspace is not in my living room but in my working room, which is a few meters away. I would love to have a wired internet connection for my laptop, but unfortunately, with the router being so far away, setting it up would require running a lot of cable through walls and ceilings. I could either leave the cable visible, or go through some serious construction work to poke holes through walls and fake ceilings and tunnel the cable through there. The latter is off the table, since I don't even know where I would start.

- -

With the first option being the only one available, there is one fundamental and unavoidable reason I don't take it: aesthetics. My partner is very conscious about keeping our home visually pleasing. I care too, though she probably values aesthetics even more than I do. She likely doesn't find a wired internet connection to be as essential as I do. So, for now, I have to rely on wifi to connect from my workspace to the home router.

- -

When I was on holiday in Thailand a few years ago, I noticed that Thai homes are far more practical than - European ones in such matters. In Thailand, plumbing, electrical systems, and other maintenance-requiring - installations are typically very visible, just out there on the wall. They don't hide these things behind - fake walls or ceilings. I believe they do this because they highly value the ability to access and work - on their home's systems themselves. Many Thai people build and maintain their own homes, so they leave - everything exposed for easy access.

- -

I sometimes envy this approach. Which is funny because I don't think they do it for pleasure but out of necessity. - Still, when I saw a Thai homeowner fixing their plumbing outside their house, I thought to myself: "Damn, you're so - in control of your home". If something bad happens—like a fallen tree damaging the plumbing—they can fix - it themselves. Meanwhile, if that happened to me, I wouldn't even know where to start. I don't even know - where my plumbing is because it's all hidden behind walls.

- -

That makes me wonder: Is there a way to make these essential systems both accessible and aesthetically - pleasing? Could we have the convenience of exposed infrastructure without it looking ugly? I believe we - can.

- -

The problem, I find, is that we have decided certain things—plumbing, electrical wiring, visible infrastructure—are inherently ugly. But they don't have to be. Some household items, like lamps, must be visible by their very nature. Since they can't be hidden, we put effort into making them look good. We choose stylish designs that complement our home's aesthetics. Why can't we do the same for cables and pipes?

- -

Imagine if all the wiring in your home was encased in beautifully braided, colorful ropes, arranged in - elegant geometric patterns. The connections, junction boxes, and fittings could be crafted from - high-quality materials like metal and wood with artistic designs. Wouldn't that be nice?

- -

Now, you might think I'm crazy—that these things are just ugly by nature. But they're not. In fact, many - aspects of modern design have become uglier over time, and we've just accepted it.

- -

Consider street lamps. In most cities today, they are dull, industrial-looking poles—rusty, ugly, - and purely functional. Yet, in older parts of my city, we still have beautiful, ornate lamp posts from - over a hundred years ago. They were designed with care, meant to serve a purpose, to be visually - appealing, and to last ages. Take a look:

- -
- -
On the left, your ugly, could-be-anywhere post-1971 streetlamp. On the right, a 19th-century bad boy from Gaudí.
-
- -

The same goes for train stations. Modern stations are bleak, sterile spaces—metal, plastic, and harsh - lighting. They resemble hospital emergency rooms. But look at the older ones, like this one. - Those stations are masterpieces, designed like grand halls with chandeliers and intricate details.

- -
- -
On the left, Sants Station, built in 1975. On the right, France Station, built in 1848.
-
- -

And talking about hospitals, they are also a good example. Most modern hospitals have the same white, cold, spaceship-like aesthetic. While cleanliness is important, there's no reason they have to be so uninviting. In my city, there's a hospital built over a hundred years ago that's so beautiful people visit it as a tourist attraction. On the other hand, the hospitals I visit personally are plain depressing, Soviet-style atrocities.

- -
- -
A random modern clinic in Barcelona vs A small section of the outside of Hospital de Sant Pau. I can skip the left and right thing now, right?
-
-
- -
Some random room in that same modern clinic vs Your regular corridor in Sant Pau.
-
- - - -

I think we can bring things back, if we care enough.

- -

Look at computers. Most office desktop cases are dull, gray boxes—uninspired and purely functional. Naturally, many of them end up buried inside desks, or if they are small enough, simply hidden behind the screen on a VESA mount. But gamers, who deeply care about their PCs, go the extra mile to make their setups look amazing. They invest in custom cases, LED lighting, and stylish cooling systems. They turn their computers into art. They are a testament to the fact that we can make practical things beautiful if we choose to.

- -
- -
The ever-present ugly office OptiPlex vs a beautiful case from a passionate man.
-
- -

If we put the same effort into our homes, we wouldn't need to hide cables and pipes. We could proudly - display them as part of our interior design. Infrastructure could be both functional and beautiful, - giving us accessibility without sacrificing aesthetics.

- -

I guess the point I want to make is... Don't hide it. Instead, make it beautiful.

- -
-

back to home

-
-
- - - - \ No newline at end of file diff --git a/public/writings/fixing-a-degraded-zfs-mirror.html b/public/writings/fixing-a-degraded-zfs-mirror.html deleted file mode 100644 index e8f1f0a..0000000 --- a/public/writings/fixing-a-degraded-zfs-mirror.html +++ /dev/null @@ -1,188 +0,0 @@ - - - - - Pablo here - - - - - - - -
-

- Hi, Pablo here -

-

back to home

-
-

Fixing a Degraded ZFS Mirror: Reseat, Resilver, and Scrub

-

Part 3 of 3 in my "First ZFS Degradation" series. See also Part 1: The Setup and Part 2: Diagnosing the Problem.

-

The Game Plan

-

By now I was pretty confident about what was wrong: not a dying drive, but a flaky SATA connection. The fix should be straightforward. Just take the drive offline, shut down, reseat the cables, bring it back up, and let ZFS heal itself.

-

But I wanted to do this methodically. ZFS is forgiving, but I didn't want to make things worse by rushing.

-

Here was my plan:

-
    -
  1. Take the faulty drive offline in ZFS (tell ZFS "stop trying to use this drive")
  2. Power down the server
  3. Open the case, inspect and reseat cables
  4. Boot up, verify the drive is detected
  5. Bring the drive back online in ZFS
  6. Let the resilver complete
  7. Run a scrub to verify data integrity
  8. Check SMART one more time
-
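For reference, the ZFS side of that plan condenses into a handful of commands. A sketch with a DRY_RUN guard so it only prints what it would do; the pool name and disk ID are the ones from this series, and the physical steps obviously happen between the offline and online calls:

```shell
#!/bin/sh
# Dry-run sketch of the recovery plan. Set DRY_RUN=0 to actually run it.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

POOL="proxmox-tank-1"
DISKID="ata-ST4000NT001-3M2101_WX11TN0Z"

run zpool offline "$POOL" "$DISKID"          # step 1: stop using the drive
# ...power down, reseat cables, boot, verify detection (steps 2-4)...
run zpool online "$POOL" "$DISKID"           # steps 5-6: resilver starts on its own
run zpool scrub "$POOL"                      # step 7: verify data integrity
run smartctl -H "/dev/disk/by-id/$DISKID"    # step 8: final SMART check
```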

Let's walk through each step.

-

Step 1: Taking the Drive Offline

-

Before touching hardware, I wanted ZFS to stop trying to use the problematic drive.

-

First, I set up some variables to avoid typos with that long disk ID:

-
DISKID="ata-ST4000NT001-3M2101_WX11TN0Z"
-DISKPATH="/dev/disk/by-id/$DISKID"
-

Then I took it offline:

-
zpool offline proxmox-tank-1 "$DISKID"
-

Checking the status afterward:

-
zpool status -v proxmox-tank-1
-
  NAME                                 STATE     READ WRITE CKSUM
-  proxmox-tank-1                       DEGRADED     0     0     0
-    mirror-0                           DEGRADED     0     0     0
-      ata-ST4000NT001-3M2101_WX11TN0Z  OFFLINE    108   639   129
-      ata-ST4000NT001-3M2101_WX11TN2P  ONLINE       0     0     0
-

The state changed from FAULTED to OFFLINE. ZFS knows I intentionally took it offline rather than it failing on its own. The error counts are still there as a historical record, but ZFS isn't actively trying to use the drive anymore.

-

Time to shut down and get my hands dirty.

-

Step 2: Opening the Case

-

I powered down the server and opened up the Fractal Node 804. The case has a lovely design with drive bays accessible from the side. No reaching into weird corners of the case: just unscrew a couple of screws, slide the drive bay out, and there the drives are, handy and reachable.

-

I located AGAPITO1 (I had handwritten labels on the drives, lesson learned after many sessions of playing "which drive is which") and inspected the connections.

-

Here's the honest truth: everything looked fine. The SATA data cable was plugged in. The power connector was plugged in. Nothing was obviously loose or damaged. There was a bit of tension in the cable as it moved from one area of the case (where the motherboard is) to the drives area, but I really didn't think that was affecting the connection to either the drive or the motherboard itself.

-

But "looks fine" doesn't mean "is fine". So I did a full reseat:

- -

I made sure each connector clicked in solidly. Then I closed up the case and hit the power button.

-

Step 3: Verifying Detection

-

The server booted up. Would Linux see the drive?

-
ls -l /dev/disk/by-id/ | grep WX11TN0Z
-
lrwxrwxrwx 1 root root  9 Jan  2 23:15 ata-ST4000NT001-3M2101_WX11TN0Z -> ../../sdb
-

The drive was there, mapped to /dev/sdb.

-

I opened a second terminal and started watching the kernel log in real time:

-
dmesg -Tw
-

This would show me immediately if the connection started acting flaky again. For now, it was quiet, showing just normal boot messages, the drive being detected successfully, etc. Nothing alarming.

-

Step 4: Bringing It Back Online

-

Moment of truth. I told ZFS to start using the drive again:

-
zpool online proxmox-tank-1 "$DISKID"
-

Immediately checked the status:

-
zpool status -v proxmox-tank-1
-
  pool: proxmox-tank-1
- state: DEGRADED
-status: One or more devices is currently being resilvered.
-action: Wait for the resilver to complete.
-  scan: resilver in progress since Fri Jan  2 23:17:35 2026
-        0B resilvered, 0.00% done, no estimated completion time
-
-    NAME                                 STATE     READ WRITE CKSUM
-    proxmox-tank-1                       DEGRADED     0     0     0
-      mirror-0                           DEGRADED     0     0     0
-        ata-ST4000NT001-3M2101_WX11TN0Z  DEGRADED     0     0     0  too many errors
-        ata-ST4000NT001-3M2101_WX11TN2P  ONLINE       0     0     0
-

Two things to notice: the drive's error counters are now at zero (we're starting fresh), and ZFS immediately started resilvering. It still shows "too many errors" as the reason for the degraded state, but that is historical: ZFS remembers why the drive was marked bad before.

-

I kept watching both the status and the kernel log. No errors, no link resets.

-

Step 5: The Resilver

-

Resilvering is ZFS's term for rebuilding redundancy: copying data from the healthy drive to the one that fell behind. In my case, the drive had been desynchronized for who knows how long (the pool had drifted 524GB out of sync before I noticed), so there was a lot to copy.

-

I shut down my VMs to reduce I/O contention and let the resilver have the disk bandwidth. Progress:

-
scan: resilver in progress since Fri Jan  2 23:17:35 2026
-      495G / 618G scanned, 320G / 618G issued at 100M/s
-      320G resilvered, 51.78% done, 00:50:12 to go
-

The kernel log stayed quiet the whole time. Everything indicated that the cable reseat had worked.

-

I went to bed and let it run overnight. The next morning:

-
scan: resilvered 495G in 01:07:58 with 0 errors on Sat Jan  3 00:25:33 2026
-

495 gigabytes resilvered in about an hour, with zero errors. But the pool still showed DEGRADED, with a warning about an "unrecoverable error." That confused me at first, but some research cleared it up: ZFS is cautious and wants human acknowledgement before declaring everything healthy again.

-
zpool clear proxmox-tank-1 ata-ST4000NT001-3M2101_WX11TN0Z
-

This command clears the error flags. Immediately:

-
  pool: proxmox-tank-1
- state: ONLINE
-  scan: resilvered 495G in 01:07:58 with 0 errors on Sat Jan  3 00:25:33 2026
-
-    NAME                                 STATE     READ WRITE CKSUM
-    proxmox-tank-1                       ONLINE       0     0     0
-      mirror-0                           ONLINE       0     0     0
-        ata-ST4000NT001-3M2101_WX11TN0Z  ONLINE       0     0     0
-        ata-ST4000NT001-3M2101_WX11TN2P  ONLINE       0     0     0
-

Damn, seeing this felt nice.

-

Step 6: The Scrub

-

A resilver copies data to bring the drives back in sync, but it doesn't verify that all the existing data is still good. For that, you run a scrub. ZFS reads every block on the pool, verifies checksums, and repairs anything that doesn't match.

-
zpool scrub proxmox-tank-1
-

I let this run while I brought my VMs back up (scrubs can run in the background without blocking normal operations, though performance takes a hit). A few hours later:

-
scan: scrub repaired 13.0M in 02:14:22 with 0 errors on Sat Jan  3 11:03:54 2026
-
-    NAME                                 STATE     READ WRITE CKSUM
-    proxmox-tank-1                       ONLINE       0     0     0
-      mirror-0                           ONLINE       0     0     0
-        ata-ST4000NT001-3M2101_WX11TN0Z  ONLINE       0     0   992
-        ata-ST4000NT001-3M2101_WX11TN2P  ONLINE       0     0     0
-

Interesting. The scrub repaired 13MB of data and found 992 checksum mismatches on AGAPITO1. From what I read, checksum errors are typically a sign of the disk being in terrible shape and needing a replacement ASAP. That sounds scary, but I took the risk and assumed those were blocks that had been written incorrectly (or not at all) during the period when the connection was flaky, and not an issue with the disk itself. ZFS detected the bad checksums and healed them using the good copies from AGAPITO2.

-

I cleared the errors again and the pool was clean:

-
zpool clear proxmox-tank-1 ata-ST4000NT001-3M2101_WX11TN0Z
-

Step 7: Final Validation with SMART

-

One more check. I wanted to see if SMART had anything new to say about the drive after all that activity:

-
smartctl -x /dev/sdb | egrep -i 'overall|Reallocated|Pending|CRC|Hardware Resets'
-
SMART overall-health self-assessment test result: PASSED
-  5 Reallocated_Sector_Ct   PO--CK   100   100   010    -    0
-197 Current_Pending_Sector  -O--C-   100   100   000    -    0
-199 UDMA_CRC_Error_Count    -OSRCK   200   200   000    -    0
-0x06  0x008  4              41  ---  Number of Hardware Resets
-

Still passing. The hardware reset count went from 39 to 41 — just the reboots I did during this process.

-

For completeness, I ran the long self-test. The short test only takes a minute and does basic checks; the long test actually reads every sector on the disk, which for a 4TB drive takes... a while.

-
smartctl -t long /dev/sdb
-

The estimated time was about 6 hours. In practice, it took closer to 12. Running VMs in parallel probably didn't help.

-

But eventually:

-
SMART Self-test log structure revision number 1
-Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
-# 1  Extended offline    Completed without error       00%      1563         -
-# 2  Short offline       Completed without error       00%      1551         -
-# 3  Short offline       Completed without error       00%      1462         -
-

The extended test passed. Every sector on the disk is readable. The drive is genuinely healthy — it was just the connection that was bad.

-

Lessons Learned

- -

I'm happy I got to practice recovering from a faulty disk with such a tiny issue. I learned a lot fixing it, and I'm now even happier with my decision to go for this ZFS pool setup.

-

Quick Reference: The Commands

-

For future me (and anyone else who ends up here with a degraded pool):

-
# Check pool status
-zpool status -v <pool>
-
-# Watch kernel logs in real time
-dmesg -Tw
-
-# Check SMART health
-smartctl -H /dev/sdX
-smartctl -x /dev/sdX
-
-# Take a drive offline before physical work
-zpool offline <pool> <device>
-
-# Bring a drive back online
-zpool online <pool> <device>
-
-# Clear error flags after recovery
-zpool clear <pool> [device]
-
-# Run a scrub to verify all data
-zpool scrub <pool>
-
-# Run SMART self-tests
-smartctl -t short /dev/sdX  # Quick test (~1 min)
-smartctl -t long /dev/sdX   # Full surface scan (hours)
-smartctl -l selftest /dev/sdX  # Check test results
-

Thanks for reading! This was Part 3: The Fix. You might also enjoy Part 1: The Setup and Part 2: Diagnosing the Problem.

-

back to home

-
-
- - - - - diff --git a/public/writings/gresham-law-has-nothing-to-do-with-bitcoin.html b/public/writings/gresham-law-has-nothing-to-do-with-bitcoin.html deleted file mode 100644 index 71ec88f..0000000 --- a/public/writings/gresham-law-has-nothing-to-do-with-bitcoin.html +++ /dev/null @@ -1,138 +0,0 @@ - - - - - Pablo here - - - - - - -
-

Hi, Pablo here

-

back to home

-
-
-

Gresham's Law has nothing to do with Bitcoin

-

This is going to be a thorough explanation of a simple thing, but we will take it slow, since this topic somehow causes loads of confusion.

-

Okay, so there are a lot of people in Bitcoin circles who talk about Gresham's Law. They often say, “Gresham's Law states that bad money drives out good money”, then relate it to Bitcoin and the USD, and finally proceed to reason all sorts of things on top of that. But here's some very much needed clarification: Gresham's Law has nothing to do with Bitcoin's relationship to the USD. In fact, it has nothing to do with Bitcoin, or with the current USD for that matter.

- -

Gresham's Law is relevant to a very specific type of monetary system: one where we used coins that contained precious metals (spoiler: we don't live in that period of history anymore). The law states that bad money drives out good money, but what a lot of Bitcoiners seem to miss is the actual meaning of “good” and “bad” in this context. People tend to interpret “good” and “bad” as meaning “hard” and “easy” money, so they reason something like: “Because Bitcoin is harder than the USD, Gresham's Law applies here.” But that is not what Gresham's Law is about at all.

- -

In the context of Gresham's Law, “good” and “bad” refer to face value versus commodity value. That doesn't ring a bell? Let me explain:

- -

Imagine a magic land where there is only one type of coin. There's no other money — just this one coin. These coins state on their face that they contain one gram of gold, and right now, they really do contain one gram of gold. Everyone uses them, and everyone is happy. There's no “bad” money, no “good” money — it's all nice and simple.

- -

Now, let's spice it up a bit.

- -

After some time, a cheeky bastard (typically, a king) comes along and starts making coins that look exactly like the original coins. I'll call these the bad coins. The original coins will be the good coins. Both types of coins say “one gram of gold” on them, but the bad coins only have half a gram of gold actually in them (hence why they are bad).

- -

So, to recap:
- Good coins: one gram of gold on the coin, and actually one gram of gold inside.
- Bad coins: one gram of gold on the coin, but only 0.5 grams of gold inside.

- -

This is where Gresham's Law applies.

- -

People in this coiny fantasy land are not stupid — they know that the gold content is what matters. At some point, someone will realize the bad coins don't have as much gold as they claim and will develop a preference for the good ones. So, if I'm John the Blacksmith and I want to buy some iron, and I have a stash of coins — some good, some bad — I would rather keep the good coins and spend the bad ones. Why? Because I want to keep as much gold as possible, of course.
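If you like seeing the mechanics spelled out, the blacksmith's decision rule fits in a few lines of Python. This is my own toy illustration, not from any economics text:

```python
# Toy model of Gresham's law, matching the coin story above.
# Every coin carries the same face value, but the debased ("bad") coins
# contain less gold. A rational spender settles debts with the coins
# whose commodity value is lowest and hoards the rest.

def coins_to_spend(stash, amount_due):
    """Pick coins covering `amount_due` (in face value), lowest gold first."""
    by_gold = sorted(stash, key=lambda c: c["gold_grams"])
    payment, paid = [], 0.0
    for coin in by_gold:
        if paid >= amount_due:
            break
        payment.append(coin)
        paid += coin["face_value"]
    return payment

stash = [
    {"face_value": 1.0, "gold_grams": 1.0},  # good coin
    {"face_value": 1.0, "gold_grams": 0.5},  # bad coin
    {"face_value": 1.0, "gold_grams": 0.5},  # bad coin
]

# Paying a two-coin debt: the bad coins circulate, the good coin stays home.
print([c["gold_grams"] for c in coins_to_spend(stash, 2.0)])  # [0.5, 0.5]
```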

- -

What happens eventually is that people grow into the habit of trying to get rid of the bad coins and hold on to the good coins. They exploit the confusion created by the fact that all coins have the same face value (it says “one gram” on all coins, so everyone assumes they're worth the same), even though the actual commodity value (the gold inside) differs.[1]

- -

That is the quick explanation of Gresham's law.

- -

Now, back to the original point: what are the face value and commodity value of Bitcoin?

- -

That makes no sense! Bitcoin is not a physical coin with metal in it. It has no concept of face value and commodity value. And neither does the USD nowadays. Therefore, Gresham's Law has absolutely nothing to do with Bitcoin, the USD, or any preferences the world might develop between the two.

- -

Hopefully, this explanation helps make things clear. From now on, if you want to keep your public image intact, please refrain from invoking Gresham's Law when discussing Bitcoin and the USD — because doing so shows you don't know what Gresham's Law is actually about. Don't feel too bad if it has happened to you, though: it can happen even to massive exchanges with a great reputation.

- -

[1] Not relevant to the point of this post, but it's worth noting that the Gresham's Law outcome is not guaranteed in the described scenario. If the difference between the good and bad coins is massive, and no force opposes it, the market might flip to Thiers' Law instead.

-
-

back to home

-
-
- - - \ No newline at end of file diff --git a/public/writings/is-your-drug-dealer-a-homophobic-socialist.html b/public/writings/is-your-drug-dealer-a-homophobic-socialist.html deleted file mode 100644 index e3063c8..0000000 --- a/public/writings/is-your-drug-dealer-a-homophobic-socialist.html +++ /dev/null @@ -1,94 +0,0 @@ - - - - - Pablo here - - - - - - -
-

Hi, Pablo here

-

back to home

-
-
-

Is your drug dealer a homophobic socialist?

-

Lately, I've noticed a branch of cancel culture I've come to find quite disturbing. It seems to have spread mainly in the US, though I think it's starting to happen in Europe too. It's this tendency for people at companies to politically and morally judge business counterparties and conclude that business shouldn't be done with them because of it.

- -

I experienced this firsthand during some afterwork beers, and for some reason the scene got burned into my retina. A colleague of mine, beer in hand, said something like, “We're working with this customer, and they're unbearable because they complain a lot and challenge us all the time when we run the monthly reconciliation. Plus, they're from Israel.” I was mindblown at how casually that was dropped, with not even a footnote-like explanation deemed necessary. I played my five-year-old-child attitude card and asked, “What's the problem with them being in Israel?” She said, “Well, you know, they're in Israel and the whole thing is happening. It's terrible. We shouldn't deal with them.”

- -

I couldn't hold it in: I asked her if her hairdresser was from Israel. She looked at me completely puzzled: “I don't know. Why does that matter?” I told her, “I don't know. Apparently, you're upset about dealing with people from Israel, so I'm assuming you need to check whether everyone you do business with is from there, so you can stop if that's the case.” Silence fell and the air got thick. Someone jumped in with a nervous joke to break up the tension my childlike questions had somehow brought to the room, and the conversation moved on.

- -

Ever since that day, I've seen this kind of social-justice-business-censor thinking pop up a lot. Since that fun first encounter, whenever someone points out how business should not be done with <whatever ideology/country/demographic they don't like>, I jokingly trigger them by asking, “Actually, are you making sure your drug dealer isn't a homophobic socialist?” They generally laugh, not grasping how their stance of politically deciding whom to do business with sounds just as ridiculous to me.

- - - -

Here's what disturbs me: trade is a very civilized act. When we trade—whether it's goods, services, or anything else—we're putting aside our differences and doing something mutually beneficial. We both walk away better off. We hurt no one. We make things a tiny bit better overall. Deciding not to trade with someone because of some political detail which is completely irrelevant to the trade itself is backwards. Even if I didn't like communists, I wouldn't care if a communist is selling me bananas. It just doesn't matter.

- -

Seeing people blow up trade over politics makes me sad. I think it's ignorant and hateful. And I don't think they realize where that kind of thinking can lead.

- -

In the end, I just hope people can leave politics out of business. Let's do business and all be better off thanks to it.

-
-

back to home

-
-
- - - \ No newline at end of file diff --git a/public/writings/my-tips-and-tricks-when-using-postgres-as-a-dwh.html b/public/writings/my-tips-and-tricks-when-using-postgres-as-a-dwh.html deleted file mode 100644 index b99ddaf..0000000 --- a/public/writings/my-tips-and-tricks-when-using-postgres-as-a-dwh.html +++ /dev/null @@ -1,173 +0,0 @@ - - - - - Pablo here - - - - - - - -
-

- Hi, Pablo here -

-

back to home

-
-
-

My tips and tricks when using Postgres as a DWH

-

In November 2023, I joined Superhog (now called Truvi) to start its Data team. As part of that, I also drafted and deployed the first version of its data platform.

-

The context led me to choose Postgres for our DWH. In a time of Snowflakes, BigQueries and Redshifts, this might surprise some. But I can confidently say Postgres has done a great job for us, and I even dare to say it has provided a better experience than other, more trendy alternatives could have. I'll jot down my rationale for picking Postgres one of these days.

-

Back to the topic: Postgres is not intended to act as a DWH, so using it as such might feel a bit hacky at times. There are multiple ways to make your life better with it, as well as related tools and practices that you might enjoy, which I'll try to list here.

-

Use unlogged tables

-

The Write-Ahead Log is active by default for the tables you create, and for good reasons. But in the context of an ELT DWH, it is probably a good idea to deactivate it by making your tables unlogged. Unlogged tables give you much faster writes (roughly twice as fast), which will make data loading and transformation jobs inside your DWH much faster.

-

You pay a price for this with a few trade-offs, the most notable being that if your Postgres server crashes, the contents of the unlogged tables will be lost. But, again, if you have an ELT DWH, you can survive that by running a backfill. At Truvi, we decided to keep the landing area of our DWH logged and everything else unlogged. This means that if we experienced a crash (which still hasn't happened, btw), we would recover by running a full-refresh dbt run.
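Outside dbt, this is plain DDL. A sketch (the table name is made up):

```sql
-- Create a table as unlogged from the start:
CREATE UNLOGGED TABLE stg_orders (
    id        bigint,
    payload   jsonb,
    loaded_at timestamptz
);

-- Or flip an existing table (PostgreSQL 9.5+), and back:
ALTER TABLE stg_orders SET LOGGED;
ALTER TABLE stg_orders SET UNLOGGED;
```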

-

If you are using dbt, you can easily apply this by adding this bit to your dbt_project.yml:

-

-models:
-    +unlogged: true
-            
- -

Tuning your server's parameters

-

Postgres has many parameters you can fiddle with, and plenty of chances to either improve or destroy your server's performance.

-

Postgres ships with default values for them, which are almost surely not the optimal ones for your needs, especially if you are going to use it as a DWH. Simple changes like adjusting work_mem will do wonders to speed up some of your heavier queries.

-

There are many parameters to get familiar with, and proper adjustment must be done taking your specific context and needs into account. If you have no clue at all, this little web app can give you some suggestions to start from.
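If you want to test a setting before committing to it, work_mem (like many parameters) can be tried per session. A sketch, with an illustrative value rather than a recommendation:

```sql
-- Experiment in the current session first:
SET work_mem = '256MB';
-- ...rerun a heavy query under EXPLAIN (ANALYZE) and compare timings...

-- Persist it server-wide once you're happy:
ALTER SYSTEM SET work_mem = '256MB';
SELECT pg_reload_conf();
```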

-

Running VACUUM ANALYZE right after building your tables

-

Out of the box, Postgres automatically runs VACUUM and ANALYZE jobs. The thresholds that determine when each of those gets triggered can be adjusted with a few server parameters. If you follow an ELT pattern, rebuilding your non-staging tables will almost surely cause Postgres to run them.

-

But there's a detail that is easy to overlook. Postgres' automatic triggers will start those jobs quite fast, but not right after you build each table. This poses a performance issue: if intermediate sections of your DWH have tables that build upon tables, rebuilding a table and then rebuilding a dependent without an ANALYZE on the first one in between might hurt you.

-

Let me describe this with an example, because this one is a bit of a tongue twister: let's assume we have tables int_orders and int_order_kpis. int_orders holds all of our orders, and int_order_kpis derives some KPIs from them. Naturally, first you will materialize int_orders from some upstream staging tables, and once that is complete, you will use its contents to build int_order_kpis.

-

Having int_orders ANALYZE-d before you start building int_order_kpis is highly beneficial for the performance of building int_order_kpis. Why? Because perfectly updated statistics and metadata on int_orders help Postgres' query optimizer better plan the query needed to materialize int_order_kpis. This can improve performance by orders of magnitude in some queries, for example by allowing Postgres to pick the right join strategy for the specific data you have.
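In raw SQL, the discipline amounts to one extra statement between the two builds (table names as in the example above):

```sql
-- After materializing int_orders and before building int_order_kpis:
VACUUM ANALYZE int_orders;
```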

-

Now, will Postgres auto VACUUM ANALYZE the freshly built int_orders before you start building int_order_kpis? Hard to tell. It depends on how you build your DWH and how you've tuned your server's parameters. And the most dangerous bit is that you're not in full control: sometimes it happens, other times it doesn't. Flaky and annoying. Some day I'll write a post on how this behaviour drove me mad for two months because it made a model sometimes build in a few seconds, and other times in >20min.

-

My advice is to make sure you always VACUUM ANALYZE right after building your tables. If you're using dbt, you can easily achieve this by adding this to your project's dbt_project.yml:


-models:
-    +post-hook:
-        sql: "VACUUM ANALYZE {{ this }}"
-        transaction: false
-        # ^ This makes dbt run a VACUUM ANALYZE on the models after building each.
-        # It's pointless for views, but it doesn't matter because Postgres fails
-        # silently without raising an unhandled exception.
-            
-

-

Monitor queries with pg_stat_statements

-

pg_stat_statements is an extension that nowadays ships with Postgres by default. If activated, it logs info on the queries executed in the server, which you can inspect afterwards. This includes many details; how frequently each query gets called and its min, max and mean execution times are probably the ones you care about the most. Looking at those allows you to find queries that take long each time they run, and queries that get run a lot.

-

Another important piece of info that gets recorded is who ran the query. This is helpful because, if you use users in a smart way, it can help you isolate expensive queries by use case or area. For example, if you use different users to build the DWH and to give your BI tool read access (you do that... right?), you can easily tell apart dashboard-related queries from internal DWH transformation ones. Another example could be internal reporting vs embedded analytics in your product: you might have stricter performance SLAs for product-embedded, customer-facing queries than for internal dashboards. Using different users and pg_stat_statements makes it possible to dissect performance issues in those separate areas independently.
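As a starting point, a query along these lines surfaces the heavy hitters (column names follow recent Postgres versions; Postgres 12 and older use total_time/mean_time instead):

```sql
SELECT userid::regrole                    AS run_by,
       calls,
       round(mean_exec_time::numeric, 1)  AS mean_ms,
       round(total_exec_time::numeric)    AS total_ms,
       left(query, 80)                    AS query_start
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
```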

-

Dalibo's wonderful execution plan visualizer

-

Sometimes you'll have some nasty query you just need to sit down with and optimize. In my experience, in a DWH this ends up happening with queries that involve many large tables in sequential joining and aggregation steps (as in: you join a few tables, group to some granularity, join some more, group again, etc.).

-

You can get the query's real execution details with EXPLAIN ANALYZE, but the output's readability is on par with morse-encoded regex patterns. I always got headaches dealing with them until I came across Dalibo's execution plan visualizer. You can paste the output of EXPLAIN ANALYZE there and see the query execution presented as a diagram. No amount of words will portray accurately how awesome the UX is, so I encourage you to try the tool with some nasty query and see for yourself.
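To get a plan you can paste into the visualizer, asking Postgres for JSON output tends to paste most reliably (the query below is a made-up stand-in, reusing the int_orders table from earlier; customer_id is hypothetical):

```sql
EXPLAIN (ANALYZE, BUFFERS, FORMAT JSON)
SELECT customer_id, count(*)
FROM int_orders
GROUP BY customer_id;
```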

-

Local dev env + Foreign Data Wrapper

-

One of the awesome things about using Postgres is how trivial it is to spin up an instance. This makes goofing around much simpler than when setting up a new instance means paperwork, $$$, etc.

-

Data team members at Truvi have a dockerized Postgres running on their laptops that they can use when developing on our DWH dbt project. In the early days, you could grab a production dump with some subset of tables from our staging layer and, if you were patient, run significant chunks of our dbt DAG on your laptop.

-

A few hundred models later, this became increasingly difficult and finally impossible.

-

Luckily, we came across Postgres' Foreign Data Wrapper. There's quite a bit to it, but to keep it short here: FDW allows one Postgres server to give access to tables that live in a different Postgres server while pretending they are local. So you query table X in Postgres server A, even though table X is actually stored in Postgres server B, and your query works just the same as if it were a genuine local table.
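The setup looks roughly like this with the postgres_fdw extension (hostnames, schema and credentials below are all made up for illustration):

```sql
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

CREATE SERVER prod_dwh
  FOREIGN DATA WRAPPER postgres_fdw
  OPTIONS (host 'prod-dwh.internal', port '5432', dbname 'dwh');

CREATE USER MAPPING FOR CURRENT_USER
  SERVER prod_dwh
  OPTIONS (user 'readonly_dev', password 'changeme');

-- Expose production's staging schema as local foreign tables:
IMPORT FOREIGN SCHEMA staging
  FROM SERVER prod_dwh
  INTO staging;
```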

-

Setting these up is fairly trivial, and it has allowed our dbt project contributors to execute hybrid dbt runs where some data and tables are local to their laptop, while some upstream data is read from the production server. The approach has been great so far, letting them actually test models in a convenient way before committing them to master.

-
-

back to home

-
-
- - - - \ No newline at end of file diff --git a/public/writings/notes-and-lessons-from-my-departure-from-superhog.html b/public/writings/notes-and-lessons-from-my-departure-from-superhog.html deleted file mode 100644 index a38a812..0000000 --- a/public/writings/notes-and-lessons-from-my-departure-from-superhog.html +++ /dev/null @@ -1,203 +0,0 @@ - - - - - Pablo here - - - - - - - -
-

- Hi, Pablo here -

-

back to home

-
-
-

Notes for myself during my departure from Superhog

-

I'm writing this a few days before my last day at Superhog (now called Truvi). Having a few company departures under my belt already, I know a bit about what comes next. I know one part of the drill is that 99% of the details of what happened during my tenure will mostly disappear from my memory, triggered only by eerily coincidental cues here and there every few years. I will clearly remember a few crucial, exciting days and situations. I will also hold on well to the names and faces of those with whom I worked closely, as well as my personal impression and judgement of them. I will remember the office, and some details of what my daily life was like when I went there.

-

But most other things will be gone from my brain, surprisingly fast.

-

Knowing that experience is a great teacher, and regretting not doing this in the past, I've decided to collect a few notes from my time at Superhog, hoping they will help the lessons I've learnt here stick properly.

- -
-

back to home

-
-
- - - - \ No newline at end of file diff --git a/public/writings/why-i-put-my-vms-on-a-zfs-mirror.html b/public/writings/why-i-put-my-vms-on-a-zfs-mirror.html deleted file mode 100644 index 77301cb..0000000 --- a/public/writings/why-i-put-my-vms-on-a-zfs-mirror.html +++ /dev/null @@ -1,120 +0,0 @@ - - - - - Pablo here - - - - - - - -
-

- Hi, Pablo here -

-

back to home

-
-

Why I Put My VMs on a ZFS Mirror

-

Part 1 of 3 in my "First ZFS Degradation" series. Also read Part 2: Diagnosing the Problem and Part 3: The Fix.

-

Why This Series Exists

-

A few weeks into running my new homelab server, I stumbled upon something I wasn't expecting to see that early: my ZFS pool was in "DEGRADED" state. One of my two mirrored drives had gone FAULTED.

-

This was the first machine I had set up with a ZFS mirror, precisely so I could deal with disk issues smoothly, without losing data or suffering downtime. Although spotting the problem felt like a pain in the ass, I was also happy, because it gave me a chance to drill the kind of disk maintenance I was hoping to do on this new server.

-

But here's the thing: when I was in the middle of it, I couldn't find a single resource that walked through the whole experience in detail. Plenty of docs explain what ZFS is. Plenty of forum posts have people asking "help my pool is degraded." But nothing that said "here's what it actually feels like to go through this, step by step, with all the commands and logs and reasoning behind the decisions."

-

So I wrote it down. I took a lot of notes during the process and crafted a more or less organized story from them. This three-part series is for fellow amateur homelabbers who are curious about ZFS, maybe a little intimidated by it, and want to know what happens when things go sideways. I wish I had found a very detailed log like this when I was researching ZFS initially. Hope it helps you.

-

The server and disks

-

My homelab server is a modest but capable box I built in late 2025. It has decent consumer hardware, but nothing remarkable. I'll only specify that I currently have three disks in it:

- -

The two IronWolf drives are where this story takes place. I labeled them AGAPITO1 and AGAPITO2 because... well, every pair of drives deserves a silly name. I have issues remembering serial numbers.

-

The server runs Proxmox and hosts most of my self-hosted life: personal services, testing VMs, and my Bitcoin infrastructure (which I share over at bitcoininfra.contrapeso.xyz). If this pool goes down, everything goes down.

-

Why ZFS?

-

I'll be honest: I didn't overthink this decision. ZFS is the default storage recommendation for Proxmox, it has a reputation for being rock-solid, and I'd heard enough horror stories about silent data corruption to want something with checksumming built in.

-

What I was most interested in was the ability to define RAID setups in software and deal easily with disks going in and out of them. I had never gone beyond the naive "one disk for the OS, one disk for data" setup in previous servers. After having disks fail on me in previous boxes, I decided it was time to gear up and do it properly this time. My main concern initially was just saving time: it's messy when a "simple" host has disk issues, and I hoped mirroring would let me spend less time cleaning up disasters.

-

Why a Mirror?

-

When I set up the pool, I had two 4TB drives. That gave me a few options:

-
    -
  1. Striped (no redundancy): Maximum space (8TB usable across both drives), zero redundancy. One bad sector and you're crying.
  2. -
  3. Mirror: Half the space (4TB usable from 8TB raw), but everything is written to both drives. One drive can completely die and you lose nothing.
  4. -
  5. RAIDZ: Needs at least 3 drives, gives you parity-based redundancy. More space-efficient than mirrors at scale.
  6. -
-

I went with the mirror for a few reasons.

-

First, I only had two drives to start with, so RAIDZ wasn't even an option yet.

-

Second, mirrors are simple. Data goes to both drives. If one dies, the other has everything. No parity calculations, no write penalties, no complexity.

-

Third (and this is the one that sold me), mirrors let you expand incrementally. With ZFS, you can add more mirror pairs (called "vdevs") to your pool later. You can even mix sizes: start with two 4TB drives, add two 8TB drives later, and ZFS will use all of it. RAIDZ traditionally doesn't give you that flexibility; once you set your vdev width, you're stuck with it (RAIDZ expansion only landed in recent OpenZFS releases).
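To make that incremental growth concrete, here's roughly what it looks like at the command line. A sketch only: the first two device paths are my drives' by-id names from later in this post, while the second pair is hypothetical.

```shell
# Create the initial two-way mirror:
zpool create proxmox-tank-1 mirror \
  /dev/disk/by-id/ata-ST4000NT001-3M2101_WX11TN0Z \
  /dev/disk/by-id/ata-ST4000NT001-3M2101_WX11TN2P

# Later, stripe a second mirror vdev onto the pool (hypothetical new disks):
zpool add proxmox-tank-1 mirror \
  /dev/disk/by-id/ata-NEWDRIVE_SERIAL1 \
  /dev/disk/by-id/ata-NEWDRIVE_SERIAL2
```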

-

When Would RAIDZ Make More Sense?

-

If you're starting with 4+ drives and you want to maximize usable space, RAIDZ starts looking attractive:

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
ConfigurationDrivesUsable SpaceFault Tolerance
Mirror250%1 drive
RAIDZ13~67%1 drive
RAIDZ1475%1 drive
RAIDZ2450%2 drives
RAIDZ26~67%2 drives
-

RAIDZ2 is popular for larger arrays because it can survive two drive failures, which matters more as you add drives (more drives = higher chance of one failing during a resilver).

-

But for a two-drive homelab that might grow to four drives someday, I felt a mirror was the right call. I can always add another mirror pair later.

-

The Pool: proxmox-tank-1

-

My ZFS pool is called proxmox-tank-1. Here's what it looks like when everything is healthy:

-
  pool: proxmox-tank-1
- state: ONLINE
-config:
-
-    NAME                                 STATE     READ WRITE CKSUM
-    proxmox-tank-1                       ONLINE       0     0     0
-      mirror-0                           ONLINE       0     0     0
-        ata-ST4000NT001-3M2101_WX11TN0Z  ONLINE       0     0     0
-        ata-ST4000NT001-3M2101_WX11TN2P  ONLINE       0     0     0
-

That's it. One pool, one mirror vdev, two drives. The drives are identified by their serial numbers (the WX11TN0Z and WX11TN2P parts), which is important — ZFS uses stable identifiers so it doesn't get confused if Linux decides to shuffle around /dev/sda and /dev/sdb.

-

All my Proxmox VMs store their virtual disks on this pool. When I create a new VM, I point its storage at proxmox-tank-1 and ZFS handles the rest.

-

What Could Possibly Go Wrong?

-

Everything was humming along nicely. VMs were running fine and I was feeling pretty good about my setup.

-

Then, a few weeks in, I was poking around the Proxmox web UI and noticed something that caught my eye.

-

The ZFS pool was DEGRADED. One of my drives — AGAPITO1, serial WX11TN0Z — was FAULTED.

-

In Part 2, I'll walk through how I diagnosed what was actually wrong. Spoiler: the drive itself was fine. The problem was much dumber than that.

-

Continue to Part 2: Diagnosing the Problem

-

back to home

-
-
- - - - -