Technologies | August 10, 2023

HomeLab: How DevOps manage information and maintain high availability of the home infrastructure 

Get to know how a DevOps engineer computes, stores, preserves, presents, and plays with information, while also maintaining high availability of the home infrastructure using enterprise-grade software and hardware.

Introduction (Tale)

Once upon a time, back in the Stone Age of my childhood, I possessed a PC that was considered the bee’s knees, at least in those ancient times. It boasted a whopping 80 gigabytes of hard disk space, an almighty Intel Core 2 Duo, and a mind-blowing 2 gigabytes of RAM. Oh, the joy it brought me! I could frolic in the virtual realms of games, bask in the glow of movies, groove to my favorite tunes, and conquer the daunting realm of homework.

But alas, this blissful existence was not meant to last. After a few months, I stumbled upon the mystical realm of the Internet – P2P, Linux, and all that jazz. As fate would have it, my music library started multiplying faster than gremlins in a swimming pool. It dawned on me that I desperately needed more space. Ah, the dark ages of hard disk scarcity, when they cost a fortune! I was left with no choice but to embark on a ruthless mission of deletion.

Fast forward to the present day, where I now possess a bookshelf adorned with BluRays, a vast collection of CDs, an arsenal of PS2 games, and more books than a library. As a self-proclaimed DevOps maestro and “data enthusiast”, I felt obligated to digitize and preserve all these treasures for future generations – especially my own spawn. I have a sneaking suspicion that my kids will go bonkers for those ancient PS2 games.

And lo and behold, after two decades of technological advancement, I no longer suffer from the “not enough space” syndrome. Behold, my private HomeLab/NAS/ hybrid, lovingly crafted to house my hoard of goodies. It may not be the size of a small planet, but its humble 96 terabytes of raw space (considered puny by today’s standards) somehow manages to accommodate my vast collection.

But enough reminiscing, let us venture back to where it all began, where my tale unfolds like a nostalgic Windows 95 screensaver.

Tales of HomeLabs and the Adventures of their Owners (as I see it)

The First Step of the Journey

Every self-proclaimed “data enthusiast” embarks on their grand adventure with a humble image and a relatable comment:

“Look at this chaos! External disks are scattered everywhere. I must gather all this storage into one central haven”.

The Second Stage of Progress

For those who venture further into the realm of HomeLabs, especially the tech-savvy bunch, their journey takes a different turn:


“Behold my Raspberry Pi-like army, united in a glorious cluster. But how do I bestow upon them the gift of storage?”. 

The “JBOA – Just a Bunch of Anything” Interlude

Merge the previous images, and witness the birth of a typical Reddit-esque HomeLab, providing solutions to 90% of their troubles:


“See here! Raspberry Pi, NAS devices, external disks, old PCs, and old laptops, all are linked together by the almighty TrueNAS-ish software. My storage now thrives under a unified SMB share. It’s a triumph!”.

“The Dawn of the Small Server Rack”

Yet, in this tale, a crucial question arises: “Is this enough for me?”.
Alas, the answer tends to be a resounding “no”. Thus begins the true HomeLab, where enthusiasts acquire old (and sometimes new) server relics, and wield their magical powers:


“This shall suffice! Let the enchantment begin!”.

The Era of the Grand Server Rack (The Final HomeLab Stage)

However, as time marches on, their HomeLab transforms into a behemoth, and desperate pleas echo through the halls of Reddit:

“How do I escape this labyrinth? My HomeLab has consumed all available space. I must find a larger abode!”.

History of my HomeLabs

That is the end of the tale, for now. Let’s turn to facts and real examples from my own history.

My journey began with a straightforward upgrade from a 250 GB HDD to a WD MyBook 1 TB hard drive. However, it quickly became apparent that the storage space was still insufficient. In search of a solution, I stumbled upon two valuable resources: r/DataHoarder and r/HomeLab. After immersing myself in countless articles and delving into the subreddits’ Wiki pages, I gained the necessary knowledge.

My initial choice was the WD PR4100, a relatively straightforward option. I equipped it with four 4 TB WD RED HDDs and configured it in a JBOD setup. However, my further research on those subreddits revealed that this setup lacked redundancy protections. To address this, I educated myself on RAID levels and the distinctions between hardware and software RAIDs. This led me to the ZFS project—a software RAID solution. Initially, it seemed impossible to install ZFS without modifying the hardware. To seek answers, I turned to the WD forums and discovered an alternative – installing a different OS than WD CloudOS. I opted for Ubuntu Server and, after a few days of working with Ansible and KVM, achieved success:
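A quick aside on the RAID-levels homework mentioned above: RAIDZ1 (roughly the software analogue of RAID 5) pays for its redundancy with one disk’s worth of parity per vdev, which is easy to sanity-check with plain shell arithmetic:

```shell
# Usable capacity of an n-disk RAIDZ1 vdev: (n - 1) disks' worth of data,
# one disk's worth of parity. Values below match the 4x 4 TB setup above.
n=4          # disks in the vdev
size_tb=4    # capacity per disk, in TB
usable=$(( (n - 1) * size_tb ))
echo "RAIDZ1 ${n}x${size_tb}TB -> ~${usable} TB usable, ${size_tb} TB lost to parity"
```

So four 4 TB disks give roughly 12 TB of protected space instead of 16 TB unprotected, and the pool survives any single-disk failure.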

But the story didn’t end there. Although I managed to install ZFS on Ubuntu Server, I couldn’t use it as a boot drive. Consequently, I had to install it on a USB drive. Unfortunately, this decision proved to be a significant mistake as the USB drive became corrupted after a few months, rendering my NAS unbootable. Thankfully, I had automated the reinstallation process using Ansible.

Next, I explored the possibility of installing a lightweight variation of Kubernetes called K3s on my NAS. While the installation was successful, K3s immediately consumed a significant portion of the CPU time (80%) and RAM (90%). Considering the NAS had only 4 GB of RAM, an Intel Pentium N3710 processor, and the OS running from a USB drive, I had to abandon this idea. However, my pursuit continued, and after further research, I came across the concept of a “small business NAS” from HPE – the HPE ProLiant MicroServer Gen10.

It proved to be an ideal solution for my requirements, offering four 3.5″ HDD bays, one 2.5″ SSD bay (sold separately), 8 GB of RAM, and an AMD Opteron X3216 processor. I purchased it second-hand, installed Ubuntu Server, and began migrating my data from the WD PR4100 by gradually replacing the disks. The end result was this tower configuration:


However, I soon realized that the 8 GB of RAM was not sufficient to run any workloads on the Kubernetes (K8s) cluster, such as Prometheus, Grafana, and others. Therefore, an additional upgrade was necessary to meet my requirements.

My current HomeLab

This time, without compromises (within reason).

After months of research, comparing hardware, counting my savings, and taking a little inspiration from Linus Tech Tips on YouTube, I’ve finally decided to build this monster:


My HomeLab Hardware

Fractal Design Define 7 XL is a big case, and a big case means lots of space, and a lot of space needs plenty of airflow moving through it. So my choice here was 8x 120 mm and 1x 140 mm Be Quiet! Silent Wings Pro 4 PWM fans:


Such a big case allows the installation of a big motherboard, so I’ve chosen the Asus X99-E WS. And a powerful motherboard requires a powerful CPU, here with a whopping 22 cores / 44 threads – the Intel Xeon E5-2699 v4. And so as not to be outdone by the CPU, the RAM banks have been filled with 8x 32 GB (256 GB) of DDR4 ECC RAM:


A powerful CPU requires powerful cooling, so I’ve chosen the Be Quiet! Dark Rock Pro 4. And for additional codec workloads (transcoding) – a Radeon Pro WX 7100:


A lot of desired storage requires a lot of SATA ports, so I’ve chosen LSI 9400-16i HBA card with 4x SFF-8643 ports:

And a lot of SATA ports require a lot of cables, so I’ve chosen 4x SFF-8643 to SATA3 cables.


This storage needs to be mounted somewhere, so I’ve chosen 16x Fractal Design HDD trays. And finally storage itself: 4x WD Red 4TB and 4x WD Ultrastar HC550 16TB HDDs:


But HDDs are slow on their own, so I had to speed them up with 2x QNAP QM2-4P-384 M.2 NVMe SSD PCIe cards and 8x WD Red SN700 2 TB NVMe SSDs (the 2000 GB drives visible in the pool listings below):


And finally, all this hardware requires a lot of power, so I’ve chosen the Be Quiet! Dark Power Pro 12 1500W PSU.
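For completeness, here is the raw-capacity math behind the 96 TB figure from the introduction (counting the SN700 NVMe drives at the 2 TB per unit that the pool listings report, in vendor-decimal TB):

```shell
# Raw capacity: 4x 4 TB WD Red + 4x 16 TB Ultrastar HC550 + 8x 2 TB SN700.
red=$(( 4 * 4 ))
ultrastar=$(( 4 * 16 ))
nvme=$(( 8 * 2 ))
total=$(( red + ultrastar + nvme ))
echo "raw total: ${total} TB"
```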

We add all together, do some cable management, and voilà.



> zpool list
NAME      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH
cheetah   7.27T   825G  6.46T        -         -     0%    11%  1.00x    ONLINE
elephant  58.2T  14.6T  43.6T        -         -     0%    25%  1.00x    ONLINE
tortoise  14.5T   855G  13.7T        -         -    14%     5%  1.00x    ONLINE
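As a rough sketch of how such a listing can be folded into a single usage number, here is the ALLOC column from above (825G, 14.6T, 855G) summed into TiB with awk; the values are pasted in by hand, so adjust to taste:

```shell
# Sum zpool ALLOC figures with G/T suffixes, normalized to TiB.
alloc='825G 14.6T 855G'
total=$(printf '%s\n' "$alloc" | LC_ALL=C awk '{
  t = 0
  for (i = 1; i <= NF; i++) {
    u = substr($i, length($i))            # unit suffix: G or T
    n = substr($i, 1, length($i) - 1)     # numeric part
    t += ((u == "T") ? n : n / 1024)      # G -> TiB
  }
  printf "%.1f", t
}')
echo "allocated across pools: ${total} TiB"
```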


> zpool status
  pool: cheetah
 state: ONLINE

        NAME                                       STATE     READ WRITE CKSUM
        cheetah                                    ONLINE       0     0     0
          raidz1-0                                 ONLINE       0     0     0
            nvme-WD_Red_SN700_2000GB_23024R800274  ONLINE       0     0     0
            nvme-WD_Red_SN700_2000GB_23024R800291  ONLINE       0     0     0
            nvme-WD_Red_SN700_2000GB_23024R800133  ONLINE       0     0     0
            nvme-WD_Red_SN700_2000GB_23024R800267  ONLINE       0     0     0

errors: No known data errors

  pool: elephant
 state: ONLINE

        NAME                                       STATE     READ WRITE CKSUM
        elephant                                   ONLINE       0     0     0
          raidz1-0                                 ONLINE       0     0     0
            scsi-SATA_WDC_WUH721816AL_2BKW62BT     ONLINE       0     0     0
            scsi-SATA_WDC_WUH721816AL_2BKZBNDT     ONLINE       0     0     0
            scsi-SATA_WDC_WUH721816AL_2PJ0LDMT     ONLINE       0     0     0
            scsi-SATA_WDC_WUH721816AL_4YGA3LWH     ONLINE       0     0     0
          mirror-1                                 ONLINE       0     0     0
            nvme-WD_Red_SN700_2000GB_23024R800200  ONLINE       0     0     0
            nvme-WD_Red_SN700_2000GB_23024R800835  ONLINE       0     0     0

errors: No known data errors

  pool: tortoise
 state: ONLINE

        NAME                                       STATE     READ WRITE CKSUM
        tortoise                                   ONLINE       0     0     0
          raidz1-0                                 ONLINE       0     0     0
            scsi-SATA_WDC_WD4003FFBX-6_V1J8JJBG    ONLINE       0     0     0
            scsi-SATA_WDC_WD4003FFBX-6_V1JAPH9G    ONLINE       0     0     0
            scsi-SATA_WDC_WD4003FFBX-6_V1JASWGG    ONLINE       0     0     0
            scsi-SATA_WDC_WD4003FFBX-6_VBGJN8SF    ONLINE       0     0     0
          mirror-1                                 ONLINE       0     0     0
            nvme-WD_Red_SN700_2000GB_23024R800705  ONLINE       0     0     0
            nvme-WD_Red_SN700_2000GB_23024R800272  ONLINE       0     0     0

errors: No known data errors


PS2/Games/Age Of Empires 2.iso
    634,811,856 100% 321.85MB/s     0:00:01 (xfr#33, ir-chk=2720/2780)
PS2/Games/Crash Nitro Kart.iso
  3,164,798,976 100% 468.01MB/s     0:00:06 (xfr#34, ir-chk=2719/2780)
PS2/Games/Deus Ex - The Conspiracy.iso
  1,383,399,424 100% 357.15MB/s     0:00:03 (xfr#35, ir-chk=2718/2780)
PS2/Games/Devil May Cry 2 Dante.iso
  4,698,767,360 100% 471.99MB/s     0:00:09 (xfr#36, ir-chk=2717/2780)
PS2/Games/Devil May Cry 2 Lucia.iso
  4,698,767,360 100% 482.67MB/s     0:00:09 (xfr#37, ir-chk=2716/2780)
PS2/Games/Devil May Cry 3 Special Edition.iso
  4,499,308,544 100% 427.42MB/s     0:00:10 (xfr#38, ir-chk=2715/2780)
PS2/Games/Devil May Cry.iso
  1,246,822,400  26% 594.83MB/s     0:00:05

Benchmark results:

# Small, medium, and large files (4k/1m/32m) on sequential and random read/write
> fio \
  --direct=1 \
  --rw=[write|read|randwrite|randread] \
  --bs=[4k|1m|32m] \
  --iodepth=1 \
  --runtime=120 \
  --numjobs=1 \
  --time_based \
  --group_reporting \
  --name=iops-speed-test-job
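The bracketed placeholders above stand for a 4x3 test matrix (four access patterns times three block sizes). A small driver loop, shown here just printing the commands rather than running them, could expand it like this (the per-run `--name` scheme is my own invention, not the original job name):

```shell
# Print the 12 concrete fio invocations behind the bracketed template.
runs=$(
  for rw in write read randwrite randread; do
    for bs in 4k 1m 32m; do
      printf 'fio --direct=1 --rw=%s --bs=%s --iodepth=1 --runtime=120 --numjobs=1 --time_based --group_reporting --name=iops-%s-%s\n' \
        "$rw" "$bs" "$rw" "$bs"
    done
  done
)
echo "$runs"
```

Piping each printed line to `sh` would execute the whole matrix back to back.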

ZFS Pool  Test type  BS   Speed (avg)    IOPS (avg)
Cheetah   write      4k   358.90 MiB/s   89.72K
Cheetah   write      1m   3.65 GiB/s     3655
Cheetah   write      32m  2.75 GiB/s     85
Cheetah   randwrite  4k   350.21 MiB/s   58.54K
Cheetah   randwrite  1m   3.62 GiB/s     3626
Cheetah   randwrite  32m  2.85 GiB/s     89
Cheetah   read       4k   734.14 MiB/s   183.53K
Cheetah   read       1m   5.46 GiB/s     5.46K
Cheetah   read       32m  4.54 GiB/s     142
Cheetah   randread   4k   592.61 MiB/s   148.15K
Cheetah   randread   1m   5.18 GiB/s     5.18K
Cheetah   randread   32m  4.57 GiB/s     143
Elephant  write      4k   335.33 MiB/s   83.83K
Elephant  write      1m   754.25 MiB/s   736
Elephant  write      32m  671.65 MiB/s   20
Elephant  randwrite  4k   302.10 MiB/s   75.52K
Elephant  randwrite  1m   571.56 MiB/s   558
Elephant  randwrite  32m  745.84 MiB/s   22
Elephant  read       4k   718.03 MiB/s   179.50K
Elephant  read       1m   5.40 GiB/s     5400
Elephant  read       32m  4.56 GiB/s     142
Elephant  randread   4k   596.53 MiB/s   149.13K
Elephant  randread   1m   5.13 GiB/s     5133
Elephant  randread   32m  4.47 GiB/s     139
Tortoise  write      4k   243.74 MiB/s   60.93K
Tortoise  write      1m   452.69 MiB/s   442
Tortoise  write      32m  509.24 MiB/s   15
Tortoise  randwrite  4k   224.53 MiB/s   56.13K
Tortoise  randwrite  1m   475.20 MiB/s   464
Tortoise  randwrite  32m  581.77 MiB/s   17
Tortoise  read       4k   731.10 MiB/s   182.77K
Tortoise  read       1m   5.15 GiB/s     5151
Tortoise  read       32m  4.53 GiB/s     141
Tortoise  randread   4k   570.39 MiB/s   142.59K
Tortoise  randread   1m   5.14 GiB/s     5145
Tortoise  randread   32m  4.56 GiB/s     142

Two more things:

ZFS LOG Cache on write (pay attention to the w=... values):

> fio \
  --direct=1 \
  --rw=write \
  --bs=32m \
  --iodepth=1 \
  --runtime=120 \
  --numjobs=1 \
  --time_based \
  --group_reporting \
  --name=iops-speed-test-job \
  --size=1g

iops-speed-test-job: (g=0): rw=write, bs=(R) 32.0MiB-32.0MiB, (W) 32.0MiB-32.0MiB, (T) 32.0MiB-32.0MiB, ioengine=psync, iodepth=1
Starting 1 process
Jobs: 1 (f=1): [W(1)][6.6%][w=2430MiB/s][w=75 IOPS][eta 01m:53s]
Jobs: 1 (f=1): [W(1)][11.6%][w=2723MiB/s][w=85 IOPS][eta 01m:47s]
Jobs: 1 (f=1): [W(1)][16.5%][w=256MiB/s][w=8 IOPS][eta 01m:41s]
Jobs: 1 (f=1): [W(1)][21.5%][w=96.0MiB/s][w=3 IOPS][eta 01m:35s]
Jobs: 1 (f=1): [W(1)][26.7%][w=64.0MiB/s][w=2 IOPS][eta 01m:28s]
Jobs: 1 (f=1): [W(1)][31.4%][w=32.0MiB/s][w=1 IOPS][eta 01m:23s]
Jobs: 1 (f=1): [W(1)][36.4%][w=32.0MiB/s][w=1 IOPS][eta 01m:17s]
Jobs: 1 (f=1): [W(1)][41.3%][w=512MiB/s][w=16 IOPS][eta 01m:11s]
Jobs: 1 (f=1): [W(1)][46.3%][w=256MiB/s][w=8 IOPS][eta 01m:05s]
Jobs: 1 (f=1): [W(1)][51.2%][w=96.1MiB/s][w=3 IOPS][eta 00m:59s]
Jobs: 1 (f=1): [W(1)][56.2%][w=64.1MiB/s][w=2 IOPS][eta 00m:53s]
Jobs: 1 (f=1): [W(1)][61.2%][w=833MiB/s][w=26 IOPS][eta 00m:47s]
Jobs: 1 (f=1): [W(1)][66.1%][w=160MiB/s][w=5 IOPS][eta 00m:41s]
Jobs: 1 (f=1): [W(1)][71.1%][w=64.0MiB/s][w=2 IOPS][eta 00m:35s]
Jobs: 1 (f=1): [W(1)][76.0%][w=128MiB/s][w=4 IOPS][eta 00m:29s]
Jobs: 1 (f=1): [W(1)][81.0%][w=64.1MiB/s][w=2 IOPS][eta 00m:23s]
Jobs: 1 (f=1): [W(1)][86.0%][w=32.0MiB/s][w=1 IOPS][eta 00m:17s]
Jobs: 1 (f=1): [W(1)][90.9%][w=2208MiB/s][w=69 IOPS][eta 00m:11s]
Jobs: 1 (f=1): [W(1)][95.9%][w=1568MiB/s][w=49 IOPS][eta 00m:05s]
Jobs: 1 (f=1): [W(1)][100.0%][w=2498MiB/s][w=78 IOPS][eta 00m:00s]
iops-speed-test-job: (groupid=0, jobs=1): err= 0: pid=933058:
  write: IOPS=13, BW=439MiB/s (461MB/s)(51.5GiB/120002msec); 0 zone resets
    clat (msec): min=8, max=2301, avg=71.66, stdev=162.35
     lat (msec): min=9, max=2303, avg=72.85, stdev=162.40
    clat percentiles (msec):
     |  1.00th=[    9],  5.00th=[    9], 10.00th=[   10], 20.00th=[   10],
     | 30.00th=[   11], 40.00th=[   11], 50.00th=[   16], 60.00th=[   21],
     | 70.00th=[   56], 80.00th=[   80], 90.00th=[  155], 95.00th=[  330],
     | 99.00th=[  793], 99.50th=[ 1099], 99.90th=[ 1938], 99.95th=[ 2299],
     | 99.99th=[ 2299]
   bw (  KiB/s): min=65536, max=3276800, per=100.00%, avg=509240.77, stdev=757052.72, samples=208
   iops        : min=    2, max=  100, avg=15.53, stdev=23.09, samples=208
  lat (msec)   : 10=27.44%, 20=32.54%, 50=6.80%, 100=18.34%, 250=7.95%
  lat (msec)   : 500=4.37%, 750=1.46%, 1000=0.43%, 2000=0.61%, >=2000=0.06%
  cpu          : usr=1.77%, sys=17.57%, ctx=164430, majf=0, minf=10
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,1647,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=439MiB/s (461MB/s), 439MiB/s-439MiB/s (461MB/s-461MB/s), io=51.5GiB (55.3GB), run=120002-120002msec

ZFS ARC Cache on read (pay attention to the r=... values):

# Large file (128m) on random read
> fio \
  --direct=1 \
  --rw=randread \
  --bs=128m \
  --iodepth=1 \
  --runtime=120 \
  --numjobs=8 \
  --time_based \
  --group_reporting \
  --name=iops-speed-test-job \
  --size=1g

iops-speed-test-job: (g=0): rw=randread, bs=(R) 128MiB-128MiB, (W) 128MiB-128MiB, (T) 128MiB-128MiB, ioengine=psync, iodepth=1
Starting 8 processes
Jobs: 8 (f=8): [r(8)][5.8%][r=9728MiB/s][r=76 IOPS][eta 01m:54s]
Jobs: 8 (f=8): [r(8)][10.7%][r=9984MiB/s][r=78 IOPS][eta 01m:48s]
Jobs: 8 (f=8): [r(8)][15.7%][r=5766MiB/s][r=45 IOPS][eta 01m:42s]
Jobs: 8 (f=8): [r(8)][20.7%][r=6278MiB/s][r=49 IOPS][eta 01m:36s]
Jobs: 8 (f=8): [r(8)][25.6%][r=6278MiB/s][r=49 IOPS][eta 01m:30s]
Jobs: 8 (f=8): [r(8)][30.6%][r=6144MiB/s][r=48 IOPS][eta 01m:24s]
Jobs: 8 (f=8): [r(8)][35.5%][r=6663MiB/s][r=52 IOPS][eta 01m:18s]
Jobs: 8 (f=8): [r(8)][40.5%][r=6663MiB/s][r=52 IOPS][eta 01m:12s]
Jobs: 8 (f=8): [r(8)][45.5%][r=7560MiB/s][r=59 IOPS][eta 01m:06s]
Jobs: 8 (f=8): [r(8)][50.4%][r=9344MiB/s][r=73 IOPS][eta 01m:00s]
Jobs: 8 (f=8): [r(8)][55.8%][r=8832MiB/s][r=69 IOPS][eta 00m:53s]
Jobs: 8 (f=8): [r(8)][60.3%][r=9472MiB/s][r=74 IOPS][eta 00m:48s]
Jobs: 8 (f=8): [r(8)][65.3%][r=9600MiB/s][r=75 IOPS][eta 00m:42s]
Jobs: 8 (f=8): [r(8)][70.2%][r=9610MiB/s][r=75 IOPS][eta 00m:36s]
Jobs: 8 (f=8): [r(8)][75.2%][r=9481MiB/s][r=74 IOPS][eta 00m:30s]
Jobs: 8 (f=8): [r(8)][80.2%][r=9738MiB/s][r=76 IOPS][eta 00m:24s]
Jobs: 8 (f=8): [r(8)][85.1%][r=9856MiB/s][r=77 IOPS][eta 00m:18s]
Jobs: 8 (f=8): [r(8)][90.1%][r=10.0GiB/s][r=80 IOPS][eta 00m:12s]
Jobs: 8 (f=8): [r(8)][95.0%][r=10.3GiB/s][r=82 IOPS][eta 00m:06s]
Jobs: 8 (f=8): [r(8)][100.0%][r=10.0GiB/s][r=80 IOPS][eta 00m:00s]
iops-speed-test-job: (groupid=0, jobs=8): err= 0: pid=1182625: Mon Jun 26 13:44:18 2023
  read: IOPS=67, BW=8658MiB/s (9078MB/s)(1015GiB/120082msec)
    clat (msec): min=30, max=7698, avg=118.25, stdev=581.24
     lat (msec): min=30, max=7698, avg=118.25, stdev=581.24
    clat percentiles (msec):
     |  1.00th=[   32],  5.00th=[   33], 10.00th=[   36], 20.00th=[   41],
     | 30.00th=[   46], 40.00th=[   55], 50.00th=[   75], 60.00th=[   80],
     | 70.00th=[   83], 80.00th=[   87], 90.00th=[  100], 95.00th=[  102],
     | 99.00th=[  107], 99.50th=[ 6544], 99.90th=[ 6745], 99.95th=[ 7215],
     | 99.99th=[ 7684]
   bw (  MiB/s): min= 2048, max=21504, per=100.00%, avg=14738.69, stdev=642.94, samples=1122
   iops        : min=   16, max=  168, avg=114.65, stdev= 5.08, samples=1122
  lat (msec)   : 50=36.70%, 100=55.75%, 250=6.70%, 500=0.04%, 750=0.01%
  lat (msec)   : >=2000=0.80%
  cpu          : usr=0.02%, sys=99.28%, ctx=14764, majf=0, minf=262217
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=8122,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=8658MiB/s (9078MB/s), 8658MiB/s-8658MiB/s (9078MB/s-9078MB/s), io=1015GiB (1090GB), run=120082-120082msec


Being well-informed about the state of your system is critical to maintaining its health. So any information that can be gleaned from the system is valuable. The more observability you have, the better.



Imagine you’re the captain of the Data Defender spaceship, soaring through the digital galaxy. Your crew of heroic disks stands united, ready to face any peril that comes their way. But alas! Amid your interstellar journey, disaster strikes! One of your loyal disks malfunctions and cries out, “Houston, we have a problem!”.

Fear not, intrepid captain! The Data Defender is equipped with the legendary RAIDZ1, a shield of redundancy that will keep your data safe and sound. It’s like having a team of cosmic superheroes at your command.

As soon as the distress signal is detected, your trusty crew leaps into action. They diligently diagnose the issue, like brilliant space engineers on a mission. Once they identify the culprit, it’s time to bring in the reinforcements.

With the faulty disk replaced by a shiny new recruit, the resilvering process commences. It’s like a cosmic restoration as if the universe itself is working its magic. Bit by bit, byte by byte, the missing data is reconstructed and woven back into the fabric of your digital universe.

The Data Defender crew works tirelessly, their laser-like focus ensuring that every precious fragment of information finds its rightful place. It’s a celestial ballet of data, a symphony of technological wizardry.

And lo and behold, in the blink of an eye, your data is resurrected! It returns to its former glory, ready to continue its cosmic adventure. The crew celebrates this triumph, high-fiving and exchanging virtual fist bumps in zero gravity.

So, brave captain, fear not the cosmic glitches and digital hiccups. With RAIDZ1 guarding your data, even the mightiest of failures won’t thwart your mission. The resilvering process is the elixir of restoration, bringing your precious information back from the brink.

Remember, in the vast expanse of the digital universe, RAIDZ1 shines as a beacon of resilience, ensuring that your data journey never comes to a premature end. Onward, intrepid explorer, with RAIDZ1 as your loyal companion!

> zpool status tortoise
  pool: tortoise
 state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sat May 6 08:42:29 2023
        7.51G scanned at 135M/s, 2.42G issued at 43.5M/s, 5.11T total
        598M resilvered, 0.05% done, 1 days 10:12:37 to go
        NAME                                       STATE     READ WRITE CKSUM
        tortoise                                   DEGRADED     0     0     0
          raidz1-0                                 DEGRADED   498   572     0
            scsi-SATA_WDC_WD4003FFBX-6_V1J8JJBG    DEGRADED   383   233     0
            scsi-SATA_WDC_WD4003FFBX-6_V1JAPH9G    ONLINE     385   415     0
            scsi-SATA_WDC_WD4003FFBX-6_V1JASWGG    ONLINE     472   271     0
            scsi-SATA_WDC_WD4003FFBX-6_VBGJN8SF    ONLINE     412   342     0
          mirror-1                                 ONLINE       0     0     0
            nvme-WD_Red_SN700_2000GB_23024R800705  ONLINE       0     0     0
            nvme-WD_Red_SN700_2000GB_23024R800272  ONLINE       0     0     0

Documentation at my HomeLab

In the context of this project, it is imperative to possess graphical documentation. Therefore, I have opted to maintain a set of diagrams with a dedicated diagramming tool.


The diagrams start from the choice of provider and the outbound connections, and go on to cover a legend, the data flows, network segregation, the mesh configuration, software functionalities, unoccupied RJ45 ports, future software requirements, and the network hardware itself.

They also include a connection diagram, the docking station, the server specification, the specifications of the virtual machines (VMs) and Kubernetes (K8s) deployments, and a ZFS diagram illustrating the disk architecture.

These measures are crucial to ensure future reference and to prevent any confusion or loss of information.

Networking at my HomeLab

Here, without a surprise, OpenWRT – maximum flexibility and control, minimum requirements.


AdBlock? Pfft, so yesterday! PiHole? Meh, too basic! AdGuard? Yawn, too mainstream! But network-wide ad blocking with OpenWRT? Oh heck yeah, sign me up for some Wi-Fi wizardry and ad annihilation!

The underlying principles are elegantly straightforward: maintain local DNS resolution while redirecting all non-local DNS queries to an external DNS resolver such as NextDNS. By employing this method, you can leverage your local DNS resolver while simultaneously reaping the benefits of network-wide ad blocking.
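As an illustrative sketch of that redirection (assuming a stock OpenWRT firewall; the rule name is my own), a DNAT rule like the following catches clients that try to use their own DNS server and bends port 53 back to the router:

```text
# /etc/config/firewall (fragment)
config redirect
        option name 'Hijack-LAN-DNS'
        option src 'lan'
        option src_dport '53'
        option proto 'tcp udp'
        option target 'DNAT'
```

With no `dest_ip` set, the redirect lands on the router itself, where the local resolver answers local names and forwards everything else upstream.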


The conundrum of multiple Wi-Fi networks and signal dropout is a nuisance nobody relishes. To mitigate this predicament, I opted for a TP-Link mesh system featuring a unified SSID for both the 2.4 GHz and 5 GHz networks across three mesh devices. Consequently, I can freely wander throughout my abode without encountering any connectivity hindrances.

Now, as you might have already spotted in the docs, here’s the dealio, folks. The key to maintaining peace and harmony in the technological realm is to have not one, but two Wi-Fi networks. Oh yeah, we’re living the high life now! The first one is exclusively for all those fancy-schmancy IoT devices, because, you know, they’ve got a party of their own going on. They’re out there sharing cat memes and discussing the latest smart home gossip, while I’m just chilling on the sidelines.

But hold your horses, folks! We can’t forget about the second network, designed especially for my dear wife. Why, you ask? Well, let me spill the beans. You see, I have a particular talent for accidentally breaking stuff. It’s like my superpower or something. But fear not! With this second Wi-Fi network in place, my wife can still surf the web, browse her favorite websites, and watch adorable puppy videos, all while I’m over here causing havoc like a clumsy ninja.

In the end, my friends, it’s all about preserving domestic tranquility in the face of technological mishaps. So, let’s raise a glass to the two Wi-Fi networks because they’re the real MVPs of our connected household. Cheers to endless internet for everyone, even when I’m on a rampage of destruction!

Ah, my personal favorite — VPN. Have you ever experienced the peculiar scenario of toiling away at a company for X years, while the esteemed security department assumes you’re diligently working from country Y, only to be abruptly confronted with a malfunctioning VPN server that “teleports” you back to your original country, Z? The inevitable phone call from said security department ensues, bewildered by your sudden appearance in country Z. If you answered in the affirmative, rest assured, you’re doing it right!

Every single device within my network must traverse the router, which boasts a meticulously configured VPN client. This meticulous setup guarantees that all my device traffic remains enciphered, safeguarding against any data leakage to the vast expanse of the Internet (well, at least until it reaches the VPN server, but that’s a whole other story).


Host Operating System at my HomeLab


Once again, enter the stage with a drumroll – Proxmox! No surprises here, as this robust virtualization platform reigns supreme.


Ah, but hold your breath for the delightful surprise! In this setup, all the disks and GPU are magnificently passed through to the main VM, ushering in a virtualization extravaganza like no other. The power and potential of Proxmox coupled with this remarkable feature truly take the virtualization game to new heights!


Also worth mentioning is cloudymax/pxeless – an automation tool that skillfully recrafts ISO images with a “pre-generated” seed file housing essential information about the virtual machines (VMs). This ingenious approach enables a seamless, entirely unattended installation process.
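A seed of this kind is ordinary cloud-init user-data; a minimal, purely illustrative fragment (every value here is a made-up example, not my actual config) might look like:

```yaml
#cloud-config
hostname: homelab-vm01
users:
  - name: devops
    groups: [sudo]
    shell: /bin/bash
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... admin@homelab
package_update: true
packages:
  - qemu-guest-agent
```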

VM & K8s software at my HomeLab

Prepare yourself for an impressive ensemble of software that graces my HomeLab:

Monitoring at my HomeLab

As I already said: “The more observability you have, the better”.

Centralized hardware monitoring:


Centralized S.M.A.R.T. monitoring:
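One common way to feed such S.M.A.R.T. monitoring is smartd; the line below is an illustrative smartd.conf fragment, where the self-test schedule and mail target are assumptions rather than my exact setup:

```text
# /etc/smartd.conf (fragment): monitor all disks, run a short self-test
# nightly at 02:00 and a long self-test on Saturdays at 03:00, mail root
# on failures.
DEVICESCAN -a -o on -S on -s (S/../.././02|L/../../6/03) -m root
```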


Centralized ZFS-usage monitoring:


Weather at my HomeLab

Hold on to your socks, because I’m about to blow your mind! You know how we can keep an eye on our CPU temperature, right? Well, guess what? It turns out we can also tap into the mystical powers of weather monitoring! That’s right, folks, I’ve got a full-fledged weather station right here in my HomeLab. I can predict rain, wind, and even the occasional sneeze from Mother Nature herself, all from the comfort of my high-tech haven. I’ve officially become the geeky weatherman of my own little tech kingdom!

Costs at my HomeLab

Ah, now let’s delve into some vital details for all you tech-savvy individuals yearning to embark on a similar HomeLab endeavor – costs.


Ah, indeed, we cannot overlook the power consumption of such a magnificent beast!


It’s never enough…

However, to be perfectly honest, the achievements I have accomplished are truly remarkable, and the knowledge I have acquired throughout this process is commendable. I take immense pride in my accomplishments thus far. Nonetheless, I am cognizant of the fact that there is always room for improvement and continuous growth. Consequently, I have already begun contemplating the subsequent phases or endeavors that lie ahead, as I perpetually strive to expand my horizons and advance further in my journey.

Also read: DevOps Monitoring system



If you’re considering building your own Homelab, I’m keeping my fingers crossed, since I know from my own experience that this idea marks the beginning of an exciting path. Homelab is something that allows you to grow, gain knowledge, learn from your missteps, and look for new possibilities. If you would like to create your own homelab, it will certainly bring you a lot of joy. Sometimes it will give you sleepless nights, such as when you think about how to keep your environment harmonious, how to monitor it and ensure the right conditions, how to optimize it, and finally, constantly improve it. In other words, how to be DevOps in your own home.

A technical enthusiast with a love for cooking, travel, and plant care. Exploring the latest tech innovations and gadgets is his passion, but he also finds joy in experimenting with diverse recipes and exploring various cuisines. When not engaged in tech or culinary pursuits, Amadeusz embarks on adventures to discover new cultures and capture breathtaking landscapes through photography. Additionally, plant care is a therapeutic hobby for him, and he enjoys nurturing both indoor and outdoor greenery.
