  cryptsetup luksOpen /dev/disk/by-id/scsi-35000c500d7771943 hdd3 --key-file /home/user/.unlock/wh.key
  cryptsetup luksOpen /dev/disk/by-id/scsi-35000c500cb1689e3 hdd4 --key-file /home/user/.unlock/wh.key

Now that the crypts are open at ''/dev/mapper/ssd#'' and ''/dev/mapper/hdd#'', we can create filesystems on them and/or pool them together as we see fit. I've chosen to replicate RAID10 with BTRFS as closely as possible; it should be noted that this is not a perfect replication of RAID10, since BTRFS allocation is chunk-based. In the commands below, we create the pools, mount them, and then verify that everything looks right:

  mkdir -p /mnt/vm
  mkdir -p /mnt/wh
  mkfs.btrfs -f -d raid10 -m raid1 --checksum=xxhash --nodesize=32k /dev/mapper/ssd1 /dev/mapper/ssd2 /dev/mapper/ssd3 /dev/mapper/ssd4 /dev/mapper/ssd5 /dev/mapper/ssd6 /dev/mapper/ssd7 /dev/mapper/ssd8
  mkfs.btrfs -f -d raid10 -m raid1 --checksum=xxhash --nodesize=32k /dev/mapper/hdd1 /dev/mapper/hdd2 /dev/mapper/hdd3 /dev/mapper/hdd4
  mount -o compress-force=zstd:3,noatime,autodefrag,space_cache=v2,discard=async,commit=120 /dev/mapper/ssd1 /mnt/vm
  mount -o compress=zstd:3,noatime,autodefrag,space_cache=v2,discard=async,commit=120 /dev/mapper/hdd1 /mnt/wh
  btrfs filesystem show /mnt/vm
  btrfs filesystem show /mnt/wh
  df -h #verify all looks right!

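Keep in mind that plain ''df'' can be a little misleading on a multi-device BTRFS pool; if you want a RAID-aware breakdown of how space is actually allocated, the native usage report is worth checking as well (an optional extra check on the same mountpoints):

  btrfs filesystem usage /mnt/vm
  btrfs filesystem usage /mnt/wh
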
After the first reboot, I set persistent compression as a filesystem property, because I was getting errors when trying to set it during the initial pool build. Here's what I do for compression:

  btrfs property set /mnt/vm compression zstd:3
  btrfs property set /mnt/wh compression zstd:3

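If you want to confirm the property stuck, you can read it back:

  btrfs property get /mnt/vm compression
  btrfs property get /mnt/wh compression
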
Once that's done and you've rebooted and tested things a few times, you can safely make a mount script for remote rebooting. This way, you reboot, log in to your user, detach, run a simple script to unlock the crypts and mount the BTRFS pools ... and you are done! Create the script with ''nano /usr/local/bin/btrfs-mount-datasets.sh'', lock it down with ''chmod 750 /usr/local/bin/btrfs-mount-datasets.sh'', and enter something like:

  #!/bin/bash
  #open SSD crypts
  cryptsetup luksOpen /dev/disk/by-id/scsi-35002538a98416870 ssd1 --key-file /home/user/.unlock/vm.key
  cryptsetup luksOpen /dev/disk/by-id/scsi-35002538a98356f30 ssd2 --key-file /home/user/.unlock/vm.key
  cryptsetup luksOpen /dev/disk/by-id/scsi-35002538a983571d0 ssd3 --key-file /home/user/.unlock/vm.key
  cryptsetup luksOpen /dev/disk/by-id/scsi-35002538a98356590 ssd4 --key-file /home/user/.unlock/vm.key
  cryptsetup luksOpen /dev/disk/by-id/scsi-35002538a0840a300 ssd5 --key-file /home/user/.unlock/vm.key
  cryptsetup luksOpen /dev/disk/by-id/scsi-35002538a98356500 ssd6 --key-file /home/user/.unlock/vm.key
  cryptsetup luksOpen /dev/disk/by-id/scsi-35002538a084065d0 ssd7 --key-file /home/user/.unlock/vm.key
  cryptsetup luksOpen /dev/disk/by-id/scsi-35002538a98357220 ssd8 --key-file /home/user/.unlock/vm.key
  #open PLATTER crypts
  cryptsetup luksOpen /dev/disk/by-id/scsi-35000c500d775df03 hdd1 --key-file /home/user/.unlock/wh.key
  cryptsetup luksOpen /dev/disk/by-id/scsi-35000c500d7694517 hdd2 --key-file /home/user/.unlock/wh.key
  cryptsetup luksOpen /dev/disk/by-id/scsi-35000c500d7771943 hdd3 --key-file /home/user/.unlock/wh.key
  cryptsetup luksOpen /dev/disk/by-id/scsi-35000c500cb1689e3 hdd4 --key-file /home/user/.unlock/wh.key
  #mount the btrfs r10 pool for vm
  mount -o compress-force=zstd:3,noatime,autodefrag,space_cache=v2,discard=async,commit=120 /dev/mapper/ssd1 /mnt/vm
  #mount the btrfs r10 pool for wh
  mount -o compress=zstd:3,noatime,autodefrag,space_cache=v2,discard=async,commit=120 /dev/mapper/hdd1 /mnt/wh
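
After saving the script, a quick first run as root is a good sanity check; the ''df'' afterwards just confirms both pools came up:

  /usr/local/bin/btrfs-mount-datasets.sh
  df -h /mnt/vm /mnt/wh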