* **Jonathan Haack**
* **Haack's Networking**
* **webmaster@haacksnetworking.org**
-------------------------------------------
-------------------------------------------
This tutorial covers how to set up a production server that's intended to be used as a virtualization stack for a small business or educator. I am currently running a Supermicro 6028U-TRTP+ with dual 12-core Xeon E5-2650 CPUs at 2.2GHz, 384GB of RAM, four two-way mirrors of Samsung enterprise SSDs for the primary vdev, and two two-way mirrors of 16TB platters for the backup vdev. All drives are SAS, and the system runs on a 500W PSU. I estimate the RAM draws about 5-10W per stick, the motherboard about 100W, and the drives consume most of the rest at roughly 18-22W per drive. The next step was to install Debian on the bare metal to control and manage the virtualization environment. The virtualization stack is virsh and kvm/qemu. As for the file system and drive formatting, I used LUKS and pam_mount to open an encrypted home partition and mapped home directory. I use this encrypted home directory to store keys for the zfs pool and/or other sensitive data, thus protecting them behind FDE. Additionally, the primary pool is built from the four SSD mirrors:
zpool create -m /mnt/pool pool -f mirror sda sdb mirror sdc sdd mirror sde sdf mirror sdg sdh
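
To sanity-check the result right after creation (just a quick sketch, assuming the pool name ''pool'' used above), something like this works:

zpool status pool
zfs list pool

''zpool status'' should show the four mirror vdevs with their member disks, and ''zfs list'' should confirm the mountpoint at /mnt/pool.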
systemctl restart networking.service

You should once again execute ''
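
If the goal at this point is just to confirm that the interfaces and bridge came back up after restarting networking, a quick check along these lines works (generic commands, not taken from the original page; ''brctl'' is provided by the bridge-utils package):

ip a
brctl show

The first command confirms addresses and link state; the second lists which interfaces are attached to the bridge.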
wget https://

Once you grab the ''clean'' image, you can clone it as follows:

virt-clone \
--original=clean \
--name=sequoia \
--file=/
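
After the clone finishes, a typical next step (shown here only as a sketch using the names from the example above) is to boot the new guest and attach to its console:

virsh start sequoia
virsh console sequoia
virsh list --all

''virsh console'' is detached with ctrl+], and ''virsh list --all'' confirms the domain's state.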
The purpose of this project was to create my own virtualized VPS infrastructure (using KVM and VMs), to run my own production environments and those of clients, students, and family. Here are a few to check out:

* [[https://
* [[https://
* [[http://
* [[http://
That's all folks! Well ... except for one more thing. When I first did all of this, I was convinced that zfs should live inside LUKS, as it was difficult for me to let go of LUKS / full disk encryption. I've since decided that's insane, for one primary reason: by putting zfs (or any file system) inside LUKS, you lose the hot-swappability you get when zfs (or regular RAID) runs directly on the hardware. That would mean replacing a hard drive requires an entire server rebuild, which is insane. However, it is arguably more secure that way, so if budget and time permit, it may still be worth it; I've retained how I put zfs inside LUKS in the passage that follows. Proceed at your own risk lol.
-- LUKS FIRST, ZFS SECOND - (LEGACY SETUP, NOT CURRENT) --
My initial idea was to do LUKS first, then zfs, meaning six drives could be mirrors in zfs and I would keep one as a spare LUKS crypt for keys, other crap, etc. To create the LUKS crypts, I did the following six times, each time appending the last 4 digits of the block ID to the LUKS crypt name:
cryptsetup luksFormat /dev/sda
cryptsetup luksOpen /dev/sda sdafc11
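
Repeating that by hand six times is tedious, so here is a small sketch of the same idea as a loop; the drive letters are placeholders and must match your actual disks:

#!/bin/bash
# format each drive, then open it with a mapper name made of the
# drive letter plus the last four characters of its LUKS UUID
for dev in sda sdb sdc sdd sde sdf; do
  cryptsetup luksFormat "/dev/$dev"
  suffix=$(blkid -s UUID -o value "/dev/$dev" | tail -c 5)
  cryptsetup luksOpen "/dev/$dev" "${dev}${suffix}"
done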
You then make sure to use the LUKS label names when making the zpool, not the short names, which can change between reboots. I did this as follows:
sudo apt install zfsutils-linux bridge-utils
zpool create -m /mnt/vms vms -f mirror sdafc11 sdb9322 mirror sdc8a33 sdh6444 mirror sde5b55 sdf8066
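
A quick sanity check (nothing more than a sketch) before and after creating the pool is to confirm the opened crypts and then the vdev layout:

ls /dev/mapper
zpool status vms

The six mapper names should appear under /dev/mapper, and ''zpool status'' should show each mirror built from those names rather than the short sdX names.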
ZFS by default executes its mount commands at boot. This is a problem if you don't use key files (and/or a custom script) to auto-unlock the LUKS volumes at boot, because ZFS will try to mount the volumes before they are unlocked. The two other options are the none/legacy mount modes, both of which rely on you mounting the volume using traditional methods. But the whole point of finally using zfs was to stop using traditional methods lol, so I investigated whether there was a fix. The closest thing to a fix is setting cachefile=none at boot, but this a) hosed the pool once and b) requires resetting, rebooting again, and/or manually re-importing the pool, either of which defeats the point. Using key files, cache file adjustments,
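
For reference, the cachefile adjustment mentioned above is just a pool property (a sketch only; as noted, it did not behave reliably in this setup):

zpool set cachefile=none vms
zpool get cachefile vms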
sudo -i
screen
su - user [pam_mount unlocks /home for physical host primary user and the spare 1TB vault]
ctrl-a-d [detaches from screen]
After unlocking my home directory and the spare 1TB vault, the next step is to unlock each LUKS volume; I decided a simple shell script, ''mount-luks.sh'', would suffice:
cryptsetup luksOpen /
cryptsetup luksOpen /
cryptsetup luksOpen /
cryptsetup luksOpen /
cryptsetup luksOpen /
cryptsetup luksOpen /
This script simply opens each LUKS crypt, provided you enter or copy/paste your drive password six times. After that, one has to re-mount the pool / rebuild the quasi RAID1 mirror/stripe layout, which I do as follows:
zpool import pool
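
If typing the passphrase six times gets old, a prompt-once variant of ''mount-luks.sh'' is possible; this is only a sketch, and the device paths and mapper names are placeholders that must be replaced with the real ones:

#!/bin/bash
# ask for the passphrase once, then reuse it for every crypt
read -r -s -p "LUKS passphrase: " pass; echo
for dev in sda sdb sdc sdd sde sdf; do
  # --key-file=- reads the passphrase from stdin; printf avoids adding a newline
  printf '%s' "$pass" | cryptsetup luksOpen --key-file=- "/dev/$dev" "${dev}crypt"
done
unset pass
# then import the pool as above
zpool import pool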
Rebooting in this manner takes about 3-5 minutes for the host, and 2 minutes to screen into my user name, detach, and run the mount LUKS script to mount the pools/
--- //[[webmaster@haacksnetworking.org|oemb1905]]