This comparison shows the differences between two revisions of the page: computing:vmserver [2024/02/17 20:45] oemb1905 → computing:vmserver [2024/02/17 21:11] (current) oemb1905
Line 3:
   * **Jonathan Haack**
   * **Haack's Networking**
-  * **netcmnd@jonathanhaack.com**
+  * **webmaster@haacksnetworking.org**
  
 -------------------------------------------
Line 11:
 -------------------------------------------
  
+This tutorial covers how to set up a production server that's intended to be used as a virtualization stack for a small business or educator.
 I am currently running a Supermicro 6028U-TRTP+ w/ dual 12-core Xeon E5-2650 CPUs at 2.2GHz, 384GB RAM, with four two-way mirrors of Samsung enterprise SSDs for the primary vdev and two two-way mirrors of 16TB platters for the backup vdev. All drives use SAS. I am using a 500W PSU. I determined the RAM would draw about 5-10W a stick, the mobo about 100W, and the drives would consume most of the rest at roughly 18-22W per drive. The next step was to install Debian on the bare metal to control and manage the virtualization environment. The virtualization stack is virsh and kvm/qemu. As for the file system and drive formatting, I used LUKS and pam_mount to open an encrypted home partition and mapped home directory. I use this encrypted home directory to store keys for the zfs pool and/or other sensitive data, thus protecting them behind FDE. Additionally, I create file-level encrypted zfs datasets within each of the vdevs that are unlocked by the keys on the LUKS home partition. Instead of tracking down each UUID on your initial build, do the following:
  
   zpool create -m /mnt/pool pool -f mirror sda sdb mirror sdc sdd mirror sde sdf mirror sdg sdh
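To illustrate the encrypted-dataset step described above, a minimal sketch might look like the following, assuming a 32-byte raw key stored in the LUKS-protected home directory; the dataset name ''pool/vms'', the key path, and the cipher choice are placeholders, not values taken from this page:

  dd if=/dev/urandom of=/home/user/.keys/vms.key bs=32 count=1
  zfs create -o encryption=aes-256-gcm -o keyformat=raw -o keylocation=file:///home/user/.keys/vms.key pool/vms
  # after a reboot, once the LUKS home is mounted and the key file is available:
  zfs load-key pool/vms
  zfs mount pool/vms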
Line 138:
   systemctl restart networking.service
  
-You should once again execute ''ping 8.8.8.8'' and ''ping google.com'' to confirm you can route within the guest OS. Sometimes I find a reboot is required. At this stage, you now have a physical host configured with a virtual switch, and one VM provisioned to use the switch with its own external IP. Both the physical host and the guest OS in this scenario are public facing, so take precautions to properly secure each by checking services with ''netstat -tulpn'' and/or utilizing a firewall. The main thing to configure at this point is ssh access, so you no longer need to rely on the virt-viewer console, which is slow. To do that, you will need to add packages (if you use the netinst.iso). To make that easy, I keep the sources.list on my primary business server:
+You should once again execute ''ping 8.8.8.8'' and ''ping google.com'' to confirm you can route within the guest OS. If it fails, reboot and try again. It's a good idea at this point to check ''netstat -tulpn'' on both the host and in any VMs to ensure only approved services are listening. When I first began spinning up machines, I would make template machines and then use ''virt-clone'' to make new machines, which I would then tweak for the new use case. You always get ssh hash errors this way, and it is just cumbersome and not clean. Over time, I found out how to pass preseed.cfg files to Debian through virt-install, so now I simply spin up new images with the desired parameters and the preseed.cfg file passes nameservers, network configuration details, and ssh keys into the newly created machine. Although related, that topic stands on its own, so I wrote up the steps I took over at [[computing:preseed]]. One other thing people might want to do is enable some type of GUI-based monitoring tool for the physical host, like munin, cacti, smokeping, etc., in order to monitor snmp or other characteristics of the VMs. If so, make sure you only run those web administration panels locally and/or block 443/80 in a firewall. You will want to put the physical host behind a vpn, like I've documented in [[computing:vpnserver-debian]], and then just access it by its internal IP. This completes the tutorial on setting up a virtualization stack with virsh and qemu/kvm.
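As a rough sketch of the preseed approach just mentioned (the full procedure lives at [[computing:preseed]]; the VM name, ISO and disk paths, bridge name, sizes, and os-variant below are placeholders, not values from this page), a virt-install invocation that injects a preseed.cfg looks roughly like this:

  virt-install \
  --name newvm \
  --memory 4096 \
  --vcpus 2 \
  --disk path=/mnt/pool/newvm.qcow2,size=40 \
  --location /var/lib/libvirt/images/debian-netinst.iso \
  --initrd-inject preseed.cfg \
  --extra-args "auto=true priority=critical console=ttyS0" \
  --network bridge=br0 \
  --graphics none \
  --os-variant debian12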
  
-  wget https://haacksnetworking.org/sources.list
+ --- //[[webmaster@haacksnetworking.org|oemb1905]] 2024/02/17 20:46//
-
-Once you grab the ''sources.list'' file, install ''openssh-server'', and exchange keys, you can use a shell to ssh into the guest OS from then on. At this point you are in a position to create VMs and various production environments at will, or to start working on the one you just created. Another thing to consider is creating base VMs that have ''interfaces'' and ''ssh'' access all ready to go, and then leveraging those to make new instances using ''cp''. Alternately, you can power down a base VM and then clone it as follows:
-
-  virt-clone \
-  --original=clean \
-  --name=sequoia \
-  --file=/mnt/vms/students/sequoia.img
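A quick aside on the ''ssh hash errors'' that the newer revision attributes to this cloning approach: they typically come from the clone reusing the template's host keys (and from stale ''known_hosts'' entries on the client). One common cleanup, noted here only as a suggestion rather than something this page prescribes, is to regenerate the host keys inside the clone:

  rm /etc/ssh/ssh_host_*
  dpkg-reconfigure openssh-server
  systemctl restart ssh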
-
-The purpose of this project was to create my own virtualized VPS infrastructure (using KVM and VMs) to run production environments for myself, clients, students, and family. Here are a few to check out:
-
-  * [[https://nextcloud.haacksnetworking.org|Haack's Networking - Nextcloud Talk Instance]]
-  * [[https://mrhaack.org|GNU/Linux Social - Mastodon Instance]]
-  * [[http://space.hackingclub.org|My Daughter's Space Website]]
-  * [[http://bianca.hackingclub.org|A Student's Pentesting Website]]
-
-That's all folks! Well ... except for one more thing. When I first did all of this, I was convinced that zfs should live inside LUKS, as it was difficult for me to let go of LUKS / full disk encryption. I've since decided that's insane, for one primary reason: by putting zfs (or any file system) inside LUKS, you lose the hot-swappability you get when zfs (or regular RAID) runs directly on the hardware. That would mean replacing a failed drive requires an entire server rebuild, which is unworkable. However, it is arguably more secure that way, so if budget and time permit, I've retained the zfs-inside-LUKS writeup in the passage that follows. Proceed at your own risk lol.
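To make the hot-swap point above concrete: with zfs running directly on the disks, a failed mirror member can be replaced and resilvered in place with the standard tooling, roughly as follows (the pool and device names here are placeholders, not values from this page):

  zpool status pool            # identify the failed disk
  zpool replace pool sdc sdi   # swap in the new disk
  zpool status pool            # watch the resilver progress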
-
--- LUKS FIRST, ZFS SECOND - (LEGACY SETUP, NOT CURRENT) --
-
-My initial idea was to do LUKS first, then zfs: six of the drives would be zfs mirrors, and I would keep one as a spare LUKS crypt for keys, other crap, etc. To create the LUKS crypts, I did the following 6 times, each time appending the last 4 digits of the block ID to the LUKS crypt name:
-
-  cryptsetup luksFormat /dev/sda
-  cryptsetup luksOpen /dev/sda sdafc11
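For reference, the four-character suffix in each crypt name is just the tail of the device UUID, which you can read after formatting with ''blkid''; the ''fc11'' example below mirrors the ''sdafc11'' name used in the unlock script further down:

  blkid /dev/sda   # UUID ending in ...fc11 -> crypt name sdafc11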
-
-You then make sure to use the LUKS label names when making the zpool, not the short names, which can change at times during reboots. I did this as follows:
-
-  sudo apt install zfsutils-linux bridge-utils
-  zpool create -m /mnt/vms vms -f mirror sdafc11 sdb9322 mirror sdc8a33 sdh6444 mirror sde5b55 sdf8066
-
-ZFS by default executes its mount commands at boot. This is a problem if you don't use auto-unlocking key files with LUKS to also unlock at boot (and/or a custom script that unlocks). The problem, in this use case, is that ZFS will try to mount the volumes before they are unlocked. The two other options are the none/legacy mount modes, both of which rely on you mounting the volume using traditional methods. But the whole point of finally using zfs was to not use traditional methods lol, so I investigated whether there was a fix. The closest thing to a fix is setting cachefile=none at boot, but this a) hosed the pool once and b) requires resetting, rebooting again, and/or manually re-mounting the pool - either of which defeats the point. Using key files, cache file adjustments, and/or none/legacy modes were all no-gos for me, so in the end I decided to tolerate that zfs would fail at boot and that I would ''zpool import'' the pool afterwards.
-
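For completeness, the rejected cachefile workaround described above amounts to something like the following, which keeps the pool out of /etc/zfs/zpool.cache so it is not recorded for automatic import at boot (a sketch only; ''vms'' is the pool created earlier in this legacy setup):

  zpool set cachefile=none vms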
-  sudo -i
-  screen
-  su - user [pam_mount unlocks /home for physical host primary user and the spare 1TB vault]
-  ctrl-a-d [detaches from screen]
-
-After unlocking my home directory and the spare 1TB vault, the next step is to unlock each LUKS volume, for which I decided a simple shell script (mount-luks.sh) would suffice; it looks like this:
-
-  cryptsetup luksOpen /dev/disk/by-uuid/2702e690-…-0c4267a6fc11 sdafc11
-  cryptsetup luksOpen /dev/disk/by-uuid/e3b568ad-…-cdc5dedb9322 sdb9322
-  cryptsetup luksOpen /dev/disk/by-uuid/d353e727-…-e4d66a9b8a33 sdc8a33
-  cryptsetup luksOpen /dev/disk/by-uuid/352660ca-…-5a8beae15b44 sde5b44
-  cryptsetup luksOpen /dev/disk/by-uuid/fa1a6109-…-f46ce1cf8055 sdf8055
-  cryptsetup luksOpen /dev/disk/by-uuid/86da0b9f-…-13bc38656466 sdh6466
-
-This script simply opens each LUKS crypt, so long as you enter or copy/paste your drive password 6 times. After that, once the volumes are opened, you have to re-mount the pool / rebuild the quasi-RAID1 mirrors/logical volumes with the import command as follows:
-
-  zpool import pool
-
-Rebooting in this manner takes about 3-5 minutes for the host, plus 2 minutes to screen into my user, detach, and run the LUKS mount script to mount the pools/datasets, etc. Again, I ultimately rejected this approach because you cannot use zfs tools when hard drives fail with this setup.
-
- --- //[[jonathan@haacksnetworking.org|oemb1905]] 2022/11/12 12:39//