computing:vmserver

Last modified: 2024/02/17 21:11 by oemb1905
  * **Jonathan Haack**
  * **Haack's Networking**
  * **webmaster@haacksnetworking.org**
  
-------------------------------------------
  
//vmserver//
  
-------------------------------------------
  
This tutorial covers how to set up a production server intended to be used as a virtualization stack for a small business or educator. I am currently running a Supermicro 6028U-TRTP+ w/ Dual 12-core Xeon E5-2650 at 2.2Ghz, 384GB RAM, with four two-way mirrors of Samsung enterprise SSDs for the primary vdev, and two two-way mirrors of 16TB platters for the backup vdev. All drives use SAS. I am using a 500W PSU. I determined that the RAM would draw about 5-10W a stick, the mobo about 100W, and that the drives would consume most of the rest at roughly 18-22W per drive. The next step was to install Debian on the bare metal to control and manage the virtualization environment. The virtualization stack is virsh and kvm/qemu. As for the file system and drive formatting, I used LUKS and pam_mount to open an encrypted home partition and mapped home directory. I use this encrypted home directory to store keys for the zfs pool and/or other sensitive data, thus protecting them behind FDE. Additionally, I create file-level encrypted zfs datasets within each of the vdevs that are unlocked by the keys on the LUKS home partition. Instead of tracking each UUID down on your initial build, do the following:
  
--- Alternate Setup --+  zpool create -m /mnt/pool pool -f mirror sda sdb mirror sdc sdh mirror sde sdf mirror sdg sdh 
 +  zpool export pool 
 +  zpool import -d /dev/disk/by-id pool
  
Once the pool is created, you can create your encrypted datasets. To do so, I made some unlock keys with the dd command and placed the keys in a hidden directory inside the LUKS encrypted home partition I mentioned above:
  
  dd if=/dev/random of=/secure/area/example.key bs=1 count=32
  zfs create -o encryption=on -o keyformat=raw -o keylocation=file:///mnt/vault/example.key pool/dataset
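If you want to sanity-check a key before wiring it into a dataset, note that ''keyformat=raw'' expects exactly 32 bytes of key material. A minimal sketch follows; the ''/tmp'' path is purely illustrative, since real keys belong inside the LUKS-protected home directory:

```shell
# Illustrative path only -- store real keys in the encrypted home directory.
mkdir -p /tmp/secure-area
dd if=/dev/urandom of=/tmp/secure-area/example.key bs=1 count=32 status=none
chmod 600 /tmp/secure-area/example.key
# zfs 'keyformat=raw' requires exactly 32 bytes of key material:
stat -c %s /tmp/secure-area/example.key
```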
  
When the system reboots, the vdevs will automatically mount, but the datasets won't, because the LUKS keys won't be available until you mount the home partition by logging in to the user that holds the keys. For security reasons, this must be done manually or it defeats the entire purpose. So, once the administrator has logged in to that user in a screen session (remember, it is using pam_mount), they simply detach from the session and then load the keys and mount the datasets as follows:
  
  zfs load-key pool/dataset
  zfs mount pool/dataset
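With several datasets, the two commands can be wrapped in a loop. In this sketch the dataset names are hypothetical, and the zfs calls are prefixed with ''echo'' so the loop can be dry-run on any machine; drop the ''echo'' on the real host:

```shell
# Hypothetical dataset names; remove 'echo' to actually load and mount.
for ds in pool/isos pool/images pool/backups; do
  echo zfs load-key "$ds"
  echo zfs mount "$ds"
done
```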
      
If you have a lot of datasets, you can make a simple script to load them all at once. Since we have zfs, it's a good idea to take regular snapshots. To do that, I created a small shell script with the following commands and set it to run via cron 4 times a day, or every 6 hours:
  
  DATE=$(date +"%Y%m%d-%H:%M:%S")
  /usr/sbin/zfs snapshot pool@backup_$DATE
  
Make sure to manage your snapshots and only retain as many as you need, as large numbers of them will impact performance. If you need to zap all of them and start over, you can use this command:
  
  zfs list -H -o name -t snapshot | xargs -n1 zfs destroy
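Destroying everything is drastic; more often you want to prune all but the newest few. Because the zero-padded date stamp makes snapshot names sort chronologically, a plain ''sort''/''head'' pipeline can pick the destroy candidates. The list below is mocked so the sketch runs anywhere; on a real host you would generate it with ''zfs list'':

```shell
# Sketch: print all but the newest $keep snapshots as destroy candidates.
# Mocked list; on a real host generate it with:
#   zfs list -H -o name -t snapshot -s creation
snapshots='pool@backup_20240215-00:00:01
pool@backup_20240215-06:00:01
pool@backup_20240215-12:00:01
pool@backup_20240215-18:00:01
pool@backup_20240216-00:00:01
pool@backup_20240216-06:00:01'
keep=4
echo "$snapshots" | sort | head -n -"$keep"
# each printed name could then be piped to: xargs -n1 zfs destroy
```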
  
Off-site //full// backups are essential, but they take a long time to download. For that reason, it's best to keep the images as small as possible. When using ''cp'' in your workflow, make sure to specify ''--sparse=always''. Before powering the virtual hard disk back up, you should run ''virt-sparsify'' on the image to free up the blocks on the host that are not actually used in the VM. In order for the VM to designate those blocks as empty, ensure that you are running fstrim within the VM. If you want the ls command to show the reduced size of the virtual disk after the zeroing, you will need to run ''qemu-img create'' on it, which creates a new copy of the image without the ballooned size. The newly purged virtual hard disk image can then be copied to a backup directory, where one can compress and tarball it to further reduce its size. I use BSD tar with pbzip2 compression, which makes ridiculously small images; GNU tar glitches with the script for some reason. BSD tar can be installed with ''sudo apt install libarchive-tools''. I made a script to automate all of those steps for a qcow2 image, and I also adapted it to work for raw images:
  
[[https://repo.haacksnetworking.org/haacknet/haackingclub/-/blob/main/scripts/virtualmachines/vm-bu-production-QCOW-loop.sh|vm-bu-production-QCOW-loop.sh]] \\
[[https://repo.haacksnetworking.org/haacknet/haackingclub/-/blob/main/scripts/virtualmachines/vm-bu-production-RAW-loop.sh|vm-bu-production-RAW-loop.sh]]
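To see why the sparse handling matters, here is a self-contained demonstration using throwaway files (no VM images involved): a freshly truncated 1GiB image has an apparent size of 1GiB but allocates no blocks, and ''cp --sparse=always'' preserves those holes in the copy:

```shell
# Demonstration with temporary files; real images live in the backup dirs.
workdir=$(mktemp -d)
truncate -s 1G "$workdir/disk.img"          # sparse: apparent 1GiB, ~0 blocks
stat -c 'apparent=%s blocks=%b' "$workdir/disk.img"
cp --sparse=always "$workdir/disk.img" "$workdir/copy.img"
stat -c 'apparent=%s blocks=%b' "$workdir/copy.img"
rm -r "$workdir"
```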
  
On the off-site backup machine, I originally pulled the tarballs down with a one-line rsync script, adjusting its cron timing to mesh with when the tarballs are created:
  
-  sudo rsync -av --log-file=/home/logs/backup-of-vm.log --ignore-existing -e 'ssh -i /home/user/.ssh/id_rsa' root@domain.com:/backups/tarballs/ /media/user/Backups/+  sudo rsync -av --log-file=/home/logs/backup-of-vm-tarballs.log --ignore-existing -e 'ssh -i /home/user/.ssh/id_rsa' root@domain.com:/backups/tarballs/ /media/user/Backups/ 
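The ''--ignore-existing'' flag is what makes repeated pulls cheap: anything already present on the backup machine is skipped, even if the remote copy has changed. A local mock (hypothetical file names, no ssh involved) shows the behavior:

```shell
# Local mock of the pull; filenames are hypothetical.
src=$(mktemp -d); dst=$(mktemp -d)
echo changed > "$src/old.tar.bz2"   # exists on both sides, differs upstream
echo old     > "$dst/old.tar.bz2"
echo new     > "$src/new.tar.bz2"   # only exists upstream
rsync -a --ignore-existing "$src/" "$dst/"
cat "$dst/old.tar.bz2"   # still "old": existing files are never overwritten
cat "$dst/new.tar.bz2"   # "new": missing files are pulled
rm -r "$src" "$dst"
```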
Since then, I've switched to using rsnapshot to pull down the tarballs in some cases. The rsnapshot configurations can be found here:
  
[[https://repo.haacksnetworking.org/haacknet/haackingclub/-/tree/main/scripts/rsnapshot|Rsnapshot Scripts]]
  
  
 -- Network Bridge Setup / VMs --
  
Up until now, I've covered how to provision the machines with virt-manager, how to back up the machines on the physical host, and how to pull those backups down to an off-site workstation. Now I will discuss how to assign each VM an external IP. The first step is to provision the physical host with a virtual switch (wrongly called a bridge) to which VMs can connect. To do this, I kept it simple and used ''ifup'' and the ''bridge-utils'' package, plus some manual editing in ''/etc/network/interfaces''.

  sudo apt install bridge-utils
  sudo brctl addbr br0
  sudo nano /etc/network/interfaces
  
Now that you have created the virtual switch, you need to reconfigure your physical host's ''/etc/network/interfaces'' file to use the switch. In my case, I used one IP for the host itself and another for the switch, meaning that two ethernet cables are plugged into my physical host. I did this so that if I hose my virtual switch settings, I still have a separate connection to the box. Here's the configuration in ''interfaces'':
  
  #eth0 [1st physical port]
  auto enp8s0g0
    iface enp8s0g0 inet static
    nameserver 8.8.8.8
  
  #eth1 [2nd physical port]
  auto enp8s0g1
  iface enp8s0g1 inet manual
    bridge_ports enp8s0g1
    nameserver 8.8.8.8

After that, either reboot or run ''systemctl restart networking.service'' to make the changes current. Execute ''ip a'' and you should see both external IPs on two separate interfaces, and you should see ''br0 state UP'' in the output for the second interface ''enp8s0g1''. You should also run some ''ping 8.8.8.8'' and ''ping google.com'' tests to confirm you can route. If you want to do this in a home, small business, or other non-public-facing environment, you can easily use dhcp and provision the server's ''interfaces'' file as follows:
  
  auto eth1
  iface eth1 inet manual

  auto br0
  iface br0 inet dhcp
        bridge_ports eth1
  
The above home version allows, for example, users to have a virtual machine that gets an IP address on your LAN, which makes ssh/xrdp access far easier. If you have any trouble routing on the physical host, it could be that you do not have nameservers set up. If that's the case, do the following:
  
    echo nameserver 8.8.8.8 > /etc/resolv.conf
    systemctl restart networking.service
  
Now that the virtual switch is set up, I can provision VMs and connect them to the virtual switch ''br0'' in virt-manager. You can provision the VMs within the GUI using X passthrough, or use the command line. First, create a virtual disk of your desired size by executing ''sudo qemu-img create -f raw new.img 1000G'' and then run something like this:

  sudo virt-install --name=new.img \
  --os-type=linux \
  --os-variant=debian10 \
  --vcpus=1 \
  --ram=2048 \
  --disk path=/mnt/vms/students/new.img \
  --graphics spice \
  --location=/mnt/vms/isos/debian-11.4.0-amd64-netinst.iso \
  --network bridge=br0
 + 
The machine will open in virt-viewer, but if you lose the connection you can reconnect easily with:

  virt-viewer --connect qemu:///system --wait new.img

Once you finish installation, configure the guest OS interfaces file (''sudo nano /etc/network/interfaces'') with the IP you intend to assign it. You should have something like this:
  
  auto epr1
    nameserver 8.8.8.8
  
If you are creating VMs attached to a virtual switch in the smaller home/business environment, then adjust the guest OS by executing ''sudo nano /etc/network/interfaces'' with something like this recipe:
  
  auto epr1
  iface epr1 inet dhcp
  
If your guest OS uses Ubuntu, you will need to take extra steps to ensure that the guest OS can route. This is because Ubuntu-based distros have deprecated ''ifupdown'' in favor of ''netplan'' and disabled manual editing of ''/etc/resolv.conf''. So, either learn netplan syntax and make interface changes in its YAML format, or install the optional ''resolvconf'' package to restore ''ifupdown'' functionality. To do the latter, adjust the VM provision script above (or use the virt-manager GUI with X passthrough) to temporarily use NAT, then override the Ubuntu defaults and restore ''ifupdown'' functionality as follows:
  
  sudo apt install ifupdown
  sudo apt remove --purge netplan.io
  sudo apt install resolvconf
  sudo nano /etc/resolvconf/resolv.conf.d/tail
  nameserver 8.8.8.8      [add this line to the tail file, then save]
  systemctl restart networking.service
  
You should once again execute ''ping 8.8.8.8'' and ''ping google.com'' to confirm you can route within the guest OS. If it fails, reboot and try again. It's a good idea at this point to check ''netstat -tulpn'' on both the host and in any VMs to ensure only approved services are listening. When I first began spinning up machines, I would make template machines and then use ''virt-clone'' to make new machines, which I would then tweak for the new use case. You always get ssh hash errors this way, and it is just kind of cumbersome and not clean. Over time, I found out how to pass preseed.cfg files to Debian through virt-install, so now I simply spin up new images with the desired parameters, and the preseed.cfg file passes nameservers, network configuration details, and ssh keys into the newly created machine. Although related, that topic stands on its own, so I wrote up the steps I took over at [[computing:preseed]]. One other thing that people might want to do is enable some type of GUI-based monitoring tool for the physical host, like munin, cacti, or smokeping, in order to monitor snmp or other characteristics of the VMs. If so, make sure you only run those web administration panels locally and/or block 443/80 in a firewall. You will want to put the physical host behind a vpn, as I've documented in [[computing:vpnserver-debian]], and then just access it by its internal IP. This completes the tutorial on setting up a virtualization stack with virsh and qemu/kvm.
  
 --- //[[webmaster@haacksnetworking.org|oemb1905]] 2024/02/17 20:46//