  * **Haack's Networking**
  * **netcmnd@jonathanhaack.com**

-------------------------------------------

//vmserver//

-------------------------------------------
  
I was given a dual 8-core Xeon SuperMicro server (32 threads) with 8 HD bays in use, 96GB RAM, 8x 6TB Western Digital drives in a ZFS pool of RAID1 mirrors (24TB usable), and a 120GB SSD boot volume stuck behind the front power panel, running non-GUI Debian. (Thanks to Kilo Sierra for the donation.) My first job was to calculate whether my PSU was up to the task I intended for it. I used a 500W PSU. From my calculations, the RAM would draw around 360W at capacity but would rarely come close to that, the HDs would often (especially on boot) hit up to 21.3W per drive, or around 150W total (excluding the boot SSD), and the motherboard would draw about 100W, putting the theoretical peak at 610W. Since I did not expect the RAM, HDs, and other components to hit peak consumption concurrently, and figured no more than roughly 75% of that 610W ceiling (about 460W) would be drawn at any one time, I considered it safe to proceed. The next step was to install the physical host OS (Debian) and set up the basics of the system (hostname, DNS, basic package installs, etc.). On the 120GB SSD boot volume, I used a LUKS / pam_mount encrypted home directory, where I could store keys for the ZFS pool and/or other sensitive data. I used a nifty trick to first create the pools with short device names and then switch them to block ids afterward, without making the pool-creation syntax cumbersome.
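
For the LUKS / pam_mount home directory, ''libpam-mount'' unlocks and mounts the encrypted volume at login. A minimal volume entry in ''/etc/security/pam_mount.conf.xml'' looks something like this (a sketch; the user name and device path below are placeholders):

  <!-- requires: sudo apt install libpam-mount -->
  <volume user="jonathan" fstype="crypt"
          path="/dev/disk/by-uuid/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
          mountpoint="/home/jonathan" />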

**Update**: I am now running a newer server with 48 threads, 12 hard drive bays, 384GB RAM, 4 two-way mirrors of Samsung enterprise SSDs for the primary VM zpool, and 2 two-way mirrors of 16TB platters for the backup zpool and for some mailservers. These are also SAS hard drives now, not SATA. The server can handle up to 1.5TB of RAM.
  
  zpool create -m /mnt/pool pool -f mirror sda sdb mirror sdc sdd mirror sde sdf mirror sdg sdh
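
To switch the pool from the short ''sdX'' names to stable block ids, export it and re-import it against ''/dev/disk/by-id'' (a quick sketch, using the pool name created above):

  zpool export pool
  zpool import -d /dev/disk/by-id pool

After the re-import, ''zpool status'' lists each vdev by its stable id, which survives device reordering across reboots. And when I want to wipe every snapshot in the pool, this one-liner does it: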
  zfs list -H -o name -t snapshot | xargs -n1 zfs destroy
  
Of course, off-site backups are essential. To do this, I use a small script that powers down the VM, uses ''cp'' with the ''--sparse=always'' flag to preserve sparseness and save space, and then uses tar with pbzip2 compression (''sudo apt install pbzip2'') to save even more space. From my research, bsdtar seems to honor sparsity better than GNU tar, so install it with ''sudo apt install libarchive-tools''. The ''cp'' step is not optional; remember that tar will not work directly on an ''.img'' file. Here's a small shell script with a loop for multiple VMs within the same directory. I also added a command at the end that deletes any tarballs older than 180 days.
  
  DATE=`date +"%Y%m%d-%H:%M:%S"`
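  # (the remainder below is a sketch, not the original script: the VM names,
  #  paths, and shutdown wait are assumptions -- adjust them to your layout)
  BACKUPS=/mnt/pool/backups
  for vm in vm1 vm2; do
      sudo virsh shutdown "$vm"
      sleep 120                            # give the guest time to power off cleanly
      cp --sparse=always "/mnt/pool/$vm.img" "$BACKUPS/$vm-$DATE.img"
      sudo virsh start "$vm"
      # bsdtar (libarchive-tools) preserves sparseness inside the tarball
      bsdtar --use-compress-program=pbzip2 -cf "$BACKUPS/$vm-$DATE.tar.bz2" -C "$BACKUPS" "$vm-$DATE.img"
      rm "$BACKUPS/$vm-$DATE.img"
  done
  find "$BACKUPS" -name "*.tar.bz2" -mtime +180 -delete    # prune tarballs older than 180 days

Because pbzip2 runs bzip2 compression across all available cores, even large images compress reasonably quickly.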
  systemctl restart networking.service
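
That restart is the last step of the host-side bridge setup. For reference, the ''br0'' stanza in the host's ''/etc/network/interfaces'' looks something like the following (''bridge-utils'' must be installed; the NIC name and addresses are placeholders):

  auto br0
  iface br0 inet static
      bridge_ports eno1
      bridge_stp off
      address 203.0.113.5
      netmask 255.255.255.0
      gateway 203.0.113.1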
  
Now that the virtual switch is set up, I can provision VMs and connect them to the virtual switch ''br0'' in virt-manager. You can provision the VMs within the GUI using X passthrough, or use the command line. First, create a virtual disk of the desired size by executing ''sudo qemu-img create -f raw new.img 1000G'', and then run something like this:
  
  sudo virt-install --name=new.img \
  --os-type=Linux \
  --os-variant=debian10 \
  --network bridge:br0
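
In full, the invocation might look something like this (the RAM, vCPU count, disk path, and ISO path here are assumptions; substitute your own):

  sudo virt-install --name=new.img \
  --os-type=Linux \
  --os-variant=debian10 \
  --ram=8192 \
  --vcpus=4 \
  --disk path=/mnt/pool/new.img,format=raw,bus=virtio \
  --cdrom=/mnt/pool/isos/debian-netinst.iso \
  --graphics spice \
  --network bridge:br0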
  
The machine will open in virt-viewer, but if you lose the connection you can reconnect easily with:

  virt-viewer --connect qemu:///system --wait new.img

Once you finish the installation, configure the guest OS ''interfaces'' file (''sudo nano /etc/network/interfaces'') with the IP you intend to assign it. You should have something like this:
  
  auto epr1
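  # (sketch of the rest of the stanza -- the address and gateway below are
  #  placeholders for the external IP assigned to this guest)
  iface epr1 inet static
      address 203.0.113.10
      netmask 255.255.255.0
      gateway 203.0.113.1

Then restart networking inside the guest to apply the change: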
  systemctl restart networking.service
  
You should once again execute ''ping 8.8.8.8'' and ''ping google.com'' to confirm you can route within the guest OS. Sometimes, I find a reboot is required. At this stage, you now have a physical host configured with a virtual switch, and one VM provisioned to use the switch with its own external IP. Both the physical host and the guest OS in this scenario are public facing, so take precautions to properly secure each by checking services (''netstat -tulpn'') and/or utilizing a firewall. The main thing to configure at this point is ssh access, so that you no longer need to rely on the virt-viewer console, which is slow. To do that, you will need to add packages (if you used the netinst.iso). To make that easy, I keep a copy of ''sources.list'' on my primary business server:
  wget https://haacksnetworking.org/sources.list
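
Installing the ssh server on the guest and pushing a key over from your workstation looks something like this (the user and address are placeholders):

  sudo apt update
  sudo apt install openssh-server
  # then, from your workstation:
  ssh-copy-id user@203.0.113.10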

Once you grab the ''sources.list'' file, install ''openssh-server'', and exchange keys, you can ssh into the guest OS from a regular shell from then on. At this point, you are in a position to create VMs and various production environments at will, or to start working on the one you just created. Another thing to consider is creating base VMs that have ''interfaces'' and ''ssh'' access all ready to go, and then leveraging those to make new instances using ''cp''. Alternately, you can power down the base VM and then clone it as follows:
  
  virt-clone \
  --original=clean \
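  --name=new2.img \
  --file=/mnt/pool/new2.img
  # (the clone name and disk path above are placeholders for whatever you want the new VM to be called)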
Rebooting in this manner takes about 3-5 minutes for the host, and another 2 minutes to screen into my user name, detach, and run the LUKS mount script to mount the pools/datasets, etc. Again, I ultimately rejected this approach because you cannot use the ZFS tools when hard drives fail with this setup.
  
 --- //[[jonathan@haacksnetworking.org|oemb1905]] 2022/11/12 12:39//