computing:proxmox

Line 11: Line 11:
-------------------------------------------
  
This tutorial has my notes for setting up Proxmox on Debian GNU/Linux. Ultimately, I set up a PVE instance on my home server as a network bridge and reverse proxy that can host multiple websites/services with different domains and only one external IP address. Later, I will add another section on adapting the reverse proxy and network bridge setup in the PVE instance to an external block of IPs. My first goal was to make sure I could get virsh images into PVE and PVE images out of PVE and back into virsh. To import virsh images into PVE, create an empty VM in the GUI, then use the following command, changing 500 to match your vmid.
  
  qm importdisk 500 hub.jonathanhaack.com.qcow2 <dataset>
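
The imported disk shows up on the VM as an unused disk until it is attached. As a sketch only (the bus, disk index, and volume name here are assumptions, not from the original notes), attaching it might look like:

  # attach the freshly imported disk to vmid 500 as its first scsi device
  qm set 500 --scsi0 <dataset>:vm-500-disk-0
  # then make it the boot disk
  qm set 500 --boot order=scsi0
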
Line 32: Line 32:
  
  zstd -d vzdump-qemu-506-2023_12_24-04_40_17.vma.zst

Optionally, you can use the python tool [[https://github.com/jancc/vma-extractor/blob/master/vma.py|vma.py]]. I don't see any value in using this, however, as it is much slower than the native tool and also only outputs raw format. Example syntax is:

  python3 vma.py vzdump-qemu-101-2023_06_18-12_17_40.vma /mnt/backups/vma
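
To get the image back into virsh, the decompressed archive still has to be unpacked and converted. A minimal sketch with the native ''vma'' tool (the output directory and extracted filename are illustrative, not from the original notes):

  # unpack the raw disk(s) and config from the decompressed vma archive
  vma extract vzdump-qemu-506-2023_12_24-04_40_17.vma /mnt/backups/extracted
  # convert an extracted raw disk back to qcow2 for use with virsh
  qemu-img convert -f raw -O qcow2 /mnt/backups/extracted/disk-drive-scsi0.raw hub.jonathanhaack.com.qcow2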
  
In order to see where the virtual machine's disk volumes (zvols) are, run the following:
  
  zfs list
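
If the pool has a lot of datasets, it may be easier to list only the volumes (this is just a convenience, not part of the original recipe):

  # show only zvols, with their size and space used
  zfs list -t volume -o name,volsize,used
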
Line 51: Line 47:
  vms/vm-102-disk-2
      
Be careful not to delete those, wrongly thinking that they are extraneous - they are required for booting, though that might not be clear at first look. Also, I chose to store those on the ''zpool'' while PVE's default is to store them locally, so they will likely not appear if you did not pick the option to store them on the pool. After getting images in and out and spinning up the old Windows VM, I decided a year or so later that it would be worth it to spin PVE up with a network bridge and reverse proxy in order to manage my home network. The home Nextcloud, airsonic, pihole, and other similar services would be virtualized in the PVE, and the PVE bridge and reverse proxy would route traffic to them. I intended to support the Proxmox project with a small community license, so first I needed to verify cores:
  
   grep "physical id" /proc/cpuinfo | sort -u | wc -l   grep "physical id" /proc/cpuinfo | sort -u | wc -l
      
While installing Proxmox on top of stock Debian, postfix asked me how to set it up; I chose the satellite system option and sent outgoing email to my relay as follows:
  
  [relay.haacksnetworking.org]:587
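
The installer writes that value into postfix's main config. If you need to set or check it after the fact, the relevant line in ''/etc/postfix/main.cf'' looks something like this (a sketch; any authentication the relay requires is configured separately):

  # route all outgoing mail through the relay on the submission port
  relayhost = [relay.haacksnetworking.org]:587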
  
Once PVE is set up, log in locally and create the following inside ''/etc/network/interfaces''. Make sure you are using ifupdown, not netplan.
  
  source /etc/network/interfaces.d/*
Line 75: Line 69:
      bridge-fd 0
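
Most of the interfaces file is elided above; for reference, a bridge stanza for this kind of setup generally looks something like the following sketch (the NIC name ''eno1'' is an assumption, and the addresses simply mirror the 10.13.13.x scheme described below):

  auto vmbr0
  iface vmbr0 inet static
      # static LAN address for the PVE host / reverse proxy
      address 10.13.13.2/24
      gateway 10.13.13.1
      # enslave the physical NIC to the bridge so the VMs share it
      bridge-ports eno1
      bridge-stp off
      bridge-fd 0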
              
Make sure the PVE instance, which is also our reverse proxy, has the FQDN set up properly in ''/etc/hosts''. Also make sure openWRT assigns the same local address (with dhcp) to the PVE as you assign within the PVE. Traffic on 80/443 goes from the openWRT router at 10.13.13.1 to the network bridge and reverse proxy PVE instance at 10.13.13.2, and from there to the upstream services. Therefore, we want to ensure that both the openWRT router and the PVE instance itself have static addresses so we can easily get to the management GUI for the essential services the PVE instance will be serving. Make sure the PVE instance is named properly and reboot.
  
  sudo nano /etc/hosts
Line 82: Line 76:
  <pve.domain.com>
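
As a concrete sketch (the hostname is the placeholder from above and the address is the static LAN address discussed earlier), the entry would look something like:

  # map the PVE host's FQDN to its static LAN address
  10.13.13.2   pve.domain.com pve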
  
Inside the PVE instance, set up configurations for each website on nginx as follows:
  
  sudo apt install nginx
  sudo nano /etc/nginx/sites-available/music.domain.com.conf
      
Enter this server block, adapted as needed:
  
  server {
Line 101: Line 95:
  sudo nano /etc/nginx/sites-available/nextcloud.domain.com.conf
      
And enter something like ...
  
  server {
Line 162: Line 156:
  
  30 2 * * 1 /usr/bin/certbot renew >> /var/log/le-renew.log

On one of my VMs, which runs apache behind the nginx reverse proxy, I got a client sync error for the nextcloud service running on it. It was complaining about the file size. Note: the GNU/Linux clients had no such error. Anyway, the following was needed in the PVE / reverse proxy configs to resolve this error:
  
  sudo nano /etc/nginx/proxy_params
  <post_max_size 10G;>
  <upload_max_filesize 10G;>
  sudo nano /etc/nginx/nginx.conf
  <client_max_body_size 10G;>
  <client_body_buffer_size 400M;>
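
After editing nginx configs like this, it's worth validating the syntax and reloading before retrying the client (assuming the stock systemd service name):

  # check the configuration for errors, then apply it without downtime
  sudo nginx -t
  sudo systemctl reload nginx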
  
Now the reverse proxy / PVE instance has matching certs to the clients it's serving. This project achieves VMs that are publicly accessible, internally accessible, TLS-secured, and kept separate from one another (for database/stack reasons), all while using only one external IP. Now, for managing how much your instance can handle, here are some tips:

  * Cores per socket * threads per core * sockets (physical CPUs) = assignable virtual cores
  * If using zfs for the storage, estimate that it will use up to 50% of RAM

If RAM is an issue, make sure to throttle arc. In my case, I let zfs run free and tightly provision the VMs. Each rig and use case will be different; adjust accordingly. Above, I already shared the command to determine your physical CPUs / sockets. Here's how to determine all three:

  lscpu | grep "Thread"
  lscpu | grep "per socket"
  lscpu | grep "Socket(s)"
  Thread(s) per core: 2
  Core(s) per socket: 12
  Socket(s): 2
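
As a quick cross-check (not part of the original notes), the kernel's own count of logical CPUs should equal the product of those three numbers:

  # total logical CPUs the host sees
  nproc --all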

To determine vCPUs, you do threads per core * cores per socket * sockets, or 2 * 12 * 2 = 48 for a Supermicro with 2x Xeon E5-2650 v4 chips. As for RAM, I have 64GB total; subtracting roughly 4GB for the OS and roughly 32GB for zfs arc leaves about 28GB for the VMs. PVE is very strict in interpreting allocated RAM and will OOM kill VMs if it senses it is short. This is in contrast, for example, with virsh, which will allow sysadmins to run how they see fit even at the risk of virsh crashing. For this reason, either expand your rig, adjust arc, or tightly provision your VMs, or all three. I personally just tightly provisioned my VMs as I want zfs to use as much RAM as it needs. If you prefer to throttle arc, here's how:

  sudo nano /etc/modprobe.d/zfs.conf
  options zfs zfs_arc_max=1073741824 [1 gigabyte example]
  update-initramfs -u -k all
  reboot
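
After the reboot, you can confirm the new cap took effect (a quick sanity check, not from the original notes):

  # current arc size limit in bytes; 0 means the zfs default (half of RAM)
  cat /sys/module/zfs/parameters/zfs_arc_max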
 +
 +Adjust arc throttle to fit your system. If you need to stop a system at the CLI because you can't reboot/shutdown at the GUI-level, then do something like:
 +
 +  /var/lock/qemu-server/lock-101.conf
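
A sketch of the usual sequence (using the 101 vmid from the lock path above; ''qm'' runs as root on the PVE host):

  # release any stale lock on the guest, then stop it from the CLI
  qm unlock 101
  qm stop 101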

Next entry ...
  
 --- //[[jonathan@haacksnetworking.org|oemb1905]] 2023/12/28 05:26//