proxmox
This tutorial has my notes for setting up Proxmox on Debian GNU/Linux. Ultimately, I set up a PVE instance on my home server as a network bridge and reverse proxy that can host multiple websites/services with different domains using only one external IP address. Later, I will add another section on adapting the reverse proxy and network bridge setup in the PVE instance to an external block of IPs. My first goal was to make sure I could get virsh images into PVE and PVE images out of PVE and back into virsh. To import virsh images into PVE, create an empty VM in the GUI, then use the following commands, changing 500 to match your vmid:
qm importdisk 500 hub.jonathanhaack.com.qcow2 <dataset>
qm set 500 --scsi0 vms:vm-500-disk-0 [attaches the imported disk as scsi0]
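If you also want the imported disk to be the primary boot device, which is what the bracketed note is after, the boot order can be set explicitly on recent PVE versions (this assumes the disk landed on scsi0 as above):
qm set 500 --boot order=scsi0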
Alternatively, if you just want to attach a raw image file, or virtual disk, to an existing Proxmox VM, just do the following:
qm importdisk <vmid> obstorage.img <dataset>
After that, you can attach the imported disk under Hardware in the VM settings. To export a VM at the command line, execute:
vzdump <vmID>
vzdump <vmID> --dumpdir /mnt/backups/machines/pve-images/<vmname>
This will create a backup of the VM in .vma format. Then, you can extract the vma to a raw image as follows:
vma extract /var/lib/vz/dump/vzdump-qemu-101-2023_06_18-12_17_40.vma /mnt/backups/vma-extract
If it is compressed:
zstd -d vzdump-qemu-506-2023_12_24-04_40_17.vma.zst
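If the goal is to take the extracted raw disk back into virsh, I convert it to qcow2 with qemu-img; the filename below is only an example, since vma extract names the disks after their bus/device, so adjust it to whatever appears in your extract directory:
qemu-img convert -O qcow2 /mnt/backups/vma-extract/disk-drive-scsi0.raw hub.jonathanhaack.com.qcow2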
To see where the virtual machine's disk volumes (ZFS zvols) are, run the following:
zfs list
To delete one (be careful), use the following:
zfs destroy pool/dataset
I've migrated my Windows VM here for testing to avoid the cumbersome EFI/TPM setup on virt-manager. The Proxmox wiki has a page on best practices for Windows VMs. One thing I noticed when doing the Windows install is that PVE makes two more disk volumes for the TPM and EFI, so you get something like the following in zfs list:
vms/vm-102-disk-0
vms/vm-102-disk-1
vms/vm-102-disk-2
Be careful not to delete those thinking they are extraneous; they are required for booting, even though that might not be clear at first glance. After getting images in and out of PVE and spinning up the old Windows VM, I decided a year or so later that it would be worth spinning PVE up with a network bridge and reverse proxy to manage my home network. The home Nextcloud, Airsonic, Pi-hole, and other similar services would be virtualized in PVE, and the PVE bridge and reverse proxy would route traffic to them. I intended to support the Proxmox project with a small community license, so first I needed to verify my CPU socket count:
grep "physical id" /proc/cpuinfo | sort -u | wc -l
While installing Proxmox on top of stock Debian, Postfix asked me how to set it up; I chose "Satellite system" and sent outgoing email to my relay as follows:
[relay.haacksnetworking.org]:587
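For reference, choosing the satellite option mostly just sets the relayhost line in Postfix's main.cf; it ends up looking roughly like this:
# /etc/postfix/main.cf (excerpt)
relayhost = [relay.haacksnetworking.org]:587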
Once PVE is set up, log in locally and create the following in /etc/network/interfaces. Make sure you are using ifupdown, not netplan.
source /etc/network/interfaces.d/*

auto lo
iface lo inet loopback

iface enp0s31f6 inet manual

auto vmbr0
iface vmbr0 inet static
        address 10.13.13.2/24
        gateway 10.13.13.1
        bridge-ports enp0s31f6
        bridge-stp off
        bridge-fd 0
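To apply the bridge without rebooting, recent PVE releases ship ifupdown2, so a reload should pick up the new stanza (reboot instead if unsure):
ifreload -a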
Make sure the PVE instance, which is also our reverse proxy, has its FQDN set up properly in /etc/hosts. Make sure openWRT assigns (via DHCP) the same local address to the PVE host that you configured within PVE. Traffic on 80/443 goes from the openWRT router at 10.13.13.1 to the network bridge and reverse proxy PVE instance at 10.13.13.2, and from there on to the backend VMs. Therefore, we want both the openWRT router and the PVE instance to have static addresses so we can easily reach the management GUI and the essential services the PVE instance will be serving. Make sure the PVE instance is named properly and reboot.
sudo nano /etc/hosts
<10.13.13.2 pve.domain.com pve>
sudo nano /etc/hostname
<pve.domain.com>
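After the reboot, a quick sanity check I like is to confirm the FQDN resolves the way PVE expects:
hostname -f
# should print pve.domain.com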
Inside the PVE instance, set up configurations for each website on nginx as follows:
sudo apt install nginx
sudo nano /etc/nginx/sites-available/music.domain.com.conf
Enter this server block, adapted as needed:
server {
        server_name music.domain.com;
        location / {
                proxy_pass http://10.13.13.3;
                #proxy_set_header Host music.domain.com;
        }
}
Repeat this as needed for each VM within PVE. For example, here's another entry for an instance running Nextcloud instead of music:
sudo nano /etc/nginx/sites-available/nextcloud.domain.com.conf
And enter something like …
server {
        server_name nextcloud.domain.com;
        location / {
                proxy_pass http://10.13.13.4;
                #proxy_set_header Host nextcloud.domain.com;
        }
}
To activate this server block, do as follows:
cd /etc/nginx/sites-enabled
ln -s ../sites-available/pihole.outsidebox.vip.conf .
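After linking each config, I test the syntax and reload nginx so the new server blocks take effect:
sudo nginx -t
sudo systemctl reload nginx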
Once this is done, set up TLS on the PVE/RP instance as follows:
sudo apt install certbot letsencrypt python3-certbot-nginx
sudo certbot --authenticator standalone --installer nginx -d example.domain.com --pre-hook "systemctl stop nginx" --post-hook "systemctl start nginx"
If you want to make a proper website and/or encrypt the connection to the PVE instance on the LAN, then make an entry like the ones above for those domains. This time, however, add a few lines so the site is accessible only from your LAN subnet and/or that of your VPN. The allow/deny lines stop the PVE from being publicly accessible.
server {
        server_name pve.domain.com;
        location / {
                proxy_pass http://10.13.13.254;
                #permit url requests from the lan
                allow 10.13.13.0/24;
                #permit url requests from the vpn
                allow 10.33.33.0/24;
                #deny all url requests except those above
                deny all;
        }
}
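Note that if the backend behind a block like this is the PVE web GUI itself rather than a VM, the GUI listens on HTTPS port 8006 and the noVNC console uses WebSockets, so the location block would likely need to look more like the sketch below; the 127.0.0.1 target is an assumption for a proxy running on the PVE host itself, so adapt it to your layout:
location / {
        #proxy to the PVE web GUI on this host (assumed address/port)
        proxy_pass https://127.0.0.1:8006;
        #pass WebSocket upgrades through for the noVNC console
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
}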
After adding the server block for the PVE domain, make sure to run certbot again for that domain. Once that's done, you need to set up NAT loopback on the openWRT router so that the instances behind the reverse proxy / PVE instance can route back out. You do that by going to Firewall / Port Forward / Edit / Advanced Settings / Enable NAT Loopback. For the devices within the LAN to locate these domains, you must enter host entries on the openWRT router; that's done in DHCP and DNS / Hostnames / Add. Within each VM, make sure to set the FQDN and install your web server of choice. Make sure to also run certbot within the VMs themselves so that matching certificates are installed. Install headers within the VMs as follows, repeating for each VM:
sudo nano /etc/hosts
<127.0.0.1 music.example.com music>
sudo nano /etc/apache2/sites-enabled/music.example.com.conf
<ServerName music.example.com>
<Header add myheader "music.example.com">
<RequestHeader set myheader "music.example.com">
sudo nano /etc/hosts
<127.0.0.1 nextcloud.example.com nextcloud>
sudo nano /etc/apache2/sites-enabled/nextcloud.example.com.conf
<ServerName nextcloud.example.com>
<Header add myheader "nextcloud.example.com">
<RequestHeader set myheader "nextcloud.example.com">
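The Header and RequestHeader directives come from Apache's mod_headers, so if it is not already enabled on a stock Debian install, turn the module on before the restart in the next step:
sudo a2enmod headers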
Once this is done, restart your web server. In my case, the VMs are using apache and only the PVE / reverse proxy server is using nginx. After restarting the web servers inside each VM, you can now create the matching certs inside each VM. Here are the commands for both VMs:
sudo apt install certbot letsencrypt python3-certbot-apache
sudo certbot --authenticator standalone --installer apache -d music.example.com --pre-hook "systemctl stop apache2" --post-hook "systemctl start apache2"
sudo apt install certbot letsencrypt python3-certbot-apache
sudo certbot --authenticator standalone --installer apache -d nextcloud.example.com --pre-hook "systemctl stop apache2" --post-hook "systemctl start apache2"
Once that's done, make sure that you have cron jobs set for both VMs and for the reverse proxy server / PVE instance. Just enter the following in crontab for all three:
30 2 * * 1 /usr/bin/certbot renew >> /var/log/le-renew.log
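Before relying on the cron job, a dry run on each machine confirms the renewal hooks work without touching the live certs:
sudo certbot renew --dry-run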
On one of my VMs, which runs apache behind the nginx reverse proxy, I got a client sync error for the Nextcloud service running on it; it was complaining about the file size. Note: the GNU/Linux clients had no such error. The following changes were needed in the PVE / reverse proxy configs to resolve this error:
sudo nano /etc/nginx/proxy_params
<post_max_size 10G;>
<upload_max_filesize 10G;>
sudo nano /etc/nginx/nginx.conf
<client_max_body_size 10G;>
<client_body_buffer_size 400M;>
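Note that post_max_size and upload_max_filesize are PHP settings rather than nginx directives, so if large uploads still fail, the same limits usually need raising in php.ini inside the Nextcloud VM as well; the path below is an assumption for a stock Debian Apache/PHP install:
# e.g. /etc/php/8.2/apache2/php.ini inside the Nextcloud VM
upload_max_filesize = 10G
post_max_size = 10G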
Now the reverse proxy / PVE instance has certs matching those of the clients it's serving. This project achieves VMs that are publicly accessible, internally accessible, TLS-secured, and kept as separate VMs (for database/stack reasons), all using only one external IP. Now, for managing how much your instance can handle, here are some tips:
If RAM is an issue, make sure to throttle the ZFS ARC. In my case, I let ZFS run free and tightly provision the VMs. Each rig and use case will be different; adjust accordingly. Above, I already shared the command to determine your physical CPUs / sockets. Here's how to determine threads per core, cores per socket, and sockets:
lscpu | grep "Thread"
lscpu | grep "per socket"
lscpu | grep "Socket(s)"

Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 2
To determine vCPUs, multiply threads per core by cores per socket by sockets, or 2*12*2 = 48 for a Supermicro with 2x Xeon E5-2650 v4 chips. As for RAM, I have 64GB total; subtracting roughly 4GB for the OS, I allot about 32GB for the VMs and leave the rest for ZFS. PVE is very strict in interpreting allocated RAM and will OOM-kill VMs if it senses it is short. This is in contrast, for example, with virsh, which lets sysadmins run things how they see fit, even at the risk of virsh crashing. For this reason, either expand your rig, adjust the ARC, tightly provision your VMs, or do all three. I personally just tightly provisioned my VMs, as I want ZFS to use as much RAM as it needs. If you prefer to throttle the ARC, here's how:
sudo nano /etc/modprobe.d/zfs.conf
options zfs zfs_arc_max=1073741824 [1 gigabyte example]
update-initramfs -u -k all
reboot
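To confirm the new cap took effect after the reboot, the value can be read back from the module parameter (0 means no explicit cap was set):
cat /sys/module/zfs/parameters/zfs_arc_max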
Adjust the ARC throttle to fit your system. If you need to stop a VM at the CLI because you can't reboot/shut it down from the GUI, then remove its lock file and stop it with something like:
rm /var/lock/qemu-server/lock-101.conf
qm stop 101
Next entry …
— oemb1905 2023/12/28 05:26