computing:proxmux [2023/12/23 22:09] – oemb1905 · [2023/12/24 11:26] (current) – removed oemb1905
-------------------------------------------

  * **proxmux**
  * **Jonathan Haack**
  * **Haack's Networking**
  * **webmaster@haacksnetworking.org**

-------------------------------------------

//

-------------------------------------------

I've been testing Proxmox on my home backup server. The server is my old production SuperMicro, which is now used for offsite backups in the home office. I have two 6TB drives in a ZFS mirror for VM spinning (no spare), and the other 6x 6TB drives are in 3 two-way mirrors for actual production backups. I am using the Debian underbelly with the 3 two-way mirrors to run my normal rsnapshot version control scripts, etc., while I use one pool for testing Proxmox functionality. So far everything is working fine, and it's fun to use. Until I am offering advanced business / small enterprise support though, there'

  qm importdisk 500 hub.jonathanhaack.com.qcow2 <
  qm set 500 --scsi0 vms:

These commands bring the image in block by block, and then re-assign the virtual disk that VM 500 uses to the image you just imported, instead of the placeholder image you created in the GUI during the prior step. In order to export a VM at the command line, execute:
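Since several of the lines above are cut off, here is a sketch of the full import sequence with hypothetical names (the VM id, image name, storage pool, and resulting disk name are assumptions, not values from the original). The commands are echoed rather than executed, so the sketch is safe to run anywhere:

```shell
# Hypothetical values -- your real VM id, image name, and storage pool will differ.
VMID=500
IMG=hub.jonathanhaack.com.qcow2
STORAGE=vms

# Echo the commands instead of running them, so this is safe outside a PVE host.
echo "qm importdisk $VMID $IMG $STORAGE"
echo "qm set $VMID --scsi0 $STORAGE:vm-$VMID-disk-0"
```

On a real PVE host, drop the ''echo'' wrappers and substitute the disk name that ''qm importdisk'' actually reports.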

  vzdump <
  vzdump <

This will create a backup of the VM in .vma format. Then, you can extract the vma to a raw image as follows:
  vma extract /

Optionally, you can use the ''vma.py'' script instead:

  python3 vma.py vzdump-qemu-101-2023_06_18-12_17_40.vma /
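Once extracted, the result is one raw image per virtual disk, and a common follow-up is converting raw back to qcow2 with ''qemu-img''. The filenames below are hypothetical, and the command is echoed rather than executed:

```shell
# Hypothetical filenames -- vma extract writes raw disk images, and
# qemu-img can convert one back to qcow2. Echoed, not executed.
RAW=disk-drive-scsi0.raw
QCOW=hub.jonathanhaack.com.qcow2
echo "qemu-img convert -f raw -O qcow2 $RAW $QCOW"
```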

In order to see where the virtual machine's disks are stored, execute:

  zfs list

To delete one (be careful), use the following:

  zfs destroy pool/

I've migrated my Windows VM here for testing to avoid the cumbersome EFI/TPM setup on virt-manager. Here's a page from the Proxmox wiki on best practices for [[https://

  vms/
  vms/
  vms/

Be careful not to delete those in the mistaken belief that they are extraneous - they are required for booting, though that might not be clear at first look. Also, I chose to install those on the ''

To verify sockets and cores:

  grep "
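The grep above is truncated; one common approach is matching the ''physical id'' and ''cpu cores'' fields of ''/proc/cpuinfo''. The sketch below runs against a small inline sample (an assumption, standing in for the real file) so it can be tested anywhere; on a live host, read ''/proc/cpuinfo'' directly instead:

```shell
# Inline sample standing in for /proc/cpuinfo (two sockets, four cores each).
sample='physical id	: 0
cpu cores	: 4
physical id	: 1
cpu cores	: 4'
# Sockets = number of distinct "physical id" lines; cores = per-socket core count.
sockets=$(printf '%s\n' "$sample" | grep "physical id" | sort -u | wc -l)
cores=$(printf '%s\n' "$sample" | grep "cpu cores" | sort -u | awk -F': ' '{print $2}')
echo "sockets=$sockets cores_per_socket=$cores"
```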
I had to switch to Postfix with my relay, so I used the satellite configuration and:

  [relay.haacksnetworking.org]:
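The truncated line above looks like a ''relayhost'' entry from ''/etc/postfix/main.cf''. A minimal satellite-style fragment might look like the following; the '':587'' port is an assumption, since the original line is cut off before the port:

```
# /etc/postfix/main.cf (fragment) -- satellite system relaying through a smarthost.
# The :587 port is an assumption; check the real config for the actual value.
relayhost = [relay.haacksnetworking.org]:587
```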

I created a bridge on the PVE instance with this recipe:

  source /
  auto lo
  iface lo inet loopback
  iface enp0s31f6 inet manual
  auto vmbr0
  iface vmbr0 inet static
          address 10.18.18.2/
          gateway 10.18.18.1
          bridge-ports enp0s31f6
          bridge-stp off
          bridge-fd 0
Make sure the PVE instance, which is also our reverse proxy, has the FQDN set up in ''/

  sudo nano /etc/hosts
  <
  sudo nano /
  <

Set up configurations for each website on nginx as follows:

  sudo apt install nginx
  sudo nano /

Enter this in the file:

  server {
      server_name
      location / {
          proxy_pass http://
          #
      }
  }
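For reference, a fuller version of such a server block typically includes the usual forwarding headers; the hostname and upstream address below are placeholders, not values from the original:

```
server {
    server_name music.example.com;
    location / {
        # Hypothetical LAN address of the VM behind the bridge.
        proxy_pass http://10.18.18.10:80;
        # Forward the original host and client address to the upstream VM.
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```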

Repeat this as needed for each VM within PVE. For example, here's another entry for an instance running Nextcloud instead of music:

  sudo nano /

Enter this in the file:

  server {
      server_name
      location / {
          proxy_pass http://
          #
      }
  }

Once this is done, set up TLS on the PVE/RP instance as follows:

  sudo apt install certbot letsencrypt python3-certbot-nginx
  sudo certbot --authenticator standalone --installer nginx -d example.domain.com --pre-hook "

Remember, the reverse proxy web server does not need to match the upstream web server on the LAN at the two different clients ''

  sudo nano /etc/hosts [inside VM1]
  <
  sudo nano /
  <
  <Header add myheader "
  <

That takes care of VM1; now let's do VM2:

  sudo nano /etc/hosts [inside VM2]
  <
  sudo nano /
  <
  <Header add myheader "
  <

Once this is done, restart your web server. In my case, the VMs are using Apache and only the PVE / reverse proxy server is using nginx. After restarting the web servers inside each VM, you can now create the matching certs inside each VM. Here are the commands for VM1:

  sudo apt install certbot letsencrypt python3-certbot-apache
  sudo certbot --authenticator standalone --installer apache -d music.example.com --pre-hook "

Here are the commands inside VM2:

  sudo apt install certbot letsencrypt python3-certbot-apache
  sudo certbot --authenticator standalone --installer apache -d nextcloud.example.com --pre-hook "

Once that's done, make sure that you have cron jobs set for both VMs and for the reverse proxy server / PVE instance. Just enter the following in crontab for all three:

  30 2 * * 1 /
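The crontab entry above is cut off after the path; a complete weekly renewal line might look like the following. The binary path and the hooks are assumptions based on the standalone authenticator used earlier (inside the VMs, the hooks would stop and start apache2 instead of nginx):

```
# m h dom mon dow  command -- runs Mondays at 02:30.
# Path and hooks are example values; check your own crontab for the real line.
30 2 * * 1 /usr/bin/certbot renew --pre-hook "service nginx stop" --post-hook "service nginx start"
```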

This should now make Chrome happy, since the VM cert will match the cert of the reverse proxy. Why does this work, you might ask? Well, when you run certbot inside the VM, it merely cares whether the domain can be reached externally where you've declared it to be, and since it can, it creates the cert without issue. The reverse proxy / PVE instance itself is also able to handle requests for these domains, so certbot likewise has no problem issuing the cert there either. This not only makes Chrome happy, but it also addresses what the Chrome developers and RFC 6844 are concerned about, namely, that without this ... connections inside the LAN could potentially be non-TLS or non-header-matching. So, this is the best of both worlds.

--- //