computing:proxmux — last edited 2023/12/23 21:45 by oemb1905; removed 2023/12/24 11:26
-------------------------------------------
  * **proxmux**
  * **Jonathan Haack**
  * **Haack's Networking**
  * **webmaster@haacksnetworking.org**
-------------------------------------------

//proxmux//

-------------------------------------------

I've been testing Proxmox on my home backup server. The server is my old production SuperMicro, which is now used for offsite backups in the home office. I have two 6TB drives in a zfs mirror for spinning up VMs (no spare), and the other six 6TB drives are in three two-way mirrors for actual production backups. I am using the Debian underbelly with the three two-way mirrors to run my normal rsnapshot version-control scripts, etc., while I use one pool for testing Proxmox functionality. So far everything is working fine, and it's fun to use. Until I am offering advanced business / small enterprise support, though, there's no rush to go deeper than that. To import an existing qcow2 image into a VM created beforehand in the GUI, run:

  qm importdisk 500 hub.jonathanhaack.com.qcow2 vms
  qm set 500 --scsi0 vms:vm-500-disk-0
These commands bring in the image block by block, and then reassign the virtual disk that VM 500 uses to the image you just imported, instead of the placeholder image you created in the GUI during the prior step. In order to export a VM at the command line, execute:

  vzdump <vmid>
  vzdump <vmid> --dumpdir /path/to/dump/directory
This will create a backup of the VM in .vma format. Then, you can extract the .vma archive to a raw image as follows:

  vma extract /path/to/vzdump-qemu-<vmid>-<timestamp>.vma /path/to/extraction/directory
Optionally, on a machine without the ''vma'' binary, you can use the standalone ''vma.py'' script to do the extraction:

  python3 vma.py vzdump-qemu-101-2023_06_18-12_17_40.vma /path/to/extraction/directory
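Once extracted, the raw disk can be converted back to qcow2 with ''qemu-img'' for use outside Proxmox. ''vma extract'' names the raw files after the drive (e.g. ''disk-drive-scsi0.raw''). The helper below only assembles the command string and is my own illustration, not from the original notes:

```shell
# Assemble a raw -> qcow2 conversion command (paths are illustrative).
build_convert() {
    printf 'qemu-img convert -f raw -O qcow2 %s %s\n' "$1" "$2"
}
build_convert /tmp/extract/disk-drive-scsi0.raw /tmp/hub.jonathanhaack.com.qcow2
```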
In order to see where the virtual machines' disks (zvols) live within the pool, run:

  zfs list
To delete one (be careful), use the following:

  zfs destroy pool/<dataset>
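''zfs destroy'' has a dry-run mode that is worth using first. The stub below just echoes the command so the sketch is side-effect free; on the real host you would drop the stub (the dataset name is illustrative):

```shell
# -n = dry run (show what would be destroyed), -v = verbose output.
zfs() { echo "would run: zfs $*"; }  # stub for illustration only
zfs destroy -n -v vms/vm-500-disk-0
```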
I've migrated my Windows VM here for testing, to avoid the cumbersome EFI/TPM setup in virt-manager; the Proxmox wiki has a page on best practices for Windows guests. After such a VM is created, the EFI disk and TPM state appear as their own zvols alongside the main disk:

  vms/vm-<vmid>-disk-0
  vms/vm-<vmid>-disk-1
  vms/vm-<vmid>-disk-2
Be careful not to delete those thinking wrongly that they are extraneous; the EFI and TPM zvols are required for booting, but it might not be clear upon first look. Also, I chose to install those on the ''vms'' pool alongside the primary disk.

To verify sockets and cores on the host:

  grep "physical id" /proc/cpuinfo | sort -u | wc -l
  grep "cpu cores" /proc/cpuinfo | uniq
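The same checks can be rolled into one snippet. ''physical id'' and ''cpu cores'' are the standard x86 field names in ''/proc/cpuinfo'', so treat this as a sketch that may need adjusting on other architectures:

```shell
# Count sockets, cores per socket, and logical CPUs from /proc/cpuinfo.
sockets=$(grep 'physical id' /proc/cpuinfo | sort -u | wc -l)
cores=$(grep -m1 'cpu cores' /proc/cpuinfo | awk -F': ' '{print $2}')
threads=$(grep -c '^processor' /proc/cpuinfo)
echo "sockets=$sockets cores_per_socket=$cores threads=$threads"
```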
I had to switch to postfix to use my relay, so I chose the Satellite system option during configuration and set the smarthost to:

  [relay.haacksnetworking.org]:<port>
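For reference, the Satellite choice mainly ends up as a few lines in ''/etc/postfix/main.cf''. A sketch, assuming the relay listens on submission port 587 (the port was not recorded above, so treat it as an assumption):

```
# Illustrative /etc/postfix/main.cf fragment for a satellite host.
# Port 587 (submission) is an assumption; match it to your relay.
relayhost = [relay.haacksnetworking.org]:587
inet_interfaces = loopback-only
mynetworks = 127.0.0.0/8 [::1]/128
```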
I created the bridge on the PVE instance with this recipe in ''/etc/network/interfaces'':

  source /etc/network/interfaces.d/*

  auto lo
  iface lo inet loopback

  iface enp0s31f6 inet manual

  auto vmbr0
  iface vmbr0 inet static
          address 10.18.18.2/24
          gateway 10.18.18.1
          bridge-ports enp0s31f6
          bridge-stp off
          bridge-fd 0
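Since a typo in ''/etc/network/interfaces'' can cut off a headless box, one cautious pattern is to stage the stanza in a temp file and sanity-check it before merging it in. A sketch (the paths and the check are mine, not from the original):

```shell
# Stage the bridge stanza somewhere harmless first (values illustrative).
cat > /tmp/vmbr0.stanza <<'EOF'
auto vmbr0
iface vmbr0 inet static
        address 10.18.18.2/24
        gateway 10.18.18.1
        bridge-ports enp0s31f6
        bridge-stp off
        bridge-fd 0
EOF
# Minimal sanity check: a bridge without member ports forwards nothing.
grep -q 'bridge-ports' /tmp/vmbr0.stanza && echo "stanza looks sane"
```

After merging it into ''/etc/network/interfaces'', PVE's ifupdown2 can apply the change live with ''ifreload -a''.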
Make sure the PVE instance, which is also our reverse proxy, has its FQDN set up in ''/etc/hosts'' and ''/etc/hostname'':

  sudo nano /etc/hosts
  <ip-address> <fqdn> <hostname>
  sudo nano /etc/hostname
  <hostname>
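A quick way to confirm the two files agree (PVE expects the hostname and the FQDN to resolve via ''/etc/hosts''):

```shell
# Print the short name and attempt the FQDN lookup; the fallback message
# avoids a hard failure while things are still being configured.
hostname
hostname -f || echo "FQDN not resolvable yet; check /etc/hosts"
```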

Set up a configuration file for each website in nginx as follows:

  sudo apt install nginx
  sudo nano /etc/nginx/sites-available/example.domain.com
Enter this in the file:

  server {
      server_name example.domain.com;
      location / {
          proxy_pass http://<vm-ip>:<port>;
      }
  }
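Since the per-site files differ only in the domain and the backend address, a small helper can stamp them out. A sketch (the function name and example backend are mine, not from the original):

```shell
# Hypothetical helper: print a minimal reverse-proxy server block for a
# given domain and backend address, one file per VM.
gen_site() {
    domain="$1"; backend="$2"
    printf 'server {\n'
    printf '    server_name %s;\n' "$domain"
    printf '    location / {\n'
    printf '        proxy_pass http://%s;\n' "$backend"
    printf '    }\n'
    printf '}\n'
}
gen_site example.domain.com 10.18.18.3:80
```

Write the output to ''/etc/nginx/sites-available/<domain>'', symlink it into ''sites-enabled/'', then run ''nginx -t && systemctl reload nginx''.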

Repeat this as needed for each VM within PVE. Once that's done, set up TLS on the PVE/RP instance as follows:

  sudo apt install certbot python3-certbot-nginx
  sudo certbot --authenticator standalone --installer nginx -d example.domain.com --pre-hook "systemctl stop nginx" --post-hook "systemctl start nginx"
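With several VMs behind the proxy this gets repetitive, so a tiny helper can print the per-domain command. The helper name is hypothetical, and the hook commands are the standard way to free port 80 for the standalone challenge:

```shell
# Print the certbot invocation for a domain (standalone auth needs port 80,
# hence the hooks that stop/start nginx around the challenge).
gen_certbot() {
    printf 'certbot --authenticator standalone --installer nginx -d %s --pre-hook "systemctl stop nginx" --post-hook "systemctl start nginx"\n' "$1"
}
gen_certbot example.domain.com
```

Certbot saves these options in the domain's renewal config, so the systemd renewal timer reuses the same hooks automatically.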
--- //oemb1905//