--------------------------------------------
  * **proxmux**
  * **Jonathan Haack**
  * **Haack's Networking**
  * **webmaster@haacksnetworking.org**

--------------------------------------------

//proxmux//

--------------------------------------------

I've been testing Proxmox on my home backup server. The server is my old production SuperMicro, which is now used for offsite backups in the home office. I have two 6TB drives in a zfs mirror for spinning up VMs (no spare), and the other 6x 6TB drives are in 3 two-way mirrors for actual production backups. I use the underlying Debian install with the 3 two-way mirrors to run my normal rsnapshot version control scripts, etc., while I use the remaining pool for testing Proxmox functionality. So far everything is working fine, and it's fun to use. Until I am offering advanced business / small enterprise support, there's not really a need for these tools, but that's not the point; the point is to test now for a later date.

The first thing I tested was how to bring in an existing virtual machine. To do that, create a machine in the GUI with no OS and a trivially small disk size, and make sure the other resources match what you need for the VM. Then, run these commands on the command line within the Proxmox host:

  qm importdisk 500 hub.jonathanhaack.com.qcow2 <storage>
  qm set 500 --scsi0 vms:vm-500-disk-0

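If you'd rather not click through the GUI for the placeholder VM, ''qm create'' can do the same from the shell. This is only a sketch; the name, memory, core count, and bridge below are placeholder values, not something from the setup described here:

  # create an empty VM 500 with no OS disk; attach the imported disk afterwards with qm importdisk / qm set
  qm create 500 --name hub --memory 8192 --cores 4 --net0 virtio,bridge=vmbr0 --scsihw virtio-scsi-pci
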
''qm importdisk'' brings the image in block by block, and ''qm set'' then re-assigns the virtual disk that VM 500 uses to the image you just imported, instead of the placeholder disk you created in the GUI during the prior step. In order to export a VM at the command line, execute:

  vzdump <vmID>
  vzdump <vmID> --dumpdir /mnt/backups/machines/pve-images/<vmname>

This will create a backup of the VM in .vma format. Then, you can extract the vma to raw disk images as follows:

  vma extract /var/lib/vz/dump/vzdump-qemu-101-2023_06_18-12_17_40.vma /mnt/backups/vma-extract

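The extracted disks come out as raw images; if you need qcow2 again (for example, to re-import a disk with ''qm importdisk'' as above, or to reuse it under virt-manager), ''qemu-img'' can convert them. The filenames here are hypothetical and depend on what ''vma extract'' writes out:

  # convert one of the extracted raw disks to qcow2 (example filenames)
  qemu-img convert -f raw -O qcow2 /mnt/backups/vma-extract/disk-drive-scsi0.raw /mnt/backups/vma-extract/disk-drive-scsi0.qcow2
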
Optionally, you can use the python tool [[https://github.com/jancc/vma-extractor/blob/master/vma.py|vma.py]]. I don't see much value in using it, however, as it is much slower than the native tool and likewise only produces raw images. Example syntax is:

  python3 vma.py vzdump-qemu-101-2023_06_18-12_17_40.vma /mnt/backups/vma

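For completeness: the usual way to bring a ''.vma'' dump back into Proxmox is ''qmrestore'' rather than extracting it by hand. A sketch reusing the dump file from above; the target VM ID and the ''vms'' storage name are assumptions:

  qmrestore /var/lib/vz/dump/vzdump-qemu-101-2023_06_18-12_17_40.vma 101 --storage vms
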
In order to see where the virtual machines' zvols are, run the following:

  zfs list

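To narrow that down to just the VM volumes, ''zfs list'' can filter by type:

  zfs list -t volume -o name,volsize,used
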
To delete one (be careful), use the following:

  zfs destroy pool/dataset

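Since ''zfs destroy'' is irreversible, it's worth doing a dry run first: ''-n'' only simulates the destroy and ''-v'' prints what would be removed.

  zfs destroy -nv pool/dataset
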
I've migrated my Windows VM here for testing, to avoid the cumbersome EFI/TPM setup on virt-manager. Here's a page from the Proxmox wiki on best practices for [[https://pve.proxmox.com/wiki/Windows_10_guest_best_practices|Windows VMs]]. One thing I noticed when doing the Windows install is that PVE creates two more zvols for the TPM / EFI disks, so you get something like the following in ''zfs list'':

  vms/vm-102-disk-0
  vms/vm-102-disk-1
  vms/vm-102-disk-2

Be careful not to delete those thinking they are extraneous: they are required for booting, even though that might not be clear at first glance. Also, I chose to put them on the ''zpool'' while PVE's default is to store them locally, so they will likely not appear there if you did not pick the option to store them on the pool.

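If in doubt about which zvol belongs to what, ''qm config'' lists every disk a VM references (using VM 102 from the example above):

  qm config 102
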
To verify the number of physical CPU sockets:

  grep "physical id" /proc/cpuinfo | sort -u | wc -l

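For cores and threads per socket as well, ''lscpu'' summarizes the topology in one place:

  lscpu | grep -E '^(Socket|Core|Thread|CPU)\(s\)'
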
I had to switch to postfix for outgoing mail through my relay, so I chose the satellite system configuration and set the relay host to:

  [relay.haacksnetworking.org]:587

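That value ends up as the ''relayhost'' line in ''/etc/postfix/main.cf''. If the relay requires authentication, the SASL lines below are the usual additions; they are an assumption on my part, not part of the setup described above:

  relayhost = [relay.haacksnetworking.org]:587
  # only needed if the relay requires SMTP AUTH (assumption):
  smtp_sasl_auth_enable = yes
  smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
  smtp_tls_security_level = encrypt
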
I created the bridge on the PVE instance with this recipe in ''/etc/network/interfaces'':

  source /etc/network/interfaces.d/*
  auto lo
  iface lo inet loopback
  iface enp0s31f6 inet manual
  auto vmbr0
  iface vmbr0 inet static
      address 10.18.18.2/24
      gateway 10.18.18.1
      bridge-ports enp0s31f6
      bridge-stp off
      bridge-fd 0

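To apply the new bridge without rebooting: if ifupdown2 is installed (PVE typically pulls it in), ''ifreload -a'' re-reads the interfaces file; otherwise restarting networking works:

  ifreload -a
  # or, without ifupdown2:
  systemctl restart networking
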
Make sure the PVE instance, which is also our reverse proxy, has its FQDN set up properly in ''/etc/hosts''. Additionally, ensure that your router assigns the PVE host the same local address (via a static DHCP lease) that you configure on the PVE itself. In our situation, pve.domain.com is the Proxmox host and has 80/443 passed to it by a port forward on the OpenWrt router. Only the OpenWrt router is public facing; it is 10.13.13.1 and hands out addresses to clients on the LAN in that subnet. The PVE instance is both an nginx reverse proxy for clients upstream on the LAN and a bridge for the VMs that reside on the PVE hypervisor (as per the bridge config above). Therefore, we want to ensure that both the OpenWrt router and the PVE instance itself have a static address.

  sudo nano /etc/hosts
  <10.13.13.2 pve.domain.com pve>
  sudo nano /etc/hostname
  <pve.domain.com>

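Afterwards, confirm the box resolves its own name the way nginx and PVE expect:

  hostname --fqdn
  hostnamectl
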
Set up configurations for each website on nginx as follows:

  sudo apt install nginx
  sudo nano /etc/nginx/sites-available/example.domain.com.conf

Enter this in the file:

  server {
    server_name  example.domain.com;
    location / {
        proxy_pass http://10.13.13.240;
        #proxy_set_header Host example.domain.com;
    }
  }

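On Debian the file isn't live until it's linked into ''sites-enabled'' and nginx reloads; a quick sketch using the filename from above:

  sudo ln -s /etc/nginx/sites-available/example.domain.com.conf /etc/nginx/sites-enabled/
  sudo nginx -t
  sudo systemctl reload nginx
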
Repeat this as needed for each VM within PVE. Once this is done, set up TLS on the PVE/RP instance as follows:

  sudo apt install certbot letsencrypt python3-certbot-nginx
  sudo certbot --authenticator standalone --installer nginx -d example.domain.com --pre-hook "systemctl stop nginx" --post-hook "systemctl start nginx"

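Certbot installs a systemd timer for renewals, and since this setup stops and starts nginx via the hooks, it's worth confirming that a renewal run actually succeeds:

  sudo certbot renew --dry-run
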
- --- //[[jonathan@haacksnetworking.org|oemb1905]] 2023/06/19 17:40// 