  
To back up a VM from the command line, run ''vzdump'', optionally pointing it at a dump directory:

  vzdump <vmID>
  vzdump <vmID> --dumpdir /mnt/backups/machines/pve-images/<vmname>
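
If you ever need to restore one of these archives, ''qmrestore'' should handle it; the VM ID and path below are just examples:

  qmrestore /mnt/backups/machines/pve-images/<vmname>/vzdump-qemu-101-2023_06_18-12_17_40.vma 101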
      
vzdump will create a backup of the VM in .vma format. You can then extract the .vma to a raw image as follows:
  
  vma extract /var/lib/vz/dump/vzdump-qemu-101-2023_06_18-12_17_40.vma /mnt/backups/vma-extract
      
Optionally, you can use the python tool [[https://github.com/jancc/vma-extractor/blob/master/vma.py|vma.py]]. I don't see much value in using it, however, as it is much slower than the native tool and its output is likewise just a raw image. Example syntax is:
  
  python3 vma.py vzdump-qemu-101-2023_06_18-12_17_40.vma /mnt/backups/vma
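
Either way the result is a raw disk image; if you want qcow2 for use elsewhere, ''qemu-img'' converts it (the filenames here are hypothetical):

  qemu-img convert -f raw -O qcow2 /mnt/backups/vma-extract/disk-drive-scsi0.raw /mnt/backups/vm-101.qcow2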
  
In order to see where the virtual machine's zvols are, list the datasets on the pool:
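
  zfs list

If one of those datasets needs to be removed, e.g. after deleting a VM, destroy it: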
  
  zfs destroy pool/dataset

I've migrated my Windows VM here for testing, to avoid the cumbersome EFI/TPM setup on virt-manager. Here's a page from the Proxmox wiki on best practices for [[https://pve.proxmox.com/wiki/Windows_10_guest_best_practices|Windows VMs]]. One thing I noticed when doing the Windows install is that PVE makes two more zvols for the TPM / EFI, so you get something like the following in ''zfs list'':

  vms/vm-102-disk-0
  vms/vm-102-disk-1
  vms/vm-102-disk-2

Be careful not to delete those in the mistaken belief that they are extraneous - they are required for booting, but that might not be clear at first look. Also, I chose to install those on the ''zpool'', while PVE's default was to store them locally - so they will likely not appear there if you did not pick the option to store them on the pool.
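
To confirm which zvol plays which role, the VM config spells it out (VM 102 matches the example above; the grep pattern is just a convenience):

  qm config 102 | grep -E 'efidisk|tpmstate|scsi|virtio'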

To verify the CPU socket count:

  grep "physical id" /proc/cpuinfo | sort -u | wc -l
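
For cores and threads per socket, ''lscpu'' gives a quick summary, or you can grep cpuinfo directly:

  lscpu | grep -E '^(Socket|Core|Thread)'
  grep "cpu cores" /proc/cpuinfo | sort -u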

I had to switch to postfix for my relay, so I used the satellite system configuration and set the relay host to:

  [relay.haacksnetworking.org]:587
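
The debconf satellite setup writes that value as ''relayhost'' in ''/etc/postfix/main.cf''; ''postconf'' can set it directly if you ever need to redo it (SASL credentials, if the relay requires them, are a separate step):

  sudo postconf -e 'relayhost = [relay.haacksnetworking.org]:587'
  sudo systemctl reload postfix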

I created a bridge on the PVE instance with this recipe in ''/etc/network/interfaces'':

  source /etc/network/interfaces.d/*
  auto lo
  iface lo inet loopback
  iface enp0s31f6 inet manual
  auto vmbr0
  iface vmbr0 inet static
      address 10.18.18.2/24
      gateway 10.18.18.1
      bridge-ports enp0s31f6
      bridge-stp off
      bridge-fd 0
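
On current PVE (which ships ifupdown2), ''ifreload -a'' should apply this without a reboot, and VM NICs can then be attached to the bridge; the VM ID below is an example:

  sudo ifreload -a
  sudo qm set 101 --net0 virtio,bridge=vmbr0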

Make sure the PVE instance, which is also our reverse proxy, has its FQDN set up properly in ''/etc/hosts''. Additionally, ensure that your router's DHCP assigns the PVE host the same local address that you configure on the PVE itself. In our situation, pve.domain.com is the Proxmox host and has 80/443 passed to it by a port forward on the OpenWrt router. Only the OpenWrt router is public facing. The OpenWrt router is 10.13.13.1 and hands out addresses to clients on the LAN in that subnet. The PVE instance is both a reverse proxy (via nginx) for clients upstream on the LAN and a bridge for the VMs that reside on the PVE hypervisor (as per the bridge config above). Therefore, we want to ensure that both the OpenWrt router and the PVE instance itself have static addresses.

  sudo nano /etc/hosts
  <10.13.13.2 pve.domain.com pve>
  sudo nano /etc/hostname
  <pve.domain.com>
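
You can confirm the FQDN resolves as expected with standard tools:

  hostname --fqdn
  getent hosts pve.domain.com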

Set up a configuration for each website on nginx as follows:

  sudo apt install nginx
  sudo nano /etc/nginx/sites-available/example.domain.com.conf

Enter this in the file:

  server {
    server_name  example.domain.com;
    location / {
        proxy_pass http://10.13.13.240;
        #proxy_set_header Host example.domain.com;
    }
  }
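
Then enable the site and reload nginx (standard Debian sites-enabled layout):

  sudo ln -s /etc/nginx/sites-available/example.domain.com.conf /etc/nginx/sites-enabled/
  sudo nginx -t && sudo systemctl reload nginx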

Repeat this as needed for each VM within PVE. Once this is done, set up TLS on the PVE/RP instance as follows:

  sudo apt install certbot python3-certbot-nginx
  sudo certbot --authenticator standalone --installer nginx -d example.domain.com --pre-hook "systemctl stop nginx" --post-hook "systemctl start nginx"
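
Certbot saves the pre/post hooks in its renewal config and re-runs them on each renewal; a dry run verifies the whole chain:

  sudo certbot renew --dry-run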
  
  
 --- //[[jonathan@haacksnetworking.org|oemb1905]] 2023/06/19 17:40//