  zstd -d vzdump-qemu-506-2023_12_24-04_40_17.vma.zst
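Once decompressed, the resulting ''.vma'' can be restored into a VM with Proxmox's ''qmrestore'' tool. A minimal sketch; the target VM ID and the storage name here are assumptions:

  # restore the unpacked archive into VM ID 506 (ID and storage are assumptions)
  qmrestore vzdump-qemu-506-2023_12_24-04_40_17.vma 506 --storage local-zfs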
In order to see where the virtual machine's storage ended up, run:

  zfs list
  vms/
Be careful not to delete those thinking wrongly that they are extraneous - they are required for booting, though that might not be clear at first look. After getting images in and out and spinning up the old Windows VM, I decided a year or so later that it would be worth it to spin PVE up with a network bridge and reverse proxy in order to manage my home network. The home Nextcloud, airsonic, pihole, and other similar services would be virtualized in the PVE, and the PVE bridge and reverse proxy would route traffic to them. To verify, run:
grep " | grep " | ||
| | ||
While installing proxmox on top of stock Debian, postfix asked me how to set it up; I chose satellite system and sent outgoing email to my relay as follows:

  [relay.haacksnetworking.org]:
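For reference, the satellite choice boils down to a ''relayhost'' line in ''/etc/postfix/main.cf''. A minimal sketch; the port number here is an assumption:

  # /etc/postfix/main.cf - relay all outbound mail (port 587 is an assumption)
  relayhost = [relay.haacksnetworking.org]:587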
Once PVE is set up, log in locally and create the following inside ''/etc/network/interfaces''. Make sure you are using ifupdown, not netplan.

  source /
  bridge-fd 0
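Only the tail of my bridge stanza survives above; here is a minimal sketch of a complete ''/etc/network/interfaces'' bridge, assuming a NIC named ''eno1'' and hypothetical LAN addressing:

  auto lo
  iface lo inet loopback

  # physical NIC, enslaved to the bridge (interface name is an assumption)
  auto eno1
  iface eno1 inet manual

  # bridge the VMs attach to (addresses are hypothetical)
  auto vmbr0
  iface vmbr0 inet static
      address 192.168.1.2/24
      gateway 192.168.1.1
      bridge-ports eno1
      bridge-stp off
      bridge-fd 0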
Make sure the PVE instance, which is also our reverse proxy, has the FQDN set up in ''/etc/hosts'':

  sudo nano /etc/hosts
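The entry itself is truncated in this revision; the usual shape is address, FQDN, then short hostname. A sketch with hypothetical values:

  127.0.0.1    localhost
  192.168.1.2  pve.example.org pve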
Inside the PVE instance, set up configurations for each website on nginx as follows:

  sudo apt install nginx
  sudo nano /
Enter this server block, adapted as needed:

  server {
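The body of the block is truncated here; a minimal sketch of a reverse-proxy server block, with a hypothetical site name and backend VM address:

  server {
      listen 80;
      server_name site.example.org;

      location / {
          # forward to the VM on the internal bridge (address is hypothetical)
          proxy_pass http://192.168.1.20:80;
          include proxy_params;
      }
  }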
  sudo nano /
And enter something like ...

  server {
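This block is also truncated; a sketch of a TLS-terminating variant, assuming Let's Encrypt certificates in their default ''live/'' paths (names and addresses hypothetical):

  server {
      listen 443 ssl;
      server_name cloud.example.org;

      # certbot-issued certs (paths assume the default layout)
      ssl_certificate /etc/letsencrypt/live/cloud.example.org/fullchain.pem;
      ssl_certificate_key /etc/letsencrypt/live/cloud.example.org/privkey.pem;

      location / {
          proxy_pass http://192.168.1.21:80;
          include proxy_params;
      }
  }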
  30 2 * * 1 /
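The cron target is truncated above; if this is the certificate-renewal job, a weekly entry would look something like the following (path and flags are assumptions):

  30 2 * * 1 /usr/bin/certbot renew --quiet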
On one of my VMs, which runs apache behind the nginx reverse proxy, I got a client sync error for the Nextcloud service on it; it was complaining about the file size. Note: the GNU/Linux clients had no such error. The following was needed in the PVE / reverse proxy configs to resolve this error:
  sudo nano /etc/nginx/proxy_params
  sudo nano /etc/nginx/nginx.conf
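The directives themselves are truncated in this revision; for large Nextcloud uploads the usual fix is ''client_max_body_size'' (and, optionally, disabling request buffering). A sketch; the values are assumptions:

  # added to /etc/nginx/proxy_params (0 = no size limit; an assumption)
  client_max_body_size 0;
  proxy_request_buffering off;

  # and inside the http { } block of /etc/nginx/nginx.conf
  client_max_body_size 0;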
Now the reverse proxy / PVE instance has matching certs to the clients it's serving. This project achieves VMs that are publicly accessible, internally accessible, TLS-secured, and more. When provisioning the VMs, keep the following in mind:

  * Physical Cores * Threads * Physical CPUs = Assignable Virtual Cores
  * If using zfs for the storage, estimate 50% of RAM used by it

If RAM is an issue, make sure to throttle arc. In my case, I let zfs run free and tightly provision the VMs. Each rig and use case is different; adjust accordingly. Above, I already shared the command to determine your physical CPUs / sockets. Here's how to determine all three (a one-liner that multiplies them follows the output):
  lscpu | grep "CPU(s):"
  lscpu | grep "per socket"
  lscpu | grep "Socket(s)"
  CPU(s): 8
  Core(s) per socket: 4
  Socket(s): 1
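As a convenience, the product from the formula above can be computed in one line; this simply multiplies the three ''lscpu'' figures exactly as the formula states:

  echo $(( $(lscpu | awk '/^CPU\(s\):/{print $2}') * $(lscpu | awk '/^Core\(s\) per socket:/{print $4}') * $(lscpu | awk '/^Socket\(s\):/{print $2}') ))

On the box above this prints 32, matching the arithmetic in the next paragraph.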
This is for an 8-core i7 Dell XPS 8900. Thus, I have 8*4*1 = 32 virtual cores to play with. As for RAM, I have 64GB total, so subtracting 4GB for the OS, I have roughly 32GB left for the VMs, with roughly 32GB allocated for zfs. PVE is very strict in interpreting allocated RAM and will OOM-kill VMs if it senses it is short. This is in contrast with, for example, virsh, which allows sysadmins to run things how they see fit, even at the risk of virsh crashing. For this reason, expand your rig, adjust arc, or tightly provision your VMs - or do all three. I personally just tightly provisioned my VMs, as I want zfs to use as much RAM as it needs. If you prefer to throttle arc, here's how:
  sudo nano /
  # 1 gigabyte example
  options zfs zfs_arc_max=1073741824
  update-initramfs -u -k all
  reboot
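To confirm the cap took effect after rebooting, the limit can be read back from the module parameters; a sketch (''arc_summary'' ships with Debian's zfs utilities and gives a fuller picture):

  cat /sys/module/zfs/parameters/zfs_arc_max
  arc_summary | grep -m1 "ARC size"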
Adjust the arc throttle to fit your system. If you need to stop a system at the CLI because you can't reboot/
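The exact command is truncated in this revision; Proxmox ships the ''qm'' tool for this, e.g. (the VM ID is hypothetical):

  qm shutdown 101   # ask the guest to shut down cleanly
  qm stop 101       # hard-stop it if the shutdown hangs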
Next entry ...
 --- //oemb1905 2024/01/06 23:35//