This shows you the differences between two versions of the page.
computing:proxmox [2023/12/28 05:27] – oemb1905 → computing:proxmox [2024/01/07 00:09] (current) – oemb1905

Line 156:
30 2 * * 1 /
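For reference, the schedule fields in the cron line above (`30 2 * * 1`) fire at 02:30 every Monday; the command path is truncated in the diff, so the script name below is purely a placeholder:

```
# ┌ minute (30) ┬ hour (2) ┬ day-of-month (*) ┬ month (*) ┬ day-of-week (1 = Monday)
30 2 * * 1 /path/to/backup-script   # hypothetical command; original path truncated
```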
+ 
+ On one of my VMs, which runs Apache behind the nginx reverse proxy, I got a client sync error for the Nextcloud service running on it. It was complaining about the file size. Note: the GNU/Linux clients had no such error. Anyway, the following changes were needed in the PVE / reverse proxy configs to resolve this error:
+ | |||
+ | sudo nano / | ||
+ | < | ||
+ | < | ||
+ | sudo nano / | ||
+ | < | ||
+ | < | ||
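The config bodies above are truncated in the diff, so the exact directives used aren't visible. For Nextcloud file-size sync errors behind an nginx reverse proxy, the usual culprit is nginx's default upload limit; a hedged sketch of the kind of change likely involved (the file path, the 16G value, and the timeout are assumptions, not from the original):

```
# /etc/nginx/sites-available/nextcloud   (assumed path; original is truncated)
server {
    # ...
    # Raise the request body limit so large client uploads aren't rejected;
    # nginx defaults to 1m, which breaks Nextcloud desktop sync of big files.
    client_max_body_size 16G;
    # Stream uploads to the backend instead of buffering them to disk first.
    proxy_request_buffering off;
    # Give slow, large uploads time to finish.
    proxy_read_timeout 3600s;
    # ...
}
```

After changing the vhost, `nginx -t && systemctl reload nginx` applies it without dropping connections.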
Now the reverse proxy / PVE instance has matching certs to the clients it's serving. This project achieves VMs that are publicly accessible, internally accessible, TLS-secured,
Line 164 → Line 173:
If RAM is an issue, make sure to throttle ARC. In my case, I let ZFS run free and tightly provision the VMs. Each rig and use case will be different; adjust accordingly. Above, I already shared the command to determine your physical CPUs / sockets. Here's how to determine all three (threads per core, cores per socket, and sockets):
- lscpu | grep "CPU(s):
+ lscpu | grep "Thread"
- lscpu | grep "
+ lscpu | grep "per socket"
lscpu | grep "
Core(s) per socket:
Socket(s):
- This is for an 8-core i7 Dell XPS 8900. Thus, I have 8*4*1 = 32 cores to play with. As for RAM, I have 64GB total, so subtracting 4GB for the OS, I have 32GB left for the VMs, with roughly 32GB being allocated for ZFS. PVE is very strict in interpreting allocated RAM and will OOM-kill VMs if it senses it is short. This is in contrast, for example, with virsh, which allows sysadmins to run how they see fit, even at the risk of virsh crashing. For this reason, either expand your rig, adjust ARC, or tightly provision your VMs, or all three. I personally just tightly provisioned my VMs, as I want ZFS to use as much RAM as it needs. If you prefer to throttle ARC, here's how:
+ To determine vCPUs, you do threads*cores*physical-sockets, or (12*2)*(12)*(2)
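The threads*cores*sockets arithmetic above can be pulled straight from lscpu instead of reading the fields by eye; a small sketch (field names are as printed by util-linux lscpu):

```shell
# vCPUs = threads per core * cores per socket * sockets,
# with each factor extracted from lscpu's summary output.
threads=$(lscpu | awk -F: '/^Thread\(s\) per core/ {gsub(/ /, "", $2); print $2}')
cores=$(lscpu | awk -F: '/^Core\(s\) per socket/ {gsub(/ /, "", $2); print $2}')
sockets=$(lscpu | awk -F: '/^Socket\(s\)/ {gsub(/ /, "", $2); print $2}')
echo "vCPUs: $((threads * cores * sockets))"
```

The product should match the total `CPU(s):` count lscpu reports, which is the pool you can provision across VMs (PVE will happily let you oversubscribe vCPUs, unlike RAM).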
sudo nano /
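The file being edited above is truncated in the diff; ARC is normally capped through a ZFS kernel module option, so a hedged sketch of what that edit likely contains (the path and the 8 GiB value are assumptions, not from the original):

```
# /etc/modprobe.d/zfs.conf   (assumed path; the original is truncated)
# Cap the ZFS ARC at 8 GiB (value in bytes: 8 * 1024^3).
options zfs zfs_arc_max=8589934592
```

On a Debian-based PVE host, follow the edit with `update-initramfs -u` and a reboot so the module picks up the new limit.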