computing:proxmox: revisions 2023/12/28 05:27 and 2024/01/07 00:09 (current) by oemb1905

  30 2 * * 1 /usr/bin/certbot renew >> /var/log/le-renew.log

On one of my VMs, which runs apache behind the nginx reverse proxy, I got a client sync error for the nextcloud service running on it; it was complaining about the file size. Note: the GNU/Linux clients had no such error. The following changes to the PVE / reverse proxy configs were needed to resolve the error:

  sudo nano /etc/nginx/proxy_params
  <post_max_size 10G;>
  <upload_max_filesize 10G;>
  sudo nano /etc/nginx/nginx.conf
  <client_max_body_size 10G;>
  <client_body_buffer_size 400M;>
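For context, post_max_size and upload_max_filesize are PHP-style limits, while client_max_body_size and client_body_buffer_size are the native nginx directives for request body size. A minimal sketch of the nginx side, assuming the values above and that the directives sit in the http block:

```nginx
http {
    # nginx rejects request bodies larger than this with a 413 error;
    # raise it to match the largest file the nextcloud clients will sync
    client_max_body_size 10G;
    # buffer this much of the body in memory before spooling to a temp file
    client_body_buffer_size 400M;
}
```

After editing, nginx -t validates the syntax and systemctl reload nginx applies it without dropping connections.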
  
Now the reverse proxy / PVE instance has matching certs to the clients it's serving. This project achieves VMs that are publicly accessible, internally accessible, TLS-secured, and separate (for database/stack reasons), all using only one external IP. Now, for managing how much your instance can handle, here are some tips:
If RAM is an issue, make sure to throttle arc. In my case, I let zfs run free and tightly provision the VMs. Each rig and use case will be different; adjust accordingly. Above, I already shared the command to determine your physical CPUs / sockets. Here's how to determine all three:
  
  lscpu | grep "Thread(s) per core"
  lscpu | grep "Core(s) per socket"
  lscpu | grep "Socket(s)"
  Thread(s) per core:    2
  Core(s) per socket:    12
  Socket(s):             2
  
To determine vCPUs, multiply threads per core by cores per socket by sockets: 2*12*2 = 48 for a Supermicro with 2x Xeon E5-2650 v4 chips. As for RAM, I have 64GB total; subtracting roughly 4GB for the OS and roughly 32GB allocated to zfs (arc defaults to half of RAM), that leaves about 28GB for the VMs. PVE is very strict in interpreting allocated RAM and will OOM-kill VMs if it senses it is short. This is in contrast, for example, with virsh, which lets sysadmins run how they see fit even at the risk of virsh crashing. For this reason, either expand your rig, adjust arc, tightly provision your VMs, or all three. I personally just tightly provisioned my VMs, as I want zfs to use as much RAM as it needs. If you prefer to throttle arc, here's how:
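Since the arithmetic is easy to get wrong, the multiplication can be scripted; a minimal sketch that derives the vCPU count straight from lscpu with awk (assumes the stock util-linux field names shown above):

```shell
# vCPUs = threads per core * cores per socket * sockets, read from lscpu
lscpu | awk -F: '
  /^Thread\(s\) per core/ { t = $2 }
  /^Core\(s\) per socket/ { c = $2 }
  /^Socket\(s\)/          { s = $2 }
  END { print t * c * s }'
```

On the box above this prints 48, and nproc should agree with it.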
  
   sudo nano /etc/modprobe.d/zfs.conf   sudo nano /etc/modprobe.d/zfs.conf
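The file takes the zfs_arc_max module parameter, specified in bytes. As a sketch, capping arc at 8GiB (8 * 1024^3 = 8589934592; the 8GiB figure is an example value, size it to your rig) would look like:

```
# /etc/modprobe.d/zfs.conf
# Cap the zfs arc at 8GiB; zfs_arc_max is given in bytes
options zfs zfs_arc_max=8589934592
```

If your root is on zfs, also run update-initramfs -u and reboot so the cap is baked into the initramfs and applies from boot.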