incus
This tutorial is for Debian users who want to set up Incus with a web-based management GUI. Before you read on, make sure you have a basic VM/VPS with a hardened LAMP (or equivalent) setup, and if not, head to Apache Survival first. The first part of this tutorial walks through setting up an Incus server and a public-facing web management GUI. After the setup, I discuss security concerns regarding the Incus web GUI and two different approaches to addressing them: one for bare metal hypervisors, and another for nested containers and micro-services. To begin, let's make sure that your OS did not install Incus by default, then add the Zabbly repository and install Incus:
systemctl stop incus incus.socket 2>/dev/null || true
apt purge -y incus incus-ui-canonical
rm -rf /var/lib/incus /opt/incus /run/incus /var/log/incus /etc/apt/sources.list.d/zabbly-incus*
apt autoremove -y --purge
mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.zabbly.com/key.asc | tee /etc/apt/keyrings/zabbly.asc >/dev/null
cat <<EOF | tee /etc/apt/sources.list.d/zabbly-incus-stable.sources
Enabled: yes
Types: deb
URIs: https://pkgs.zabbly.com/incus/stable
Suites: $(lsb_release -cs)
Components: main
Signed-By: /etc/apt/keyrings/zabbly.asc
EOF
apt update
apt install -y incus incus-ui-canonical
As for storage, I use virtiofs so that container resources (both Podman and Incus) stay on the physical host in a dedicated and properly ACL'd directory that is accessible to the VM. If you need help setting up those mountpoints, head over to the Virt Manager Hell tutorial, which covers this and many other helpful virsh tricks. Anyway, once that directory is set up, let's establish it as the Incus storage directory:
mkdir -p /mnt/vms/incus/incushost1
chown -R root:root /mnt/vms/incus/incushost1
chmod -R 770 /mnt/vms/incus/incushost1
incus admin init
incus storage create default dir source=/mnt/vms/incus/incushost1
incus profile device remove default root 2>/dev/null || true
incus profile device add default root disk pool=default path=/
incus storage list
incus storage show default
Now that Incus is installed and running, let's set up Apache's reverse proxy. First off, make sure that you set up a regular virtual host for http, e.g., incushost1.conf, with ServerName set to incushost1.domain.com and a placeholder DocumentRoot at something like /var/www/incushost1.domain.com/public_html, and confirm that Apache is running and listening properly. Once that's done, add a Let's Encrypt TLS certificate and ensure the proxy modules are enabled as follows:
sudo apt install certbot letsencrypt python3-certbot-apache
sudo certbot --authenticator standalone --installer apache -d incushost1.domain.com --pre-hook "systemctl stop apache2" --post-hook "systemctl start apache2"
a2enmod ssl proxy proxy_http proxy_wstunnel rewrite headers
systemctl restart apache2
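For reference, the plain http virtual host that should already exist before running certbot can be minimal; a sketch like the following is enough (the ServerName and DocumentRoot are placeholders matching the examples in this tutorial):

```apache
<VirtualHost *:80>
    ServerName incushost1.domain.com
    # placeholder webroot; certbot's standalone authenticator does not need it,
    # but Apache wants a valid DocumentRoot to serve the bare vhost
    DocumentRoot /var/www/incushost1.domain.com/public_html
    ErrorLog ${APACHE_LOG_DIR}/incushost1-error.log
    CustomLog ${APACHE_LOG_DIR}/incushost1-access.log combined
</VirtualHost>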
Now that the Let's Encrypt cert is created, swap both virtual hosts over to reverse proxy configs. Replace the contents of the http virtual host at /etc/apache2/sites-enabled/incushost1.domain.com.conf with something like the following:
<VirtualHost *:80>
    ServerName incushost1.domain.com
    # MAKE SURE TO SET THESE TO TRUSTED IPs
    <Location />
        Require ip 2745:fc91:1:88::2
        Require ip 8.28.86.200
    </Location>
    RewriteEngine On
    RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,NE,R=permanent]
</VirtualHost>
For the https block, which Let's Encrypt created as /etc/apache2/sites-enabled/incushost1.domain.com-le-ssl.conf, open it up and replace its contents with something like this:
<VirtualHost *:443>
    ServerName incushost1.domain.com
    # MAKE SURE TO SET THESE TO TRUSTED IPs
    <Location />
        Require ip 2745:fc91:1:88::2
        Require ip 8.28.86.200
    </Location>
    SSLProxyEngine On
    SSLProxyVerify none
    SSLProxyCheckPeerCN Off
    SSLProxyCheckPeerName Off
    SSLProxyMachineCertificateFile /etc/ssl/private/incus-proxy.pem
    ProxyPreserveHost On
    ProxyRequests Off
    RewriteEngine On
    RewriteCond %{HTTP:Upgrade} websocket [NC]
    RewriteCond %{HTTP:Connection} upgrade [NC]
    RewriteRule ^/?(.*) "wss://127.0.0.1:8443/$1" [P,L]
    ProxyPass / https://127.0.0.1:8443/
    ProxyPassReverse / https://127.0.0.1:8443/
    SSLCertificateFile /etc/letsencrypt/live/incushost1.domain.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/incushost1.domain.com/privkey.pem
    Include /etc/letsencrypt/options-ssl-apache.conf
</VirtualHost>
You may have noticed that the TLS virtual host specifies a client certificate at /etc/ssl/private/incus-proxy.pem. This must be created; it is the certificate Apache presents when communicating upstream to Incus on 8443. Because Incus trusts this certificate, Apache can pass remote traffic through to Incus and expose container management to remote web GUI clients. As root, perform the following:
openssl req -x509 -newkey rsa:4096 -keyout proxy.key -out proxy.crt -days 7450 -nodes -subj "/CN=apache-proxy-for-incus"
cat proxy.crt proxy.key > incus-proxy.pem
mv incus-proxy.pem /etc/ssl/private/incus-proxy.pem
chown root:www-data /etc/ssl/private/incus-proxy.pem
chmod 640 /etc/ssl/private/incus-proxy.pem
incus config trust add-certificate proxy.crt --name apache-proxy --description "Apache reverse proxy client cert"
rm proxy.crt proxy.key # optional
incus config set core.https_address 127.0.0.1:8443
systemctl restart incus
You've now generated the cert and key, bundled them into a .pem file that Apache can use, and trusted the cert portion of that .pem inside Incus so that Incus can authenticate Apache. At the end, we optionally removed the standalone .crt and .key files after they served their purpose. So long as you don't revoke Incus' trust in the certificate, you will be good to go, and Incus' web GUI and API will remain accessible to Apache's reverse proxy. We also bound Incus' https listener to 127.0.0.1:8443, so the API only accepts connections from the local host, and Apache's upstream requests present the .pem we just established as part of that TLS handshake.
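Before relying on the GUI, it's worth sanity-checking the trust relationship from the Incus side. The two commands below are from the stock Incus CLI (exact output formatting may vary by version); the first should show the apache-proxy certificate, and the second should echo the loopback address we set:

```shell
# list certificates Incus currently trusts; apache-proxy should appear here
incus config trust list

# confirm the API listener is bound to loopback only
incus config get core.https_address
```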
The Zabbly Incus web GUI (and by extension any reverse proxy you build in front of it) has no built-in authentication mechanism. In other words, the general public would be able to access your machine and build containers on it simply by visiting incushost1.domain.com. Since you probably don't want to grant the general public access to your underlying machine, or the ability to make/edit/destroy containers, you will need to do something else to restrict access. One suggestion is to spin up a supported third-party auth tool such as (but not limited to) Keycloak. I am currently still testing some of those approaches and will update folks here when I finish.
On the bare metal Incus host covered in VM Server, I put Incus (and Cockpit) behind ssh, meaning only port 22 is open on the host. One has to ssh -4 -D 8080 root@domain.com first and then point Firefox at that SOCKS5 proxy, either in its internal settings or with FoxyProxy. Every 30 days or so, I open port 80 for the certbot renewal, then close it again when it completes. The Let's Encrypt setup makes the Incus GUI play nice with your web browser (no warnings, etc.), and it's good to have TLS on the instance by default in case something goes wrong or you open 80/443 by accident. The Incus instance is also protected by Apache's <Location> functionality, which restricts remote access to static, trusted IPs.
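That monthly port-80 dance can be folded into certbot itself via its renewal hooks. A sketch, assuming ufw is the firewall (swap in your own firewall commands if you use nftables/iptables directly):

```shell
# open port 80 only for the duration of the renewal, then close it again;
# --pre-hook and --post-hook run once per renewal attempt
certbot renew \
  --pre-hook  "ufw allow 80/tcp" \
  --post-hook "ufw delete allow 80/tcp"
```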
On the nested VM Incus host, which runs as a VM inside the bare metal host described above, I need ports 80/443 open in order to serve containers (Incus and Podman). For this reason, I chose to use Apache's built-in <Location> functionality, which restricts web GUI remote access to static, trusted IPs. This ensures that users from other locations get a “Forbidden” notice when visiting the page. And by leaving 80/443 open on the nested VM Incus host itself, bridged and nested containers (both Podman and Incus) are accessible with a simple reverse proxy dedicated to the target container. This setup balances security with convenience and is designed for media hosting, music, static resources, etc. I would not, however, recommend putting anything sensitive inside Incus under this setup. For secure services, whether you use bare metal or a nested VM, you should open 80/443 on the container only and leave the management panel behind ssh.
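One of those dedicated per-container reverse proxies can be as simple as the following sketch; the hostname, container IP, and port here are hypothetical placeholders, and it assumes you've already issued a Let's Encrypt cert for the container's hostname as shown earlier:

```apache
<VirtualHost *:443>
    ServerName media.domain.com
    SSLEngine On
    SSLCertificateFile /etc/letsencrypt/live/media.domain.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/media.domain.com/privkey.pem
    ProxyPreserveHost On
    ProxyRequests Off
    # forward everything to the bridged container's address (placeholder IP/port)
    ProxyPass / http://10.100.0.10:8080/
    ProxyPassReverse / http://10.100.0.10:8080/
</VirtualHost>
```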
— oemb1905 2026/03/30 15:44