=== What is this? ===

This tutorial is for Debian users who want to set up Incus w/ a web-based management GUI. Before you read on, make sure you have a basic VM/VPS w/ hardened LAMP (or equivalent) setup, and if not, head to [[https://wiki.haacksnetworking.org/doku.php?id=computing:apachesurvival|Apache Survival]] first. The first part of this tutorial shows folks the what and how of setting up an Incus server and public-facing Web management GUI. After the setup, I discuss security concerns regarding the Incus Web GUI, and two different approaches to them: one for bare metal hypervisors and another for nested containers and micro-services. To begin, let's make sure that your OS did not install any Incus by default, and then install the Zabbly repository and Incus together:
| |
  apt autoremove -y --purge
  mkdir -p /etc/apt/keyrings

  curl -fsSL https://pkgs.zabbly.com/key.asc | tee /etc/apt/keyrings/zabbly.asc >/dev/null

  cat <<EOF | tee /etc/apt/sources.list.d/zabbly-incus-stable.sources
  Enabled: yes
  Types: deb
  URIs: https://pkgs.zabbly.com/incus/stable
  Suites: $(. /etc/os-release && echo ${VERSION_CODENAME})
  Components: main
  Architectures: $(dpkg --print-architecture)
  Signed-By: /etc/apt/keyrings/zabbly.asc
  EOF

  apt update
  apt install -y incus incus-ui-canonical
--------------------------------------------------------------
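With the packages installed, initializing the daemon and exposing the web UI takes two commands. A minimal sketch; ''8443'' is Incus's customary API port, but any free port works, and you may prefer the interactive ''incus admin init'' to customize storage and networking:

```shell
# Non-interactive setup with sane defaults (fresh install assumed;
# run plain "incus admin init" instead to pick storage/network options)
incus admin init --minimal

# Make the API and the bundled web UI listen on all interfaces, port 8443
incus config set core.https_address :8443
```

After this, browsing to ''https://your-host:8443'' loads the UI, which will walk you through trusting a client certificate.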
| |
=== What needs to be considered? ===

The Zabbly Incus web GUI, and by extension any reverse proxy you build in front of it, //has no built-in auth mechanism//. In other words, the general public will be able to access your machine and build containers on it by simply visiting incus.domain.com. Since you probably don't want to grant the general public access to your underlying machine, or the ability to make/edit/destroy containers, you will need to do something else to restrict access. One suggestion is to spin up a supported third-party auth tool/instance such as, but not limited to, Keycloak. I am currently still testing some of those approaches and will update folks here when I finish.
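For reference, Incus can delegate authentication to an OpenID Connect provider (Keycloak is one such provider) via its server config keys. A hedged sketch only, since I have not finished testing this; the realm URL and client id below are placeholders:

```shell
# Placeholder issuer and client id -- substitute your own IdP's values
incus config set oidc.issuer https://keycloak.example.com/realms/incus
incus config set oidc.client.id incus-ui
```

Once set, the web UI offers an SSO login button that redirects to the provider.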
| |
=== Bare Metal Incus Host ===

On the **bare metal Incus host** covered here [[https://wiki.haacksnetworking.org/doku.php?id=computing:vmserver|VM Server]], I put Incus (and Cockpit) behind ssh (port 22), meaning only port 22 is open on the host. One has to ''ssh -4 -D 8080 root@domain.com'' first and then pass that network traffic to Firefox with SOCKS5, either internally or w/ FoxyProxy. Every 30 days or so, I open port 80 for the certbot renewal, and then close it again when it completes. The Let's Encrypt certificate makes the Incus GUI play nice with your web browser (no warnings, etc.), and it's good to have TLS on the instance by default in case something goes wrong or you open 80/443 by accident. The Incus instance is also protected by Apache's ''<Location>'' functionality, which restricts remote access to static and trusted IPs.
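The day-to-day workflow above can be sketched as follows. The firewall commands assume ufw; substitute your firewall's equivalent:

```shell
# Open the tunnel: local port 8080 becomes a SOCKS5 proxy through the host
ssh -4 -D 8080 root@domain.com
# Then in Firefox: Settings -> Network Settings -> Manual proxy,
# SOCKS5 host 127.0.0.1 port 8080 (or point FoxyProxy at the same)

# Roughly monthly: open 80 for the renewal, then close it again
ufw allow 80/tcp
certbot renew
ufw delete allow 80/tcp
```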
| |
=== Nested VM Incus Host ===

On the **nested VM Incus host**, which runs as a VM inside the bare metal host described above, I **need** port 80/443 to be open //in order to run containers// (incus and podman). For this reason, I chose to use Apache's built-in ''<Location>'' functionality, which restricts remote access to the web GUI to static and trusted IPs. This ensures that visitors from other locations will get a "Forbidden" notice when visiting the page. And by leaving 80/443 open on the nested VM Incus host itself, bridged and nested virtual containers - both podman and incus - are reachable by configuring a simple reverse proxy dedicated to the target container. This setup balances security with convenience and is designed for media hosting, music, static resources, etc. I would not, however, recommend putting anything sensitive inside Incus under this setup. For secure services, whether you use bare metal or a nested VM, you should open 80/443 on the container only and leave the management panel behind ssh.
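The ''<Location>'' restriction plus the per-container reverse proxy can be sketched in one vhost. Everything here is a placeholder (hostname, backend address, allowed IP), and the modules ''proxy'', ''proxy_http'', and ''ssl'' must be enabled:

```apache
# Placeholder vhost: incus.domain.com, backend 10.0.0.10:8443,
# and the single trusted client IP are all examples -- substitute your own.
<VirtualHost *:443>
    ServerName incus.domain.com
    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/incus.domain.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/incus.domain.com/privkey.pem

    # Incus serves its API/UI over HTTPS, so proxying must also use TLS
    SSLProxyEngine on
    ProxyPass / https://10.0.0.10:8443/
    ProxyPassReverse / https://10.0.0.10:8443/

    # Only static, trusted IPs get through; everyone else sees 403 Forbidden
    <Location "/">
        Require ip 203.0.113.7
    </Location>
</VirtualHost>
```

Note that the same pattern, minus the ''<Location>'' block, serves the public-facing media containers.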
| |
 --- //[[alerts@haacksnetworking.org|oemb1905]] 2026/03/30 15:44//