=== Introduction ===

~~NOTOC~~

This tutorial is for Debian users who want to create network bridges, or virtual switches, on production hosts. By production hosts, I mean something that's designed to run virtual appliances (VMs, containers, etc.). This tutorial assumes you have access to PTR records and/or have a block of external IPs. In this tutorial, I'll break down two differing setups I use. To be clear, the tutorial is about much more than bridging; it's just that the bridges are the most important part, because they route incoming URL requests to the appliances. The first setup, at Brown Rice, is a co-located server.

{{ :computing:... }}

The second setup, at Pebble Host, is a "dedicated host," meaning actual hardware provisioned on your behalf. Here are the technical specifications for each host:

  * Brown Rice: SuperMicro with a built-in 4-port 10Gbps NIC plus IPMI (co-located server)
  * Pebble Host: Ryzen 7 8700G, 64GB RAM, 2TB NVME (Dedicated Hosting)

I prefer to use virsh+qemu/kvm for the virtualization stack on both hosts. For my machine in Taos, I run the Debian host OS on a separate SSD on a dedicated SATA port that's not part of the SAS back-plane. At Pebble, I don't have this luxury; everything runs on the boot volume. This limitation fits the use-case, however, as I only have Pebble setup to run 5-7 virtual appliances, while the server co-located at Brown Rice has upwards of 30 virtual appliances.

{{ :computing:screenshot_from_2026-01-10_10-52-52.png }}

Although the theoretical ceiling of MAC addresses, in and of itself, provides enough combinations (280 trillion), conflicts within vendor OUIs (especially virsh's) become a practical concern at the scale of a provider like Pebble Host, which serves over 50K clients. That is why Pebble requires MAC filtering for virtual appliances, while Brown Rice leaves MAC addressing up to the customer.

=== Brown Rice Setup ===

The SuperMicro has a built-in 10Gbps NIC, with 4 ports and 1 IPMI port. At Brown Rice, my machine has four cables plugged in, allocated as follows:

  * Interface 01: enp1s0f0, has a dedicated cable (IPv4) just for the physical host
  * Interface 02: enp1s0f1, has a dedicated cable (IPv4) just for bridging IPv4
  * Interface 03: enp2s0f0, has a separate cable (IPv6) just for bridging IPv6
  * Interface 04: enp2s0f1, empty
  * Interface 05: IPMI; setup via American Megatrends BIOS, access is source-IP restricted

Okay, to setup IPMI, you need to start up your production host and tap whatever keyboard shortcut drops you into the BIOS (American Megatrends on this vintage SuperMicro). Once you're in, there are three things to take care of:

  * Establish the network interface
  * Turn off non-essential services in the BIOS
  * Create a firewall rule in the BIOS to limit IPMI access to trusted IPs

To setup the network interface, navigate to Configuration > Network. After that, enter your network interface information. Here's how my configuration is setup:

{{ :computing:... }}

After establishing that, you should use your laptop to try to reach the IP. If it works, then proceed to hardening the BIOS. Hardening involves the two steps mentioned earlier, i.e., turning off non-essential services so they aren't listening publicly (Configuration > Port). Here's what that looks like:

{{ :computing:... }}

The main thing to ensure is that ssh is turned off. I would only keep that on if you plan to maintain your BIOS regularly and fully understand how it uses the ssh stack. Since I don't intend to ssh into my production host's BIOS, I simply keep it off. Optionally, you can also turn off the iKVM Server Port and the Virtual Media Port until you need them. I am not monitoring my production host with external tooling, so I also keep SNMP turned off. Once that's hardened, let's create a source IP firewall rule (Configuration > IP Access Control) to limit access to a dedicated and trusted IP.

{{ :computing:... }}

By adding this rule, the IPMI panel will only answer to the trusted source IP you specify. In my case, I use a small VPS plus a SOCKS proxy (e.g., ssh -D 8080 to the VPS and FoxyProxy in the browser) so that I always appear to come from an approved IP when accessing the firewalled IPMI panel.

{{ :computing:... }}

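If you take the same approach, the tunnel itself is a one-liner; the VPS hostname and user below are placeholders for whichever trusted machine's IP you whitelisted:

<code bash>
# Open a dynamic (SOCKS5) forward on local port 8080 through the trusted VPS.
ssh -D 8080 -N user@vps.example.org

# Then point FoxyProxy (or the browser's proxy settings) at SOCKS5 127.0.0.1:8080
# and browse to the IPMI panel as usual.
</code>
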
NOTE: I am not concerned about the self-signed TLS certificate that the IPMI web panel presents; access to the panel is already locked down to trusted source IPs anyway.

Your primary DNS records (A, AAAA, SPF, DMARC, and so on) should already be setup and caching. Teaching folks how to create primary DNS records is outside this tutorial's scope.

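That said, it's worth a quick check from an outside machine that the forward and reverse records are live before continuing; the domain and addresses below are placeholders for your own:

<code bash>
# Forward records (substitute your host's FQDN and domain)
dig +short A host.example.org
dig +short AAAA host.example.org
dig +short TXT example.org            # SPF is published as a TXT record
dig +short TXT _dmarc.example.org     # DMARC policy

# Reverse (PTR) records for your allocated addresses
dig +short -x 203.0.113.10
dig +short -x 2001:db8::10
</code>
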
For the PTR records, Brown Rice provides a web panel:

{{ :computing:... }}

At present, the Brown Rice PTR web panel only allows customers to establish PTR records for IPv4; however, they allow customers to email IPv6 PTR requests, which I've also completed. So, at this point, the primary DNS records and the IPMI networking stack should be live and hardened. Now that that's done, it's time to setup the Debian operating system on the host. This tutorial assumes your production host is already setup with its zfs/RAID arrays, JBOD, boot volume, SAS backplane, etc., and/or all the other goodies you need or want for your host. It also assumes that you either already installed Debian or are just about to. If you have not installed Debian yet, make sure to install it with only core utilities. This will ensure that NetworkManager is not installed by default. If you already installed Debian, make sure to remove NetworkManager:

<code bash>
sudo apt remove --purge network-manager
</code>

With NetworkManager out of the way, give the primary interface (enp1s0f0) its static IPv4 address in /etc/network/interfaces:

<code bash>
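# The static IPv4 stanza for the primary interface (enp1s0f0) goes here.
# The values below are placeholders only, not this host's real allocation;
# substitute the address block Brown Rice assigned to you:
auto enp1s0f0
iface enp1s0f0 inet static
    address 203.0.113.10/24
    gateway 203.0.113.1
    dns-nameservers 8.8.8.8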
| </ | </ | ||
Once that's done, restart networking and make sure the host is reachable over IPv4:

<code bash>
sudo systemctl restart networking
ping4 google.com
</code>

Next, raise the two dedicated bridging interfaces in /etc/network/interfaces:

<code bash>
# bridge ipv4
auto enp1s0f1
iface enp1s0f1 inet manual

# bridge ipv6
auto enp2s0f0
iface enp2s0f0 inet6 manual
</code>

This raises both interfaces for the production host operating system, specifies which networking protocol each interface is using, and instructs the OS that these interfaces are manually configured. Next, we need to bind these interfaces together into a bridge and then assign the bridge an address. Since the IPv4 and IPv6 routes are on separate cables, we do not want to assign the bridge more than one address, as this could cause loops and/or break routing. You can safely do that when both protocols are on one layer 1 cable, but not when they are segregated. For that reason, we will only assign one address to the bridge. Note that we already assigned the primary interface an IPv4 address, which means the production host is reachable via IPv4 already. So it makes sense to assign an IPv6 address to the bridge. This is not only required since they are different cables, it also makes the production host IPv6 reachable. Although it's entirely possible to configure the production host to pass IPv6 traffic upstream to the VMs without itself being IPv6 reachable, this is silly and makes no sense; it's helpful and practical to be able to reach the production host via both protocols. Alright, now that we understand the topology, let's install bridge utilities and create the bridge:

<code bash>
sudo apt install bridge-utils
brctl addbr br0
</code>

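Before wiring it into the interfaces file, you can confirm the bridge actually exists; both commands below ship with bridge-utils and iproute2 respectively:

<code bash>
brctl show          # br0 should be listed, with no interfaces attached yet
ip link show br0    # the bridge device exists; it stays DOWN until configured
</code>
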
Now that the bridge exists, let's bind the bridging interfaces to it and assign it its single (IPv6) address in /etc/network/interfaces:

<code bash>
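# The br0 stanza goes here. The addressing below is a placeholder sketch,
# not this host's real allocation; substitute the IPv6 address and gateway
# that Brown Rice assigned to you:
auto br0
iface br0 inet6 static
    address 2001:db8:100::2/64
    gateway 2001:db8:100::1
    bridge-ports enp1s0f1 enp2s0f0
    bridge-stp off
    bridge-fd 0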
| </ | </ | ||
Once this is saved, let's restart networking with sudo systemctl restart networking. Now that IPv6 is activated, let's ensure it's functional with a small ping6 google.com, and confirm IPv4 still works with ping4 google.com. With connectivity verified, firewall the host with UFW:

<code bash>
sudo apt install ufw
ufw allow 1194/udp
ufw allow from 192.168.100.0/24 to any port 22
ufw enable
</code>

This UFW setup presumes you have a properly configured VPN server running on the production host. If so, it allows ssh only from the VPN's dedicated subnet. Obviously, this is not strictly required. If you do use this approach, use a non-standard subnet so that you get some extra protection against brute-force attempts via obfuscation. If you don't want to put the production host behind a VPN, you can optionally expose ssh publicly. It goes without saying that one should only be using ssh keypairs, not passwords. To do the simpler setup, use the following:

<code bash>
sudo apt install ufw
ufw allow 22
ufw enable
</code>

That's all you need. Remember, if you need emergency access to the host, you use IPMI for that, which is not part of the host's operating system. This is the traditional approach, whereby you never expose the production host on anything besides ssh or openvpn. Whichever approach you take, UFW also needs to be allowed to forward packets for the bridge, so edit /etc/default/ufw and set the forward policy to ACCEPT:

<code bash>
sudo nano /etc/default/ufw
DEFAULT_FORWARD_POLICY="ACCEPT"
</code>

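After changing the forward policy, reload UFW and check that the routed default took effect:

<code bash>
sudo ufw reload
sudo ufw status verbose
# expect something like: Default: deny (incoming), allow (outgoing), allow (routed)
</code>
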
Our production host is now sufficiently hardened, and our firewall tooling will allow the bridge we created above to forward packets upstream to virtual appliances running on the host. This means we can now setup a virtual appliance. In my case, I prefer VMs over containers, pods, docker images, etc. I find most of the performance benefits to be negligible, and I don't want to sacrifice control or integrity to docker image maintainers. I certainly don't want to learn another abstraction language, i.e., docker-compose, when virt-install and a preseed file already give me fast, repeatable VM deployment.

{{ :computing:... }}
{{ :computing:... }}

That ncurses installer is not being passed over X11; it's running inside the shell of the production host using a TTY argument that I passed via virsh. In my case, I pass the ssh keys to the VM with the preseed. I also configure repositories and the VM's IPv4 networking with the preseed, which keeps post-install work to a minimum. Here's the virt-install command that builds a VM and attaches it to br0:

<code bash>
virt-install --name=${hostname}.qcow2 \
  --os-variant=debian12 \
  --vcpu=2 \
  --memory 4096 \
  --disk path=/... \
  --check path_in_use=off \
  --graphics none \
  --location=/... \
  --network bridge:br0 \
  --channel unix,... \
  --initrd-inject=/... \
  --extra-args="..."
</code>

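Once the installer finishes, a few virsh commands are handy for checking on the new appliance (the domain name matches the --name argument above):

<code bash>
virsh list --all                    # the new domain should be listed and running
virsh console ${hostname}.qcow2     # attach to its serial console (Ctrl+] detaches)
virsh domiflist ${hostname}.qcow2   # confirm its NIC is attached to br0
</code>
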
If you don't want to tinker with preseed, you should at least consider running virt-install with preconfigured options; it saves time and avoids janky pass-through. At this point, your VM should be built, have ssh keys exchanged with it (or populated by the preseed), and be ready to configure. Since my preseed passes the IPv4 interface configuration into the VM, I only need to add IPv6 connectivity. (I am currently working on adding IPv6 pre-population to the preseed script.) Once you've shelled into the VM, open up /etc/network/interfaces and configure it along these lines:

<code bash>
auto enp1s0
iface enp1s0 inet static
    address 8.28.86.122
    netmask 255.255.255.0
    gateway 8.28.86.1
    dns-nameservers 8.8.8.8

iface enp1s0 inet6 static
    address 2602:...
    gateway 2602:...
</code>

At this point, run ping4 google.com and then ping6 google.com inside the VM to make sure both protocols are routing properly.

=== Pebble Host Setup ===

In most ways, the Pebble Host setup is simpler. I don't need to setup IPMI because they have a domain controlled web panel for the host:

{{ :computing:... }}

This is merely a front end to their IPMI implementation. They recently upgraded it and it's very clean. Most importantly, there are only two things we need to take care of in their panel:

  * We need to establish PTR records
  * We need to manage and clone VMACs that comply with their traffic filtering

A VMAC is a virtual MAC, which they use both to filter traffic and, presumably, to avoid potential MAC address conflicts that have a very high likelihood of occurring in a topology of their size, roughly 50K clients or more. I'll note that the networking and PTR panel is also very clean, allowing easy configuration for each IP address and its associated VMAC. Here's a peek:

{{ :computing:... }}
{{ :computing:... }}

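Before assigning anything in the panel, it helps to see which MACs your appliances currently use; by default libvirt generates them under its own 52:54:00 OUI, which is exactly where collisions become plausible at this scale:

<code bash>
# list every defined domain and the MAC/bridge of each attached interface
for dom in $(virsh list --all --name); do
    echo "== ${dom}"
    virsh domiflist "${dom}"
done
</code>
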
In my case, I purchased a few additional IPs, so you can see those differing prefixes below. When I first began hosting with Pebble Host, they were not fully IPv6 active. However, about a year ago, they added IPv6 support and it automagically populated in this lovely web panel. Accordingly, both protocols ride over a single uplink and the interface layout is much simpler than at Brown Rice:

  * Interface 01: enp4s0, has a dedicated cable (IPv4 and IPv6) for the dedicated host
  * Interface 02: IPMI and web-panel (managed by Pebble)

They do not build the virtualization stack for you - you have to do that. So, you will be replacing their stock interfaces file with your own configuration. Make sure you've installed bridge-utils and created br0 as before, then set up /etc/network/interfaces along these lines:

<code bash>
# Establish both interfaces
auto enp4s0
iface enp4s0 inet manual
iface enp4s0 inet6 manual

# Establish bridge and ipv4 and ipv6 addresses
auto br0
iface br0 inet static
    address ...
    gateway 45.143.197.65
    bridge-ports enp4s0
    bridge-stp off
    bridge-fd 0
    up ip route add 45.143.197.86/32 dev br0
    up ip route add 194.164.96.218/32 dev br0
iface br0 inet6 static
    address ...
    gateway 2a10:...
    bridge-ports enp4s0
    bridge-stp off
    bridge-fd 0
    up ip -6 route add 2a10:... dev br0
    up ip -6 route add 2a10:... dev br0
</code>

Now, in Pebble Host's topology, IPv4 and IPv6 are both on the same physical layer cable. This means we can safely assign both IPv4 and IPv6 addresses to the same bridge without issue. Also, for reasons not entirely clear to me, their topology requires users to push the routes for the additional IPs themselves, hence the up ip route lines above. One more quirk: their stock Debian image ships a non-standard /etc/hosts file, so I replace it with a standard one:

<code bash>
127.0.0.1       localhost
127.0.1.1       yourhostname

# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
</code>

Use something like I provided above, but adapted to your use-case of course. Now, I have no idea why they put a non-standard hosts file in place, but regardless, it's your dedicated host to properly configure as you see fit. I'll note, however, that it took me a week or so to realize why X-passthrough was not working, so I hope sharing this helps anyone still using those legacy install and management tools. At any rate, once your VM is installed, you need to change its MAC address to match the VMAC address in the Pebble Host web panel. To do that, shutdown the VM and then edit it with something like virsh edit vm.qcow2. Once inside the .xml file, find the line that begins with "mac address" and replace the generated MAC with the VMAC assigned to that IP in the panel.

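In practice that edit looks something like this; the MAC shown is a placeholder for the VMAC the panel assigned to that IP:

<code bash>
virsh shutdown vm.qcow2
virsh edit vm.qcow2
# inside the domain XML, change the interface's MAC line to the assigned VMAC, e.g.:
#   <mac address='02:00:00:aa:bb:cc'/>
virsh start vm.qcow2
</code>
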
Once this is done, restart your VM and try to log in. If you did not setup preseeds, you will need to do this from the virsh console and configure everything by hand. Either way, the VM's /etc/network/interfaces should end up looking something like this, using the address and gateway that belong to the VMAC you just assigned:

<code bash>
auto enp1s0
iface enp1s0 inet static
    address 45.143.197.87/32
    gateway 45.143.197.65
    dns-nameservers 8.8.8.8

iface enp1s0 inet6 static
    address 2a10:...
    gateway 2a10:...
</code>

Once you've got something similar inside your VM, drop the ping4 google.com and ping6 google.com tests inside it and make sure everything is routing properly. If so, you got it working and can now spin up and/or scale to more VMs to your heart's content.

 --- //oemb1905 2026/01/10 17:53//