~~NOTOC~~
  
This tutorial is for Debian users who want to create network bridges, or virtual switches, on production hosts. By production hosts, I mean machines designed to run virtual appliances (VMs, containers, etc.). This tutorial assumes you have access to PTR records and/or a block of external IPs. In this tutorial, I'll break down two differing setups I use. To be clear, the tutorial is about much more than bridging; it's just that the bridges are the most important part because they route incoming URL requests to the appliances. The first setup, at Brown Rice, is a co-located server.

{{ :computing:screenshot_from_2026-01-10_10-49-44.png?direct&600 |}}

The second setup, at Pebble Host, is a "Dedicated Host," which means the company provisions actual hardware on your behalf and drops it in a server shed. Dedicated Hosting is a step above VPSs because they are not virtualized hosts on shared pools; they are dedicated machines. Pebble Host does not offer machines with as much compute as I can afford and provision myself; however, they are still large enough for smaller virtualization stacks. Here are the technical specifications for each host:
  
  * Brown Rice: Super Micro (Xeon Silver), 384GB RAM, 10.4TB zfs R10 JBOD (Co-Located)
  * Pebble Host: Ryzen 7 8700G, 64GB RAM, 2TB NVME (Dedicated Hosting)
  
I prefer to use virsh+qemu/kvm on the physical host, or bare metal, of the server. For my machine in Taos, I run the Debian host OS on a separate SSD on a dedicated SATA port that's not part of the SAS back-plane. At Pebble, I don't have this luxury; everything runs on the boot volume. This limitation fits the use-case, however, as I only have Pebble set up to run 5-7 virtual appliances, while the server co-located at Brown Rice has upwards of 30 virtual appliances. Each location, however, has a different set of requirements due to how they broadcast IPs/prefixes, whether they filter traffic or not, and how they allocate IPs. For example, Brown Rice treats the prefixes/blocks they provide me as public and, according to that logic, everything is open and exposed. The onus of responsibility for using the server legally and fairly rests exclusively on the client. Since their entire co-located infrastructure fits in one 800 square foot room, with at most 6 full racks, the chance of conflict is negligible. Pebble Host, on the other hand, offers hosting services to over 50K clients.

{{ :computing:screenshot_from_2026-01-10_10-52-52.png?direct&600 |}}

Although the theoretical ceiling of MAC addresses, in and of itself, provides enough combinations (280 trillion+), the reality is that vendors, e.g., virsh, allocate addresses solely within their Organizationally Unique Identifier (OUI), which limits the unique addresses to about 16.7 million (this varies by vendor; that estimate is for virsh). Therefore, once you have around 500 or more clients, you start to have a non-negligible chance (roughly 1%) of a conflict. Since Pebble Host likely has over 50K clients, conflict avoidance alone is a reason to filter by MAC address. There are, of course, other reasons to filter, e.g., security, compliance, accountability, possibly GDPR, etc., but it's worth noting that filtering is a technical and practical requirement for Pebble Host.
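For a rough sense of why a few hundred tenants already puts you in collision territory: libvirt generates guest MACs under its 52:54:00 prefix, which leaves 24 random bits, i.e., 2^24 ≈ 16.7 million possible addresses. A back-of-the-envelope birthday-paradox estimate (an approximation, not a model of any particular provider's allocation scheme) bears out the ~1% figure:

<code>
P(collision) ≈ 1 - e^(-n(n-1) / (2N))          (birthday approximation)

N = 2^24 = 16,777,216 possible MACs under one OUI
n = 500 randomly generated guest MACs

P ≈ 1 - e^(-(500 × 499) / (2 × 16,777,216)) ≈ 1 - e^(-0.00744) ≈ 0.7%
</code>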
  
=== Brown Rice Setup ===
  
To set up the network interface, navigate to Configuration > Network. After that, enter your network interface information. Here's how my configuration is set up:

{{ :computing:a11.png?direct&600 |}}
  
After establishing that, you should use your laptop to try to reach the IP. If it works, then proceed to hardening the BIOS. Hardening involves the two steps mentioned earlier, i.e., turning off non-essential services so they aren't listening publicly (Configuration > Port). Here's what that looks like:

{{ :computing:a12.png?direct&600 |}}
  
The main thing to ensure is that ssh is turned off. I would only keep that on if you plan to maintain your BIOS regularly and fully understand how it uses the ssh stack. Since I don't intend to ssh into my production host's BIOS, I simply keep it off. Optionally, you can also turn off the iKVM Server Port and the Virtual Media Port until you need them. I am not monitoring my production host with external tooling, so I also keep SNMP turned off. Once that's hardened, let's create a source IP firewall rule (Configuration > IP Access Control) to limit access to a dedicated and trusted IP.

{{ :computing:a13.png?direct&600 |}}
  
By adding the 0.0.0.0/0 entry with DROP specified at the end, you are specifying that all requests besides the whitelisted requests above it should be dropped. The blacked out entries should be replaced with your approved external IP, e.g., 98.65.124.88/32. The rules are processed in order, top to bottom. In order to "appear" to be using the whitelisted IP, I use a very tiny VPS from Digital Ocean and I proxy my traffic through the VPS using the FoxyProxy plugin. To do this, create a configuration in FoxyProxy that uses SOCKS and listens on localhost on port 8080. Once that's done, just ssh into your VPS with the -D 8080 flag and port specified, and your web browser will run its traffic through the IP of the VPS, thus allowing you to access the firewalled IPMI panel. This is also very useful for many other cases where you might want to keep your traffic away from prying eyes.

{{ :computing:a14.png?direct&600 |}}
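As a minimal sketch of that proxy step (the user and VPS hostname below are placeholders), the tunnel is just a dynamic port forward:

<code bash>
# Open a SOCKS5 tunnel through the whitelisted VPS; keep this running while you work.
# -D 8080  : listen on localhost:8080 as a SOCKS proxy
# -N       : no remote command, tunnel only
ssh -D 8080 -N user@vps.example.com
# Then point FoxyProxy at SOCKS5 localhost:8080 and browse to the IPMI address.
</code>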
  
NOTE: I am not concerned about the self-signed TLS certificate due to the source IP and other hardening measures. One can, however, optionally configure this if they so desire.
  
Your primary DNS records should already be set up and caching. Teaching folks how to create primary DNS records is outside this tutorial's scope. However, many folks reading this might be entering co-located and/or dedicated hosting space for the first time. For that reason, you might not be familiar with what setting PTR records looks like. This is handled by whoever provides the IPs to you because only the owner of the IP block can prove ownership of the IP. In Brown Rice's case, they also offer and sell IP-space. Even better, they offer a billing and control panel that allows you to establish and manage your PTR records:

{{ :computing:a15.png?direct&600 |}}
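Once a PTR record is published, you can confirm it from any shell with dig's reverse-lookup flag (the address below is a placeholder):

<code bash>
# Reverse-resolve one of your public IPs; the answer should be your chosen hostname.
dig -x 203.0.113.10 +short
</code>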
  
At present, the Brown Rice PTR web panel only allows customers to establish PTR for IPv4; however, they allow customers to email IPv6 PTR requests, which I've also completed. So, at this point, the primary DNS records and the IPMI networking stack should be live and hardened. Now that's done, it's time to set up the Debian operating system on the host. This tutorial assumes your production host is already set up with its zfs/RAID arrays, JBOD, boot volume, SAS backplane, etc., and/or all the other goodies you need or want for your host. It also assumes that you either already installed Debian or you are just about to. If you have not installed Debian yet, make sure to install it with only core utilities. This will ensure that NetworkManager is not installed by default. If you already installed Debian, make sure to remove NetworkManager with sudo apt remove --purge network-manager or reinstall Debian, this time without the bloat. The installer might prompt you for nameservers, and if so, use a public recursive DNS resolver (Cloudflare, Google, etc.). If the installer did not prompt you for nameservers, your first task is to specify them in /etc/resolv.conf where, for example, you enter nameserver 8.8.8.8. Now that the OS knows what server to query for DNS entries, we can shift our focus to connecting the production host to the internet. To do that, we will configure the first interface. Open up /etc/network/interfaces and establish your primary interface:
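A minimal static stanza, assuming a single NIC named eno1 and placeholder documentation addresses, looks roughly like this; adapt the interface name and addresses to what your provider assigned:

<code bash>
# /etc/network/interfaces -- illustrative only; substitute your real NIC name and IPs
auto lo
iface lo inet loopback

auto eno1
iface eno1 inet static
    address 203.0.113.10
    netmask 255.255.255.0
    gateway 203.0.113.1
</code>

Bring it up with ifup eno1 (or reboot) and confirm you can ping out before moving on.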
  
Our production host is now sufficiently hardened and our firewall tooling will allow the bridge we created above to forward packets upstream to virtual appliances running on the host. This means we can now set up a virtual appliance. In my case, I prefer VMs over containers, pods, docker images, etc. I find most of the performance benefits to be negligible and I don't want to sacrifice control or integrity to docker image maintainers. I certainly don't want to learn another abstraction language, i.e., docker-compose, just to maintain docker on top of the underlying system. VMs provide near-equivalent performance, isolated kernels, and full control over the OS. It might be overkill for some smaller services/instances, but the benefits far outweigh the drawbacks. Now, when I first began this project, I used ssh -X user@domain.com to log into the host, and then would run virt-manager on the host, which passed over X11 to my desktop. This is archaic, slow, and inefficient. As I was not ready to hand over control to Cockpit or other tools, I dug deep into preseed.cfgs, Debian's native auto-installer. After some work, I was able to create a fully automatic way to install new VMs - with desired networking and/or other parameters - within the shell of the production host. That approach is covered in detail in this blog post. I recently revised these configs and scripts to work with Trixie and everything is purring. It takes roughly 5 mins to build a new VM. Here's what this setup looks like:

{{ :computing:a16.png?direct&600 |}}
{{ :computing:a17.png?direct&600 |}}
  
That ncurses installer is not being passed over X11. That's running inside the shell of the production host using a TTY argument that I passed via virsh. In my case, I pass the ssh keys to the VM with the preseed. I also configure repositories, networking, install qemu-guest-agent, and other tools the VMs need. Additionally - and this is a crucial step - I also attach the VM's virtualized NIC to the bridge I created above, br0. Although the preseed tutorial is outside the scope of this tutorial, I'll share the virsh command I use - within the greater build script - to populate the VM:
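My exact command embeds site-specific preseed and storage paths; a stripped-down virt-install invocation in the same spirit (the name, sizes, mirror, and preseed path below are placeholders) looks something like this:

<code bash>
# Text-mode, preseed-driven Debian install attached to the br0 bridge.
# --graphics none plus --console pty keep the ncurses installer on the host's shell,
# and console=ttyS0 in --extra-args points the guest's console at that serial line.
virt-install \
  --name testvm \
  --memory 4096 \
  --vcpus 2 \
  --disk path=/var/lib/libvirt/images/testvm.qcow2,size=40,format=qcow2 \
  --os-variant debian12 \
  --network bridge=br0 \
  --graphics none \
  --console pty,target_type=serial \
  --location http://deb.debian.org/debian/dists/trixie/main/installer-amd64/ \
  --initrd-inject=preseed.cfg \
  --extra-args "auto=true priority=critical console=ttyS0,115200n8"
</code>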
  
In most ways, the Pebble Host setup is simpler. I don't need to set up IPMI because they have a domain-controlled and public-facing web panel that they use to route external URL requests to the client's IPMI panel. This is common for hosting providers, of course. Here's what their web panel looks like:

{{ :computing:a18.png?direct&600 |}}
  
This is merely a front end to their IPMI implementation. They recently upgraded it and it's very clean. Most importantly, this web panel includes DNS and networking management, which we'll need for PTR records and managing VMACs.
  
  * We need to establish PTR records
  * We need to manage and clone VMACs that comply with their MAC address filtering
  
A VMAC is a virtual MAC, which they use both to filter traffic and, presumably, to avoid potential MAC address conflicts that have a very high likelihood of occurring in a topology of their size, or roughly 50K clients or more. I'll note that the networking and PTR panel is also very clean, allowing easy configuration for each IP address and its associated VMAC. Here's a peek:

{{ :computing:a19.png?direct&600 |}}
{{ :computing:a20.png?direct&600 |}}
  
In my case, I purchased a few additional IPs, so you can see those differing prefixes below. When I first began hosting with Pebble Host, they were not fully IPv6 active. However, about a year ago, they added IPv6 support and it automagically populated in this lovely web panel. Accordingly, you can see that I have PTR set up, for both IPv4 and IPv6, on the dedicated host, mail.haacksnetworking.one. The other IPs besides mail.haacksnetworking.one are either in use for clients, reserved and waiting for allocation, and/or being used for testing servers I am working on. For example, you can see a slew of mail servers at the bottom, which I am slowly working on and intend to run doveadm on, so they can serve as backups to my primary self-hosted email servers. But, I digress ... let's continue discussing the stack and setup requirements instead of what I use the stack for! So, once you've purchased your dedicated hosting package, you get something like my above screenshots. Once that's done, and you can shell into the dedicated host and have made sure ssh keypair auth is enforced, we can configure the interfaces file. Remember, they drop both IPv4 and IPv6 on one cable/interface and IPMI, routed through their web panel, on the other interface:
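My production file carries the provider-assigned addresses and VMAC; as a rough sketch of its shape (the NIC name, addresses, and MAC below are placeholders), the host enslaves its public NIC to br0 and pins the bridge's MAC so the upstream MAC filtering sees the expected address:

<code bash>
# /etc/network/interfaces -- illustrative dual-stack bridge; needs the bridge-utils package
auto lo
iface lo inet loopback

# the physical NIC carries no address of its own; it is enslaved to br0
auto enp1s0
iface enp1s0 inet manual

auto br0
iface br0 inet static
    bridge_ports enp1s0
    bridge_stp off
    bridge_fd 0
    hwaddress ether aa:bb:cc:dd:ee:ff   # clone the MAC/VMAC assigned to the host's NIC
    address 203.0.113.20
    netmask 255.255.255.0
    gateway 203.0.113.1

iface br0 inet6 static
    address 2001:db8::20
    netmask 64
    gateway 2001:db8::1                 # use the IPv6 gateway your provider specifies
</code>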
Use something like I provided above, but adapted to your use-case, of course. Now, I have no idea why they put a non-standard hosts file in place, but regardless, it's your dedicated host to properly configure as you see fit. I'll let you know, however, that it took me a week or so to realize why X-passthrough was not working. So, I hope sharing this helps anyone still using those legacy install and management tools. At any rate, once your VM is installed, you need to change its MAC address to match the VMAC address in the Pebble Host web panel. To do that, shut down the VM and then edit it with something like virsh edit vm.qcow2 (a short sketch of this step follows the summary list below). Once inside the .xml file, find the line that begins with "mac address" and edit the address to match the VMAC. Make sure the VMAC address for the VM is different from the dedicated and pre-assigned VMAC for the dedicated host. The web panel has a generate option, where you can make VMACs at will. Again, it is presumed that you already connected the VM's NIC to the bridge during the installation of the VM. In summary, we've ensured that:
  
  * The bridge, br0, has a MAC address matching that of the dedicated host's primary NIC
  * The VM has been connected to br0
  * The VM's NIC has been changed to match the approved VMAC address specified in the panel
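Here's a minimal sketch of that MAC swap (the domain name and MAC below are placeholders; use the name reported by virsh list --all and the VMAC generated in the panel):

<code bash>
# Stop the guest, then edit its libvirt definition (this expects the domain name,
# not the .qcow2 file path).
virsh shutdown testvm
virsh edit testvm
# In the <interface type='bridge'> block, set the MAC to the panel-generated VMAC:
#   <mac address='02:00:00:aa:bb:cc'/>
# Save, exit, and start the guest again:
virsh start testvm
</code>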
  
Once this is done, restart your VM and try to log in. If you did not set up preseeds with virt-install that auto-populate your interface, then use X-passthrough on the dedicated host, along with the virt-manager console, and type in the interface configuration manually. Again, in my case, the preseed passes my interfaces file into the VM so, as soon as it can route, I can reach it via ssh. As I mentioned earlier, I don't yet have the preseed configs set up to pass IPv6. So, I enter that information manually after connecting via IPv4. However, at the end of the day, your VM should have something like the following:
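The concrete values come from your purchased block, and the interface name inside your guest may differ; as a rough sketch with placeholder addresses, the VM's own /etc/network/interfaces ends up as a plain static dual-stack config:

<code bash>
# /etc/network/interfaces inside the VM -- illustrative static dual-stack config
auto lo
iface lo inet loopback

auto enp1s0
iface enp1s0 inet static
    address 203.0.113.21
    netmask 255.255.255.0
    gateway 203.0.113.1

iface enp1s0 inet6 static
    address 2001:db8::21
    netmask 64
    gateway 2001:db8::1
</code>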
Once you've got something similar inside your VM, drop the ping4 google.com and ping6 google.com tests inside it and make sure everything is routing properly. If so, you've got it working and can now spin up and/or scale to more VMs to your heart's content.
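A quick set of sanity checks from inside the guest (standard iputils and iproute2 commands) might look like this:

<code bash>
# Verify both address families route out of the VM
ping -4 -c 3 google.com
ping -6 -c 3 google.com
# Confirm the addresses and default routes you configured are actually in place
ip addr show
ip route
ip -6 route
</code>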
  
 --- //[[alerts@haacksnetworking.org|oemb1905]] 2026/01/10 17:52//