~~NOTOC~~

This tutorial is for Debian users who want to create network bridges, or virtual switches, on production hosts. By production hosts, I mean machines that are designed to run virtual appliances (VMs, containers, etc.). This tutorial assumes you have access to PTR records and/or have a block of external IPs. In this tutorial, I'll break down two differing setups I use. To be clear, the tutorial is about much more than bridging; it's just that the bridges are the most important part because they route incoming URL requests to the appliances. The first setup, at Brown Rice, is a co-located server.
| + | |||
| + | {{ : | ||
| + | |||
| + | The second setup, at Pebble Host, is a " | ||
  * Brown Rice: Super Micro (Xeon Silver), 384GB RAM, 10.4TB zfs R10 JBOD (Co-Located)
  * Pebble Host: Ryzen 7 8700G, 64GB RAM, 2TB NVME (Dedicated Hosting)

I prefer to use virsh+qemu/kvm to manage the virtual appliances.

{{ :

| + | Although the theoretical ceiling of MAC addresses, in and of itself, provides enough combinations (280 trillion +), the reality is that vendors, e.g., virsh, leverage addresses solely within their Organizationally Unique Identifier (OUI), which limits the unique addresses to about 16.7 million (varies by vendor, that estimate is for virsh). Therefore, once you have around 500 or more clients, you start to have a non-neglible chance (1%) of conflict. Since Pebble Host likely has over 50K clients, conflict alone is a reason to filter by MAC address. There are, of course, other reasons to filter, e.g., security, compliance, accountability, | ||
=== Brown Rice Setup ===
After establishing that, you should use your laptop to try to reach the IP. If it works, then proceed to hardening the BIOS. Hardening involves the two steps mentioned earlier, i.e., turning off non-essential services so they aren't listening publicly (Configuration > Port). Here's what that looks like:

{{ :

The main thing to ensure is that ssh is turned off. I would only keep that on if you plan to maintain your BIOS regularly and fully understand how it uses the ssh stack. Since I don't intend to ssh into my production host's BIOS, I simply keep it off. Optionally, you can also turn off the iKVM Server Port and the Virtual Media Port until you need them. I am not monitoring my production host with external tooling, so I also keep SNMP turned off. Once that's hardened, let's create a source IP firewall rule (Configuration > IP Access Control) to limit access to a dedicated and trusted IP.

{{ :

By adding the 0.0.0.0/0 entry with DROP specified at the end, you are specifying that all requests besides the whitelisted entries above it should be dropped. The blacked out entries should be replaced with your approved external IP, e.g., 98.65.124.88/32.

{{ :
NOTE: I am not concerned about the self-signed TLS certificate due to the source IP and other hardening measures. One can, however, optionally configure this if they so desire.
Your primary DNS records should already be set up and caching. Teaching folks how to create primary DNS records is outside this tutorial's scope.
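
Before moving on, it's worth confirming that the forward and reverse records resolve from an outside machine. The hostname and addresses below are placeholders for your own:

<code bash>
# Confirm A/AAAA and PTR resolution; substitute your own name and IPs.
dig +short A host.example.com
dig +short AAAA host.example.com
dig +short -x 203.0.113.10        # should return host.example.com.
dig +short -x 2001:db8::10
</code>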
| + | |||
| + | {{ : | ||
At present, the Brown Rice PTR web panel only allows customers to establish PTR for IPv4; however, they allow customers to email IPv6 PTR requests, which I've also completed. So, at this point, the primary DNS records and the IPMI networking stack should be live and hardened. Now that's done, it's time to set up the Debian operating system on the host. This tutorial assumes your production host is already set up with its zfs/RAID arrays, JBOD, boot volume, SAS backplane, etc., and/or all the other goodies you need or want for your host. It also assumes that you either already installed Debian or you are just about to. If you have not installed Debian yet, make sure to install it with only core utilities. This will ensure that NetworkManager is not installed by default. If you already installed Debian, make sure to remove NetworkManager with sudo apt remove --purge network-manager, or reinstall Debian, this time without the bloat. The installer might prompt you for nameservers along the way.
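
Here's a quick sketch of that cleanup on a fresh install. The package name is Debian's (network-manager, not NetworkManager), and bridge-utils is only needed if your /etc/network/interfaces uses the classic bridge_ports stanzas:

<code bash>
# Purge NetworkManager so ifupdown owns the interfaces, then add the bridge helpers.
sudo apt remove --purge network-manager
sudo apt autoremove
sudo apt install bridge-utils ifupdown
</code>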
Our production host is now sufficiently hardened and our firewall tooling will allow the bridge we created above to forward packets upstream to virtual appliances running on the host. This means we can now set up a virtual appliance. In my case, I prefer VMs over containers, pods, docker images, etc. I find most of the performance benefits to be negligible and I don't want to sacrifice control or integrity to docker image maintainers. I certainly don't want to learn another abstraction language, i.e., docker-compose.

{{ :
{{ :
That ncurses installer is not being passed over X11. It's running inside the shell of the production host using a TTY argument that I passed via virsh. In my case, I pass the ssh keys to the VM with the preseed. I also configure repositories there.
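
For anyone who wants to reproduce a console-based install like that, here is a minimal virt-install sketch. Every name, path, and size is a placeholder, and it assumes br0 already exists and a Debian netinst ISO is on disk; the author's actual preseed-driven command will differ:

<code bash>
# Text-mode install attached to the br0 bridge (all values are placeholders).
# --graphics none plus console=ttyS0 puts the ncurses installer on the virsh console/TTY.
virt-install \
  --name examplevm \
  --memory 4096 --vcpus 2 \
  --disk path=/var/lib/libvirt/images/examplevm.qcow2,size=40 \
  --location /var/lib/libvirt/isos/debian-netinst.iso \
  --network bridge=br0 \
  --graphics none \
  --console pty,target_type=serial \
  --extra-args "console=ttyS0,115200n8"
</code>

A preseed file can be injected with --initrd-inject, which is presumably how the ssh keys and repositories get passed in here, but the preseed contents themselves are outside this sketch.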
In most ways, the Pebble Host setup is simpler. I don't need to set up IPMI because they have a domain-controlled and public-facing web panel that they use to route external URL requests to the client's machine.

{{ :

This is merely a front end to their IPMI implementation. They recently upgraded it and it's very clean. Most importantly, it's where we handle the following:
  * We need to establish PTR
  * We need to manage and clone VMACs that comply with their MAC address filtering
A VMAC is a virtual MAC, which they use both to filter traffic and, presumably, to avoid potential MAC address conflicts that have a very high likelihood of occurring in a topology of their size, or roughly 50K clients or more. I'll note that the networking and PTR panel is also very clean, allowing easy configuration for each IP address and its associated VMAC. Here's a peek:

{{ :
{{ :
In my case, I purchased a few additional IPs, so you can see those differing prefixes below. When I first began hosting with Pebble Host, they were not fully IPv6 active. However, about a year ago, they added IPv6 support and it automagically populated in this lovely web panel. Accordingly, the configurations below cover IPv6 alongside IPv4.
Use something like what I provided above, but adapted to your use-case of course. Now, I have no idea why they put a non-standard hosts file in place, but regardless, it's your dedicated host to properly configure as you see fit. I'll let you know, however, that it took me a week or so to realize why X-passthrough was not working. So, I hope sharing this helps anyone still using those legacy install and management tools. At any rate, once your VM is installed, you need to change its MAC address to match the VMAC address in the Pebble Host web panel. To do that, shut down the VM and then edit its definition with virsh edit vmname (the domain name, not the disk image). Once inside the .xml file, find the line that begins with "mac address" and set it to the approved VMAC. Before restarting, make sure of the following (a command sketch follows this list):
  * The bridge, br0, has a MAC address matching that of the dedicated host's primary NIC
  * The VM has been connected to br0
  * The VM's NIC has been changed to match the approved VMAC address specified in the panel
Once this is done, restart your VM and try to log in. If you did not set up preseeds with virt-install that auto-populate your interfaces, then use X-passthrough on the dedicated host, along with the virt-manager console, and type in the interface configuration manually. Again, in my case, the preseed passes my interfaces file into the VM so, as soon as it can route, I can reach it via ssh. As I mentioned earlier, I don't yet have the preseed configs set up to pass IPv6. So, I enter that information manually after connecting via IPv4. However, at the end of the day, your VM should have something like the following:
Once you've got something similar inside your VM, drop the ping4 google.com and ping6 google.com tests inside it and make sure everything is routing properly. If so, you've got it working and can now spin up and/or scale to more VMs to your heart's content.
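
As a concrete version of that check, something like the following from inside the VM should answer over both address families (the -4/-6 flags are the iputils spelling, in case the ping4/ping6 shortcuts aren't present):

<code bash>
# Verify IPv4 and IPv6 routing from inside the VM.
ping -4 -c 3 google.com
ping -6 -c 3 google.com
</code>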
| - | --- // | + | --- // |