-------------------------------------------
//classic-bridging//
-------------------------------------------
=== Introduction ===
~~NOTOC~~

This tutorial is for Debian users who want to create network bridges, or virtual switches, on production hosts.

{{ :

The first setup, at Brown Rice, is a co-located machine; the second setup, at Pebble Host, is a "dedicated host." The hardware for each is:
  * Brown Rice: Super Micro (Xeon Silver), 384GB RAM, 10.4TB zfs R10 JBOD (Co-Located)
  * Pebble Host: Ryzen 7 8700G, 64GB RAM, 2TB NVME (Dedicated Hosting)
I prefer to use virsh+qemu/kvm on the physical host, i.e., the bare metal of the server. For my machine in Taos, I run the Debian host OS on a separate SSD on a dedicated SATA port that's not part of the SAS back-plane. At Pebble, I don't have that kind of hardware control, since the dedicated box is provisioned and managed by them.

{{ :

Although the theoretical ceiling of MAC addresses, in and of itself, provides enough combinations (280 trillion +), the reality is that vendors, e.g., virsh, leverage addresses solely within their Organizationally Unique Identifier (OUI), which limits the unique addresses to about 16.7 million (varies by vendor; that estimate is for virsh). Therefore, once you have around 500 or more clients, you start to have a non-negligible chance (roughly 1%) of a conflict. Since Pebble Host likely has over 50K clients, conflict alone is a reason to filter by MAC address. There are, of course, other reasons to filter, e.g., security, compliance, and accountability.
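
To make that claim concrete, here is a minimal sketch of the birthday-paradox estimate behind it; the 52:54:00 prefix and the guest counts are illustrative assumptions on my part, not vendor figures.

<code bash>
# Rough birthday-paradox estimate of a MAC conflict inside one 24-bit OUI
# space, e.g. the 52:54:00 prefix qemu/kvm guests typically use (illustrative).
for guests in 500 600 50000; do
  awk -v n="$guests" -v d="$((2**24))" 'BEGIN {
    # P(conflict) ~= 1 - exp(-n*(n-1)/(2*d))
    p = 1 - exp(-n * (n - 1) / (2 * d))
    printf "%6d guests -> ~%.2f%% chance of at least one conflict\n", n, p * 100
  }'
done
</code>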

=== Brown Rice Setup ===

The SuperMicro has a built-in 10Gbps NIC, with 4 ports and 1 IPMI port. Before we start tinkering with the operating system on the host, we need to establish A, AAAA, SPF, DMARC, and PTR records for the host (a hypothetical sketch of the forward records follows the interface list below). After that, we will set up IPMI access in the SuperMicro's BIOS. Here's how I allocate the ports:

  * Interface 01: enp1s0f0, has a dedicated cable (IPv4) just for the physical host
  * Interface 02: enp1s0f1, has a dedicated cable (IPv4) just for bridging IPv4
  * Interface 03: enp2s0f0, has a separate cable (IPv6) just for bridging IPv6
  * Interface 04: enp2s0f1, empty
  * Interface 05: IPMI; set up via American Megatrends BIOS, access is source-IP restricted
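
As promised above, here is a rough, hypothetical sketch of the forward records in BIND-style zone syntax; the name, the AAAA address, and the mail policy values are placeholders, not my actual zone.

<code bash>
; Illustrative only -- adapt names, addresses, and policies to your zone.
host.example.org.         IN A     8.28.86.100
host.example.org.         IN AAAA  2001:db8::100
host.example.org.         IN TXT   "v=spf1 a mx -all"
_dmarc.host.example.org.  IN TXT   "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.org"
; The matching PTR (reverse) records are handled through the data center, covered below.
</code>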

Okay, to set up IPMI, you need to start up your production host and tap whatever keyboard shortcut is required after it POSTs. SuperMicros of this vintage use American Megatrends BIOS. In the BIOS, we need to configure the following:

  * Establish the network interface
  * Turn off non-essential services in the BIOS
  * Create a firewall rule in the BIOS to limit IPMI access to trusted IPs

To set up the network interface, navigate to Configuration > Network and enter your network interface information. Here's how my configuration is set up:

{{ :

After establishing that, you should use your laptop to try to reach the IP. If it works, then proceed to hardening the BIOS. Hardening involves the two steps mentioned earlier, i.e., turning off non-essential services so they aren't needlessly exposed, and restricting IPMI access to trusted source IPs with a firewall rule.
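
If you prefer testing from a shell rather than a browser, something like the following works from the trusted workstation; the address and credentials are placeholders, and this assumes the ipmitool package is installed locally.

<code bash>
# Placeholder IPMI address and credentials -- substitute your own.
ping -c 3 10.0.0.50
ipmitool -I lanplus -H 10.0.0.50 -U ADMIN -P 'changeme' chassis status
</code>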

{{ :

The main thing to ensure is that ssh is turned off. I would only keep that on if you plan to maintain your BIOS regularly and fully understand how it uses the ssh stack. Since I don't intend to ssh into my production host's BIOS, I simply keep it off. Optionally, you can also turn off the iKVM Server Port and the Virtual Media Port until you need them. I am not monitoring my production host with external tooling, so I also keep SNMP turned off. Once that's hardened, let's create a source IP firewall rule (Configuration > IP Access Control) to limit access to a dedicated and trusted IP.

{{ :

By adding 0.0.0.0/0 with DROP specified at the end, you are specifying that all requests besides the whitelisted entries above it should be dropped. The blacked-out entries should be replaced with your approved external IP, e.g., 98.65.124.88/32.
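
Conceptually, the access control list ends up looking like this, evaluated top to bottom (the trusted address here is a placeholder):

<code bash>
# Configuration > IP Access Control (illustrative layout, not a literal export)
#   1   98.65.124.88/32   ACCEPT   <- trusted source IP(s)
#   2   0.0.0.0/0         DROP     <- everything else
</code>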

{{ :

NOTE: I am not concerned about the self-signed TLS certificate due to the source IP restriction and other hardening measures. One can, however, optionally configure this if they so desire.

Your primary A, AAAA, SPF, and DMARC records are handled wherever you host your DNS. The PTR records, however, are set on the data center's side; Brown Rice provides a web panel for that:

{{ :

At present, the Brown Rice PTR web panel only allows customers to establish PTR for IPv4; however, they allow customers to email IPv6 PTR requests, which I've also completed. So, at this point, the primary DNS records and the IPMI networking stack should be live and hardened. Now that that's done, it's time to set up the Debian operating system on the host.

This tutorial assumes your production host is already set up with its zfs/RAID arrays, JBOD, boot volume, SAS backplane, etc., and/or all the other goodies you need or want for your host. It also assumes that you either already installed Debian or you are just about to. If you have not installed Debian yet, make sure to install it with only core utilities. This will ensure that NetworkManager is not installed by default. If you already installed Debian, make sure to remove NetworkManager with sudo apt remove --purge network-manager, or reinstall Debian, this time without the bloat. The installer might prompt you for nameservers, addresses, and a gateway; either way, once the system is up, configure the primary interface in /etc/network/interfaces:

<code bash>
auto enp1s0f0
iface enp1s0f0 inet static
address 8.28.86.100
netmask 255.255.255.0
gateway 8.28.86.1
dns-nameservers 8.8.8.8
</code>

Once that's done, you should sudo systemctl restart networking and then run ping4 google.com to ensure that you are routing correctly. If you can't ping Google, then you need to stop and troubleshoot further before proceeding. Once that's working, we can focus on setting up the other interfaces. First, let's raise the dedicated interfaces for the IPv4 and IPv6 bridges. Remember, the production host can already be reached via IPv4 on the primary interface, so it's not necessary or recommended to assign these interfaces addresses.
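
In other words, something along these lines; the ip route check is my own habit, not a required step.

<code bash>
sudo systemctl restart networking
ping -4 -c 3 google.com   # same check as ping4 google.com, where that symlink exists
ip route show             # optional: confirm the default route points at your gateway
</code>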

<code bash>
#bridge ipv4
auto enp1s0f1
iface enp1s0f1 inet manual

#bridge ipv6
auto enp2s0f0
iface enp2s0f0 inet6 manual
</code>

This raises both interfaces for the production host operating system and specifies which networking protocol each interface is using. It also instructs the OS that these interfaces are manually configured. Next, we need to bind these interfaces together into a bridge and then assign the bridge an address. Since the IPv4 and IPv6 routes are on separate cables, we do not want to assign the bridge more than one address, as this could cause loops and/or break routing. You can safely do this when both protocols are on one layer 1 cable, but not when they are segregated. For that reason, we will only assign one address to the bridge.

Please note that we already assigned the primary interface an IPv4 address, which means the production host is reachable via IPv4 already. So, it makes sense to assign an IPv6 address to the bridge. This is not only required since they are different cables, it additionally makes the production host IPv6 reachable. Although it's entirely possible to configure the production host to pass IPv6 traffic upstream to the VMs without itself being IPv6 reachable, this is silly and makes no sense. It's helpful and practical to be able to reach the production host via both protocols. Alright, now that we understand the topology, let's install bridge utilities, create the bridge, and then raise the interface.

<code bash>
sudo apt install bridge-utils
brctl addbr br0
</code>

Now that the bridge utilities are installed and the bridge is created, we will raise the interface. To do that, we need to enter the bridge interface configuration in /etc/network/interfaces:

<code bash>
auto br0
iface br0 inet6 static
address 2602:
gateway 2602:
bridge_ports enp1s0f1 enp2s0f0
accept_ra 2
bridge_stp off
bridge_fd 0
</code>

Once this is saved, let's restart networking with sudo systemctl restart networking. Now that IPv6 is activated, let's ensure it's functional with a small ping6 google.com test. Don't proceed unless this works. Additionally, let's install ufw and lock the host down:

<code bash>
sudo apt install ufw
ufw allow 1194/udp
ufw allow from 192.168.100.0/24 to any port 22
ufw enable
</code>

This UFW setup presumes you have a properly configured VPN server running on the production host. If so, it allows ssh only from the VPN's dedicated subnet. Obviously, this is not strictly required. If you do use this approach, use a non-standard subnet so that you can provide extra protection against brute force attempts via obfuscation. If you don't want to put the production host behind a VPN, then you can optionally expose ssh publicly. It goes without saying that one should only be using ssh keypairs, not passwords. To do the simpler setup, use the following:

<code bash>
sudo apt install ufw
ufw allow 22
ufw enable
</code>

That's all you need. Remember, if you need emergency access to the host, you use IPMI for that, which is not part of the host's operating system. This is the traditional approach, whereby you never expose the production host on anything besides ssh or openvpn. Alternately, adapt the access rules to whatever model fits your environment. Either way, there is one more firewall change to make: ufw must be allowed to forward packets, or traffic will never reach the guests behind the bridge. Edit the default policy as follows:

<code bash>
sudo nano /etc/default/ufw
DEFAULT_FORWARD_POLICY="ACCEPT"
</code>

Our production host is now sufficiently hardened, and our firewall tooling will allow the bridge we created above to forward packets upstream to the virtual appliances.
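
Before building guests, it's worth a quick sanity check that the firewall state matches what we just configured; a small sketch:

<code bash>
sudo ufw status verbose                         # expect your allow rules and "allow (routed)"
grep DEFAULT_FORWARD_POLICY /etc/default/ufw    # expect ACCEPT
</code>

With the host ready, the next step is building the guests; the screenshots below show the Debian installer running for a new VM.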

{{ :
{{ :

That ncurses installer is not being passed over X11. That's running inside the shell of the production host using a TTY argument that I passed via virsh. In my case, I pass the ssh keys to the VM with the preseed. I also configure repositories, the user account, and other basics with the preseed as well.
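
I'm not reproducing my actual preseed file here, but a minimal sketch of the kinds of directives I mean looks like the following; every value (interface, addresses, mirror, key) is a placeholder you would adapt.

<code bash>
# preseed.cfg sketch -- placeholder values throughout
d-i netcfg/choose_interface   select enp1s0
d-i netcfg/disable_autoconfig boolean true
d-i netcfg/get_ipaddress      string 8.28.86.122
d-i netcfg/get_netmask        string 255.255.255.0
d-i netcfg/get_gateway        string 8.28.86.1
d-i netcfg/get_nameservers    string 8.8.8.8
d-i mirror/http/hostname      string deb.debian.org
d-i pkgsel/include            string openssh-server qemu-guest-agent
d-i preseed/late_command      string in-target mkdir -p /root/.ssh; \
    in-target sh -c 'echo "ssh-ed25519 AAAA... you@laptop" >> /root/.ssh/authorized_keys'
</code>

With the preseed in hand, the VM itself gets created with virt-install: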

<code bash>
virt-install --name=${hostname}.qcow2 \
--os-variant=debian12 \
--vcpus=2 \
--memory 4096 \
--disk path=/ \
--check path_in_use=off \
--graphics none \
--location=/ \
--network bridge=br0 \
--channel unix,target_type=virtio,name=org.qemu.guest_agent.0 \
--initrd-inject=/ \
--extra-args="
</code>

If you don't want to tinker with preseed, you should at least consider running virt-install with preconfigured options. It saves time and avoids janky pass-through. At this point, your VM should be built, have ssh keys exchanged with it (or populated via the preseed), and be ready to configure. Since my preseed passes the IPv4 interface configuration into the VM, I only need to add IPv6 connectivity. (I am currently working on adding IPv6 pre-population to the preseed script.) Once you've shelled into the VM, open up /etc/network/interfaces and configure both protocols:

<code bash>
auto enp1s0
iface enp1s0 inet static
address 8.28.86.122
netmask 255.255.255.0
gateway 8.28.86.1
dns-nameservers 8.8.8.8
iface enp1s0 inet6 static
address 2602:
gateway 2602:
</code>

At this point, run ping4 google.com and then ping6 google.com inside the VM. If you did everything right, both will respond in kind. So, now that this is all configured, I can spin up VMs and the associated instances within minutes! That's all you need to do! ;O
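
Day-to-day management from the production host is then just a handful of virsh calls; the guest name below is a placeholder.

<code bash>
virsh list --all            # every defined guest and its state
virsh start examplevm       # boot a guest
virsh console examplevm     # attach to its serial console (detach with Ctrl+])
virsh shutdown examplevm    # clean ACPI shutdown
</code>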

=== Pebble Host Setup ===

In most ways, the Pebble Host setup is simpler. I don't need to set up IPMI because they have a domain-controlled, public-facing web panel that they use to route external requests to the client's out-of-band console:

{{ :

This is merely a front end to their IPMI implementation. They recently upgraded it and it's quite usable now. Beyond console access, there are two things we need to handle on their side:

  * We need to establish PTR records
  * We need to manage and clone VMACs that comply with their MAC address filtering

A VMAC is a virtual MAC, which they use both to filter traffic and, presumably, to avoid potential MAC address conflicts that have a very high likelihood of occurring in a topology of their size, roughly 50K clients or more. I'll note that the networking and PTR panel is also very clean, allowing easy configuration for each IP address and its associated VMAC. Here's a peek:

{{ :
{{ :

In my case, I purchased a few additional IPs, so you can see those differing prefixes below. When I first began hosting with Pebble Host, they were not fully IPv6 active. However, about a year ago, they added IPv6 support and it automagically populated in this lovely web panel. Accordingly, the interface topology on the dedicated host is much simpler:

  * Interface 01: enp4s0, has a dedicated cable (IPv4 and IPv6) for the dedicated host
  * Interface 02: IPMI and web-panel (managed by Pebble)

They do not build the virtualization stack for you - you have to do that. So, you will be replacing their stock interfaces file with your own configuration. Make sure you've installed bridge-utils, created br0 with brctl addbr br0 just as before, and then adjust /etc/network/interfaces along these lines:

<code bash>
# Establish both interfaces
auto enp4s0
iface enp4s0 inet manual
iface enp4s0 inet6 manual

# Establish bridge and ipv4 and ipv6 addresses
auto br0
iface br0 inet static
address 45.143.197.68/
gateway 45.143.197.65
dns-nameservers 8.8.8.8
bridge-ports enp4s0
bridge-stp off
bridge-fd 0
bridge-hw enp4s0
up ip route add 45.143.197.86/
up ip route add 194.164.96.218/

iface br0 inet6 static
address 2a10:
gateway 2a10:
bridge-ports enp4s0
bridge-stp off
bridge-fd 0
up ip -6 route add 2a10:
up ip -6 route add 2a10:
</code>

Now, in Pebble Host's topology, IPv4 and IPv6 are both on the same physical layer cable. This means we can safely assign both IPv4 and IPv6 addresses to the same bridge without issue. Also, for reasons not entirely clear to me, their topology requires users to push the routes to the VMs inside the dedicated host's interfaces file. I've tested the stack without these routes and it doesn't work. I'm not privy to their entire network stack, but it makes sense generally: since it's a filtered and managed network, they don't allow "rogue nodes" to connect directly to the assigned gateway. You have to publish the route yourself. After that, they check requests to the gateway against the approved VMACs and drop all other requests. Again, their topology is not published publicly, but this seems to be what's going on under the hood.

Going back to the above interfaces block, it should be noted that although most of this configuration is standard, there's one directive worth calling out: bridge-hw, which clones the MAC address of enp4s0 onto br0 so that the bridge's traffic matches the VMAC approved for the primary interface. One last gotcha: the stock image Pebble provisions ships with a non-standard /etc/hosts file, which is what broke X-passthrough for me. Replace it with a standard Debian hosts file along these lines:

<code bash>
127.0.0.1       localhost
127.0.1.1       yourhostname.yourdomain.tld yourhostname
# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
</code>

Use something like I provided above, adapted to your use-case of course. Now, I have no idea why they put a non-standard hosts file in place, but regardless, it's your dedicated host to properly configure as you see fit. I'll let you know, however, that it took me a week or so to realize why X-passthrough was not working, so I hope sharing this helps anyone still using their legacy install images. At this point, make sure of the following:

  * The bridge, br0, has the same MAC address as the dedicated host's primary NIC
  * The VM has been connected to br0
  * The VM's NIC has been changed to match the approved VMAC address specified in the panel (see the sketch below)
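
For that last item, I simply edit the guest definition so its interface carries the approved VMAC; here is a sketch of what the stanza ends up looking like, with a placeholder guest name and MAC:

<code bash>
virsh edit examplevm
# then, inside the guest XML, the interface block should read roughly:
#
#   <interface type='bridge'>
#     <mac address='52:54:00:aa:bb:cc'/>    <- the VMAC approved in the Pebble panel
#     <source bridge='br0'/>
#     <model type='virtio'/>
#   </interface>
</code>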

Once this is done, restart your VM and try to log in. If you did not set up preseeds with virt-install that auto-populate your interfaces, then use X-passthrough on the dedicated host, along with the virt-manager console, and type in the interface configuration manually. Again, in my case, the preseed passes my interfaces file into the VM, so as soon as it can route, I can reach it via ssh. As I mentioned earlier, I don't yet have the preseed configs set up to pass IPv6, so I enter that information manually after connecting via IPv4. At the end of the day, your VM should have something like the following:

<code bash>
auto enp1s0
iface enp1s0 inet static
address 45.143.197.87/
gateway 45.143.197.65
dns-nameservers 8.8.8.8
iface enp1s0 inet6 static
address 2a10:
gateway 2a10:
</code>

Once you've got something similar inside your VM, run the ping4 google.com and ping6 google.com tests inside it and make sure everything is routing properly. If so, you've got it working and can now spin up and/or scale to more VMs to your heart's content.

--- //