Home Network Upgrade: Part 3

Published: Apr 2, 2026

Tags: networking, project
So the FCC won't let me be...

🔍 Important

This post is part of a series. Also see parts 1 and 2.

FCC Bans New Foreign Routers

If you pay attention to tech news, you might’ve heard that the FCC recently "…updated its Covered List to include all consumer-grade routers produced in foreign countries." You can view a PDF of the news release here.

Essentially, this means that foreign routers are banned. Preexisting ones are fine, but new ones aren’t. Indeed, following the link to their list reveals that all routers produced in a foreign country "…are deemed to pose an unacceptable risk to the national security of the United States" except in cases where conditional approval has been granted by the DoD or DHS. According to the executive branch determination that the FCC followed:

…foreign-produced routers (1) introduce a “supply chain vulnerability that could disrupt the U.S. economy, critical infrastructure, and national defense” and (2) pose “a severe cybersecurity risk that could be leveraged to immediately and severely disrupt U.S. critical infrastructure and directly harm U.S. persons.”

So, if the government is concerned about our critical infrastructure, then why put restrictions on consumer-grade routers rather than on the commercial-grade equipment that businesses use?

One could argue that they’re attempting to tighten security for employees who work from home, as there have been cases of APTs leveraging vulnerable home routers in their malicious activities. For example, according to a 2023 CISA advisory, groups such as Volt Typhoon have been observed using compromised SOHO (small office/home office) network devices as part of their efforts to target critical infrastructure (see the network artifacts section). These activities were mentioned directly in the FCC news release, so the decision can at least be partially attributed to them (personally, I find the reasoning somewhat weak and doubt the ban’s effectiveness).

In any case, I think decisions like this highlight the importance of staying on top of tech news. Whether you think the decision is justified or not is up to you, but it’s undeniable that this makes open-source/DIY router solutions like OPNsense more appealing for tech enthusiasts. The government didn’t come right out and say it, but as someone with a cybersecurity degree, what I heard was: “You are only allowed to buy technology that contains OUR backdoors.”

Replacement Motherboard

I previously mentioned that none of my spare M.2 NVMe drives were being detected by the motherboard I used in my DIY router, so I would need to change it later.

Sourcing a motherboard sounded easy at first, but there were a variety of factors that complicated this:

Unfortunately, I had to settle for another JGINYUE board since there were no other options (the old/used motherboard market leaves a lot to be desired). I returned the one I had and ordered a replacement with an older B350 chipset in hopes that this would have better support for the aging hardware I’m repurposing.

I waited nearly a month for my new motherboard to be delivered from mainland China, so you can imagine my reaction when I found out that it still didn’t support any of my spare M.2 drives (NVMe or SATA)! At this point, I honestly believe that this brand just puts fake M.2 connectors on their boards. That was disappointing, but I had an even bigger problem on my hands: after installing the new motherboard, Proxmox would not boot.

Proxmox Reinstall

No Proxmox meant no OPNsense, and no OPNsense meant no Internet connection. Fortunately, I had planned for a scenario like this.

I kept our old router when I deployed my DIY replacement in case the new machine ever experienced a problem. Getting Internet working again was as simple as booting the old router back up, disabling the wireless radio (the access points handle Wi-Fi now), and connecting the LAN/WAN cables. This essentially reverts my network to its original state from before I began the project (poor logging, no VLANs, etc.), but it was at least a start until I could get Proxmox booting again.

Situations like this are also one of the primary reasons that I waited to upgrade my home network until after I finished my degree. Having more control over my network would mean nothing if I wasn’t able to keep it up, and spending time on troubleshooting my side project wasn’t something I could easily do during school.

I knew that I had backups of all my important settings, so it would be easy to start from a fresh install if necessary. I spent ~30 minutes checking the obvious things like UEFI/BIOS boot settings and physical connections before deciding that it would be simpler to wipe, reinstall everything, and then import my configurations. Unfortunately, this ended up taking nearly an entire day after unexpected issues occurred.

No Device with Valid ISO Found

Performing OS installs is something I can do with my eyes closed at this point, so I was surprised when Proxmox started giving me an error I hadn’t seen before when I attempted a reinstall:

[ERROR] no device with valid ISO found, please check your installation medium
unable to continue (type exit or CTRL-D to reboot)

Even more strange was the fact that I used the same USB drive with the same ISO image as when I originally installed Proxmox.

I used balenaEtcher to quickly re-flash the ISO image, but was met with the same result when I tried installing Proxmox again. Rufus gave me the exact same problem, and I started to become concerned that my new motherboard might be defective or incompatible with the other hardware. I decided that I would try to manually flash an ISO to my USB with dd as a last resort:

sudo dd if=~/Downloads/proxmox-ve_9.1-1.iso of=/dev/sda bs=1M conv=fdatasync

This at least allowed Proxmox to recognize the ISO when I attempted another install, but now I was getting a new error.
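One caveat about the dd approach worth calling out: it writes raw blocks to whatever of= points at, with no confirmation prompt. /dev/sda happened to be the USB stick on my machine, but on another system it could just as easily be an internal disk, so it pays to check first:

```shell
# List block devices to confirm which node is the USB stick
# before pointing dd at it (device names vary between machines).
lsblk -o NAME,SIZE,MODEL,TRAN

# After dd finishes, flush write buffers before unplugging the drive.
sync
```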

Mount(2) System Call Failed

Proxmox found the ISO and started installing, but then came back with this:

testing device '/dev/sda' for ISO
found Proxmox VE ISO
switching root from initrd to actual installation system
Starting Proxmox installation
EFI boot mode detected, mounting efivars filesystem
mount:	/sys/firmware/efi/efivars: mount(2) system call failed: Operation not supported.
	dmesg(1) may have more information after failed mount system call.

Installation aborted - unable to continue (type exit or CTRL-D to reboot)

Seeing that the mount operation was “not supported” didn’t make sense to me. This would mean I wouldn’t be able to use a filesystem and that the device would essentially be useless. As soon as I read the message, I figured that something had to be amiss in the UEFI/BIOS settings.

My thinking was that there was likely some low-level security feature that prevented the Proxmox install from continuing. The only setting I was aware of that might cause this was secure boot, which I had already disabled during prior troubleshooting. Proxmox is also supposed to have secure boot support, so I wasn’t getting my hopes up.

Then, while searching, I found this forum thread suggesting that I enable NX mode in UEFI/BIOS, and it actually solved the problem! The commenter mentioned that “this setting has caused issues to many mini PCs running sketchy BIOSs.” Since I’m now on my second mini JGINYUE board that has experienced strange issues (not to mention that it has the longest initialization time I’ve ever seen), I would definitely say their comment applies to me.

Unable to Initialize Physical Volume

I was now able to work my way through the entire Proxmox installation wizard as expected, but I encountered another error just after confirming my options at the end:

unable to initialize physical volume /dev/sda3

This ended up being an easy fix, although I’m still somewhat confused about why it happened in the first place.

All I had to do was reduce the hdsize value in the options window on the installer page where you pick your target hard disk. I have a 525 GB SSD installed; Proxmox defaulted to using 489 GB, so I lowered this to 480 GB and was able to complete the installation. It seems like it would be easy to account for this strange behavior in the installer, but what do I know?

VM Rebuilding

I’m closer to getting OPNsense back up and running now that Proxmox has been reinstalled, but there are a few tasks to take care of first:

Updating the Proxmox repositories and the system itself is relatively easy. I don’t have an enterprise subscription, so I need to disable the enterprise repositories and add the no-subscription one. After that, I updated the package database, installed any available updates, and rebooted the system.
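For anyone following along, the repository change looks roughly like this. It assumes Proxmox VE 9 on Debian trixie with the newer deb822-style sources files; file names and the suite differ on older releases, so treat the paths as assumptions and check your own /etc/apt/sources.list.d/ first:

```shell
# Disable the enterprise repository by renaming its sources file so apt
# ignores it (a ceph enterprise repo, if present, can be handled the same way).
mv /etc/apt/sources.list.d/pve-enterprise.sources \
   /etc/apt/sources.list.d/pve-enterprise.sources.disabled

# Add the no-subscription repository.
cat > /etc/apt/sources.list.d/pve-no-subscription.sources <<'EOF'
Types: deb
URIs: http://download.proxmox.com/debian/pve
Suites: trixie
Components: pve-no-subscription
Signed-By: /usr/share/keyrings/proxmox-archive-keyring.gpg
EOF

# Refresh the package database, apply updates, and reboot.
apt update && apt dist-upgrade -y && reboot
```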

Next up is downloading an OPNsense ISO. You could save the file to your PC and then upload it to Proxmox, but in my case it’s faster to do it directly from the host machine:

root@pve:~# wget -P /var/lib/vz/template/iso/ https://pkg.opnsense.org/releases/26.1.2/OPNsense-26.1.2-dvd-amd64.iso.bz2
--2026-03-30 18:11:34--  https://pkg.opnsense.org/releases/26.1.2/OPNsense-26.1.2-dvd-amd64.iso.bz2
Resolving pkg.opnsense.org (pkg.opnsense.org)... 89.149.222.99, 2001:1af8:5300:a010:1::1
Connecting to pkg.opnsense.org (pkg.opnsense.org)|89.149.222.99|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 527057268 (503M) [application/x-bzip2]
Saving to: '/var/lib/vz/template/iso/OPNsense-26.1.2-dvd-amd64.iso.bz2'

OPNsense-26.1.2-dvd-amd64.iso. 100%[==================================================>] 502.64M  21.6MB/s    in 25s     

2026-03-30 18:12:00 (20.4 MB/s) - '/var/lib/vz/template/iso/OPNsense-26.1.2-dvd-amd64.iso.bz2' saved [527057268/527057268]
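Before decompressing, it’s worth verifying the download. OPNsense publishes checksum files alongside each release; the exact file name below is an assumption on my part, so check the mirror’s directory listing if it 404s:

```shell
cd /var/lib/vz/template/iso/
# The checksum file name is assumed -- confirm it against the mirror listing.
wget https://pkg.opnsense.org/releases/26.1.2/OPNsense-26.1.2-checksums-amd64.sha256
# --ignore-missing skips checksum entries for files you didn't download.
sha256sum -c OPNsense-26.1.2-checksums-amd64.sha256 --ignore-missing
```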

The file needs to be decompressed after downloading:

root@pve:~# bzip2 -d /var/lib/vz/template/iso/OPNsense-26.1.2-dvd-amd64.iso.bz2 

I also need to create two Linux bridges in Proxmox to pass to my OPNsense VM later (for LAN and WAN). The LAN bridge needs to be VLAN aware.
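I created the bridges in the web UI, but the end result is just a few stanzas in /etc/network/interfaces. The sketch below shows the general shape; the NIC names (enp1s0/enp2s0) and the management address/gateway are placeholders, not my actual values:

```
# LAN bridge: carries tagged VLAN traffic, so it must be VLAN aware.
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.2/24        # placeholder Proxmox MGMT address
        gateway 192.168.1.1           # placeholder gateway
        bridge-ports enp1s0           # placeholder LAN NIC
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

# WAN bridge: no host IP here; the OPNsense VM owns the WAN address.
auto vmbr1
iface vmbr1 inet manual
        bridge-ports enp2s0           # placeholder WAN NIC
        bridge-stp off
        bridge-fd 0
```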

Now I can create a VM according to OPNsense hardware requirements. A screenshot of what I configured can be viewed below:
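For reference, a roughly equivalent VM can be created from the Proxmox shell. Every value here (VM ID, cores, memory, disk size, storage name) is a placeholder rather than my exact configuration, so size things according to the OPNsense hardware requirements:

```shell
# Placeholder values throughout -- adjust to your hardware and storage.
qm create 100 \
  --name opnsense \
  --ostype other \
  --cores 2 \
  --memory 8192 \
  --scsihw virtio-scsi-pci \
  --scsi0 local-lvm:32 \
  --net0 virtio,bridge=vmbr0 \
  --net1 virtio,bridge=vmbr1 \
  --cdrom local:iso/OPNsense-26.1.2-dvd-amd64.iso \
  --onboot 1
```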

OPNsense Configuration

OPNsense is finally ready to get installed and configured. I booted up the VM to proceed with installation and encountered one final error when attempting to select the UFS installer option. You can get around this issue by selecting the auto-guided UFS option.

Once OPNsense was installed, I performed system updates, installed the QEMU guest agent service, and spoofed the WAN interface’s MAC address again before taking the old router offline. Now OPNsense is routing all the traffic on my network, so it’s time to (re)configure the VLANs. Here’s an overview of what needs to get done:

I started creating and assigning the VLAN interfaces, then saved and applied my changes:

You might be wondering why I didn’t create a management VLAN. I messed with configuration settings for a significant amount of time, but it appears that my switches and access points simply don’t support this out of the box. There is no way for me to restrict access to management interfaces based on VLAN alone. I decided that this is acceptable for my use case in a home network, but I would definitely be more concerned if this was a small business network (ironic, considering that’s the market my devices target). I think that my current best option here is to use the OPNsense firewall to lock management access down as much as possible based on device IPs.

I need to enable and configure each interface once the changes are applied. Here’s a summary of the settings for each VLAN interface:

| VLAN | Enabled | IPv4 Configuration | IPv4 Address |
| --- | --- | --- | --- |
| VLAN10_User | Yes | Static | 192.168.10.1/24 |
| VLAN20_Guest | Yes | Static | 192.168.20.1/24 |
| VLAN30_IoT | Yes | Static | 192.168.30.1/24 |
| VLAN40_Secure | Yes | Static | 192.168.40.1/24 |

Now that the VLANs themselves are set up I need to get DHCP working and create some firewall rules to pass traffic. Otherwise, none of the devices on my VLANs will be able to lease IPs automatically, reach the router’s DNS server, or communicate with things out on the Internet.

I added all of my VLANs to the Interface list in Dnsmasq general settings and then defined the DHCP ranges visible below:
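Under the hood, those GUI settings boil down to dnsmasq dhcp-range directives, one per subnet (dnsmasq matches each range to the interface whose address falls inside it). The ranges below are placeholders for illustration, not my actual values:

```
# Placeholder DHCP ranges -- one per VLAN subnet.
dhcp-range=192.168.10.100,192.168.10.199,255.255.255.0,12h   # VLAN10_User
dhcp-range=192.168.20.100,192.168.20.199,255.255.255.0,12h   # VLAN20_Guest
dhcp-range=192.168.30.100,192.168.30.199,255.255.255.0,12h   # VLAN30_IoT
dhcp-range=192.168.40.100,192.168.40.199,255.255.255.0,12h   # VLAN40_Secure
```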

In the OPNsense firewall I created a network alias called privnets and added the private address blocks from RFC1918. Now I can create some baseline rules to get started with passing traffic for my VLANs. It’s important to note that this will not be my final configuration—this is just to get started. Specific real-world examples of useful rules will come in a later post. I copied the VLAN10_User firewall rules visible below over to my other VLAN interfaces and modified them accordingly. VLAN40_Secure has an additional rule that allows it to access certain private network addresses (e.g., to configure network devices).
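For completeness, the privnets alias is just the three RFC 1918 blocks entered as a Network(s)-type alias:

```
10.0.0.0/8
172.16.0.0/12
192.168.0.0/16
```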

Switch Configuration

Next, I need to configure my “core” switch to handle the VLAN traffic. I need to tell my switch that it should be expecting VLAN tagged frames between OPNsense and the wireless access points. Here’s a screenshot of the management page and a table to summarize what’s going on:

| Port | For | Tagged | Untagged |
| --- | --- | --- | --- |
| 1 | Wi-Fi access point | 10, 20, 30, 40 | 1 |
| 2 | Wi-Fi access point | 10, 20, 30, 40 | 1 |
| 3 | N/A | N/A | 1 |
| 4 | N/A | N/A | 1 |
| 5 | Proxmox MGMT | N/A | 1 |
| 6 | OPNsense LAN | 10, 20, 30, 40 | 1 |
| 7 | N/A | N/A | 1 |
| 8 | N/A | N/A | 1 |

Access Point Configuration

Lastly, there are some finishing touches to configure in my Wi-Fi access points. This involves setting an appropriate VLAN for each SSID and physical network port to ensure that traffic is getting tagged like I expect.

Testing

Now it’s time to test the configurations to make sure that everything works the way I’ve set it up. Here’s the expected behavior:

When I deployed the second access point, I also added a network switch between it and the computer lab (not pictured in the diagram I made). This allows me to very easily hop to different VLANs for testing by simply plugging devices into different network ports, assuming that my switch is configured correctly. Here’s a summary of what’s happening on the switch:

| Port | For | Tagged | Untagged | PVID |
| --- | --- | --- | --- | --- |
| 1 | LAN | 10, 20, 30, 40 | 1 | 1 |
| 2 | VLAN 1 | N/A | 1 | 1 |
| 3 | VLAN 10 | N/A | 1, 10 | 10 |
| 4 | VLAN 20 | N/A | 1, 20 | 20 |
| 5 | VLAN 30 | N/A | 1, 30 | 30 |
| 6 | VLAN 40 | N/A | 1, 40 | 40 |
| 7 | VLAN 40 | N/A | 1, 40 | 40 |
| 8 | VLAN 40 | N/A | 1, 40 | 40 |

You can view some testing screenshots below:

This should give me a good base to build upon. Now I have network segmentation, much better logging and controls, and I can be reasonably confident that devices won’t be able to access management interfaces at all unless they find a way to hop onto the secure VLAN. From here, I can continue tweaking device configurations and firewall rules to lock the network down even more. This is always an ongoing process—expect a post at some point in the future when I have enough to report!

Next Post

The next part of this project will leave network configuration behind for a while to look at a simple NAS (network attached storage) deployment.