The networking infrastructure provides reliable communication, high bandwidth, and efficient traffic management for the Proxmox cluster, student containers, and core infrastructure devices. This design ensures seamless connectivity between containers, physical servers, and external networks while enforcing strict segmentation and security policies.
The network is divided into dedicated subnets for containers and management/infrastructure:
- Container Subnet (`10.42.0.0/16`):
  - Provides 65,534 usable host addresses for student VPS containers.
  - Each container is provisioned with a unique, statically assigned IP via automated processes.
- Management / Server Subnet (`10.10.0.0/24`):
  - Reserved for physical servers, switches, firewalls, and critical infrastructure devices.
  - Each server is assigned a consistent static IP for identification and management (see the inventory sketch below).
  - This subnet also carries management traffic (e.g., shared NFS storage, administrative services).
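This addressing plan lends itself to a declarative source of truth. Below is a minimal sketch of how the static assignments might be recorded in an Ansible inventory; the hostnames, variable names, addresses, and MACs shown are illustrative placeholders, not the live values:

```yaml
# inventory/hosts.yml -- illustrative layout, not the production inventory
all:
  children:
    proxmox_servers:          # VLAN10, management subnet
      hosts:
        pve1: { mgmt_ip: 10.10.0.11 }
        pve2: { mgmt_ip: 10.10.0.12 }
    containers:               # VLAN20, container subnet
      hosts:
        vps-0001: { container_ip: 10.42.0.10, mac: "aa:bb:cc:dd:ee:10" }
```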
¶ DNS and DHCP
- DNS:
  - Managed centrally by Fantasia, resolving hostnames for both containers and servers.
  - Hostnames follow a consistent format, e.g., `${hostname}.containers.netsoc.com`.
  - Automated updates (via Ansible) keep DNS records current as new servers or containers are provisioned.
- DHCP:
  - Initially used for dynamic IP allocation until full static IP assignment is implemented with dnsmasq.
  - The MikroTik routerboard runs separate DHCP servers for each VLAN:
    - One for VLAN10 (management, serving `10.10.0.0/24`).
    - One for VLAN20 (production/container, serving `10.42.0.0/16`).
  - MAC-based filtering ensures that only authorized devices receive IPs on the management network (see the dnsmasq sketch after this list).
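Once dnsmasq takes over static assignment, both the DNS records and the per-VLAN DHCP scopes can be rendered from the inventory above. The sketch below assumes dnsmasq reads drop-in files from `/etc/dnsmasq.d/` and that a `Reload dnsmasq` handler exists; the template name, ranges, and MACs are placeholders:

```yaml
- name: Render DNS records for containers (one host-record per inventory entry)
  ansible.builtin.template:
    src: containers-dns.conf.j2    # assumed template emitting lines like
                                   # host-record=vps-0001.containers.netsoc.com,10.42.0.10
    dest: /etc/dnsmasq.d/containers-dns.conf
  notify: Reload dnsmasq

- name: Deploy per-VLAN DHCP scopes with MAC-based reservations
  ansible.builtin.copy:
    dest: /etc/dnsmasq.d/dhcp-vlans.conf
    content: |
      # VLAN10 (management): serve 10.10.0.0/24 to known MACs only
      dhcp-range=set:mgmt,10.10.0.100,10.10.0.200,255.255.255.0,12h
      dhcp-host=aa:bb:cc:dd:ee:01,set:mgmt,10.10.0.11,pve1
      dhcp-ignore=tag:mgmt,tag:!known
      # VLAN20 (production): container scope
      dhcp-range=set:prod,10.42.0.10,10.42.255.250,255.255.0.0,12h
  notify: Reload dnsmasq
```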
¶ Fortigate Firewall
- WAN Interface:
  - Connects directly to the ISP (currently 1Gbps; future upgrades possible).
- LAN Interface (`192.168.100.0/24`):
  - Provides connectivity to internal devices, including the MikroTik routerboard.
  - The Fortigate’s LAN IP (e.g., `192.168.100.1`) serves as the primary gateway.
- VPN Interface (`192.168.101.0/24`):
  - Offers secure VPN access for remote users.
- Routing & Security:
  - Routes external and VPN traffic to internal subnets (`10.10.0.0/24` and `10.42.0.0/16`).
  - Enforces firewall policies to block unauthorized inbound traffic toward internal networks (a hedged example follows this list).
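As one illustration of that deny-by-default posture, a policy permitting only VPN clients into the management subnet could be pushed with the `fortinet.fortios` Ansible collection; the interface and address-object names below are assumptions, not the actual Fortigate configuration:

```yaml
- name: Permit VPN clients into the management subnet (implicit deny covers the rest)
  fortinet.fortios.fortios_firewall_policy:
    vdom: root
    state: present
    firewall_policy:
      policyid: 10
      name: vpn-to-mgmt
      srcintf: [{ name: "vpn" }]          # 192.168.101.0/24 side
      dstintf: [{ name: "lan" }]          # toward 10.10.0.0/24 via the routerboard
      srcaddr: [{ name: "vpn-clients" }]  # assumed address object
      dstaddr: [{ name: "mgmt-subnet" }]  # assumed address object
      action: accept
      schedule: always
      service: [{ name: "ALL" }]
```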
¶ MikroTik Routerboard
- WAN Port:
  - Receives a static IP (e.g., `192.168.100.2`) on the Fortigate LAN.
- LAN Ports and VLAN Tagging:
  - Uses aggregated links (dual or more physical ports) to interface with the Proxmox cluster.
  - Implements VLAN tagging to segregate traffic:
    - VLAN10 (Management): carries the `10.10.0.0/24` subnet, dedicated to physical servers and core services.
    - VLAN20 (Production/Container): carries the `10.42.0.0/16` subnet, assigned to student VPS containers.
- DHCP, DNS, and QoS:
  - Runs separate DHCP servers for each VLAN, ensuring proper IP assignment.
  - Automates DNS and DHCP updates via Ansible scripts.
  - Enforces QoS policies, prioritizing management (VLAN10) traffic over production (VLAN20) where needed.
- Routing and Firewall:
  - Forwards traffic between the Fortigate and the internal VLANs.
  - Blocks traffic into the VLANs by default, permitting only connections initiated from within the network (see the RouterOS sketch after this list).
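A hedged sketch of those rules, pushed to the routerboard with the `community.routeros` Ansible collection; the VLAN interface names and queue values are assumptions:

```yaml
- name: Apply inter-VLAN policy and QoS on the RB3011 (illustrative rules)
  community.routeros.command:
    commands:
      # Permit return traffic for connections initiated from inside
      - /ip firewall filter add chain=forward connection-state=established,related action=accept
      # Block production (VLAN20) from reaching management (VLAN10)
      - /ip firewall filter add chain=forward in-interface=vlan20 out-interface=vlan10 action=drop
      # Prioritize management traffic over production where links are contended
      - /queue simple add name=mgmt-priority target=10.10.0.0/24 priority=1/1
```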
¶ Proxmox Server Networking
Each Proxmox server in the cluster is equipped with dual 1Gbps network interfaces, configured as follows:
- Bonded Interfaces (Link Aggregation):
  - The two NICs are bonded to provide up to 2Gbps of bandwidth per server, ensuring smooth performance for NFS storage, container, and management traffic.
- VLAN-Aware Bridge (`vmbr0`):
  - Carries both VLAN10 and VLAN20 traffic over a single physical (or aggregated) link.
- Management Interface (VLAN10):
  - The server’s primary NIC (using its real hardware MAC) connects to VLAN10.
  - Receives a static IP from the `10.10.0.0/24` subnet.
- Production Interface (VLAN20):
  - Virtual interfaces or tagged sub-interfaces on the bridge provide connectivity for VPS containers.
  - These interfaces are assigned IPs from the `10.42.0.0/16` subnet.
- Virtualization Integration:
  - Each LXC container (or VPS) is connected via a virtual NIC attached to `vmbr0`.
  - Container traffic is isolated by VLAN tag (typically VLAN20), while the host keeps its management IP on VLAN10 (a configuration sketch follows this list).
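Concretely, the bond and VLAN-aware bridge might be expressed as follows; the NIC names, node address, and file path are placeholders, and this assumes the node sources `/etc/network/interfaces.d/`:

```yaml
- name: Configure bond0 + VLAN-aware vmbr0 on a Proxmox node (illustrative values)
  ansible.builtin.copy:
    dest: /etc/network/interfaces.d/bridge.cfg
    content: |
      auto bond0
      iface bond0 inet manual
          bond-slaves eno1 eno2        # the two 1Gbps NICs
          bond-mode 802.3ad            # LACP, ~2Gbps aggregate
          bond-miimon 100

      auto vmbr0
      iface vmbr0 inet manual
          bridge-ports bond0
          bridge-stp off
          bridge-fd 0
          bridge-vlan-aware yes
          bridge-vids 10 20            # carry VLAN10 and VLAN20

      auto vmbr0.10                    # host management IP on VLAN10
      iface vmbr0.10 inet static
          address 10.10.0.11/24
          gateway 10.10.0.1
```

A container then joins VLAN20 with a tagged virtual NIC, e.g. `pct set 101 -net0 name=eth0,bridge=vmbr0,tag=20,ip=10.42.0.10/16,gw=10.42.0.1` (the VMID and addresses are again placeholders).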
¶ Switches and Physical Infrastructure
- Current Setup:
  - A rack-mounted TP-Link TL-SG116, an unmanaged 16-port Gigabit switch, provides initial connectivity.
  - A MikroTik RB3011UiAS-RM routerboard offers SFP support, a built-in display, and advanced routing/VLAN capabilities.
- Planned Upgrades:
  - A future Pro 48 PoE switch (10Gbps-capable) to support increased traffic loads and expansion.
  - A patch panel for improved cable management and organized network infrastructure.
¶ Public Access and Service Exposure
¶ Reverse Proxy and Port Forwarding
- Reverse Proxy:
  - The container management dashboard is hosted at containers.netsoc.com.
  - Student containers are accessible via wildcard domains (e.g., `${hostname}.containers.netsoc.com`); see the proxy sketch after this list.
- Port Forwarding:
  - Public-facing ports can be dynamically requested and configured through the management dashboard.
  - The Fortigate and MikroTik work in tandem to securely map forwarded ports to the correct container or server services.
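The section does not prescribe a proxy implementation, so purely as an illustration: assuming nginx, a wildcard vhost could map each subdomain to the matching container by resolving the name against the central DNS; the resolver address is a placeholder for Fantasia’s internal DNS IP:

```yaml
- name: Wildcard vhost for ${hostname}.containers.netsoc.com (nginx assumed)
  ansible.builtin.copy:
    dest: /etc/nginx/conf.d/containers-wildcard.conf
    content: |
      server {
          listen 80;
          server_name ~^(?<container>[a-z0-9-]+)\.containers\.netsoc\.com$;
          location / {
              resolver 10.10.0.5;   # placeholder: Fantasia's internal DNS IP
              proxy_pass http://$container.containers.netsoc.com;
          }
      }
  notify: Reload nginx
```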
¶ Automation, Scalability, and Future Enhancements
- Automation with Ansible:
  - Critical network configurations (DHCP, DNS, VLAN assignments, static IP reservations) are automated via Ansible.
  - New servers and containers are provisioned with automated updates to the routerboard’s configuration, ensuring consistency and reducing manual intervention (see the playbook sketch after this list).
- Scalability:
  - The infrastructure is designed to scale horizontally; new servers can be added to the Proxmox cluster seamlessly.
  - Supports rapid growth in the number of containers, with plans to upgrade routerboard performance (e.g., 10G interfaces) and eventually decentralize DHCP/DNS services.
- Security and QoS:
  - Inbound traffic toward internal networks is blocked by default; only connections initiated from within are allowed back in.
  - Quality of Service rules prioritize management traffic (VLAN10) to maintain performance for critical services.
  - MAC-based filtering and related measures ensure that only authorized devices can access sensitive networks.
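Tying these pieces together, a top-level play might re-apply the full network state after each provisioning change; the host and role names below are hypothetical:

```yaml
# site-network.yml -- hypothetical orchestration, run after each provisioning change
- name: Refresh DNS and DHCP on the central server
  hosts: fantasia
  become: true
  roles:
    - dns_records        # see the DNS/DHCP sketch above
    - dhcp_reservations

- name: Re-apply firewall and QoS on the routerboard
  hosts: rb3011
  gather_facts: false
  roles:
    - routeros_firewall  # see the RouterOS sketch above
```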
This networking setup ensures efficient, secure, and scalable communication across the infrastructure, supporting the Proxmox cluster, student VPS containers, and essential administrative services.