Introduction: Achieving True Cross-Network Redundancy
In today’s distributed computing landscape, having your infrastructure in a single location is a recipe for disaster. Power outages, network failures, natural disasters – any of these can take your entire operation offline. But what if every server in your primary data center had an identical, synchronized twin in a secondary location? What if both your 1TB SSDs (operating systems and applications) and your 8TB HDDs (data storage) were replicated in real-time across two separate networks?
In this comprehensive guide, I’ll walk you through building a 10-server cluster using a WireGuard hub-and-spoke VPN architecture for secure networking and GlusterFS for distributed, self-healing storage. We’ll cover everything from cryptographic key generation to final testing, with a special focus on the simplified hub-and-spoke model that makes management practical at scale.
Architecture Overview: Visualizing the Hub-and-Spoke Design
Let me start by showing you what we’re building – a clear, manageable architecture that scales well:
                          HUB-AND-SPOKE CLUSTER ARCHITECTURE
   =========================================================================

   NETWORK 1 (Primary Site)                  NETWORK 2 (Secondary Site)

   ┌─────────────────┐                       ┌─────────────────┐
   │   HUB SERVER    │                       │   HUB SERVER    │
   │    Server 1     │◄──WireGuard Tunnel───►│    Server 6     │
   │   10.0.0.1/24   │                       │   10.0.0.6/24   │
   │   Port: 51820   │                       │   Port: 51820   │
   └────────┬────────┘                       └────────┬────────┘
            │                                         │
     ┌──────┴──────┐                           ┌──────┴──────┐
     │   SPOKES    │                           │   SPOKES    │
     │   Servers   │                           │   Servers   │
     │   2,3,4,5   │                           │  7,8,9,10   │
     └─────────────┘                           └─────────────┘

   GLUSTERFS STORAGE TOPOLOGY:
     SSD Volume: Replica 2 across networks
       [S1]──[S6]  [S2]──[S7]  [S3]──[S8] ...   (each file exists in BOTH networks)
     HDD Volume: Dispersed across all servers
       [S1][S2][S3][S4][S5][S6]...              (data striped with redundancy)
Why Hub-and-Spoke?
- Simplified Management: 2 hubs to configure vs. 45 peer connections in a full mesh (see the quick tunnel-count math after this list)
- Clear Traffic Flow: All cross-network traffic goes through hub-to-hub tunnel
- Easier Troubleshooting: Problems are isolated to specific hub-spoke relationships
- Scalable: Add new spokes without reconfiguring existing nodes
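If you want to verify that scaling claim, the arithmetic is simple: a full mesh needs n(n-1)/2 tunnels, while hub-and-spoke needs one link per spoke plus the hub-to-hub link. A small sketch, assuming our 10 nodes split into 2 hubs and 8 spokes:

# Back-of-the-envelope tunnel count for 10 nodes
NODES=10
FULL_MESH=$(( NODES * (NODES - 1) / 2 ))   # 45 peer connections in a full mesh
HUB_SPOKE=$(( (NODES - 2) + 1 ))           # 8 spoke-to-hub links + 1 hub-to-hub link = 9
echo "Full mesh: ${FULL_MESH} tunnels vs hub-and-spoke: ${HUB_SPOKE} tunnels"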
Phase 1: WireGuard Key Generation – The Cryptographic Foundation
Understanding WireGuard Cryptography
Before we configure anything, we need to understand WireGuard’s security model. WireGuard uses:
- Curve25519 for key exchange (elliptic curve cryptography)
- ChaCha20 for symmetric encryption
- Poly1305 for message authentication
- BLAKE2s for hashing
Each server needs two keys:
- Private Key: A 256-bit random number – NEVER SHARE THIS!
- Public Key: Derived from the private key – SHARE WITH OTHER SERVERS
Step 1: Generate All Server Keys
Location: Run these commands on each of the 10 servers individually
# On EACH server (Servers 1-10), run:
sudo apt update
sudo apt install wireguard-tools -y

# Generate the private key (keep this secret!)
wg genkey | sudo tee /etc/wireguard/private.key
sudo chmod 600 /etc/wireguard/private.key   # Only root can read

# Derive the public key from the private key
sudo cat /etc/wireguard/private.key | wg pubkey | sudo tee /etc/wireguard/public.key

# View the public key (you'll need this for configuration)
echo "Public Key for this server:"
sudo cat /etc/wireguard/public.key
echo ""
echo "Save this public key - you'll need it for other servers' configurations"
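Before moving on, it is worth a quick sanity check of the generated keys. This isn't part of the original procedure, just a small verification sketch: both keys should be 44-character base64 strings, and re-deriving the public key from the private key should match what was saved.

# Optional key sanity check (run on each server)
PRIV=$(sudo cat /etc/wireguard/private.key)
PUB=$(sudo cat /etc/wireguard/public.key)
[ ${#PRIV} -eq 44 ] && [ ${#PUB} -eq 44 ] && echo "Key lengths OK" || echo "Unexpected key length"
[ "$(echo "$PRIV" | wg pubkey)" = "$PUB" ] && echo "Public key matches private key" || echo "Key mismatch - regenerate!"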
Step 2: Create a Key Management Spreadsheet
Since we have 10 servers, I recommend creating a spreadsheet to track all keys:
| Server | Role | IP Address | Public Key (truncated) | Notes |
|---|---|---|---|---|
| Server 1 | Hub 1 | 10.0.0.1 | uJ6L...Q3M= | Network 1 Hub |
| Server 2 | Spoke | 10.0.0.2 | vK7M...R4N= | Network 1 Spoke |
| Server 3 | Spoke | 10.0.0.3 | wL8N...S5O= | Network 1 Spoke |
| Server 4 | Spoke | 10.0.0.4 | xM9O...T6P= | Network 1 Spoke |
| Server 5 | Spoke | 10.0.0.5 | yN0P...U7Q= | Network 1 Spoke |
| Server 6 | Hub 2 | 10.0.0.6 | zO1Q...V8R= | Network 2 Hub |
| Server 7 | Spoke | 10.0.0.7 | aP2R...W9S= | Network 2 Spoke |
| Server 8 | Spoke | 10.0.0.8 | bQ3S...X0T= | Network 2 Spoke |
| Server 9 | Spoke | 10.0.0.9 | cR4T...Y1U= | Network 2 Spoke |
| Server 10 | Spoke | 10.0.0.10 | dS5U...Z2V= | Network 2 Spoke |
Pro Tip: For added security, consider using a password manager or dedicated secrets management tool instead of a plain spreadsheet.
Step 3: Automated Key Generation Script
Location: /usr/local/bin/generate-all-keys.sh (run on a secure management workstation)
#!/bin/bash
# Script: /usr/local/bin/generate-all-keys.sh
# Description: Generates WireGuard keys for all servers via SSH
# WARNING: Run this from a secure management workstation
# Usage: ./generate-all-keys.sh
echo "WIREGUARD KEY GENERATION FOR HUB-AND-SPOKE CLUSTER"
echo "=================================================="
echo "This script will generate keys on all 10 servers"
echo ""
# Define all servers (update with your actual IPs/hostnames)
SERVERS=(
"[email protected]" # Hub 1
"[email protected]" # Spoke 1
"[email protected]" # Spoke 2
"[email protected]" # Spoke 3
"[email protected]" # Spoke 4
"[email protected]" # Hub 2
"[email protected]" # Spoke 5
"[email protected]" # Spoke 6
"[email protected]" # Spoke 7
"[email protected]" # Spoke 8
)
# Create key directory
mkdir -p ~/wireguard-keys/$(date +%Y%m%d)
cd ~/wireguard-keys/$(date +%Y%m%d)
echo "Generating keys for ${#SERVERS[@]} servers..."
echo ""
for SERVER in "${SERVERS[@]}"; do
SERVER_NAME=$(echo $SERVER | cut -d'@' -f2 | cut -d'.' -f1)
echo "Processing ${SERVER_NAME}..."
# SSH to server and generate keys
ssh -o StrictHostKeyChecking=no "$SERVER" > "${SERVER_NAME}-keys.txt" 2>&1 << 'EOF'
# Install WireGuard if not present
if ! command -v wg > /dev/null; then
apt update && apt install -y wireguard-tools
fi
# Create directory
mkdir -p /etc/wireguard
# Generate private key
PRIVATE_KEY=$(wg genkey)
echo "$PRIVATE_KEY" > /etc/wireguard/private.key
chmod 600 /etc/wireguard/private.key
# Generate public key
PUBLIC_KEY=$(echo "$PRIVATE_KEY" | wg pubkey)
echo "$PUBLIC_KEY" > /etc/wireguard/public.key
# Output for collection - public key only; the private key never leaves this server
echo "SERVER: $(hostname)"
echo "PUBLIC_KEY: $PUBLIC_KEY"
echo "---"
EOF
echo " Keys generated and saved to ${SERVER_NAME}-keys.txt"
done
echo ""
echo "KEY GENERATION COMPLETE"
echo "======================"
echo "All keys saved in: ~/wireguard-keys/$(date +%Y%m%d)/"
echo ""
echo "NEXT STEPS:"
echo "1. Review each *-keys.txt file"
echo "2. Create a master key spreadsheet"
echo "3. Proceed with WireGuard configuration"
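To feed the spreadsheet in Step 2, you can pull the hostname and public key out of each per-server output file. A small helper along these lines works (it assumes the SERVER:/PUBLIC_KEY: labels emitted by the script above and that you run it from the same dated key directory):

#!/bin/bash
# Helper sketch: build a server,public_key CSV from the *-keys.txt files
cd ~/wireguard-keys/$(date +%Y%m%d) || exit 1
echo "server,public_key" > public-keys.csv
for FILE in *-keys.txt; do
    HOST=$(grep '^SERVER:' "$FILE" | awk '{print $2}')
    PUB=$(grep '^PUBLIC_KEY:' "$FILE" | awk '{print $2}')
    echo "${HOST},${PUB}" >> public-keys.csv
done
column -s, -t public-keys.csv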
Phase 2: WireGuard Hub-and-Spoke Configuration
Now that we have our cryptographic keys, let’s configure the WireGuard VPN using the hub-and-spoke model.
Step 1: Configure Hub Server 1 (Network 1 Hub)
Location: /etc/wireguard/wg0.conf on Server 1
First, let’s create a configuration script that uses our generated keys:
#!/bin/bash
# Script: /usr/local/bin/configure-hub1.sh
# Description: Configures Hub Server 1 with all necessary keys
# Usage: Run on Server 1 after collecting all public keys
# Prerequisite: All public keys must be collected in /etc/wireguard/peers/
echo "CONFIGURING HUB SERVER 1 (10.0.0.1)"
echo "==================================="
# Load Hub 1's private key (generated earlier)
HUB1_PRIVATE_KEY=$(cat /etc/wireguard/private.key)
# Load public keys (you should have these from Phase 1)
# Example - replace with your actual keys:
SERVER2_PUBKEY="vK7M...R4N=" # Server 2's public key
SERVER3_PUBKEY="wL8N...S5O=" # Server 3's public key
SERVER4_PUBKEY="xM9O...T6P=" # Server 4's public key
SERVER5_PUBKEY="yN0P...U7Q=" # Server 5's public key
SERVER6_PUBKEY="zO1Q...V8R=" # Server 6's public key (Hub 2)
# Network configuration
HUB1_IP="10.0.0.1"
HUB1_PORT="51820"
NETWORK2_PUBLIC_IP="198.51.100.1" # CHANGE THIS to your actual IP
# Create the WireGuard configuration
cat > /etc/wireguard/wg0.conf << EOF
# ==================== HUB SERVER 1 CONFIGURATION ====================
# Network 1 Hub - Server 1
# Generated: $(date)
[Interface]
# Our WireGuard address
Address = ${HUB1_IP}/24
# Listening port (open this on firewall)
ListenPort = ${HUB1_PORT}
# Private key (generated in Phase 1)
PrivateKey = ${HUB1_PRIVATE_KEY}
# Enable IP forwarding (critical for hub functionality)
PostUp = sysctl -w net.ipv4.ip_forward=1
PostUp = sysctl -w net.ipv6.conf.all.forwarding=1
# Firewall rules for routing
PostUp = iptables -A FORWARD -i wg0 -j ACCEPT
PostUp = iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D FORWARD -i wg0 -j ACCEPT
PostDown = iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
SaveConfig = true
# ==================== NETWORK 1 SPOKE SERVERS ====================
# Spoke Server 2 (10.0.0.2)
[Peer]
PublicKey = ${SERVER2_PUBKEY}
AllowedIPs = 10.0.0.2/32
# Note: No Endpoint - spoke initiates connection to hub
# Spoke Server 3 (10.0.0.3)
[Peer]
PublicKey = ${SERVER3_PUBKEY}
AllowedIPs = 10.0.0.3/32
# Spoke Server 4 (10.0.0.4)
[Peer]
PublicKey = ${SERVER4_PUBKEY}
AllowedIPs = 10.0.0.4/32
# Spoke Server 5 (10.0.0.5)
[Peer]
PublicKey = ${SERVER5_PUBKEY}
AllowedIPs = 10.0.0.5/32
# ==================== NETWORK 2 HUB (CROSS-NETWORK) ====================
# Hub Server 6 (10.0.0.6) - Network 2 Hub
[Peer]
PublicKey = ${SERVER6_PUBKEY}
# Hub 6 and all its spokes
AllowedIPs = 10.0.0.6/32, 10.0.0.7/32, 10.0.0.8/32, 10.0.0.9/32, 10.0.0.10/32
# Public endpoint of Network 2 hub
Endpoint = ${NETWORK2_PUBLIC_IP}:51820
# Keep connection alive across NAT/firewalls
PersistentKeepalive = 25
EOF
echo "Configuration written to /etc/wireguard/wg0.conf"
echo ""
echo "IMPORTANT: Replace the public key variables with your actual keys!"
echo ""
echo "To start WireGuard:"
echo " sudo systemctl enable wg-quick@wg0"
echo " sudo systemctl start wg-quick@wg0"
Step 2: Configure Spoke Servers (Example: Server 2)
Location: /etc/wireguard/wg0.conf on each spoke server
#!/bin/bash
# Script: /usr/local/bin/configure-spoke.sh
# Description: Configures a spoke server
# Usage: ./configure-spoke.sh <spoke-number> <hub-public-key>
# Example: ./configure-spoke.sh 2 "uJ6L...Q3M="
SPOKE_NUM=$1
HUB_PUBKEY=$2
# Network configuration
SPOKE_IP="10.0.0.${SPOKE_NUM}"
SPOKE_PORT=$((51820 + SPOKE_NUM))  # Each spoke gets a unique port (51820 + spoke number)
# Determine which hub to connect to
if [ $SPOKE_NUM -le 5 ]; then
# Network 1 spokes connect to Hub 1
HUB_PUBLIC_IP="203.0.113.1" # CHANGE THIS
HUB_WG_IP="10.0.0.1"
HUB_PORT="51820"
else
# Network 2 spokes connect to Hub 2
HUB_PUBLIC_IP="198.51.100.1" # CHANGE THIS
HUB_WG_IP="10.0.0.6"
HUB_PORT="51820"
fi
# Load this spoke's private key (generated earlier)
SPOKE_PRIVATE_KEY=$(cat /etc/wireguard/private.key)
# Create spoke configuration
cat > /etc/wireguard/wg0.conf << EOF
# ==================== SPOKE SERVER ${SPOKE_NUM} CONFIGURATION ====================
# Generated: $(date)
[Interface]
Address = ${SPOKE_IP}/24
ListenPort = ${SPOKE_PORT}
PrivateKey = ${SPOKE_PRIVATE_KEY}
# No manual routes needed: wg-quick already installs the 10.0.0.0/24 route from
# Address=, and the AllowedIPs below steer all cluster traffic through the hub.
SaveConfig = true
# ==================== HUB CONNECTION ====================
# Only one peer: the local hub
[Peer]
PublicKey = ${HUB_PUBKEY}
# Send (and accept) traffic for the whole cluster range via the hub; with only the
# hub's /32 here, other spokes and the remote network would be unreachable
AllowedIPs = 10.0.0.0/24
# Hub's public endpoint
Endpoint = ${HUB_PUBLIC_IP}:${HUB_PORT}
# Keep connection alive
PersistentKeepalive = 25
EOF
echo "Spoke ${SPOKE_NUM} configuration complete"
echo "IP: ${SPOKE_IP}"
echo "Connects to hub at: ${HUB_PUBLIC_IP}:${HUB_PORT}"
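To roll this out, copy the script to every spoke and call it with the spoke number and the matching hub public key. A sketch of the loop from the management workstation (the hostnames and keys below are placeholders, substitute your own):

# Placeholder hostnames and keys - replace with your real values
HUB1_PUBKEY="uJ6L...Q3M="    # Server 1 public key
HUB6_PUBKEY="zO1Q...V8R="    # Server 6 public key
NET1_SPOKES=( [2]="root@spoke2.example" [3]="root@spoke3.example" [4]="root@spoke4.example" [5]="root@spoke5.example" )
NET2_SPOKES=( [7]="root@spoke7.example" [8]="root@spoke8.example" [9]="root@spoke9.example" [10]="root@spoke10.example" )

for N in "${!NET1_SPOKES[@]}"; do
    scp /usr/local/bin/configure-spoke.sh "${NET1_SPOKES[$N]}:/usr/local/bin/"
    ssh "${NET1_SPOKES[$N]}" "/usr/local/bin/configure-spoke.sh ${N} '${HUB1_PUBKEY}'"
done
for N in "${!NET2_SPOKES[@]}"; do
    scp /usr/local/bin/configure-spoke.sh "${NET2_SPOKES[$N]}:/usr/local/bin/"
    ssh "${NET2_SPOKES[$N]}" "/usr/local/bin/configure-spoke.sh ${N} '${HUB6_PUBKEY}'"
done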
Step 3: Configure Hub Server 6 (Network 2 Hub)
Location: /etc/wireguard/wg0.conf on Server 6
#!/bin/bash
# Script: /usr/local/bin/configure-hub6.sh
# Description: Configures Hub Server 6
# Usage: Run on Server 6 after collecting all public keys
echo "CONFIGURING HUB SERVER 6 (10.0.0.6)"
echo "==================================="
# Load Hub 6's private key
HUB6_PRIVATE_KEY=$(cat /etc/wireguard/private.key)
# Load public keys (replace with your actual keys)
SERVER1_PUBKEY="uJ6L...Q3M=" # Server 1's public key
SERVER7_PUBKEY="aP2R...W9S=" # Server 7's public key
SERVER8_PUBKEY="bQ3S...X0T=" # Server 8's public key
SERVER9_PUBKEY="cR4T...Y1U=" # Server 9's public key
SERVER10_PUBKEY="dS5U...Z2V=" # Server 10's public key
# Network configuration
HUB6_IP="10.0.0.6"
HUB6_PORT="51820"
NETWORK1_PUBLIC_IP="203.0.113.1" # CHANGE THIS
cat > /etc/wireguard/wg0.conf << EOF
# ==================== HUB SERVER 6 CONFIGURATION ====================
[Interface]
Address = ${HUB6_IP}/24
ListenPort = ${HUB6_PORT}
PrivateKey = ${HUB6_PRIVATE_KEY}
PostUp = sysctl -w net.ipv4.ip_forward=1
PostUp = sysctl -w net.ipv6.conf.all.forwarding=1
PostUp = iptables -A FORWARD -i wg0 -j ACCEPT
PostUp = iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D FORWARD -i wg0 -j ACCEPT
PostDown = iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
SaveConfig = true
# ==================== NETWORK 2 SPOKE SERVERS ====================
[Peer]
PublicKey = ${SERVER7_PUBKEY}
AllowedIPs = 10.0.0.7/32
[Peer]
PublicKey = ${SERVER8_PUBKEY}
AllowedIPs = 10.0.0.8/32
[Peer]
PublicKey = ${SERVER9_PUBKEY}
AllowedIPs = 10.0.0.9/32
[Peer]
PublicKey = ${SERVER10_PUBKEY}
AllowedIPs = 10.0.0.10/32
# ==================== NETWORK 1 HUB (CROSS-NETWORK) ====================
[Peer]
PublicKey = ${SERVER1_PUBKEY}
AllowedIPs = 10.0.0.1/32, 10.0.0.2/32, 10.0.0.3/32, 10.0.0.4/32, 10.0.0.5/32
Endpoint = ${NETWORK1_PUBLIC_IP}:51820
PersistentKeepalive = 25
EOF
echo "Hub 6 configuration complete"
Step 4: Firewall Configuration for Hubs
Location: /usr/local/bin/setup-hub-firewall.sh on both hubs
#!/bin/bash
# Script: /usr/local/bin/setup-hub-firewall.sh
# Description: Configures firewall for hub servers
# Usage: Run on both Hub 1 and Hub 6
echo "CONFIGURING FIREWALL FOR HUB SERVER"
echo "==================================="
# Install UFW if not present
if ! command -v ufw > /dev/null; then
apt install -y ufw
fi
# Reset to default
ufw --force reset
# Default policies
ufw default deny incoming
ufw default allow outgoing
# Allow SSH
ufw allow 22/tcp comment 'SSH'
# Allow WireGuard (critical!)
ufw allow 51820/udp comment 'WireGuard Hub'
# Allow GlusterFS ports
ufw allow 24007:24008/tcp comment 'GlusterFS Management'
ufw allow 49152:49251/tcp comment 'GlusterFS Bricks'
# Enable the firewall
ufw --force enable
echo ""
echo "Firewall configured:"
ufw status verbose
echo ""
echo "IMPORTANT: Ensure port 51820/udp is also forwarded on your network router!"
echo "Router should forward 51820/udp to this server's local IP address"
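The spokes need a smaller rule set. Since each spoke dials out to its hub (and PersistentKeepalive keeps the path open), only SSH and the GlusterFS ports strictly have to be reachable; opening the spoke's own WireGuard port is optional but harmless. A minimal sketch, assuming UFW and the 51820+N port scheme from configure-spoke.sh:

#!/bin/bash
# Sketch: minimal spoke firewall - usage: ./setup-spoke-firewall.sh <spoke-number>
SPOKE_NUM=$1
ufw default deny incoming
ufw default allow outgoing
ufw allow 22/tcp comment 'SSH'
ufw allow $((51820 + SPOKE_NUM))/udp comment 'WireGuard (optional on spokes)'
ufw allow 24007:24008/tcp comment 'GlusterFS Management'
ufw allow 49152:49251/tcp comment 'GlusterFS Bricks'
ufw --force enable
ufw status verbose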
Phase 3: Testing WireGuard Connectivity
Step 1: Start WireGuard on All Servers
Location: Run on each server after configuration
# Enable and start WireGuard
sudo systemctl enable wg-quick@wg0
sudo systemctl start wg-quick@wg0

# Check status
sudo wg show
Step 2: Comprehensive Connectivity Test
Location: /usr/local/bin/test-wireguard-connectivity.sh (run on Hub 1)
#!/bin/bash
# Script: /usr/local/bin/test-wireguard-connectivity.sh
# Description: Tests WireGuard connectivity in hub-and-spoke topology
# Usage: Run on Hub 1 (Server 1)
echo "WIREGUARD HUB-AND-SPOKE CONNECTIVITY TEST"
echo "========================================="
echo "Testing from Hub 1 (10.0.0.1)"
echo "Date: $(date)"
echo ""
# Test 1: Local WireGuard interface
echo "1. LOCAL WIREGUARD INTERFACE"
echo "---------------------------"
if ip link show wg0 > /dev/null 2>&1; then
echo "✓ wg0 interface exists"
echo " IP Address: $(ip -4 addr show wg0 | grep -oP '(?<=inet\s)\d+(\.\d+){3}')"
else
echo "✗ wg0 interface missing"
fi
echo ""
# Test 2: Hub-to-Hub connection
echo "2. HUB-TO-HUB CONNECTION (CROSS-NETWORK)"
echo "---------------------------------------"
echo "Pinging Hub 2 (10.0.0.6)..."
if ping -c 4 -W 2 10.0.0.6 > /dev/null 2>&1; then
LATENCY=$(ping -c 1 10.0.0.6 | grep "time=" | cut -d'=' -f4)
echo "✓ Success! Latency: $LATENCY"
# Test WireGuard handshake
WG_HANDSHAKE=$(sudo wg show | grep "peer:" -A2 | grep "latest handshake")
if [ -n "$WG_HANDSHAKE" ]; then
echo " WireGuard handshake: $WG_HANDSHAKE"
else
echo " ⚠ No recent WireGuard handshake"
fi
else
echo "✗ Failed to reach Hub 2"
echo " Troubleshooting steps:"
echo " 1. Check Hub 2's WireGuard status"
echo " 2. Verify firewall rules"
echo " 3. Check hub-to-hub public IP configuration"
fi
echo ""
# Test 3: Local spokes connectivity
echo "3. LOCAL SPOKE CONNECTIVITY (Network 1)"
echo "--------------------------------------"
LOCAL_SPOKES=("10.0.0.2" "10.0.0.3" "10.0.0.4" "10.0.0.5")
for SPOKE in "${LOCAL_SPOKES[@]}"; do
echo -n " $SPOKE: "
if ping -c 2 -W 1 "$SPOKE" > /dev/null 2>&1; then
echo "✓ Online"
else
echo "✗ Offline"
fi
done
echo ""
# Test 4: Remote spokes through Hub 2
echo "4. REMOTE SPOKE CONNECTIVITY (Through Hub 2)"
echo "-------------------------------------------"
REMOTE_SPOKES=("10.0.0.7" "10.0.0.8" "10.0.0.9" "10.0.0.10")
for SPOKE in "${REMOTE_SPOKES[@]}"; do
echo -n " $SPOKE: "
if ping -c 2 -W 2 "$SPOKE" > /dev/null 2>&1; then
LATENCY=$(ping -c 1 "$SPOKE" | grep "time=" | cut -d'=' -f4 | cut -d' ' -f1)
echo "✓ Online ($LATENCY)"
else
echo "✗ Offline"
fi
done
echo ""
# Test 5: WireGuard status summary
echo "5. WIREGUARD STATUS SUMMARY"
echo "--------------------------"
sudo wg show
echo ""
# Test 6: Routing table
echo "6. ROUTING TABLE"
echo "---------------"
echo "Routes to 10.0.0.0/24 network:"
ip route show | grep "10.0.0" || echo " No routes found"
echo ""
echo "CONNECTIVITY TEST COMPLETE"
echo "=========================="
echo "If all tests pass, your WireGuard hub-and-spoke network is working!"
Phase 4: Storage Preparation for GlusterFS
Step 1: Prepare Disks on All Servers
Location: /usr/local/bin/prepare-disks.sh on each server
#!/bin/bash
# Script: /usr/local/bin/prepare-disks.sh
# Description: Prepares SSD and HDD for GlusterFS
# WARNING: This formats disks! Backup data first!

echo "PREPARING DISKS FOR GLUSTERFS"
echo "============================="

# SSD Preparation (/dev/sda - 1TB)
echo "1. Preparing 1TB SSD (/dev/sda)..."
parted /dev/sda --script mklabel gpt
parted /dev/sda --script mkpart primary 0% 100%
mkfs.xfs -f /dev/sda1
mkdir -p /gluster/ssd-brick
echo "/dev/sda1 /gluster/ssd-brick xfs defaults 0 0" >> /etc/fstab

# HDD Preparation (/dev/md0 - 8TB RAID)
echo "2. Preparing 8TB RAID (/dev/md0)..."
mkfs.xfs -f /dev/md0
mkdir -p /gluster/hdd-brick
echo "/dev/md0 /gluster/hdd-brick xfs defaults 0 0" >> /etc/fstab

# Mount all
mount -a

# Create brick directories
mkdir -p /gluster/ssd-brick/brick1
mkdir -p /gluster/hdd-brick/brick1

echo "Disk preparation complete"
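Before installing GlusterFS, it's worth confirming that both bricks actually came up as XFS mounts. A quick verification sketch (my addition, not part of the original steps):

# Verify both brick filesystems are mounted as XFS
for MOUNT in /gluster/ssd-brick /gluster/hdd-brick; do
    if findmnt -n -o FSTYPE "$MOUNT" | grep -q xfs; then
        echo "OK: $MOUNT is mounted (xfs)"
    else
        echo "MISSING: $MOUNT is not mounted - recheck /etc/fstab and run mount -a"
    fi
done
df -h /gluster/ssd-brick /gluster/hdd-brick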
Step 2: Install GlusterFS
Location: /usr/local/bin/install-glusterfs.sh on each server
#!/bin/bash
# Script: /usr/local/bin/install-glusterfs.sh
# Description: Installs GlusterFS on a server

echo "INSTALLING GLUSTERFS"
echo "===================="

# Add repository
add-apt-repository ppa:gluster/glusterfs-10 -y
apt update

# Install
apt install -y glusterfs-server glusterfs-client

# Start and enable
systemctl start glusterd
systemctl enable glusterd

echo "GlusterFS installed and started"
Phase 5: GlusterFS Volume Configuration
Step 1: Create Trusted Pool
Location: Run on Hub 1 (Server 1)
#!/bin/bash
# Script: /usr/local/bin/create-gluster-pool.sh
# Description: Creates GlusterFS trusted pool
echo "CREATING GLUSTERFS TRUSTED POOL"
echo "==============================="
# Probe all other servers
for i in 2 3 4 5 6 7 8 9 10; do
gluster peer probe "10.0.0.$i"
done
# Verify
gluster pool list
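Peer probes can take a few seconds to settle. Before creating volumes, make sure all nine peers report Connected; a small wait loop (my own addition) helps when scripting this:

# Wait until all 9 probed peers show "Peer in Cluster (Connected)"
EXPECTED=9
for ATTEMPT in $(seq 1 12); do
    CONNECTED=$(gluster peer status | grep -c "Peer in Cluster (Connected)")
    echo "Attempt ${ATTEMPT}: ${CONNECTED}/${EXPECTED} peers connected"
    [ "$CONNECTED" -eq "$EXPECTED" ] && break
    sleep 5
done
gluster peer status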
Step 2: Create SSD Volume with Cross-Network Replication
Location: Run on Hub 1
#!/bin/bash
# Script: /usr/local/bin/create-ssd-volume.sh
# Description: Creates SSD volume with replica 2 across networks
echo "CREATING SSD VOLUME"
echo "==================="
# Create volume with cross-network replication
gluster volume create ssd_volume replica 2 \
10.0.0.1:/gluster/ssd-brick/brick1 \
10.0.0.6:/gluster/ssd-brick/brick1 \
10.0.0.2:/gluster/ssd-brick/brick1 \
10.0.0.7:/gluster/ssd-brick/brick1 \
10.0.0.3:/gluster/ssd-brick/brick1 \
10.0.0.8:/gluster/ssd-brick/brick1 \
10.0.0.4:/gluster/ssd-brick/brick1 \
10.0.0.9:/gluster/ssd-brick/brick1 \
10.0.0.5:/gluster/ssd-brick/brick1 \
10.0.0.10:/gluster/ssd-brick/brick1
# Configure
gluster volume set ssd_volume network.ping-timeout 20
gluster volume set ssd_volume performance.cache-size 2GB
# Start volume
gluster volume start ssd_volume
echo "SSD volume created"
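Because the brick order determines replica pairing, double-check that every pair spans both networks before you load data: with replica 2, consecutive bricks form a pair, which is why the list above deliberately alternates Network 1 and Network 2 servers. Note that recent GlusterFS releases warn when you create a plain two-way replica because of split-brain risk, so you may have to confirm the prompt. A quick check:

# Confirm the cross-network replica pairing (pairs are brick1+brick2, brick3+brick4, ...)
gluster volume info ssd_volume | grep -E "^Brick[0-9]+:"
# Expected pairs: 10.0.0.1+10.0.0.6, 10.0.0.2+10.0.0.7, 10.0.0.3+10.0.0.8,
#                 10.0.0.4+10.0.0.9, 10.0.0.5+10.0.0.10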
Step 3: Create HDD Volume with Dispersed Storage
Location: Run on Hub 1
#!/bin/bash
# Script: /usr/local/bin/create-hdd-volume.sh
# Description: Creates HDD volume with dispersal
echo "CREATING HDD VOLUME"
echo "==================="
# Create dispersed volume
gluster volume create hdd_volume disperse 10 redundancy 2 \
10.0.0.1:/gluster/hdd-brick/brick1 \
10.0.0.2:/gluster/hdd-brick/brick1 \
10.0.0.3:/gluster/hdd-brick/brick1 \
10.0.0.4:/gluster/hdd-brick/brick1 \
10.0.0.5:/gluster/hdd-brick/brick1 \
10.0.0.6:/gluster/hdd-brick/brick1 \
10.0.0.7:/gluster/hdd-brick/brick1 \
10.0.0.8:/gluster/hdd-brick/brick1 \
10.0.0.9:/gluster/hdd-brick/brick1 \
10.0.0.10:/gluster/hdd-brick/brick1
# Configure
gluster volume set hdd_volume performance.cache-size 1GB
gluster volume set hdd_volume performance.read-ahead on
# Start volume
gluster volume start hdd_volume
echo "HDD volume created"
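A quick way to reason about this volume's usable space: with 10 bricks and redundancy 2, each file is encoded across 8 data bricks plus 2 redundancy bricks, so usable capacity is roughly 8/10 of raw and the volume tolerates two simultaneous brick failures. The arithmetic, as a sketch:

# Back-of-the-envelope capacity for the dispersed (8+2) HDD volume
BRICKS=10; REDUNDANCY=2; BRICK_TB=8
DATA_BRICKS=$(( BRICKS - REDUNDANCY ))       # 8 data bricks
USABLE_TB=$(( DATA_BRICKS * BRICK_TB ))      # 64 TB usable of 80 TB raw
echo "Usable: ${USABLE_TB} TB of $(( BRICKS * BRICK_TB )) TB raw; survives ${REDUNDANCY} brick failures"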
Step 4: Mount Volumes on All Servers
Location: Run on each server
#!/bin/bash
# Script: /usr/local/bin/mount-gluster-volumes.sh
# Description: Mounts GlusterFS volumes

echo "MOUNTING GLUSTERFS VOLUMES"
echo "=========================="

# Create mount points
mkdir -p /mnt/gluster-ssd
mkdir -p /mnt/gluster-hdd

# Add to fstab
# Tip: consider appending backup-volfile-servers=10.0.0.6 to the mount options so
# clients can still fetch the volume layout if Hub 1 (10.0.0.1) is down at mount time.
echo "10.0.0.1:/ssd_volume /mnt/gluster-ssd glusterfs defaults,_netdev 0 0" >> /etc/fstab
echo "10.0.0.1:/hdd_volume /mnt/gluster-hdd glusterfs defaults,_netdev 0 0" >> /etc/fstab

# Mount
mount -a

echo "Volumes mounted"
Phase 6: Testing and Verification
Step 1: Comprehensive Cluster Test
Location: /usr/local/bin/test-complete-cluster.sh (run on any server)
#!/bin/bash
# Script: /usr/local/bin/test-complete-cluster.sh
# Description: Tests complete cluster functionality
echo "COMPLETE CLUSTER TEST"
echo "===================="
echo "Date: $(date)"
echo "Host: $(hostname)"
echo ""
# Test 1: WireGuard connectivity
echo "1. WIREGUARD CONNECTIVITY"
echo "-----------------------"
wg show
echo ""
# Test 2: GlusterFS volumes
echo "2. GLUSTERFS VOLUMES"
echo "-------------------"
gluster volume list
gluster volume status
echo ""
# Test 3: Mount points
echo "3. MOUNT POINTS"
echo "--------------"
df -h /mnt/gluster-ssd /mnt/gluster-hdd
echo ""
# Test 4: Write test
echo "4. WRITE TEST"
echo "------------"
TEST_FILE="/mnt/gluster-ssd/test-$(hostname)-$(date +%s).txt"
echo "Test from $(hostname) at $(date)" > "$TEST_FILE"
if [ -f "$TEST_FILE" ]; then
echo "✓ Write successful"
rm "$TEST_FILE"
else
echo "✗ Write failed"
fi
echo ""
# Test 5: Cross-network access
echo "5. CROSS-NETWORK ACCESS"
echo "----------------------"
if [[ "$(hostname)" == "server1" ]]; then
echo "Testing access to Network 2..."
ssh server6 "hostname" && echo "✓ Can reach Network 2"
else
echo "Testing access to Network 1..."
ssh server1 "hostname" && echo "✓ Can reach Network 1"
fi
echo ""
echo "TEST COMPLETE"
Key Management Best Practices
1. Regular Key Rotation
Location: /usr/local/bin/rotate-wireguard-keys.sh
#!/bin/bash
# Script: /usr/local/bin/rotate-wireguard-keys.sh
# Description: Rotates WireGuard keys quarterly for security
# Usage: Run quarterly on all servers

echo "ROTATING WIREGUARD KEYS"
echo "======================="

# Backup old keys
cp /etc/wireguard/private.key /etc/wireguard/private.key.backup.$(date +%Y%m%d)
cp /etc/wireguard/public.key /etc/wireguard/public.key.backup.$(date +%Y%m%d)

# Generate new keys
wg genkey | tee /etc/wireguard/private.key | wg pubkey | tee /etc/wireguard/public.key
chmod 600 /etc/wireguard/private.key

echo "New keys generated. Manual steps required:"
echo "1. Update other servers' configurations with new public key"
echo "2. Restart WireGuard: systemctl restart wg-quick@wg0"
echo "3. Test connectivity"
2. Secure Key Storage
Always follow these security practices:
- Private keys stay on the server where they’re generated
- Public keys are shared via secure channels (SSH, encrypted email)
- Regular backups of all configurations
- Key rotation every 90 days for production systems
- Access controls – only root should read private keys (a quick audit sketch follows this list)
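That last point is easy to audit remotely. A small sketch that checks the private key is owned by root with mode 600 on every server (hostnames are placeholders to replace):

# Sketch: audit private key permissions across the cluster (placeholder hostnames)
SERVERS=( "root@hub1.example" "root@hub2.example" )   # extend with all 10 servers
for SERVER in "${SERVERS[@]}"; do
    echo "== ${SERVER} =="
    ssh "$SERVER" 'stat -c "%a %U %n" /etc/wireguard/private.key' \
        | awk '$1 == "600" && $2 == "root" {print "OK: " $0; next} {print "FIX NEEDED: " $0}'
done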
Conclusion: Your Complete Redundant Cluster
You’ve now built a fully redundant 10-server cluster with:
What You’ve Achieved:
- Secure Networking: WireGuard hub-and-spoke VPN with proper key management
- Complete Redundancy: Both SSD and HDD storage replicated across networks
- Simplified Management: Hub-and-spoke architecture reduces complexity
- Self-Healing Storage: GlusterFS automatically repairs from failures
- Cross-Network Transparency: Applications see unified storage
Storage Capacity Summary:
| Volume Type | Raw Capacity | Usable Capacity | Redundancy |
|---|---|---|---|
| SSD Volume | 10TB (10×1TB) | 5TB | Replica 2 across networks |
| HDD Volume | 80TB (10×8TB) | ~64TB | Dispersed (8+2) |
Next Steps for Production:
- Monitoring Setup: Implement monitoring for WireGuard and GlusterFS
- Backup Strategy: Regular backups of configurations and critical data
- Disaster Recovery Testing: Quarterly failover tests
- Documentation: Keep your key spreadsheet and configurations updated
- Security Updates: Regular updates of WireGuard and GlusterFS
Troubleshooting Tips:
- Connection Issues: Check wg show for handshake status
- Mount Problems: Verify the GlusterFS volume is started
- Performance Issues: Tune GlusterFS cache settings based on usage
- Replication Delays: Check network latency between hubs
This architecture provides true cross-network redundancy where you can lose an entire data center and continue operating from the other location. The hub-and-spoke design keeps management simple while WireGuard ensures all traffic is encrypted in transit.
Remember: The key to success with distributed systems is testing. Regularly test failover scenarios, monitor performance, and keep your configurations backed up. With this setup, you have a resilient foundation that can grow with your needs while maintaining high availability across multiple locations.