Building a 3-Node Proxmox Cluster with Mixed Hardware - Complete Setup Guide
Introduction: Why Proxmox Clustering?
Proxmox Virtual Environment (VE) is an open-source server virtualization management platform that combines KVM for virtual machines and Linux Containers (LXC) for lightweight virtualization. While a single Proxmox node serves home labs well, a 3-node cluster unlocks enterprise features: high availability, live migration, distributed storage with Ceph, and true redundancy for production workloads.
This guide walks through building a 3-node Proxmox cluster, addresses the realities of mixed hardware (different CPU generations, RAM sizes, storage configurations), and provides production-ready configuration steps.
Can You Mix Hardware in a Proxmox Cluster?
Short answer: Yes, with important caveats.
Proxmox does not require identical hardware across nodes. However, significant differences impact functionality:
Supported Variations
| Component | Variation Allowed | Impact |
|---|---|---|
| CPU Vendor | With caveats (Intel vs AMD) | Live migration across vendors needs a generic CPU type (kvm64) |
| CPU Generation | Yes, with baseline | Newer features require baseline configuration |
| RAM Amount | Yes | Each node’s capacity limits its VMs |
| Storage Type | Yes | Affects performance, not functionality |
| Storage Size | Yes | Ceph distributes based on smallest OSD |
| Network Cards | Yes | Performance varies by NIC quality |
Critical Constraints
CPU Architecture: All nodes must share the same architecture (x86_64). Mixing Intel and AMD hosts works, but live migration between them requires the generic CPU type “kvm64”, which disables vendor-specific optimizations.
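On a mixed cluster, the compatibility baseline is chosen per VM via its CPU type. A minimal sketch of both options (VM ID 100 is a placeholder):

```bash
# Generic baseline that can migrate between Intel and AMD hosts,
# at the cost of vendor-specific CPU flags
qm set 100 --cpu kvm64

# On same-vendor nodes of different generations, a newer shared
# baseline keeps more features available (Proxmox VE 8 default)
qm set 100 --cpu x86-64-v2-AES
```

Pick the newest CPU type that every node in the cluster can actually provide; `qm` will happily set a type the hardware cannot deliver, and the VM then fails to start on older nodes.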
Ceph OSD Sizing: In a 3-node Ceph cluster with 3-way replication, each node holds one full copy of the data, so the node with the least OSD capacity caps usable capacity. A node with 2TB of OSDs in a cluster of 4TB nodes leaves roughly 50% of the larger nodes’ capacity stranded.
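A back-of-the-envelope check makes the waste concrete. The sketch below uses hypothetical per-node OSD totals in GB; with size=3 on three nodes, every object keeps one replica per node, so the smallest node caps usable capacity:

```bash
# Hypothetical raw OSD capacity per node, in GB
node1=4000   # 2x 2TB
node2=2000   # 1x 2TB
node3=4000   # 2x 2TB

# With one replica per node, usable capacity = smallest node
smallest=$node1
[ "$node2" -lt "$smallest" ] && smallest=$node2
[ "$node3" -lt "$smallest" ] && smallest=$node3

echo "Usable capacity before Ceph overhead: ${smallest} GB"
```

Here Node 2 strands half of the other nodes’ raw space, which is where the 50% figure comes from.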
Memory Overhead: HA and live migration need reserved headroom to absorb VMs from a failed node. Plan 10-15% overhead on the smallest node.
Hardware Recommendations for 3-Node Cluster
Minimum Viable Setup
| Node | CPU | RAM | Boot | Ceph OSDs | Network |
|---|---|---|---|---|---|
| Node 1 | 4c/8t Intel/AMD | 32GB | 128GB SSD | 2x 500GB | 2x 1GbE |
| Node 2 | 4c/8t Intel/AMD | 32GB | 128GB SSD | 2x 500GB | 2x 1GbE |
| Node 3 | 4c/8t Intel/AMD | 32GB | 128GB SSD | 2x 500GB | 2x 1GbE |
Recommended Mixed Setup
| Node | CPU | RAM | Boot Storage | VM Storage | Ceph OSDs | Network |
|---|---|---|---|---|---|---|
| Node 1 | i5-12400 | 64GB | 1TB NVMe | 2TB NVMe | 4x 2TB SSD | 2x 10GbE |
| Node 2 | i3-10100 | 32GB | 512GB SATA | 1TB NVMe | 2x 2TB SSD | 2x 1GbE |
| Node 3 | Ryzen 5 3600 | 48GB | 512GB NVMe | 2TB NVMe | 4x 4TB HDD | 2x 2.5GbE |
Note: This heterogeneous example works but requires careful CPU type selection and Ceph planning.
Prerequisites Checklist
Before starting, ensure you have:
- 3 physical servers or VMs (nested virtualization for testing)
- Static IP addresses for each node (e.g., 192.168.1.10-12)
- DNS names configured (proxmox-1.local, proxmox-2.local, proxmox-3.local)
- Dedicated 1Gb/s (ideally 10Gb/s) network for cluster communication
- USB stick or IPMI for OS installation
- Valid email for Let’s Encrypt (if using ACME)
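If DNS is not fully under your control, the checklist names can also be pinned in /etc/hosts on every node; a sketch using this guide’s example addresses and .local names:

```
192.168.1.10 proxmox-1.local proxmox-1
192.168.1.11 proxmox-2.local proxmox-2
192.168.1.12 proxmox-3.local proxmox-3
```

Consistent name resolution matters because corosync and pvecm refer to nodes by hostname.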
Step 1: Install Proxmox VE on All Nodes
Download and Prepare Installation Media
```bash
# Download the latest ISO (get the current release filename from
# https://www.proxmox.com/en/downloads)
wget https://enterprise.proxmox.com/iso/proxmox-ve_X.Y-Z.iso

# Write it to a USB stick (replace /dev/sdX with your USB device)
dd if=proxmox-ve_X.Y-Z.iso of=/dev/sdX bs=1M conv=fsync status=progress
```
Installation Process (Repeat on Each Node)
Boot from USB and select “Install Proxmox VE”
Target Harddisk: select your boot drive (typically the smallest/fastest SSD)
- Advanced: configure ZFS RAID if you have multiple boot drives
Country, Time Zone, Keyboard: set appropriate values
Password and Email: set a strong root password and provide a valid email
Management Network Configuration:
- Hostname: proxmox-1.yourdomain.com (unique per node)
- IP Address: 192.168.1.10/24 (increment for each node)
- Gateway: 192.168.1.1
- DNS: 192.168.1.1 or your DNS server
Complete installation and reboot
Repeat for Node 2 (192.168.1.11) and Node 3 (192.168.1.12).
Post-Installation Network Setup
Access each node’s web UI (https://192.168.1.10:8006) and complete initial configuration:
Configure Additional Network Interfaces:
Navigate to System → Network → Create → Linux Bridge:
```
Name: vmbr1
IPv4/CIDR: 10.0.0.1/24 (10.0.0.2 on Node 2, 10.0.0.3 on Node 3)
Bridge ports: your dedicated cluster NIC (name varies, e.g. eno2)
```
Create Cluster Network (vmbr1) on all nodes.
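Before creating the cluster, it is worth confirming the dedicated network actually passes traffic between nodes; the addresses below assume vmbr1 at 10.0.0.1 through 10.0.0.3 as configured above:

```bash
# From Node 1: verify the other nodes answer on the cluster network.
# Corosync is latency-sensitive; on a dedicated link you want
# consistent sub-millisecond round trips.
ping -c 3 10.0.0.2
ping -c 3 10.0.0.3
```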
Step 2: Create the Cluster
On Node 1 (your primary/designated cluster master):
Via Web Interface
Navigate to Datacenter → Cluster → Create Cluster
- Enter cluster name: homelab-cluster
- Select cluster network: 10.0.0.0/24 (vmbr1 interface)
- Click Create; this generates the cluster join information
Via Command Line
```bash
ssh root@192.168.1.10
pvecm create homelab-cluster --link0 address=10.0.0.1
pvecm status
```
Expected output:
```
Quorum information
------------------
Quorum provider:  corosync_votequorum
Nodes:            1
...
Quorate:          Yes
```
Step 3: Join Additional Nodes to Cluster
Get Join Information from Node 1
In Node 1’s web UI: Datacenter → Cluster → Join Information → Copy Information
Via CLI there is no separate join-information step: the new node joins directly with pvecm add against Node 1’s address, authenticating with Node 1’s root password.
Join Node 2
On Node 2 (192.168.1.11):
Via Web UI:
- Access Node 2’s web UI: https://192.168.1.11:8006
- Datacenter → Cluster → Join Cluster
- Paste join information from Node 1
- Enter Node 1’s root password
- Select cluster network interface: vmbr1 (10.0.0.2)
- Click Join
Via Command Line:
```bash
ssh root@192.168.1.11
pvecm add 192.168.1.10 --link0 address=10.0.0.2
```
Join Node 3
Repeat the process for Node 3 (192.168.1.12, ring0_addr 10.0.0.3).
Verify 3-Node Cluster
On any node:
```bash
pvecm status
```
Expected output showing 3 nodes and Quorate: Yes:
```
Quorum information
------------------
Quorum provider:  corosync_votequorum
Nodes:            3
...
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   3
Total votes:      3
Quorum:           2
```
Critical: A 3-node cluster stays quorate only while at least 2 nodes are online. With a single node left, the cluster loses quorum: HA stops and VM/configuration changes are blocked until quorum returns.
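The 2-node requirement is just corosync’s majority rule: a cluster of N nodes needs floor(N/2)+1 votes. A small sketch:

```bash
# Majority quorum: more than half of all votes must be present
nodes=3
required=$(( nodes / 2 + 1 ))
echo "$nodes-node cluster needs $required votes for quorum"
# Losing one node leaves 2 votes: still quorate.
# Losing two leaves 1 vote: quorum lost, HA and config changes stop.
```

This is also why even node counts add little resilience: a 4-node cluster needs 3 votes, so it tolerates the same single-node failure as a 3-node cluster.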
Step 4: Configure Cluster Storage
Option A: Ceph Distributed Storage (Recommended)
Ceph provides distributed, replicated storage across all nodes.
Install Ceph on All Nodes
Via Web UI (Node 1):
- Node → Shell (or SSH)
- ```bash
pveceph install --repository no-subscription
pveceph init