Building a 3-Node Proxmox Cluster with Mixed Hardware - Complete Setup Guide

Introduction: Why Proxmox Clustering?

Proxmox Virtual Environment (VE) is an open-source server virtualization management platform that combines KVM for virtual machines and Linux Containers (LXC) for lightweight virtualization. While a single Proxmox node serves home labs well, a 3-node cluster unlocks enterprise features: high availability, live migration, distributed storage with Ceph, and true redundancy for production workloads.

This guide walks through building a 3-node Proxmox cluster, addresses the realities of mixed hardware (different CPU generations, RAM sizes, storage configurations), and provides production-ready configuration steps.

Can You Mix Hardware in a Proxmox Cluster?

Short answer: Yes, with important caveats.

Proxmox does not require identical hardware across nodes. However, significant differences impact functionality:

Supported Variations

| Component | Variation Allowed | Impact |
|---|---|---|
| CPU Manufacturer | Must match (Intel vs AMD) | Live migration requires identical CPU families |
| CPU Generation | Yes, with baseline | Newer features require baseline configuration |
| RAM Amount | Yes | Each node's capacity limits its VMs |
| Storage Type | Yes | Affects performance, not functionality |
| Storage Size | Yes | Ceph distributes based on smallest OSD |
| Network Cards | Yes | Performance varies by NIC quality |

Critical Constraints

  1. CPU Architecture: All nodes must share the same architecture (x86_64). Mixing Intel and AMD requires CPU type “kvm64” which disables optimizations.

  2. Ceph OSD Sizing: With 3-way replication across 3 nodes, usable capacity is bounded by the smallest node's total OSD capacity. A node with 2TB of OSDs in a cluster whose other nodes have 4TB leaves roughly half of their capacity unusable.

  3. Memory Overhead: HA and live migration require reserved memory. Plan 10-15% overhead on the smallest node.
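One way to handle mixed CPU generations is to pin VMs to a baseline CPU model so any node can run (and live-migrate) them. A minimal sketch; VM ID 100 is a placeholder:

```bash
# Use a conservative baseline CPU model for VM 100. On Proxmox VE 8,
# "x86-64-v2-AES" is a reasonable baseline for mixed Intel generations;
# fall back to "kvm64" only when mixing Intel and AMD hosts.
qm set 100 --cpu x86-64-v2-AES

# Confirm the setting
qm config 100 | grep ^cpu
```

Pick the highest baseline that all three CPUs support: the newer the baseline, the fewer guest-visible features you give up.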

Hardware Recommendations for 3-Node Cluster

Minimum Viable Setup

| Node | CPU | RAM | Boot | Ceph OSDs | Network |
|---|---|---|---|---|---|
| Node 1 | 4c/8t Intel/AMD | 32GB | 128GB SSD | 2x 500GB | 2x 1GbE |
| Node 2 | 4c/8t Intel/AMD | 32GB | 128GB SSD | 2x 500GB | 2x 1GbE |
| Node 3 | 4c/8t Intel/AMD | 32GB | 128GB SSD | 2x 500GB | 2x 1GbE |

Realistic Heterogeneous Example

| Node | CPU | RAM | Boot Storage | VM Storage | Ceph OSDs | Network |
|---|---|---|---|---|---|---|
| Node 1 | i5-12400 | 64GB | 1TB NVMe | 2TB NVMe | 4x 2TB SSD | 2x 10GbE |
| Node 2 | i3-10100 | 32GB | 512GB SATA | 1TB NVMe | 2x 2TB SSD | 2x 1GbE |
| Node 3 | Ryzen 5 3600 | 48GB | 512GB NVMe | 2TB NVMe | 4x 4TB HDD | 2x 2.5GbE |

Note: This heterogeneous example works but requires careful CPU type selection and Ceph planning.

Prerequisites Checklist

Before starting, ensure you have:

  • 3 physical servers or VMs (nested virtualization for testing)
  • Static IP addresses for each node (e.g., 192.168.1.10-12)
  • DNS names configured (proxmox-1.local, proxmox-2.local, proxmox-3.local)
  • Dedicated network for cluster/Ceph communication (1Gb/s minimum, 10Gb/s recommended)
  • USB stick or IPMI for OS installation
  • Valid email for Let’s Encrypt (if using ACME)
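Cluster joins are far less error-prone when every node resolves every other node's name. If you do not run local DNS, one option is static /etc/hosts entries on each node (the names and addresses below are the examples from this guide; adjust to your network):

```bash
# Run on every node so all three resolve each other consistently
cat >> /etc/hosts <<'EOF'
192.168.1.10 proxmox-1.local proxmox-1
192.168.1.11 proxmox-2.local proxmox-2
192.168.1.12 proxmox-3.local proxmox-3
EOF
```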

Step 1: Install Proxmox VE on All Nodes

Download and Prepare Installation Media

```bash
# Download latest ISO
wget https://enterprise.proxmox.com/iso/proxmox-ve_8.1-2.iso

# Create bootable USB (Linux)
sudo dd if=proxmox-ve_8.1-2.iso of=/dev/sdX bs=1M status=progress

# Or use Ventoy for multi-ISO USB
```
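Before writing the USB stick, it is worth verifying the download. The sketch below assumes a SHA256SUMS file is published alongside the ISO; if it is not, compare sha256sum's output against the checksum shown on the Proxmox download page:

```bash
# Verify the ISO against the published checksums (if available)
wget https://enterprise.proxmox.com/iso/SHA256SUMS
sha256sum --check --ignore-missing SHA256SUMS

# Or compute the hash and compare manually against the download page
sha256sum proxmox-ve_8.1-2.iso
```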

Installation Process (Repeat on Each Node)

  1. Boot from USB and select “Install Proxmox VE”

  2. Target Harddisk: Select your boot drive (typically smallest/fastest SSD)

    • Advanced: Configure ZFS RAID if multiple boot drives
  3. Country, Time Zone, Keyboard: Set appropriate values

  4. Password and Email: Set strong root password, provide valid email

  5. Management Network Configuration:

    • Hostname: proxmox-1.yourdomain.com (unique per node)
    • IP Address: 192.168.1.10/24 (increment for each node)
    • Gateway: 192.168.1.1
    • DNS: 192.168.1.1 or your DNS server
  6. Complete installation and reboot

Repeat for Node 2 (192.168.1.11) and Node 3 (192.168.1.12).
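Before clustering, confirm that all three nodes run the same Proxmox release; joining nodes on mismatched major versions is a common source of trouble. A quick check on each node:

```bash
# Should report the same pve-manager and kernel version on every node
pveversion
```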

Post-Installation Network Setup

Access each node’s web UI (https://192.168.1.10:8006) and complete initial configuration:

Configure Additional Network Interfaces:

Navigate to System → Network → Create → Linux Bridge:

```
Name: vmbr1
IPv4/CIDR: 10.0.0.1/24 (cluster network, no gateway)
Bridge ports: eno2 (your second NIC)
Comment: Cluster/Ceph Network
```

Create the cluster network bridge (vmbr1) on all nodes, incrementing the IP on each (10.0.0.2 on Node 2, 10.0.0.3 on Node 3).
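The same bridge can also be defined directly in /etc/network/interfaces; the stanza below mirrors the web-UI settings for Node 1 (eno2 is the example second NIC; use 10.0.0.2 and 10.0.0.3 on the other nodes):

```
# /etc/network/interfaces fragment for the cluster/Ceph bridge (Node 1)
auto vmbr1
iface vmbr1 inet static
        address 10.0.0.1/24
        bridge-ports eno2
        bridge-stp off
        bridge-fd 0
```

Apply with `ifreload -a` (Proxmox ships ifupdown2, so no reboot is needed).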

Step 2: Create the Cluster

On Node 1 (your primary/designated cluster master):

Via Web Interface

  1. Navigate to Datacenter → Cluster → Create Cluster

  2. Enter cluster name: homelab-cluster

  3. Select cluster network: 10.0.0.0/24 (vmbr1 interface)

  4. Click Create - this generates cluster join information

Via Command Line

```bash
ssh root@192.168.1.10

# Create cluster (on Proxmox VE 6+ the cluster-address option is --link0)
pvecm create homelab-cluster --link0 10.0.0.1

# Verify cluster status
pvecm status
```

Expected output:

```
Quorum information
------------------
Date: Sat Mar 1 08:30:00 2025
Quorum provider: corosync_votequorum
Nodes: 1
Node ID: 0x00000001
Ring ID: 1.44
Quorate: Yes

Votequorum information
----------------------
Expected votes: 1
Highest expected: 1
Total votes: 1
Quorum: 1
Flags: Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 10.0.0.1 (local)
```
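If the status output ever looks off, corosync's own tooling gives a lower-level view. Both commands below are read-only and safe to run on any node:

```bash
# The cluster configuration Proxmox generated
cat /etc/pve/corosync.conf

# Per-node link status on the cluster network
corosync-cfgtool -s
```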

Step 3: Join Additional Nodes to Cluster

Get Join Information from Node 1

In Node 1’s web UI: Datacenter → Cluster → Join Information → Copy Information

Note: `pvecm addnode` is used internally by the join process and should not be run manually. On the command line, the join is performed from each new node with `pvecm add`, as shown below.

Join Node 2

On Node 2 (192.168.1.11):

Via Web UI:

  1. Access Node 2’s web UI: https://192.168.1.11:8006
  2. Datacenter → Cluster → Join Cluster
  3. Paste join information from Node 1
  4. Enter Node 1’s root password
  5. Select cluster network interface: vmbr1 (10.0.0.2)
  6. Click Join

Via Command Line:

```bash
ssh root@192.168.1.11

pvecm add 192.168.1.10 --link0 10.0.0.2
# Enter Node 1's root password when prompted

# Verify
pvecm status
```

Join Node 3

Repeat the process for Node 3 (192.168.1.12, cluster address 10.0.0.3).
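Spelled out, the Node 3 join mirrors Node 2 (on Proxmox VE 6+ the cluster-address option is --link0; older documentation shows --ring0_addr):

```bash
ssh root@192.168.1.12

pvecm add 192.168.1.10 --link0 10.0.0.3
# Enter Node 1's root password when prompted
```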

Verify 3-Node Cluster

On any node:

```bash
pvecm status
```

Expected output showing 3 nodes and Quorate: Yes:

```
Quorum information
------------------
Date: Sat Mar 1 09:00:00 2025
Nodes: 3
Node ID: 0x00000001
Ring ID: 1.452
Quorate: Yes

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 10.0.0.1 (local)
0x00000002          1 10.0.0.2
0x00000003          1 10.0.0.3
```

Critical: A 3-node cluster is only quorate with at least 2 nodes online. If only 1 node remains, the cluster loses quorum: HA stops working and VM starts and configuration changes are blocked until quorum returns.
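If two nodes are ever down at once and you must administer the survivor, quorum can be temporarily relaxed. This is a recovery measure, not a normal operating mode, and should be used with HA disabled to avoid fencing surprises:

```bash
# On the surviving node: tell corosync to expect a single vote
pvecm expected 1

# Expected votes recover automatically as the other nodes rejoin
pvecm status | grep -i quorate
```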

Step 4: Configure Cluster Storage

Ceph provides distributed, replicated storage across all nodes.

Install Ceph on All Nodes

Via Web UI (Node 1):

  1. Node → Shell (or SSH)
  2. Install and initialize Ceph:

```bash
pveceph install --repository no-subscription
pveceph init --network 10.0.0.0/24   # use the dedicated cluster/Ceph network
```