Proxmox VE Ceph Cluster

A Proxmox VE server cluster combined with a Ceph distributed storage system allows you to create a hyperconverged, highly available virtualization infrastructure with load balancing and very easy horizontal scalability.

What is a cluster?

A cluster in computing refers to a group of interconnected computers or nodes that work together as if they were a single entity. Clusters are used to improve the availability, performance and scalability of applications and services. There are different types of clusters, but in general they share the common goal of providing increased processing capacity and redundancy.

What is Ceph?

Ceph is a distributed storage system designed to provide object, block and file storage in a single unified cluster. Proxmox can use Ceph as a storage pool for virtual machines.
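As an illustration of how Proxmox integrates Ceph, the `pveceph` tool handles the whole setup from the command line (the same steps are available in the web interface). The network subnet, pool name and device path below are assumptions for the example:

```shell
# Install the Ceph packages (run on every cluster node)
pveceph install

# Initialise Ceph, telling it which network to use for storage traffic
# (10.15.15.0/24 is an illustrative subnet)
pveceph init --network 10.15.15.0/24

# Create a monitor on this node (repeat on at least three nodes)
pveceph mon create

# Turn an NVMe disk into an OSD (device path is an example)
pveceph osd create /dev/nvme0n1

# Create a replicated pool on which to store virtual machine disks
pveceph pool create vmpool
```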

What is a Proxmox VE Ceph Cluster?

A Proxmox VE Ceph Cluster consists of three or more servers forming a Proxmox cluster and using Ceph as a distributed storage system, all managed from the Proxmox web interface. This gives us a hyperconverged virtualization infrastructure.

A hyperconverged virtualization infrastructure is an integrated system that combines compute, storage, and networking in a single environment. This simplifies management, improves efficiency, and enables easy scalability, making it easy to create and manage virtual machines in a single cluster.


Although the architecture may vary depending on the needs and requirements of each customer, it is always very similar. Broadly speaking, a Proxmox VE Ceph Cluster consists of:

  • Three or more servers.
  • Two mirrored disks using ZFS for the installation of the Proxmox hypervisor.
  • NVMe disks for Ceph on which to create pools to host virtual machines.
  • Ceph network with 50Gbps or 20Gbps backbones.
  • Service network for access to virtual machines.
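Building the Proxmox cluster layer on such servers takes only a couple of commands with the `pvecm` tool. A minimal sketch, where the cluster name and IP address are illustrative assumptions:

```shell
# On the first node: create the cluster (name is an example)
pvecm create my-cluster

# On each additional node: join the cluster via the first node's IP
# (192.168.1.10 is illustrative)
pvecm add 192.168.1.10

# Check quorum and membership from any node
pvecm status
```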

Mesh network

For three nodes, a mesh network can be used, thus avoiding the need for a switch stack. For more than three nodes, it is recommended to use a switch stack.
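One way to build a three-node mesh is the simple routed setup described in the Proxmox documentation: each node has a direct link to each of the other two and adds a host route per peer. A sketch for node 1, where the NIC names and addresses are assumptions:

```shell
# /etc/network/interfaces fragment on node 1 (10.15.15.1)
# ens1 links directly to node 2 (10.15.15.2), ens2 to node 3 (10.15.15.3)
auto ens1
iface ens1 inet static
        address 10.15.15.1/24
        up ip route add 10.15.15.2/32 dev ens1
        down ip route del 10.15.15.2/32

auto ens2
iface ens2 inet static
        address 10.15.15.1/24
        up ip route add 10.15.15.3/32 dev ens2
        down ip route del 10.15.15.3/32
```

With no switch in the Ceph path, each storage link is a dedicated point-to-point connection between two nodes.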


Cost reduction

There is no need for an expensive external storage array. Proxmox subscriptions are inexpensive compared to licences from other manufacturers.

High availability and fault tolerance

A failure of any kind in one of the cluster nodes does not prevent the cluster from remaining operational.
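High availability in Proxmox is driven by the `ha-manager` tool: a VM placed under HA management is automatically restarted on another node if its current node fails. A brief sketch, where the VM ID is an assumption:

```shell
# Put VM 100 under HA management
ha-manager add vm:100

# Inspect the HA state of all managed resources
ha-manager status
```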

High scalability

Adding new nodes to our cluster to increase compute power, memory and/or storage is very simple.

Live migration

It is possible to move virtual machines between cluster nodes without affecting the services provided by these machines.
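Because the VM disks live on the shared Ceph pool, only the machine's memory state has to travel during a live migration. A hedged example (the VM ID and target node name are illustrative):

```shell
# Live-migrate VM 100 to node "pve2" while it keeps running
qm migrate 100 pve2 --online
```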

Storage migration

We can move virtual machines between storages without shutting them down or affecting the service provided by them.
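Online storage migration is also driven from the `qm` tool; depending on the Proxmox VE version the subcommand is spelled `qm move_disk`, `qm move-disk` or `qm disk move`. The VM ID, disk name and storage name below are assumptions:

```shell
# Move VM 100's scsi0 disk to the Ceph-backed storage "vmpool"
# while the VM keeps running; --delete removes the source copy
qm move-disk 100 scsi0 vmpool --delete
```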

Load balancing

We can spread the virtual machines among all the servers of the cluster to distribute the computational load among them.

Hardware upgrades without affecting production

If our hardware becomes obsolete, we can add new servers to the cluster with new hardware, move the virtual machines without stopping them to the new nodes and, once emptied, remove the old servers.
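Draining and retiring an old node can be sketched with the same tools already shown; the VM ID and node names are assumptions for the example:

```shell
# Drain the old node by live-migrating each of its VMs
qm migrate 100 new-node --online

# Once the node is empty and powered off, remove it from the cluster
pvecm delnode old-node
```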

Centralised management

The entire cluster is managed from an intuitive web interface accessible through any of the cluster nodes.

On premises or in the cloud

It is possible to set up clusters of this type either on premises with your own physical servers or with a cloud provider that provides IaaS.

We recommend using a cloud provider such as our partner OVHcloud, since its IaaS service allows you to create Proxmox VE Ceph Clusters exactly as you would have them in your own facilities.


You can get in touch with us by phone or email using the contact details below:



SOLTECSIS SOLUCIONES TECNOLÓGICAS, S.L. B54368451 C/Carrasca, 7 local 3 03590 Altea (Alicante) - España (+34) 966 446 046 / 966 919 929
