Ceph as a Storage Provider on Proxmox

Ceph: My Storage Solution of Choice

As a DevOps and virtualization enthusiast, I’ve been exploring various storage solutions for my projects. Recently, I discovered Ceph, an open-source distributed storage system built around an object store (RADOS) that can expose block, file, and object interfaces, and it has captured my interest. In this blog post, I’ll share my experience with Ceph, its benefits, and how to deploy it on Proxmox.

Why Ceph?

I’ve always been fascinated by distributed systems, and Ceph fits the bill. It lets multiple machines work together as a single storage cluster, and because both capacity and aggregate throughput grow as nodes are added, I can simply add more machines as my storage needs grow, making it an ideal solution for projects with expanding storage demands.

Moreover, Ceph is designed to be highly fault-tolerant, meaning that even if one or more machines in the cluster fail, the data remains accessible and usable. This is particularly useful in environments where hardware failures are common or expected.
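Under the hood, that fault tolerance comes from replication: every pool has a replica count (size) and a minimum count (min_size) below which it stops accepting I/O. As a minimal sketch, assuming a pool named vm-storage (a placeholder name):

    # Keep three copies of every object across the cluster
    ceph osd pool set vm-storage size 3

    # Keep serving I/O as long as at least two copies are available
    ceph osd pool set vm-storage min_size 2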

Deploying Ceph on Proxmox

Proxmox VE is a virtualization platform (built on KVM and LXC) that supports Ceph out of the box. Deploying Ceph on Proxmox is straightforward and can be done in a few clicks from the web UI, or with the pveceph command-line tool. The Proxmox documentation provides detailed instructions on how to set up a Ceph cluster, which I followed to deploy my own.
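For anyone who prefers the command line, here is roughly what the setup looks like with pveceph; exact subcommand names vary a little between Proxmox versions, and the network and device names below are just examples:

    # Install the Ceph packages on each node
    pveceph install

    # Initialize the Ceph configuration once, pointing at the storage network
    pveceph init --network 10.10.10.0/24

    # Create a monitor and a manager on this node
    pveceph mon create
    pveceph mgr create

    # Turn a blank disk into an OSD
    pveceph osd create /dev/sdb

    # Create a replicated pool for VM disks
    pveceph pool create vm-storage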

My Experience with Ceph

I started by setting up a two-node Ceph cluster with Proxmox. At first the cluster reported a degraded health state: the CRUSH map and pool defaults that Proxmox creates expect a replica on each of three hosts, so with only two hosts contributing OSDs the placement groups could never be fully replicated. Once I added a third node with at least one OSD, the cluster started replicating data across all OSDs to satisfy the CRUSH policy.
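All of this is visible from the CLI. Here is how I would check it (vm-storage is again a placeholder pool name):

    # Overall cluster health; with two hosts this showed undersized/degraded PGs
    ceph -s

    # How OSDs map onto hosts in the CRUSH hierarchy
    ceph osd tree

    # The pool's replication settings (size 3 / min_size 2 by default)
    ceph osd pool get vm-storage size
    ceph osd pool get vm-storage min_size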

Here’s what the PGs looked like as they were being moved across the OSDs:

[insert image]
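The same movement can be followed live from a shell:

    # One-line summary of placement group states
    ceph pg stat

    # Stream cluster events while PGs backfill and recover
    ceph -w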

One thing I noticed about storage usage on Proxmox is that thin provisioning does not behave the way it does on VMware VMFS. Whether a virtual disk is thin-provisioned depends on the storage backend and the disk format (Ceph RBD images, for example, only consume space as blocks are actually written), which took some getting used to. However, once I understood how it worked, I was able to configure my storage effectively.
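A quick way to see Ceph’s thin provisioning in action is to compare provisioned versus actually used space (vm-storage is a placeholder):

    # Per-image provisioned vs. used space in the RBD pool
    rbd du -p vm-storage

    # Storage usage as Proxmox itself reports it
    pvesm status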

This is the current state of the storage side of my Proxmox cluster:

[insert image]

As you can see, I have two nodes with a total of four OSDs, providing plenty of storage space for my VMs. I plan to move more VMs into this storage and see how Ceph performs under heavy I/O demand.
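Before moving production VMs over, a rough baseline can be taken with the rados bench tool that ships with Ceph (the pool name is again an example):

    # 60-second write benchmark; keep the objects for the read test
    rados bench -p vm-storage 60 write --no-cleanup

    # Sequential read benchmark against the objects written above
    rados bench -p vm-storage 60 seq

    # Remove the benchmark objects afterwards
    rados -p vm-storage cleanup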

Hardware Used in the Cluster

I’ve documented the hardware used in my Ceph cluster on my website. The hardware includes two servers with Intel Xeon E5-2630 v4 processors, 128 GB of RAM, and 4 x 1 TB SSDs for the OSDs. I also have a third server with an Intel Xeon E5-2630 v4 processor, 64 GB of RAM, and 2 x 2 TB NVMe SSDs, which acts as the client.

Conclusion

Ceph has been an excellent choice for my storage needs. Its distributed architecture, fault tolerance, and scalability make it an ideal solution for projects with growing storage demands. Deploying Ceph on Proxmox is straightforward, and the resulting cluster provides high performance and reliability. I’m excited to continue exploring the capabilities of Ceph and see how it performs under heavy I/O demand.
