On Thu, Apr 17, 2025 at 11:43:01AM +0200, Karli Sjöberg wrote:
> On Thu, 2025-04-17 at 14:44 +0530, gagan tiwari wrote:
> > Hi Alexander,
> >
> > Thanks for the update. Initially, I also thought of deploying Ceph,
> > but Ceph is quite difficult to set up and manage. Moreover, it's
> > also hardware demanding.
>
> You are of course entitled to your own opinion, but I'd like to point
> out that ZFS+Gluster carries a lot of considerations and foot-guns of
> its own. Saying that Ceph is hardware demanding is also quite
> misleading, as you can install a Ceph cluster on VMs or RPis with
> 2 GB of RAM, but that's hardly the case for any HPC environment, so
> the argument kind of falls flat.
Having run a strictly experimental setup (as in: if all the data
suddenly disappears, I'll be at most mildly irritated) on the
lowest-end hardware I could find: yes, one can do that. That setup:

- 3x ODroid HC-4 (4 GB RAM) for OSDs (2x HDD each)
- 4x Raspberry Pi 4 with 8 GB RAM for MON, MDS, manager etc.

(A rough cephadm sketch of that layout is in the P.S. below.)

Note the lack of ECC anywhere (but hey, experiment). The OSDs would
have user-space lock-ups due to running out of memory every few weeks
to months (no swap, because that would have had to go on micro-SD),
but a power cycle brought them back. (See the P.P.S. for the memory
knob that would have helped.)

As wonky as this was, Ceph worked reliably throughout and always
recovered quickly from OSDs winking in and out of existence. Running
this setup for 2 years and "throwing stuff at it" was the argument
that convinced me to later set up a Ceph cluster with proper hardware
(ECC RAM - and enough of it, 10G networking, ...) for actual
production use in the homelab.

> Personally I'd argue that having two intermingling systems is more
> complex than having one (bigger) system to learn and manage. Now, is
> Ceph perfect? No. But is it the most consistent and well-documented
> in all aspects? Also no :) Is it the safest choice of the two?
> Definitely yes!

I found Ceph to be solidly reliable, even when dealing with low-end
and not overly reliable hardware. It's also well documented.

Kind regards,
Alex.

--
"Opportunity is missed by most people because it is dressed in
overalls and looks like work." -- Thomas A. Edison
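P.S. For anyone wanting to reproduce a layout like the above: on a
cephadm-managed cluster the role placement could be expressed roughly
as below. This is a sketch, not a transcript of what I actually ran;
the hostnames (rpi-1..4) are made up, and you'd want to adapt it to
your own boxes.

    # Pin the MONs to three of the Pis (cephadm schedules MGR
    # daemons separately, on its own defaults):
    ceph orch apply mon --placement="rpi-1,rpi-2,rpi-3"

    # One MDS for a CephFS filesystem named "cephfs" on the fourth Pi:
    ceph orch apply mds cephfs --placement="1 rpi-4"

    # Turn every free disk into an OSD. Note this targets all hosts;
    # restricting it to just the OSD boxes needs a YAML drive-group
    # spec instead:
    ceph orch apply osd --all-available-devices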
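P.P.S. Re the out-of-memory lock-ups: the knob I should have turned is
osd_memory_target, which tells each OSD daemon how much memory to aim
for when sizing its caches (a target it steers toward, not a hard
limit). The 1 GiB value below is only an illustration for a 4 GB host
carrying two OSDs; the Ceph docs caution against very low targets, so
treat this as damage limitation rather than a blessed configuration.

    # Cap every OSD's memory budget at 1 GiB (value is in bytes):
    ceph config set osd osd_memory_target 1073741824

    # Or per daemon, if only some hosts are RAM-starved:
    ceph config set osd.0 osd_memory_target 1073741824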