Ceph uses data replicas, so even if you only use 2 replicas (3 is recommended),
your best case would be roughly the I/O of a single drive. You also need a
minimum of 3 monitor nodes for Ceph, so personally, I'd stick with local
storage if you're focused on speed.
You're also running quite a risk by using RAID-0: a single drive failure and
you'll lose all of your data. Is there a reason you're using NFS rather than
direct local storage?
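Either way, it's probably worth measuring before switching. As a rough sketch, a fio job file along these lines could be run once against the NFS mount and once against an RBD-backed filesystem to compare (the directory path below is a placeholder for your actual primary storage mount):

```
; randrw.fio -- hypothetical fio job for comparing the two backends
[global]
ioengine=libaio
direct=1            ; bypass the page cache to measure the storage itself
bs=4k
iodepth=32
runtime=60
time_based=1

[randrw-test]
rw=randrw
rwmixread=70        ; 70% reads / 30% writes, a common VM-like mix
size=4g
directory=/mnt/primary   ; replace with your NFS mount or RBD mount point
```

Run it with `fio randrw.fio` and compare the reported IOPS and latency between the two setups.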


________________________________
From: Fariborz Navidan <mdvlinqu...@gmail.com>
Sent: Tuesday, April 7, 2020 10:36 AM
To: users@cloudstack.apache.org <users@cloudstack.apache.org>
Subject: I/O speed in local NFS vs local Ceph

Hello,

I have a single physical host running CloudStack. Primary storage is
currently mounted as an NFS share. The underlying filesystem is XFS
running on top of Linux soft RAID-0. The underlying hardware consists of 2
NVMe SSDs.

The question is: could I achieve faster I/O in VMs if I used Ceph instead,
adding the 2 physical devices directly to the cluster and exposing them via
RBD? How much faster could the I/O be?

Thanks
