Re: [ceph-users] Single disk per OSD ?

2017-12-01 Thread Piotr Dałek

On 17-12-01 12:23 PM, Maged Mokhtar wrote:

Hi all,

I believe most existing setups use 1 disk per OSD. Is this going to be the 
most common setup in the future? With the move to LVM, will this favor the 
use of multiple disks per OSD? On the other side, I also see NVMe vendors 
recommending multiple OSDs (2-4) per disk, as disks are getting too fast 
for a single OSD process.


Can anyone shed some light on this or share recommendations, please?


You don't put more than one OSD on a spinning disk because access times will 
kill your performance - they already do, and asking HDDs to do 
double/triple/quadruple/... duty is only going to make it far worse. On the 
other hand, SSD drives have access times so short that they're most often 
bottlenecked by their users rather than by the SSD itself, so it makes 
perfect sense to put 2-4 OSDs on one SSD.

LVM isn't going to change that pattern much. It may make it easier to set up 
RAID0 HDD OSDs, but that's a questionable use case, and OSDs with JBODs under 
them are counterproductive: a single disk failure would still be caught by 
Ceph, but replacing the failed drive becomes more difficult -- plus, a 
JBOD-backed OSD significantly extends the damage area once that OSD fails.
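
For anyone who wants to try the multiple-OSDs-per-NVMe layout, here is a 
minimal sketch (not from the original thread) that only prints the LVM and 
ceph-volume commands one might use to carve a single NVMe device into N 
logical volumes, one OSD each. The device path, volume group name, and OSD 
count are illustrative assumptions, and ceph-volume options vary between 
releases, so check the output of "ceph-volume lvm create --help" on your own 
version before running anything. Whether 2 or 4 OSDs per device is right 
depends on the drive and the CPU, so treat the count as something to 
benchmark rather than a fixed rule.

#!/usr/bin/env python3
# Sketch only: prints the commands, executes nothing.
# Assumptions (not from the thread): the device path, VG/LV names and the
# OSD count below are placeholders; adjust them for your hardware/release.

DEVICE = "/dev/nvme0n1"   # hypothetical NVMe device
VG_NAME = "ceph-nvme0"    # hypothetical volume group name
OSDS_PER_DEVICE = 2       # the thread mentions 2-4 OSDs per fast device

def build_commands(device, vg, count):
    """Return the shell commands as strings; nothing is run here."""
    share = 100 // count  # equal slice of the volume group per OSD
    cmds = [f"pvcreate {device}",
            f"vgcreate {vg} {device}"]
    for i in range(count):
        lv = f"osd-data-{i}"
        cmds.append(f"lvcreate -l {share}%VG -n {lv} {vg}")
        cmds.append(f"ceph-volume lvm create --data {vg}/{lv}")
    return cmds

if __name__ == "__main__":
    for cmd in build_commands(DEVICE, VG_NAME, OSDS_PER_DEVICE):
        print(cmd)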


--
Piotr Dałek
piotr.da...@corp.ovh.com
https://www.ovh.com/us/
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Single disk per OSD ?

2017-12-01 Thread Maged Mokhtar
Hi all, 

I believe most existing setups use 1 disk per OSD. Is this going to be
the most common setup in the future? With the move to LVM, will this
favor the use of multiple disks per OSD? On the other side, I also see
NVMe vendors recommending multiple OSDs (2-4) per disk, as disks are
getting too fast for a single OSD process.

Can anyone shed some light on this or share recommendations, please?

Thanks a lot. 

Maged
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com