> 
> 1. should i use a raid controller and create, for example, a raid 5 with all 
> disks on each osd server? or should i pass through all disks to ceph osd?
> 
> If your OSD servers have HDDs, buy a good RAID controller with a 
> battery-backed write cache and configure it as multiple RAID-0 volumes (1 
> physical disk per volume). That way, reads and writes will be accelerated 
> by the cache on the HBA.

I’ve lived this scenario and hated it.  Multiple firmware and manufacturing 
issues; batteries/supercaps that can fail and need to be monitored; bugs that 
lost staged data before it was written to disk; another bug that required 
replacing the card when there was preserved cache for a failed drive, because 
it would refuse to boot; difficulties with drive monitoring; an HBA monitoring 
utility that would lock up the HBA or peg the CPU; the list goes on.

For the additional cost of the RoC, cache RAM, and a supercap to (fingers 
crossed) protect the cache, plus all the additional monitoring and remote-hands 
work … you might find that SATA SSDs on a JBOD HBA are no more expensive.

> 3. if i have a 3-node physical osd cluster, do i need 5 physical mons?
> No. 3 MONs are enough

They’re enough if you have good hands and spares.  If your cluster is on a 
different continent and the colo hands can’t find their own butts … it’s nice 
to survive a double failure.
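
To put numbers on that: a MON quorum is a strict majority of the monitor map, 
so 3 MONs tolerate one failure while 5 tolerate two. A quick sketch of the 
arithmetic (the majority rule is standard Ceph/Paxos behavior; the helper 
names here are mine, not a Ceph API):

```python
def quorum_size(n_mons: int) -> int:
    # A Paxos quorum is a strict majority of the monitors in the map.
    return n_mons // 2 + 1

def tolerated_failures(n_mons: int) -> int:
    # How many MONs can die while a majority is still up.
    return n_mons - quorum_size(n_mons)

for n in (3, 5):
    print(f"{n} MONs: quorum={quorum_size(n)}, "
          f"tolerates {tolerated_failures(n)} failure(s)")
# 3 MONs: quorum=2, tolerates 1 failure(s)
# 5 MONs: quorum=3, tolerates 2 failure(s)
```

With 3 MONs a double failure (or one dead MON plus one botched replacement) 
means no quorum and a stalled cluster until hands fix it.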

ymmv
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]