I have a question regarding advice from these threads:
https://mail.google.com/mail/u/0/#label/ceph/1476b93097673ad7?compose=1476ec7fef10fd01

https://www.mail-archive.com/ceph-users@lists.ceph.com/msg11011.html



 Our current setup has 4 OSDs per node.  When a drive fails, the
cluster is almost unusable for data entry.  I want to change our setup so
that this never happens under any circumstances.

 Network: we use 2 IB switches and bonding in failover mode.
 Systems are two Dell PowerEdge R720s and a Supermicro X8DT3.
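
 For reference, the bonding is ordinary Linux active-backup bonding over the
 two IB links; roughly like the sketch below (file location and interface
 names are only examples, not copied from our nodes):

    # /etc/modprobe.d/bonding.conf  (illustrative)
    options bonding mode=active-backup miimon=100

    # bond0 enslaves ib0 and ib1; only one link carries traffic at a time,
    # and the other takes over if the active link goes down.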

 So, looking at how to do things better, we will try '#4 - anti-cephalopod'.
That is a seriously funny phrase!

We'll switch to RAID-10 or RAID-6 with one OSD per node, using
high-end RAID controllers, hot spares, etc.

And use one Intel 200 GB S3700 per node for the journal.
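
To make the plan concrete, per node I have something like this in mind for
ceph.conf (paths, sizes and the OSD id are only illustrative, nothing is
deployed yet):

    [osd]
        ; journal lives on a partition of the 200 GB S3700
        osd journal size = 10240    ; 10 GB, adjust as needed

    [osd.0]
        host = node1
        ; the single OSD's data sits on the RAID-10/RAID-6 volume,
        ; its journal on the SSD partition (example path)
        osd journal = /dev/disk/by-partlabel/osd0-journal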

My questions:

Is there a minimum number of OSDs which should be used?

Should the number of OSDs per node be the same?

best regards, Rob


PS: I had asked the above in the middle of another thread... please ignore it there.