It also depends a lot on the size of your cluster. I have a test cluster I'm
standing up right now with 60 nodes - 600 OSDs in total, each at 4 TB. If I
lose 4 TB, that's a very small fraction of the data. My replicas are going to
be spread out across a lot of spindles, and re-replicating that missing 4 TB
isn't much of an issue across 3 racks, each with 80 Gbit/s of ToR uplinks to
the spine. Each node has 20 Gbit/s to its ToR in a bond.

On the other hand, if you only have 4, 8, or 10 servers and a smaller number
of OSDs, you have fewer spindles to re-replicate that loss, and it might be
more of an issue.

It just depends on the size/scale of your environment.
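
Rough numbers, just to make that concrete - a back-of-envelope sketch in Python
(the 70% fill level, the even spread, and the 40-OSD "small cluster" are
assumptions, not measurements):

# How much data each surviving OSD has to help re-replicate after a
# single drive failure, assuming PGs are spread evenly and the failed
# OSD was ~70% full (assumed).
def reprotect_load(num_osds, osd_size_tb, fill=0.70):
    lost_tb = osd_size_tb * fill                  # data needing new copies
    per_osd_gb = lost_tb * 1024 / (num_osds - 1)  # share per surviving OSD
    return lost_tb, per_osd_gb

for label, osds in [("600-OSD cluster", 600), ("40-OSD cluster", 40)]:
    lost, per_osd = reprotect_load(osds, 4)
    print(f"{label}: ~{lost:.1f} TB to re-replicate, ~{per_osd:.0f} GB per surviving OSD")

On the big cluster every surviving disk only writes a few GB; on the small one
it's tens of GB per spindle, which is where recovery starts to hurt.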

We're going to 8 TB drives, and those will ultimately be spread over 100 or
more physical servers with 10 OSD disks per server. This will be across 7 to
10 racks (same network topology), so an 8 TB drive loss isn't too big of an
issue. Now, that assumes replication actually works well at that scale; we're
still sussing out that part of the PoC engagement.
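
Same napkin math for the bigger layout, this time as a rough wall-clock
estimate (the 40 MB/s of backfill per surviving OSD is purely an assumed
number for illustration):

# Re-protecting a failed 8 TB OSD in a ~1000-OSD cluster, assuming an
# even spread, ~70% fill, and ~40 MB/s of backfill per surviving OSD.
osds, size_tb, fill, backfill_mb_s = 1000, 8, 0.70, 40
per_osd_mb = size_tb * fill * 1024 * 1024 / (osds - 1)
print(f"~{per_osd_mb / backfill_mb_s / 60:.0f} minutes if backfill spreads evenly")

Recovery throttling, client load, and CRUSH placement will stretch that out in
practice, but the data volume per spindle stays tiny at that scale.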

~~shane



On 7/1/15, 5:05 PM, "ceph-users on behalf of German Anders"
<[email protected] on behalf of [email protected]> wrote:

Ask the other guys on the list, but for me, losing 4TB of data is too much.
The cluster will keep running fine, but at some point you need to recover that
disk, and if you lose a whole server with all of its 4TB disks, that will hurt
the cluster. Also take into account that with that kind of disk you will get
no more than 100-110 IOPS per disk.
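
For reference, that 100-110 IOPS figure is roughly what the mechanics of a
7200 RPM drive allow - a quick sketch (the seek times used here are typical
vendor numbers, not measurements):

# Rough random-IOPS ceiling for a 7200 RPM drive:
#   1 / (average seek time + average rotational latency)
rot_latency = 0.5 * 60.0 / 7200           # ~4.2 ms half-rotation
for seek_ms in (5.0, 9.0):                # short vs long average seek
    iops = 1.0 / (seek_ms / 1000 + rot_latency)
    print(f"avg seek {seek_ms} ms -> ~{iops:.0f} IOPS")

So big spinning OSDs buy capacity, not IOPS, and recovery traffic competes
with client I/O for those same ~100 operations per second.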


German Anders
Storage System Engineer Leader
Despegar | IT Team
office +54 11 4894 3500 x3408
mobile +54 911 3493 7262
mail [email protected]

2015-07-01 20:54 GMT-03:00 Nate Curry <[email protected]>:

4TB is too much to lose?  Why would it matter if you lost one 4TB drive, given
the redundancy?  Won't it auto-recover from the disk failure?

Nate Curry

On Jul 1, 2015 6:12 PM, "German Anders" <[email protected]> wrote:
I would probably go with smaller OSD disks; 4TB is too much to lose when a
disk breaks, so maybe more OSD daemons with smaller disks, say 1TB or 2TB
each. A 4:1 OSD-to-journal relationship is good enough, and I think 200GB
disks for the journals would be OK, so you can save some money there.
Configure the OSDs as JBOD - don't use any RAID underneath - and use two
different networks for the public and cluster traffic.
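
Something along these lines in ceph.conf, just as an illustrative fragment
(the subnets are made up - substitute your own):

[global]
    # client-facing traffic
    public network  = 192.168.1.0/24
    # replication / recovery traffic
    cluster network = 192.168.2.0/24

[osd]
    # journal size is set in MB
    osd journal size = 10240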


German

2015-07-01 18:49 GMT-03:00 Nate Curry <[email protected]>:
I would like to get some clarification on the size of the journal disks I
should get for the new Ceph cluster I am planning.  I read about the journal
settings at
http://ceph.com/docs/master/rados/configuration/osd-config-ref/#journal-settings
but that didn't really clarify it for me, or I just didn't get it.  The
Learning Ceph book from Packt states that you should have one journal disk for
every 4 OSDs.  Using that as a reference, I was planning on getting multiple
systems with 8 x 6TB nearline SAS drives for OSDs, two SSDs for journalling
per host, 2 hot spares for the 6TB drives, and 2 drives for the OS.  I was
thinking of 400GB SSD drives but am wondering if that is too much.  Any
informed opinions would be appreciated.

Thanks,

Nate Curry
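
For what it's worth, the sizing rule on that docs page works out to fairly
small journals. A quick sketch of the arithmetic (the ~175 MB/s sustained
throughput for a 6TB nearline SAS drive is an assumption):

# Journal sizing guidance from the Ceph docs:
#   osd journal size = 2 * (expected throughput * filestore max sync interval)
throughput_mb_s = 175   # assumed sustained write speed of one OSD disk
sync_interval_s = 5     # filestore max sync interval (default)
journal_mb = 2 * throughput_mb_s * sync_interval_s
print(f"~{journal_mb} MB journal per OSD")                      # ~1750 MB
print(f"~{journal_mb * 4 / 1024:.1f} GB per SSD at 4 OSDs/SSD") # a few GB

So even with generous 10GB journal partitions, four OSDs only need a few tens
of GB of SSD; capacity-wise a 400GB SSD is overkill, and the things to watch
are the SSD's sustained write throughput and endurance, since it absorbs the
writes of all four OSDs.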

