Another thought: I would hope that with EC, the spread of data chunks would 
benefit from the write capability of each drive on which they are stored.
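As a quick back-of-envelope sketch (Python 3) using the numbers quoted below 
(k=12/m=3, 36x 4 TB drives, 2 Gb bonded links) — the rebuild figure assumes the 
link is the only bottleneck and ignores recovery throttling and drive 
contention, so it is a lower bound:

```python
K, M = 12, 3                # erasure-code profile from the thread
DRIVES, DRIVE_TB = 36, 4    # Supermicro 36-bay chassis, 4 TB HDDs

raw_tb = DRIVES * DRIVE_TB          # raw capacity per node
overhead = (K + M) / K              # raw-to-usable ratio for k=12/m=3
usable_tb = raw_tb / overhead       # usable-equivalent capacity per node

# time to move one lost node's raw data over a 2 Gb/s bond (lower bound)
link_gbps = 2
seconds = raw_tb * 1e12 / (link_gbps / 8 * 1e9)
days = seconds / 86400

print(f"raw={raw_tb} TB, usable~{usable_tb:.1f} TB, node rebuild >= {days:.1f} days")
```

Which supports the "backfilling will take ages" worry: close to a week of 
wire time for a full node, before any throttling.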

I have not received any reply so far! Does this kind of configuration 
(hardware & software) look crazy?! Am I missing something?

Looking forward to your comments; thanks in advance. 

--
Cédric Lemarchand

> On 7 May 2014 at 22:10, Cedric Lemarchand <[email protected]> wrote:
> 
> Some more details: the IO pattern will be around 90% write / 10% read, mainly 
> sequential.
> Recent posts show that the max_backfills, recovery_max_active and 
> recovery_op_priority settings should help during backfilling/rebalancing.
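> As a sketch, those throttles would go in the [osd] section of ceph.conf; the 
> option names are the standard Ceph ones, but the values below are 
> illustrative placeholders, not tuned recommendations:
> 
> ```ini
> [osd]
> osd max backfills = 1         ; concurrent backfill ops per OSD
> osd recovery max active = 1   ; concurrent recovery ops per OSD
> osd recovery op priority = 1  ; lower value = recovery yields to client IO
> ```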
> 
> Any thoughts on such a hardware setup?
> 
> On 07/05/2014 at 11:43, Cedric Lemarchand wrote:
>> Hello,
>> 
>> This build is only intended for archiving purposes; what matters here is 
>> lowering the $/TB/W ratio.
>> Access to the storage would be via radosgw, installed on each node. I need 
>> each node to sustain an average 1 Gb/s write rate, which I think should not 
>> be a problem. Erasure coding will be used with something like k=12, m=3.
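>> For reference, such a pool could be set up along these lines (a sketch: the 
>> profile and pool names are hypothetical, and the PG counts are placeholders 
>> to be sized for the actual OSD count):
>> 
>> ```
>> # define a k=12/m=3 erasure-code profile
>> ceph osd erasure-code-profile set archive k=12 m=3
>> # create an erasure-coded pool that uses it
>> ceph osd pool create ecpool 4096 4096 erasure archive
>> ```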
>> 
>> A typical node would be:
>> 
>> - Supermicro 36-bay chassis
>> - 2x Xeon E5-2630L v2
>> - 96 GB RAM (the recommended 1 GB/TB ratio for OSDs is lowered a bit ... )
>> - LSI HBA adapters in JBOD mode, possibly 2x 9207-8i
>> - 36x 4 TB HDDs with the default journal configuration
>> - dedicated bonded 2 Gb links for the public/private networks (backfilling 
>> will take ages if a full node is lost ...)
>> 
>> 
>> I think that in an *optimal* state (Ceph healthy), it could handle the job. 
>> Waiting for your comments.
>> 
>> What bothers me more are OSD maintenance operations like backfilling and 
>> cluster rebalancing, where nodes will be put under very high IO, memory and 
>> CPU load for hours or days. Will latency *just* grow, or will everything 
>> fall apart? (OOM killer invocations, OSDs committing suicide because of 
>> latency, nodes pushed out of the cluster, etc.)
>> 
>> As you can see, I am trying to design the cluster with a sweet spot in mind 
>> along the lines of "things become slow and latency grows, but the nodes 
>> stay stable/usable and aren't pushed out of the cluster".
>> 
>> This is my first jump into Ceph, so any input will be greatly appreciated 
>> ;-)
>> 
>> Cheers,
>> 
>> --
>> Cédric 
>> 
>> _______________________________________________
>> ceph-users mailing list
>> [email protected]
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 
> -- 
> Cédric
