Hello Sebastien,

thanks for your reply.

>>Are you going with a 10Gb network? It’s not an issue for IOPS but more for 
>>the bandwidth. If so read the following: 

Currently it's planned to use a 1Gb network for the public network (VM --> RBD 
cluster).
Maybe 10GbE for the cluster replication network is possible. (I'll use 
replication x3.)
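If I go that way, the split would look something like this in ceph.conf (the 
subnets below are just placeholder values, not my real addressing):

[global]
public network = 192.168.0.0/24      # 1Gb client/VM-facing network
cluster network = 10.10.0.0/24       # 10GbE replication network
osd pool default size = 3            # replication x3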

My workload is mostly random (around 400 VMs with small I/Os), so I don't use 
much bandwidth.
(I'm currently using a NetApp SAN with 2 x 24-disk SAS arrays, 4 gigabit LACP 
links per array, and I'm far from saturating them.)
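As a back-of-the-envelope check (the per-VM numbers below are only assumptions, 
not measurements), this kind of random small-I/O load adds up to fairly little 
bandwidth even at high IOPS:

# Rough bandwidth estimate for a random small-I/O workload.
# Per-VM IOPS and block size are assumed values, not measurements.
vms = 400
iops_per_vm = 50                # assumed average random IOPS per VM
block_size_kb = 8               # assumed typical small I/O size

total_iops = vms * iops_per_vm
bandwidth_mb_s = total_iops * block_size_kb / 1024.0

print("total IOPS : %d" % total_iops)                 # 20000
print("bandwidth  : %.0f MB/s" % bandwidth_mb_s)      # ~156 MB/s cluster-wide
# Spread over 5 nodes with 1Gb public links (~117 MB/s usable each), that still
# leaves headroom, so the workload stays IOPS-bound rather than bandwidth-bound.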


>>I personally won’t go with a ratio of 1:6 for the journal. I guess 1:5 (or 
>>even 1:4) is preferable. 
>>SAS 10K gives you around 140MB/sec for sequential writes. 
>>So if you use a journal on an SSD, you expect at least 140MB/sec from it if 
>>you don’t want to slow things down. 
>>If you do so, 140*10 (disks) already fills your 10Gb bandwidth. So either 
>>you don’t need that many disks or you don’t need SSDs. 
>>It depends on the performance that you want to achieve. 

Ok, I understand. (But I think it should be fine for my random workload.)
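Just to spell out that arithmetic for myself (a rough sketch, reusing the 
140MB/sec-per-disk figure from above):

# Sequential-write ceiling: 10 SAS 10K disks vs. a single 10GbE link.
disks_per_node = 10
seq_write_per_disk_mb = 140          # ~140 MB/s sequential write per SAS 10K disk
nic_10gbe_mb = 10 * 1000 / 8.0       # ~1250 MB/s raw on 10GbE

aggregate_disk_mb = disks_per_node * seq_write_per_disk_mb
print("disks can absorb ~%d MB/s sequential" % aggregate_disk_mb)   # 1400
print("10GbE delivers at most ~%d MB/s" % nic_10gbe_mb)             # 1250
# For pure sequential writes the network saturates before the disks do, which
# is the point above; for a random, IOPS-bound workload this ceiling matters
# much less.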

>>Another thing: I also won’t use the DC S3700, since this disk was definitely 
>>made for IOPS-intensive applications. The journal is purely sequential (small 
>>seq blocks; IIRC Stephan mentioned 370k blocks). 
>>I would instead use an SSD with large sequential capabilities, like the 525 
>>series 120GB. 

Ok, thanks!
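For my own notes, here is what that means for the two journal SSDs in my 
original plan (a rough sketch, assuming each SSD journals half of the OSDs and 
~140MB/sec sequential writes per backing SAS disk):

# Sequential write load per journal SSD for a 1:5 or 1:6 journal ratio.
journal_ssds = 2
for osds in (10, 12):
    osds_per_ssd = osds // journal_ssds      # ratio 1:5 or 1:6
    needed_mb_s = osds_per_ssd * 140
    print("%d OSDs -> 1:%d ratio, each SSD should sustain ~%d MB/s sequential"
          % (osds, osds_per_ssd, needed_mb_s))
# -> roughly 700-840 MB/s per SSD, so sustained sequential write throughput
#    matters far more for the journal than random-IOPS capability.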

----- Original Mail ----- 

De: "Sebastien Han" <sebastien....@enovance.com> 
À: "Alexandre DERUMIER" <aderum...@odiso.com> 
Cc: "ceph-users" <ceph-users@lists.ceph.com> 
Envoyé: Mercredi 15 Janvier 2014 13:55:39 
Objet: Re: [ceph-users] servers advise (dell r515 or supermicro ....) 

Hi Alexandre, 

Are you going with a 10Gb network? It’s not an issue for IOPS but more for the 
bandwidth. If so read the following: 

I personally won’t go with a ratio of 1:6 for the journal. I guess 1:5 (or even 
1:4) is preferable. 
SAS 10K gives you around 140MB/sec for sequential writes. 
So if you use a journal on an SSD, you expect at least 140MB/sec from it if you 
don’t want to slow things down. 
If you do so, 140*10 (disks) already fills your 10Gb bandwidth. So either you 
don’t need that many disks or you don’t need SSDs. 
It depends on the performance that you want to achieve. 
Another thing: I also won’t use the DC S3700, since this disk was definitely 
made for IOPS-intensive applications. The journal is purely sequential (small 
seq blocks; IIRC Stephan mentioned 370k blocks). 
I would instead use an SSD with large sequential capabilities, like the 525 
series 120GB. 

Cheers. 
–––– 
Sébastien Han 
Cloud Engineer 

"Always give 100%. Unless you're giving blood.” 

Phone: +33 (0)1 49 70 99 72 
Mail: sebastien....@enovance.com 
Address : 10, rue de la Victoire - 75009 Paris 
Web : www.enovance.com - Twitter : @enovance 

On 15 Jan 2014, at 12:47, Alexandre DERUMIER <aderum...@odiso.com> wrote: 

> Hello List, 
> 
> I'm going to build an RBD cluster this year, with 5 nodes. 
> 
> I would like to have this kind of configuration for each node: 
> 
> - 2U 
> - 2.5-inch drives 
> 
> OS: 2 x SAS drives 
> journal: 2 x SSD Intel DC S3700 100GB 
> OSD: 10 or 12 x SAS Seagate Savvio 10K.6 900GB 
> 
> 
> 
> I see on the mailing list that Inktank uses the Dell R515. 
> I currently own a lot of Dell servers and I get good prices. 
> 
> But I have also seen on the mailing list that the Dell PERC H700 can have some 
> performance problems, 
> and also that it's not easy to flash the firmware for JBOD mode. 
> http://www.spinics.net/lists/ceph-devel/msg16661.html 
> 
> I don't know whether these performance problems have finally been solved. 
> 
> 
> 
> Another option could be to use Supermicro servers; 
> they have some 2U, 16-disk chassis with one or two LSI JBOD controllers. 
> But I have had really bad experiences with Supermicro motherboards in the past 
> (mainly firmware bugs, IPMI card bugs, ...). 
> 
> Does someone have experience with Supermicro, and can you advise me on a good 
> motherboard model? 
> 
> 
> Best Regards, 
> 
> Alexandre Derumier 
> 
