Hi Erich,
I'm also searching for improvements.
You should use the "right" mount options to prevent fragmentation (for XFS):

[osd]
# mount options aimed at reducing XFS fragmentation with large files
osd mount options xfs = "rw,noatime,inode64,logbsize=256k,delaylog,allocsize=4M"
# more worker threads per OSD daemon
osd_op_threads = 4
osd_disk_threads = 4
With 45 OSDs per node you need a powerful system... AFAIK around 12 OSDs per
node is recommended.



You should also think about what happens if one node dies... I use a
monitoring script which does a "ceph osd set noout" if more than N OSDs are
down. Then I can decide whether it is faster to bring the failed node back or
to let the cluster rebuild (normally the former); see the sketch below.
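For illustration, here is a minimal sketch of such a guard script (Python,
run from cron on a host that has the ceph CLI and an admin keyring).
MAX_DOWN, the cron interval and the exact JSON layout of "ceph osd stat" are
assumptions; the field names differ a bit between Ceph releases, so check
them against your version:

#!/usr/bin/env python
# Minimal "noout" guard (sketch): if more than MAX_DOWN OSDs are down,
# set the noout flag so Ceph does not start backfilling while the
# failed node is being repaired.
import json
import subprocess

MAX_DOWN = 10  # assumption: a bit less than one node's worth of OSDs

out = subprocess.check_output(["ceph", "osd", "stat", "-f", "json"])
stat = json.loads(out)

# Some releases nest the summary under an "osdmap" key (sometimes twice),
# others put num_osds / num_up_osds at the top level.
summary = stat
while isinstance(summary.get("osdmap"), dict):
    summary = summary["osdmap"]

num_osds = summary["num_osds"]
num_up = summary["num_up_osds"]

if num_osds - num_up > MAX_DOWN:
    subprocess.check_call(["ceph", "osd", "set", "noout"])
    print("noout set: %d of %d OSDs down" % (num_osds - num_up, num_osds))

Once the node and its OSDs are back up, clear the flag again with
"ceph osd unset noout" so normal recovery behaviour returns.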

Udo

On 27.06.2014 20:00, Erich Weiler wrote:
> Hi Folks,
>
> We're going to spin up a ceph cluster with the following general specs:
>
> * Six 10Gb/s connected servers, each with 45 4TB disks in a JBOD
>
> * Each disk is an OSD, so 45 OSDs per server
>
> * So 45*6 = 270 OSDs total
>
> * Three separate, dedicated monitor nodes
>
> The files stored on this storage cluster will be large files; each file
> will be several GB in size at the minimum, with some files being over
> 100GB.
>
> Generically, are there any tuning parameters out there that would be
> good to drop in for this hardware profile and file size?
>
> We plan on growing this filesystem as we go, to 10 servers, then 15,
> then 20, etc.
>
> Thanks a bunch for any hints!!
>
> cheers,
> erich

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
