Thanks Zhong.
We got 5 servers for testing; two are already configured as OSD nodes,
and as per the storage requirement we need at least 5 OSD nodes. Let me
try to get more servers to try a cache tier, but I am not hopeful :(

Will try bcache and see how it improves performance. Thanks for your
suggestion.
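
For the record, here is the rough bcache setup I plan to try on each OSD
node. This is only a sketch under assumptions: /dev/sdb as the SSD cache
device and /dev/sdc as the backing SATA disk are placeholders for our
actual layout, and <cset-uuid> has to be filled in by hand.

    # format the backing (SATA) and cache (SSD) devices
    make-bcache -B /dev/sdc
    make-bcache -C /dev/sdb

    # find the cache set UUID and attach it to the backing device
    bcache-super-show /dev/sdb | grep cset.uuid
    echo <cset-uuid> > /sys/block/bcache0/bcache/attach

    # use writeback mode so small writes land on the SSD first
    echo writeback > /sys/block/bcache0/bcache/cache_mode

The OSD filesystem would then be created on /dev/bcache0 instead of the
raw disk.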

Regards,
Kevin

On Fri, Jan 6, 2017 at 8:56 AM, jiajia zhong <[email protected]> wrote:

>
>
> 2017-01-06 11:10 GMT+08:00 kevin parrikar <[email protected]>:
>
>> Hello All,
>>
>> I have set up a Ceph cluster based on the 0.94.6 release on 2 servers,
>> each with an 80GB Intel S3510 and 2x3TB 7.2k SATA disks, 16 CPUs, and
>> 24GB RAM, connected to a 10G switch with a replica count of 2 [I will
>> add 3 more servers to the cluster], plus 3 separate monitor nodes which
>> are VMs.
>>
>> rbd_cache is enabled in the configuration; XFS filesystem; LSI 92465-4i
>> RAID card with 512MB cache [the SSD is in writeback mode with BBU].
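>>
>> For reference, the relevant client-side settings in our ceph.conf look
>> roughly like the sketch below; the values shown are the Hammer defaults,
>> not tuned recommendations:
>>
>>     [client]
>>     rbd cache = true
>>     rbd cache size = 33554432                  # 32MB per-client cache
>>     rbd cache max dirty = 25165824             # 24MB dirty limit
>>     rbd cache writethrough until flush = true  # safe until first flush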
>>
>>
>> Before installing Ceph, I tried to check the max throughput of the Intel
>> S3500 80GB SSD using a block size of 4M [I read somewhere that Ceph uses
>> 4M objects], and it was giving 220MB/s {dd if=/dev/zero of=/dev/sdb bs=4M
>> count=1000 oflag=direct}.
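>>
>> As an aside, a commonly suggested variant for testing a journal device
>> also adds dsync, since the Ceph journal issues synchronous writes; it
>> usually reports a lower but more realistic figure:
>>
>>     dd if=/dev/zero of=/dev/sdb bs=4M count=1000 oflag=direct,dsync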
>>
>> *Observation:*
>> Now the cluster is up and running, and from the VM I am trying to write
>> a 4GB file to its volume using dd if=/dev/zero of=/dev/sdb bs=4M
>> count=1000 oflag=direct. It takes around 39 seconds to write.
>>
>> During this time the SSD journal was showing a disk write rate of
>> 104MB/s on both Ceph servers (dstat sdb), and the compute node showed a
>> network transfer rate of ~110MB/s on its 10G storage interface
>> (dstat -nN eth2).
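>>
>> The arithmetic seems to line up: 1000 x 4MB = ~4096MB written in 39s,
>> i.e. 4096 / 39 ≈ 105MB/s, which matches both the ~104MB/s journal rate
>> (with replica 2, each server's journal sees the full stream) and the
>> ~110MB/s on the client's network interface.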
>>
>>
>> My questions are:
>>
>>
>>    - Is this the best throughput Ceph can offer, or can anything in my
>>    environment be optimised to get more performance? [iperf shows a max
>>    throughput of 9.8Gbit/s]
>>
>>
>>
>>    - I guess the network/SSD is underutilized and can handle more
>>    writes; how can this be improved to send more data over the network
>>    to the SSD?
>>
> Cache tiering? http://docs.ceph.com/docs/hammer/rados/operations/cache-tiering/
> Or try bcache in the kernel.
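>
> A minimal sketch of what a writeback cache tier looks like, assuming a
> base pool named "rbd" and an SSD-backed pool named "ssd-cache" (both
> names are placeholders, and the cache pool also needs a CRUSH rule that
> places it on SSD OSDs, plus proper sizing of target_max_bytes):
>
>     ceph osd pool create ssd-cache 128 128
>     ceph osd tier add rbd ssd-cache
>     ceph osd tier cache-mode ssd-cache writeback
>     ceph osd tier set-overlay rbd ssd-cache
>     ceph osd pool set ssd-cache hit_set_type bloom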
>
>>
>>    - The rbd kernel module wasn't loaded on the compute node; I loaded
>>    it manually using "modprobe" and later destroyed/re-created the VMs,
>>    but this does not give any performance boost. So librbd and kernel
>>    RBD are equally fast?
>>
>>
>>
>>    - A Samsung 840 EVO 512GB shows a throughput of 500MB/s for 4M writes
>>    [dd if=/dev/zero of=/dev/sdb bs=4M count=1000 oflag=direct], and for
>>    4KB writes it was equally fast as the Intel S3500 80GB. Would
>>    changing my SSD from the Intel S3500 to the Samsung 840 EVO make any
>>    performance difference here, given that the 840 EVO is faster only
>>    for 4M writes? Can Ceph utilize this extra speed?
>>
>>
>> Can somebody help me understand this better?
>>
>> Regards,
>> Kevin
>>
>>
>
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
