Hi all

The issue was resolved after upgrading Ceph from Giant to Hammer (0.94.1).

cheers
K.Mohamed Pakkeer

On Sun, Apr 26, 2015 at 11:28 AM, Mohamed Pakkeer <[email protected]>
wrote:

> Hi
>
>  I was doing some testing on an erasure-coded CephFS cluster. The cluster
> is running the Giant 0.87.1 release.
>
>
>
> Cluster info
>
> 15 * 36-drive nodes (journals on the same OSDs)
>
> 3 * 4-drive SSD cache nodes (Intel DC3500)
>
> 3 * MON/MDS
>
> EC 10+3
>
> 10G Ethernet for the public and cluster networks
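For context, here is a quick back-of-envelope sketch of what that layout provides. The per-drive capacity is an assumption (the post does not state it); everything else follows from the 15 * 36-drive nodes and the EC 10+3 profile:

```python
# Back-of-envelope capacity math for the cluster described above.
# ASSUMPTION: 4 TB per spinning drive (drive size is not given in the post).
K, M = 10, 3                     # EC profile: 10 data chunks + 3 coding chunks
nodes, drives_per_node = 15, 36
drive_tb = 4.0                   # hypothetical drive size

raw_tb = nodes * drives_per_node * drive_tb
efficiency = K / (K + M)         # fraction of raw space that holds data
usable_tb = raw_tb * efficiency

print(f"OSDs: {nodes * drives_per_node}")
print(f"Raw: {raw_tb:.0f} TB; usable with EC {K}+{M}: {usable_tb:.0f} TB "
      f"({efficiency:.1%} efficiency, tolerates {M} lost chunks per object)")
```

With 4 TB drives this works out to 540 OSDs and roughly 77% of raw capacity usable, versus 33% for 3-way replication, which is the usual motivation for an EC cold tier behind a replicated cache tier.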
>
>
>
> We got approx. 55 MB/s read throughput with the ceph-fuse client when the
> data was available on the cache tier (cold storage was empty). When I tried
> to add more data, Ceph started flushing data from the cache tier to cold
> storage. During flushing, cluster read speed dropped to approx. 100 KB/s,
> yet I still got 50-55 MB/s write throughput during flushing from multiple
> simultaneous ceph-fuse clients (1G Ethernet). I think there is an issue
> with promoting data from cold storage to the cache tier during ceph-fuse
> client reads. Am I hitting a known issue/bug, or is there a problem with
> my cluster?
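One knob worth checking in this situation is when the cache tier starts flushing. Ceph's cache-tier pool settings `target_max_bytes`, `cache_target_dirty_ratio`, and `cache_target_full_ratio` control when write-back and eviction kick in; the sketch below shows the threshold arithmetic with purely hypothetical values (none of these numbers come from the cluster above):

```python
# Sketch of when a Ceph cache-tier pool starts flushing/evicting,
# under the standard threshold semantics.
# ASSUMPTION: all values below are illustrative, not from the real cluster.
target_max_bytes = 1 * 1024**4           # e.g. cache pool capped at 1 TiB
cache_target_dirty_ratio = 0.4           # flush dirty objects past 40% full
cache_target_full_ratio = 0.8            # evict clean objects past 80% full

flush_at = target_max_bytes * cache_target_dirty_ratio
evict_at = target_max_bytes * cache_target_full_ratio

print(f"flushing starts near {flush_at / 1024**3:.0f} GiB of dirty data")
print(f"eviction starts near {evict_at / 1024**3:.0f} GiB used")
# Lowering cache_target_dirty_ratio starts flushing earlier and spreads
# the write-back load over time, which may soften the read stall seen
# when a large flush competes with promotions.
```

These are set per pool, e.g. `ceph osd pool set <cachepool> cache_target_dirty_ratio 0.4` (pool name hypothetical). Whether earlier flushing actually fixes the 100 KB/s reads would need testing on the cluster itself.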
>
>
>
> I used big video files (approx. 5 GB to 10 GB each) for this testing.
>
>
>
> Any help would be appreciated.
>
> Cheers
> K.Mohamed Pakkeer
>
>
>


-- 
Thanks & Regards
K.Mohamed Pakkeer
Mobile- 0091-8754410114
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
