What I can tell is:
      with 0.87, the OSDs are writing under 10 MB/s, but I/O utilization is about 95%
      with 0.80.6, the OSDs are writing about 20 MB/s, but I/O utilization is about 30%

iostat  -mx 2 with 0.87

Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
sdb 0.00 43.00 9.00 85.50 0.95 1.18 46.14 1.36 14.49 10.01 94.55
sdc 0.00 37.50 6.00 99.00 0.62 10.01 207.31 2.24 21.31 9.33 97.95
sda 0.00 3.50 0.00 1.00 0.00 0.02 36.00 0.02 17.50 17.50 1.75

avg-cpu: %user %nice %system %iowait %steal %idle
3.16 0.00 1.01 17.45 0.00 78.38

Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
sdb 0.00 36.50 0.00 47.50 0.00 1.09 47.07 0.82 17.17 16.71 79.35
sdc 0.00 25.00 15.00 77.50 1.26 0.65 42.34 1.73 18.72 10.70 99.00
sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
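
If it helps with the comparison, a one-liner like this (just a sketch; the awk field numbers assume the 12-column iostat -mx layout above and may differ with other sysstat versions) pulls out only the write throughput and utilization per disk:

iostat -mx 2 | awk '/^sd[a-z]/ {printf "%-4s  wMB/s=%-6s  %%util=%s\n", $1, $7, $12}'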

From: Haomai Wang<mailto:[email protected]>
Sent: 2014-10-31 09:40
To: 廖建锋<mailto:[email protected]>
Cc: ceph-users<mailto:[email protected]>;
ceph-users<mailto:[email protected]>
Subject: Re: [ceph-users] Re: half performance with keyvalue backend in 0.87

Yes, there was a persistence problem in 0.80.6, and we fixed it in Giant.
But Giant also applies other performance optimizations. Could
you tell us more about your tests?
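For example, it would help to see the keyvaluestore settings actually in effect on both versions; something like this (a sketch, assuming the default admin socket path for osd.0) dumps them:

ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep keyvaluestore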

On Fri, Oct 31, 2014 at 8:27 AM, 廖建锋 <[email protected]> wrote:
> Another problem I found is that the ceph OSD directory holds millions of small
> files, which will cause performance issues.
>
> 1008 => # pwd
> /var/lib/ceph/osd/ceph-8/current
>
> 1007 => # ls |wc -l
> 21451
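>
> In case it is useful, a rough check like this (just a sketch, reusing the OSD path above) shows how the entries are spread across the PG directories:
>
> for d in /var/lib/ceph/osd/ceph-8/current/*/; do
>     printf '%s %s\n' "$(find "$d" -type f | wc -l)" "$d"
> done | sort -rn | head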
>
> From: ceph-users
> Sent: 2014-10-31 08:23
> To: ceph-users
> Subject: [ceph-users] half performance with keyvalue backend in 0.87
> Dear Ceph,
>       I used keyvalue backend in 0.80.6 and 0.80.7, the average speed with
> rsync millions small files is 10M byte /second
> when i upgrade to 0.87(giant), the speed slow down to 5M byte /second,  I
> don't why , is there any tunning option for this?
> will superblock cause those performance slow down?
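>
> For reference, the backend is selected like this in my ceph.conf (a sketch from memory; the exact option names for the Giant-era KeyValueStore are an assumption worth double-checking against the running OSD's config):
>
> [osd]
> osd objectstore = keyvaluestore
> keyvaluestore backend = leveldb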
>
>
>
>
>



--
Best Regards,

Wheat
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
