Re: [ceph-users] Re: half performance with keyvalue backend in 0.87

2014-11-10 Thread 廖建锋
Haomai Wang,
   Do you have any progress on this performance issue?



From: Haomai Wang <haomaiw...@gmail.com>
Sent: 2014-10-31 10:05
To: 廖建锋 <de...@f-club.cn>
Cc: ceph-users <ceph-users-boun...@lists.ceph.com>;
ceph-users <ceph-users@lists.ceph.com>
Subject: Re: Re: [ceph-users] Re: half performance with keyvalue backend in 0.87

OK, I will explore it.

On Fri, Oct 31, 2014 at 10:03 AM, 廖建锋 de...@f-club.cn wrote:
 I am not sure if it is sequential or random; I just use rsync to copy
 millions of small picture files from our PC server to the Ceph cluster.

 From: Haomai Wang
 Sent: 2014-10-31 09:59
 To: 廖建锋
 Cc: ceph-users; ceph-users
 Subject: Re: Re: [ceph-users] Re: half performance with keyvalue backend in 0.87

 Thanks. Recently I have mainly been focusing on RBD performance for it
 (random small writes).

 I would like to know more about your test setup. Is it sequential write?

 On Fri, Oct 31, 2014 at 9:48 AM, 廖建锋 de...@f-club.cn wrote:
 What I can tell is:
   in 0.87, the OSDs write under 10 MB/s, but I/O utilization is about 95%
   in 0.80.6, the OSDs write about 20 MB/s, but I/O utilization is about 30%

 iostat  -mx 2 with 0.87

 Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm
 %util
 sdb 0.00 43.00 9.00 85.50 0.95 1.18 46.14 1.36 14.49 10.01 94.55
 sdc 0.00 37.50 6.00 99.00 0.62 10.01 207.31 2.24 21.31 9.33 97.95
 sda 0.00 3.50 0.00 1.00 0.00 0.02 36.00 0.02 17.50 17.50 1.75

 avg-cpu: %user %nice %system %iowait %steal %idle
 3.16 0.00 1.01 17.45 0.00 78.38

 Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm
 %util
 sdb 0.00 36.50 0.00 47.50 0.00 1.09 47.07 0.82 17.17 16.71 79.35
 sdc 0.00 25.00 15.00 77.50 1.26 0.65 42.34 1.73 18.72 10.70 99.00
 sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00

 From: Haomai Wang
 Sent: 2014-10-31 09:40
 To: 廖建锋
 Cc: ceph-users; ceph-users
 Subject: Re: [ceph-users] Re: half performance with keyvalue backend in 0.87

 Yes, there was a persistence problem in 0.80.6 and we fixed it in Giant.
 But in Giant, other performance optimizations have also been applied. Could
 you tell us more about your tests?

 On Fri, Oct 31, 2014 at 8:27 AM, 廖建锋 de...@f-club.cn wrote:
 I also found another problem: the Ceph OSD directory has millions of small
 files, which will cause performance issues.

 1008 = # pwd
 /var/lib/ceph/osd/ceph-8/current

 1007 = # ls |wc -l
 21451

 From: ceph-users
 Sent: 2014-10-31 08:23
 To: ceph-users
 Subject: [ceph-users] half performance with keyvalue backend in 0.87
 Dear Ceph,
   I used the keyvalue backend in 0.80.6 and 0.80.7; the average speed of
 rsyncing millions of small files is 10 MB/second.
 When I upgraded to 0.87 (Giant), the speed slowed down to 5 MB/second. I
 don't know why; is there any tuning option for this?
 Will the superblock cause this performance slowdown?




 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




 --
 Best Regards,

 Wheat



 --
 Best Regards,

 Wheat



--
Best Regards,

Wheat
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Re: half performance with keyvalue backend in 0.87

2014-11-10 Thread Haomai Wang
Yep, be patient. I need more time.

On Mon, Nov 10, 2014 at 9:33 AM, 廖建锋 de...@f-club.cn wrote:
 Haomai Wang,
    Do you have any progress on this performance issue?



 From: Haomai Wang
 Sent: 2014-10-31 10:05
 To: 廖建锋
 Cc: ceph-users; ceph-users
 Subject: Re: Re: [ceph-users] Re: half performance with keyvalue backend in 0.87

 OK, I will explore it.

 On Fri, Oct 31, 2014 at 10:03 AM, 廖建锋 de...@f-club.cn wrote:
 I am not sure if it is sequential or random; I just use rsync to copy
 millions of small picture files from our PC server to the Ceph cluster.

 From: Haomai Wang
 Sent: 2014-10-31 09:59
 To: 廖建锋
 Cc: ceph-users; ceph-users
 Subject: Re: Re: [ceph-users] Re: half performance with keyvalue backend in 0.87

 Thanks. Recently I have mainly been focusing on RBD performance for it
 (random small writes).

 I would like to know more about your test setup. Is it sequential write?

 On Fri, Oct 31, 2014 at 9:48 AM, 廖建锋 de...@f-club.cn wrote:
 What I can tell is:
   in 0.87, the OSDs write under 10 MB/s, but I/O utilization is about 95%
   in 0.80.6, the OSDs write about 20 MB/s, but I/O utilization is about 30%

 iostat  -mx 2 with 0.87

 Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm
 %util
 sdb 0.00 43.00 9.00 85.50 0.95 1.18 46.14 1.36 14.49 10.01 94.55
 sdc 0.00 37.50 6.00 99.00 0.62 10.01 207.31 2.24 21.31 9.33 97.95
 sda 0.00 3.50 0.00 1.00 0.00 0.02 36.00 0.02 17.50 17.50 1.75

 avg-cpu: %user %nice %system %iowait %steal %idle
 3.16 0.00 1.01 17.45 0.00 78.38

 Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm
 %util
 sdb 0.00 36.50 0.00 47.50 0.00 1.09 47.07 0.82 17.17 16.71 79.35
 sdc 0.00 25.00 15.00 77.50 1.26 0.65 42.34 1.73 18.72 10.70 99.00
 sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00

 From: Haomai Wang
 Sent: 2014-10-31 09:40
 To: 廖建锋
 Cc: ceph-users; ceph-users
 Subject: Re: [ceph-users] Re: half performance with keyvalue backend in 0.87

 Yes, there was a persistence problem in 0.80.6 and we fixed it in Giant.
 But in Giant, other performance optimizations have also been applied. Could
 you tell us more about your tests?

 On Fri, Oct 31, 2014 at 8:27 AM, 廖建锋 de...@f-club.cn wrote:
 I also found another problem: the Ceph OSD directory has millions of small
 files, which will cause performance issues.

 1008 = # pwd
 /var/lib/ceph/osd/ceph-8/current

 1007 = # ls |wc -l
 21451

 From: ceph-users
 Sent: 2014-10-31 08:23
 To: ceph-users
 Subject: [ceph-users] half performance with keyvalue backend in 0.87
 Dear Ceph,
   I used the keyvalue backend in 0.80.6 and 0.80.7; the average speed of
 rsyncing millions of small files is 10 MB/second.
 When I upgraded to 0.87 (Giant), the speed slowed down to 5 MB/second. I
 don't know why; is there any tuning option for this?
 Will the superblock cause this performance slowdown?




 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




 --
 Best Regards,

 Wheat



 --
 Best Regards,

 Wheat



 --
 Best Regards,

 Wheat



-- 
Best Regards,

Wheat
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Re: half performance with keyvalue backend in 0.87

2014-10-31 Thread 廖建锋
It looks like the write performance of the keyvalue backend is worse than the
filestore backend with version 0.87.
For my current cluster, the write speed is only 1.5 MB/s - 4.5 MB/s.


From: ceph-users <ceph-users-boun...@lists.ceph.com>
Sent: 2014-10-31 08:23
To: ceph-users <ceph-users@lists.ceph.com>
Subject: [ceph-users] half performance with keyvalue backend in 0.87
Dear Ceph,
  I used the keyvalue backend in 0.80.6 and 0.80.7; the average speed of
rsyncing millions of small files is 10 MB/second.
When I upgraded to 0.87 (Giant), the speed slowed down to 5 MB/second. I don't
know why; is there any tuning option for this?
Will the superblock cause this performance slowdown?



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Re: half performance with keyvalue backend in 0.87

2014-10-30 Thread 廖建锋
I also found another problem: the Ceph OSD directory has millions of small
files, which will cause performance issues.

1008 = # pwd
/var/lib/ceph/osd/ceph-8/current

1007 = # ls |wc -l
21451
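
For reference, "ls | wc -l" above only counts the top-level entries (mostly PG
directories); to count the object files themselves, something like the command
below should work -- the path is just my osd.8 mount point, adjust as needed:

# find /var/lib/ceph/osd/ceph-8/current -type f | wc -l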

From: ceph-users <ceph-users-boun...@lists.ceph.com>
Sent: 2014-10-31 08:23
To: ceph-users <ceph-users@lists.ceph.com>
Subject: [ceph-users] half performance with keyvalue backend in 0.87
Dear Ceph,
  I used the keyvalue backend in 0.80.6 and 0.80.7; the average speed of
rsyncing millions of small files is 10 MB/second.
When I upgraded to 0.87 (Giant), the speed slowed down to 5 MB/second. I don't
know why; is there any tuning option for this?
Will the superblock cause this performance slowdown?



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Re: half performance with keyvalue backend in 0.87

2014-10-30 Thread Haomai Wang
Yes, there was a persistence problem in 0.80.6 and we fixed it in Giant.
But in Giant, other performance optimizations have also been applied. Could
you tell us more about your tests?
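
For example, if you have a pool you can write test data into, a small-block
write benchmark directly against RADOS would help separate the backend from
the rsync/filesystem layer. Roughly (pool name and parameters are only
placeholders):

rados bench -p <test-pool> 60 write -b 4096 -t 16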

On Fri, Oct 31, 2014 at 8:27 AM, 廖建锋 de...@f-club.cn wrote:
 I also found another problem: the Ceph OSD directory has millions of small
 files, which will cause performance issues.

 1008 = # pwd
 /var/lib/ceph/osd/ceph-8/current

 1007 = # ls |wc -l
 21451

 From: ceph-users
 Sent: 2014-10-31 08:23
 To: ceph-users
 Subject: [ceph-users] half performance with keyvalue backend in 0.87
 Dear Ceph,
   I used the keyvalue backend in 0.80.6 and 0.80.7; the average speed of
 rsyncing millions of small files is 10 MB/second.
 When I upgraded to 0.87 (Giant), the speed slowed down to 5 MB/second. I
 don't know why; is there any tuning option for this?
 Will the superblock cause this performance slowdown?




 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




-- 
Best Regards,

Wheat
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Re: half performance with keyvalue backend in 0.87

2014-10-30 Thread 廖建锋
What I can tell is:
  in 0.87, the OSDs write under 10 MB/s, but I/O utilization is about 95%
  in 0.80.6, the OSDs write about 20 MB/s, but I/O utilization is about 30%

iostat  -mx 2 with 0.87

Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
sdb 0.00 43.00 9.00 85.50 0.95 1.18 46.14 1.36 14.49 10.01 94.55
sdc 0.00 37.50 6.00 99.00 0.62 10.01 207.31 2.24 21.31 9.33 97.95
sda 0.00 3.50 0.00 1.00 0.00 0.02 36.00 0.02 17.50 17.50 1.75

avg-cpu: %user %nice %system %iowait %steal %idle
3.16 0.00 1.01 17.45 0.00 78.38

Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
sdb 0.00 36.50 0.00 47.50 0.00 1.09 47.07 0.82 17.17 16.71 79.35
sdc 0.00 25.00 15.00 77.50 1.26 0.65 42.34 1.73 18.72 10.70 99.00
sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
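
If it helps, I can also dump the OSD internal counters through the admin
socket, e.g. for osd.8 (assuming the default admin socket setup):

ceph daemon osd.8 perf dump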

From: Haomai Wang <haomaiw...@gmail.com>
Sent: 2014-10-31 09:40
To: 廖建锋 <de...@f-club.cn>
Cc: ceph-users <ceph-users-boun...@lists.ceph.com>;
ceph-users <ceph-users@lists.ceph.com>
Subject: Re: [ceph-users] Re: half performance with keyvalue backend in 0.87

Yes, there was a persistence problem in 0.80.6 and we fixed it in Giant.
But in Giant, other performance optimizations have also been applied. Could
you tell us more about your tests?

On Fri, Oct 31, 2014 at 8:27 AM, 廖建锋 de...@f-club.cn wrote:
 I also found another problem: the Ceph OSD directory has millions of small
 files, which will cause performance issues.

 1008 = # pwd
 /var/lib/ceph/osd/ceph-8/current

 1007 = # ls |wc -l
 21451

 From: ceph-users
 Sent: 2014-10-31 08:23
 To: ceph-users
 Subject: [ceph-users] half performance with keyvalue backend in 0.87
 Dear Ceph,
   I used the keyvalue backend in 0.80.6 and 0.80.7; the average speed of
 rsyncing millions of small files is 10 MB/second.
 When I upgraded to 0.87 (Giant), the speed slowed down to 5 MB/second. I
 don't know why; is there any tuning option for this?
 Will the superblock cause this performance slowdown?




 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




--
Best Regards,

Wheat
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Re: half performance with keyvalue backend in 0.87

2014-10-30 Thread Haomai Wang
Thanks. Recently I have mainly been focusing on RBD performance for it (random
small writes).

I would like to know more about your test setup. Is it sequential write?
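
If you are not sure, one way to check is to generate a comparable small-write
load yourself, e.g. with fio (a rough sketch, assuming fio is installed and
/mnt/test sits on the cluster; adjust paths and sizes):

fio --name=smallwrite --directory=/mnt/test --rw=randwrite --bs=4k \
    --size=256m --numjobs=4 --direct=1 --group_reporting

and compare it against --rw=write for the sequential case.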

On Fri, Oct 31, 2014 at 9:48 AM, 廖建锋 de...@f-club.cn wrote:
 What I can tell is:
   in 0.87, the OSDs write under 10 MB/s, but I/O utilization is about 95%
   in 0.80.6, the OSDs write about 20 MB/s, but I/O utilization is about 30%

 iostat  -mx 2 with 0.87

 Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm
 %util
 sdb 0.00 43.00 9.00 85.50 0.95 1.18 46.14 1.36 14.49 10.01 94.55
 sdc 0.00 37.50 6.00 99.00 0.62 10.01 207.31 2.24 21.31 9.33 97.95
 sda 0.00 3.50 0.00 1.00 0.00 0.02 36.00 0.02 17.50 17.50 1.75

 avg-cpu: %user %nice %system %iowait %steal %idle
 3.16 0.00 1.01 17.45 0.00 78.38

 Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm
 %util
 sdb 0.00 36.50 0.00 47.50 0.00 1.09 47.07 0.82 17.17 16.71 79.35
 sdc 0.00 25.00 15.00 77.50 1.26 0.65 42.34 1.73 18.72 10.70 99.00
 sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00

 From: Haomai Wang
 Sent: 2014-10-31 09:40
 To: 廖建锋
 Cc: ceph-users; ceph-users
 Subject: Re: [ceph-users] Re: half performance with keyvalue backend in 0.87

 Yes, there was a persistence problem in 0.80.6 and we fixed it in Giant.
 But in Giant, other performance optimizations have also been applied. Could
 you tell us more about your tests?

 On Fri, Oct 31, 2014 at 8:27 AM, 廖建锋 de...@f-club.cn wrote:
 I also found another problem: the Ceph OSD directory has millions of small
 files, which will cause performance issues.

 1008 = # pwd
 /var/lib/ceph/osd/ceph-8/current

 1007 = # ls |wc -l
 21451

 From: ceph-users
 Sent: 2014-10-31 08:23
 To: ceph-users
 Subject: [ceph-users] half performance with keyvalue backend in 0.87
 Dear Ceph,
   I used the keyvalue backend in 0.80.6 and 0.80.7; the average speed of
 rsyncing millions of small files is 10 MB/second.
 When I upgraded to 0.87 (Giant), the speed slowed down to 5 MB/second. I
 don't know why; is there any tuning option for this?
 Will the superblock cause this performance slowdown?




 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




 --
 Best Regards,

 Wheat



-- 
Best Regards,

Wheat
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Re: half performance with keyvalue backend in 0.87

2014-10-30 Thread 廖建锋
I am not sure if it is sequential or random; I just use rsync to copy millions
of small picture files from our PC server to the Ceph cluster.
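
The copy itself is plain rsync, roughly like this (paths are only examples):

rsync -a /data/pics/ root@ceph-client:/mnt/ceph/pics/

so the access pattern is whatever rsync produces for lots of small files.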

From: Haomai Wang <haomaiw...@gmail.com>
Sent: 2014-10-31 09:59
To: 廖建锋 <de...@f-club.cn>
Cc: ceph-users <ceph-users-boun...@lists.ceph.com>;
ceph-users <ceph-users@lists.ceph.com>
Subject: Re: Re: [ceph-users] Re: half performance with keyvalue backend in 0.87

Thanks. Recently I have mainly been focusing on RBD performance for it (random
small writes).

I would like to know more about your test setup. Is it sequential write?

On Fri, Oct 31, 2014 at 9:48 AM, 廖建锋 de...@f-club.cn wrote:
 What I can tell is:
   in 0.87, the OSDs write under 10 MB/s, but I/O utilization is about 95%
   in 0.80.6, the OSDs write about 20 MB/s, but I/O utilization is about 30%

 iostat  -mx 2 with 0.87

 Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm
 %util
 sdb 0.00 43.00 9.00 85.50 0.95 1.18 46.14 1.36 14.49 10.01 94.55
 sdc 0.00 37.50 6.00 99.00 0.62 10.01 207.31 2.24 21.31 9.33 97.95
 sda 0.00 3.50 0.00 1.00 0.00 0.02 36.00 0.02 17.50 17.50 1.75

 avg-cpu: %user %nice %system %iowait %steal %idle
 3.16 0.00 1.01 17.45 0.00 78.38

 Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm
 %util
 sdb 0.00 36.50 0.00 47.50 0.00 1.09 47.07 0.82 17.17 16.71 79.35
 sdc 0.00 25.00 15.00 77.50 1.26 0.65 42.34 1.73 18.72 10.70 99.00
 sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00

 From: Haomai Wang
 Sent: 2014-10-31 09:40
 To: 廖建锋
 Cc: ceph-users; ceph-users
 Subject: Re: [ceph-users] Re: half performance with keyvalue backend in 0.87

 Yes, there was a persistence problem in 0.80.6 and we fixed it in Giant.
 But in Giant, other performance optimizations have also been applied. Could
 you tell us more about your tests?

 On Fri, Oct 31, 2014 at 8:27 AM, 廖建锋 de...@f-club.cn wrote:
 I also found another problem: the Ceph OSD directory has millions of small
 files, which will cause performance issues.

 1008 = # pwd
 /var/lib/ceph/osd/ceph-8/current

 1007 = # ls |wc -l
 21451

 From: ceph-users
 Sent: 2014-10-31 08:23
 To: ceph-users
 Subject: [ceph-users] half performance with keyvalue backend in 0.87
 Dear Ceph,
   I used the keyvalue backend in 0.80.6 and 0.80.7; the average speed of
 rsyncing millions of small files is 10 MB/second.
 When I upgraded to 0.87 (Giant), the speed slowed down to 5 MB/second. I
 don't know why; is there any tuning option for this?
 Will the superblock cause this performance slowdown?




 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




 --
 Best Regards,

 Wheat



--
Best Regards,

Wheat
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Re: half performance with keyvalue backend in 0.87

2014-10-30 Thread Haomai Wang
OK, I will explore it.

On Fri, Oct 31, 2014 at 10:03 AM, 廖建锋 de...@f-club.cn wrote:
 I am not sure if it is sequential or random; I just use rsync to copy
 millions of small picture files from our PC server to the Ceph cluster.

 From: Haomai Wang
 Sent: 2014-10-31 09:59
 To: 廖建锋
 Cc: ceph-users; ceph-users
 Subject: Re: Re: [ceph-users] Re: half performance with keyvalue backend in 0.87

 Thanks. Recently I have mainly been focusing on RBD performance for it
 (random small writes).

 I would like to know more about your test setup. Is it sequential write?

 On Fri, Oct 31, 2014 at 9:48 AM, 廖建锋 de...@f-club.cn wrote:
 What I can tell is:
   in 0.87, the OSDs write under 10 MB/s, but I/O utilization is about 95%
   in 0.80.6, the OSDs write about 20 MB/s, but I/O utilization is about 30%

 iostat  -mx 2 with 0.87

 Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm
 %util
 sdb 0.00 43.00 9.00 85.50 0.95 1.18 46.14 1.36 14.49 10.01 94.55
 sdc 0.00 37.50 6.00 99.00 0.62 10.01 207.31 2.24 21.31 9.33 97.95
 sda 0.00 3.50 0.00 1.00 0.00 0.02 36.00 0.02 17.50 17.50 1.75

 avg-cpu: %user %nice %system %iowait %steal %idle
 3.16 0.00 1.01 17.45 0.00 78.38

 Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm
 %util
 sdb 0.00 36.50 0.00 47.50 0.00 1.09 47.07 0.82 17.17 16.71 79.35
 sdc 0.00 25.00 15.00 77.50 1.26 0.65 42.34 1.73 18.72 10.70 99.00
 sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00

 From: Haomai Wang
 Sent: 2014-10-31 09:40
 To: 廖建锋
 Cc: ceph-users; ceph-users
 Subject: Re: [ceph-users] Re: half performance with keyvalue backend in 0.87

 Yes, there was a persistence problem in 0.80.6 and we fixed it in Giant.
 But in Giant, other performance optimizations have also been applied. Could
 you tell us more about your tests?

 On Fri, Oct 31, 2014 at 8:27 AM, 廖建锋 de...@f-club.cn wrote:
 I also found another problem: the Ceph OSD directory has millions of small
 files, which will cause performance issues.

 1008 = # pwd
 /var/lib/ceph/osd/ceph-8/current

 1007 = # ls |wc -l
 21451

 From: ceph-users
 Sent: 2014-10-31 08:23
 To: ceph-users
 Subject: [ceph-users] half performance with keyvalue backend in 0.87
 Dear Ceph,
   I used the keyvalue backend in 0.80.6 and 0.80.7; the average speed of
 rsyncing millions of small files is 10 MB/second.
 When I upgraded to 0.87 (Giant), the speed slowed down to 5 MB/second. I
 don't know why; is there any tuning option for this?
 Will the superblock cause this performance slowdown?




 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




 --
 Best Regards,

 Wheat



 --
 Best Regards,

 Wheat



-- 
Best Regards,

Wheat
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com