Cluster details:
ceph version 0.94
3 hosts, 3 mons, 18 OSDs
1 SSD as journal + 6 HDDs per host.

1 pool named rbd, pg_num = 1024, 3 replicas.
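
(For reference, a pool with these parameters could be created with the standard commands below; this is a sketch, not necessarily the exact commands used:)

ceph osd pool create rbd 1024 1024   # pool name, pg_num, pgp_num
ceph osd pool set rbd size 3         # 3 replicas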

Steps:
1.
rbd create test1 -s 81920   # -s is in MB, so three 80 GB images
rbd create test2 -s 81920
rbd create test3 -s 81920
2.
on host1, rbd map test1, which appears as /dev/rbd0 on kernel 3.18.11 or /dev/rbd1 on kernel 3.13.6
on host2, rbd map test2, likewise /dev/rbd0 on kernel 3.18.11 or /dev/rbd1 on kernel 3.13.6
on host3, rbd map test3, likewise /dev/rbd0 on kernel 3.18.11 or /dev/rbd1 on kernel 3.13.6
(a sketch for confirming the device names follows step 4)
3.
start IOMeter 1.1.0 with 24 workers per host (72 workers in total), 1 outstanding I/O, 5 s ramp-up time, 120 s run time (a rough fio equivalent is sketched after step 4)

order: 4KB seq read, 4KB seq write, 4KB rand read, 4KB rand write

4.
the IOPS numbers above are taken from IOMeter's results.csv
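
Since the device numbering differs between the two kernels (step 2), a quick way to confirm which /dev/rbdX each image got:

rbd showmapped   # lists id, pool, image, snap and device for each mapped image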
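
For a command-line cross-check of step 3, here is a rough equivalent with fio standing in for IOMeter; the flags below are my assumptions matched to those settings (24 jobs, queue depth 1, 5 s ramp-up, 120 s run time, direct I/O against the mapped device), and the write jobs will overwrite data on the test image:

# run the four 4KB patterns in the same order as step 3
for RW in read write randread randwrite; do
    fio --name="4k-$RW" --filename=/dev/rbd0 --rw="$RW" --bs=4k \
        --iodepth=1 --numjobs=24 --direct=1 --ioengine=libaio \
        --ramp_time=5 --runtime=120 --time_based --group_reporting
done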

Thank you!

       yangruifeng

-----Original Message-----
From: Ilya Dryomov [mailto:idryo...@gmail.com]
Sent: April 14, 2015 15:44
To: yangruifeng 09209 (RD)
Cc: ceph-users
Subject: Re: [ceph-users] rbd performance problem on kernel 3.13.6 and 3.18.11

On Tue, Apr 14, 2015 at 6:24 AM, yangruifeng.09...@h3c.com 
<yangruifeng.09...@h3c.com> wrote:
> Hi all!
>
> I am testing rbd performance with the kernel rbd driver. When I
> compared the results on kernel 3.13.6 with those on 3.18.11, I was
> very confused.
>
> Look at the results: IOPS drop to roughly a third or less on most tests.
>                   3.13.6 IOPS   3.18.11 IOPS
> 4KB seq read          97169         23714
> 4KB seq write         10110          3177
> 4KB rand read          7589          4565
> 4KB rand write        10497          2307

How exactly are you getting these numbers?  Which tools are you using, which 
tests are you running, in which order?

Thanks,

                Ilya
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
