On Fri, Dec 18, 2015 at 10:55 AM, Alex Gorbachev <a...@iss-integration.com> 
wrote:
> I hope this can help anyone who is running into the same issue as us -
> kernels 4.1.x appear to have terrible RBD sequential write performance.
> Kernels before and after are great.
>
> I tested with 4.1.6 and 4.1.15 on Ubuntu 14.04.3, ceph hammer 0.94.5 - a
> simple dd test yields this result:
>
> dd if=/dev/zero of=/dev/rbd0 bs=1M count=1000
> 1000+0 records in
> 1000+0 records out
> 1048576000 bytes (1.0 GB) copied, 46.3618 s, 22.6 MB/s
>
> On 3.19 and 4.2.8, quite another story:
>
> dd if=/dev/zero of=/dev/rbd0 bs=1M count=1000
> 1000+0 records in
> 1000+0 records out
> 1048576000 bytes (1.0 GB) copied, 2.18914 s, 479 MB/s
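
A quick aside on the test itself: dd writes to the block device go through
the page cache by default, so the absolute numbers can be flattered by
caching.  To take the cache out of the picture, the same test can be
repeated with direct I/O (illustrative only, same assumed device):

  dd if=/dev/zero of=/dev/rbd0 bs=1M count=1000 oflag=direct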

The slowdown on 4.1 is due to an old regression in blk-mq.  rbd was
switched to the blk-mq infrastructure in 4.0; the regression in the blk-mq
core was then fixed in 4.2 by commit e6c4438ba7cb ("blk-mq: fix plugging
in blk_sq_make_request").  The fix is outside of rbd and wasn't
backported, so we are kind of stuck with it.
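
If you want to check whether a particular kernel tree already carries the
fix, a rough way to do it (assuming a git checkout of the kernel sources
with the usual release tags; the tags below are just examples):

  # look for the fix by subject in the history of a given tag
  git log --oneline v4.1.15 -- block/blk-mq.c | grep -i "fix plugging"

  # or ask directly whether the commit is an ancestor of that tag
  git merge-base --is-ancestor e6c4438ba7cb v4.2 && echo "fix present"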

Thanks,

                Ilya