On Wed, Feb 6, 2019 at 11:09 AM James Dingwall
<james.dingw...@zynstra.com> wrote:
> Hi,
> I have been doing some testing with striped rbd images and have a
> question about the calculation of the optimal_io_size and
> minimum_io_size parameters.  My test image was created using a 4M object
> size, stripe unit 64k and stripe count 16.
> In the kernel rbd_init_disk() code:
>
>         unsigned int objset_bytes =
>                 rbd_dev->layout.object_size * rbd_dev->layout.stripe_count;
>
>         blk_queue_io_min(q, objset_bytes);
>         blk_queue_io_opt(q, objset_bytes);
>
> This resulted in 64M minimum/optimal io sizes.  If I understand the
> meaning correctly, then even for a small write there is going to be at
> least 64M of data written?

No, these are just hints.  The exported values are pretty stupid even
in the default case, more so in the custom striping case, and should
be changed.  It's certainly not the case that any write is going to be
turned into an io_min- or io_opt-sized write.
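For reference, the 64M figure follows directly from the quoted kernel
code: io_min and io_opt are set to object_size * stripe_count, so with
the poster's parameters (4M objects, stripe count 16) the hint comes
out to 64M.  A quick sketch of the arithmetic:

```python
# Reproduce the objset_bytes value computed in rbd_init_disk()
# (names mirror the quoted kernel snippet; this is just arithmetic).
object_size = 4 * 1024 * 1024   # 4M object size
stripe_count = 16               # sc = 16

objset_bytes = object_size * stripe_count
print(objset_bytes // (1024 * 1024))  # 64 -> the 64M hint seen in sysfs
```

The kernel exposes these hints via sysfs as
/sys/block/rbdN/queue/minimum_io_size and optimal_io_size; they advise
callers about good I/O granularity but do not pad or split writes.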

> My use case is a ceph cluster (13.2.4) hosting rbd images for VMs
> running on Xen.  The rbd volumes are mapped to dom0 and then passed
> through to the guest using standard blkback/blkfront drivers.
> I am doing a bit of testing with different stripe unit sizes but keeping
> object size * count = 4M.  Does anyone have any experience finding
> optimal rbd parameters for this scenario?

I'd recommend focusing on client-side performance numbers for the
expected workload(s), not io_min/io_opt or the object size * count
target.  su = 64k and sc = 16 means that a 1M request will need
responses from up to 16 OSDs at once, which is probably not what you
want unless you have a small sequential write workload (where a custom
striping layout can prove very useful).
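To see why a 1M request fans out to 16 OSDs, consider how round-robin
striping distributes consecutive stripe units across the objects of an
object set.  A minimal sketch (assuming simple RADOS-style striping;
the function name and layout math here are illustrative, not the
actual kernel implementation):

```python
# Sketch of round-robin striping with su = 64k, sc = 16, 4M objects.
STRIPE_UNIT = 64 * 1024          # su = 64k
STRIPE_COUNT = 16                # sc = 16
OBJECT_SIZE = 4 * 1024 * 1024    # 4M objects

def object_index(offset):
    """Illustrative: which object in the image holds this byte offset."""
    stripe_unit_no = offset // STRIPE_UNIT        # global stripe unit number
    stripe_pos = stripe_unit_no % STRIPE_COUNT    # column within the stripe
    stripe_no = stripe_unit_no // STRIPE_COUNT    # which stripe (row)
    units_per_object = OBJECT_SIZE // STRIPE_UNIT
    object_set = stripe_no // units_per_object    # which group of sc objects
    return object_set * STRIPE_COUNT + stripe_pos

# A 1M request spans 16 consecutive 64k stripe units, each landing in a
# different object of the set -- hence up to 16 OSDs touched at once:
touched = {object_index(off) for off in range(0, 1024 * 1024, STRIPE_UNIT)}
print(len(touched))  # 16
```

With the default layout (su = object size, sc = 1) the same 1M request
would stay within a single 4M object, which is why a wide stripe count
mainly pays off for small sequential writes that benefit from being
spread across OSDs.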


ceph-users mailing list