> Is the object-map feature specific to the newer librbd?
>
> Thanks again,
> Brendan
>
>
> From: Jason Dillaman [jdill...@redhat.com]
> Sent: Monday, November 27, 2017 5:49 PM
> To: Brendan Moloney
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] I/O stalls when doing fstrim on large RBD
My only possible suggestion would be to try a Luminous librbd client
and enable the object-map feature on the image. When the object map is
enabled, librbd will optimize away all the no-op discard requests. If
that doesn't work, it could be an issue in the OS / controller
interaction.
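For anyone who wants to try this, enabling the feature on an existing image is roughly the following (pool and image names are placeholders; object-map depends on exclusive-lock):

    rbd feature enable rbd-pool/vm-disk exclusive-lock
    rbd feature enable rbd-pool/vm-disk object-map fast-diff
    # populate the map for an image that already has data
    rbd object-map rebuild rbd-pool/vm-disk

After the rebuild, discards against objects the map knows are unallocated can be skipped in the client instead of being sent to the OSDs.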
On Mon, Nov 27, 2017, Brendan Moloney wrote:
Hi,
Anyone have input on this? I am surprised there are not more people running
into this issue. I guess most people don't have multi-TB RBD images? I think
ext4 might also fare better, since it keeps track of blocks that have been
discarded in the past and not modified since, so that they are not discarded again.
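As a rough illustration of that difference (mount point is a placeholder), back-to-back runs on ext4 tend to look like this, whereas XFS re-issues discards for all free space on every run:

    $ fstrim -v /mnt/ext4vol   # first run: reports the full free space trimmed
    $ fstrim -v /mnt/ext4vol   # second run: near 0 bytes, already-trimmed block groups are skipped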
From: Jason Dillaman [jdill...@redhat.com]
Sent: Saturday, November 18, 2017 5:08 AM
To: Brendan Moloney
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] I/O stalls when doing fstrim on large RBD
Can you capture a blktrace while performing fstrim to record the discard
operations? A 1TB trim extent would cause a huge impact since it would
translate to approximately 262K IO requests to the OSDs (assuming 4MB
backing files).
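A minimal capture along those lines (device name is a placeholder) could be:

    # trace only discard requests on the RBD-backed device while fstrim runs
    blktrace -d /dev/sdb -a discard -o - | blkparse -i -

For the arithmetic: 1 TiB / 4 MiB = 1,048,576 MiB / 4 MiB = 262,144, hence the ~262K object requests for a single 1TB discard extent.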
On Fri, Nov 17, 2017 at 6:19 PM, Brendan Moloney wrote:
Hi,
I guess this isn't strictly about Ceph, but I feel like other folks here must
have run into the same issues.
I am trying to keep my thinly provisioned RBD volumes thin. I use virtio-scsi
to attach the RBD volumes to my VMs with the "discard=unmap" option. The RBD is
formatted as XFS and
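For context, the attachment described above corresponds to a libvirt config roughly like this (pool, image, monitor host, and device names are placeholders):

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' discard='unmap'/>
      <source protocol='rbd' name='rbd-pool/vm-disk'>
        <host name='mon1.example.com' port='6789'/>
      </source>
      <target dev='sda' bus='scsi'/>
    </disk>
    <controller type='scsi' index='0' model='virtio-scsi'/>

With discard='unmap' on a virtio-scsi disk, UNMAP commands issued by the guest's fstrim are passed through to librbd as discards.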