Updated the issue with zipped copies of raw LTTng files. Thanks for
taking a look!
I will also look at fixing the linking issue on the librados/ceph-osd side
and send a PR up.
On 07/18, Jason Dillaman wrote:
Any chance you can zip up the raw LTTng-UST files and attach them to
the ticket? It appears that the rbd-replay-prep tool doesn't
translate discard events.
The change sounds good to me -- but it would also need to be made in
librados and ceph-osd since I'm sure they would have the same issue.
I was finally able to complete the trace. Along with enabling
*rbd_tracing = true* as you advised, I had to symlink *librbd_tp.so* to
point to *librbd_tp.so.1*. Since the SONAME of the library includes the
version number, I think we might need to update it in the place it is
referenced from.
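In case it helps anyone else, the symlink was along these lines (the
library directory will vary by distro; /usr/lib below is only an example):
# ln -s /usr/lib/librbd_tp.so.1 /usr/lib/librbd_tp.so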
I enabled rbd_tracing on the HV and restarted the guest to pick up the
new configuration. The change in value of *rbd_tracing* was confirmed
via the admin socket, but I am still unable to see any trace.
lsof -p does not show *librbd_tp.so* loaded despite multiple restarts;
only *librbd.so* is.
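For reference, the checks were roughly the following (the admin socket
path and the QEMU pid are placeholders for whatever your setup uses):
# ceph --admin-daemon /var/run/ceph/guests/client.asok config get rbd_tracing
# lsof -p <qemu-pid> | grep -e librbd -e lttng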
There appears to be a hole in the documentation. You now have to set
a configuration option to enable tracing:
rbd_tracing = true
This causes librbd.so to dynamically load the tracing module
librbd_tp.so (which has linkage to LTTng-UST).
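For example, in ceph.conf on the client side (the [client] section is
just the usual place; adjust to wherever your librbd client reads its
configuration):
[client]
    rbd_tracing = true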
On Fri, Jul 15, 2016 at 1:47 PM, Vaibhav Bhembre wrote:
I followed the steps mentioned in [1] but somehow I am unable to see any
traces to continue with its step 2. There are no errors seen when
performing operations mentioned in step 1. In my setup I am running
lttng commands on the HV where my VM has the RBD device attached.
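For completeness, the capture sequence from [1] boils down to something
like this (session name and workload are arbitrary):
# lttng create librbd-trace
# lttng enable-event -u 'librbd:*'
# lttng add-context -u -t pthread_id
# lttng start
... run the workload against the RBD-backed device in the guest ...
# lttng stop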
My lttng version is
I would probably be able to resolve the issue fairly quickly if you
could provide an RBD replay trace from both a slow and a fast mkfs.xfs
test run and attach it to the tracker ticket I just opened for this
issue [1]. You can follow the instructions here [2], but would only
need to complete the trace-capture steps.
We have been observing similar behavior. Usually it is the case where
we create a new rbd image, expose it to the guest, and perform an
operation that issues discards to the device.
A typical command that's first run on a given device is mkfs, usually with
discard on.
# time mkfs.xfs -s
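(For an A/B comparison, the same mkfs can also be timed with the
discard pass skipped via -K; the device name below is just a
placeholder, and -f is needed to re-run over an existing filesystem.)
# time mkfs.xfs /dev/vdb
# time mkfs.xfs -f -K /dev/vdb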
I'm not sure why I never received the original list email, so I
apologize for the delay. Is /dev/sda1, from your example, fresh with
no data to actually discard, or does it have lots of data to discard?
Thanks,
On Wed, Jun 22, 2016 at 1:56 PM, Brian Andrus wrote:
I've created a downstream bug for this same issue.
https://bugzilla.redhat.com/show_bug.cgi?id=1349116
On Wed, Jun 15, 2016 at 6:23 AM, wrote:
Hello guys,
We are currently testing Ceph Jewel with object-map feature enabled:
rbd image 'disk-22920':
        size 102400 MB in 25600 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.7cfa2238e1f29
        format: 2
        features: layering, exclusive-lock, object-map
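(The metadata above is the output of rbd info, e.g., assuming the image
lives in the default 'rbd' pool:)
# rbd info rbd/disk-22920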