Re: [lttng-dev] LTTng UST Benchmarks

2024-04-25 Thread Kienan Stewart via lttng-dev

Hi Aditya,

It has been suggested to me that the following publication[1] would also 
be of interest. It gives a good micro-benchmark comparison of tracers.


[1]: https://dl.acm.org/doi/10.1145/3158644

thanks,
kienan

On 4/25/24 1:53 PM, Kienan Stewart via lttng-dev wrote:

Hi Aditya,

On 4/24/24 11:25 AM, Aditya Kurdunkar via lttng-dev wrote:
Hello everyone, I am working on enabling LTTng on an embedded ARM 
device running the OpenBMC Linux distribution. I have enabled the LTTng 
Yocto recipe and I am able to trace my code. The one thing I am 
concerned about is the performance overhead. Although the documentation 
mentions that LTTng has the lowest overhead amongst the available 
solutions, I am concerned about the overhead of LTTng-UST in comparison 
to other available tracers/profilers. I have used the benchmarking 
setup from lttng-ust/tests/benchmark 
<https://github.com/lttng/lttng-ust/tree/master/tests/benchmark> to 
benchmark the overhead of the tracepoints (on the device). The 
benchmark, please correct me if I am wrong, gives the overhead of a 
single tracepoint in your code.


This seems to be what it does.
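For intuition, the differencing idea behind such a micro-benchmark can 
be sketched in a few lines. This is only an illustration (in Python for 
brevity, whereas the real benchmark is C); `traced_op` below is a 
hypothetical stand-in for a call to an actual tracepoint, not LTTng-UST 
API:

```python
import time

def timed_loop(op, iterations=1_000_000):
    """Time a tight loop calling `op` once per iteration, in nanoseconds."""
    start = time.perf_counter_ns()
    for _ in range(iterations):
        op()
    return time.perf_counter_ns() - start

def noop():
    pass

def traced_op():
    # Hypothetical stand-in: in a real C benchmark this would be the
    # LTTng-UST tracepoint() macro for the event under test.
    pass

iterations = 1_000_000
baseline = timed_loop(noop, iterations)
traced = timed_loop(traced_op, iterations)

# Rough per-call overhead estimate; noisy unless repeated many times.
overhead_ns = (traced - baseline) / iterations
print(f"estimated overhead: {overhead_ns:.1f} ns/call")
```

On a single run the difference can even come out negative due to noise, 
which is why the benchmark repeats the loop many times.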

Although this might be fine for now, I
was just wondering if there are any published benchmarks comparing 
LTTng with the available tracing/profiling solutions. 


I don't know of any published ones that do an exhaustive comparison.

There is this one[1] which references a comparison with some parts of 
eBPF. The source for the benchmarking is also available[2].


If not, how can I go about benchmarking the overhead of the applications?



I'm not really sure how to answer you here.

I guess the most pertinent approach for your use case is to test your 
application with and without tracing to see the complete effect.


It would be good to use a dedicated system, disable CPU frequency 
scaling, and perform the tests repeatedly, measuring the mean, median, 
and standard deviation.
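As a small sketch of that measurement loop (hypothetical: `run_once` is 
a placeholder workload; in practice it would launch your application 
with and without tracing enabled):

```python
import statistics
import time

def run_once():
    # Placeholder workload standing in for one run of your application,
    # traced or untraced.
    t0 = time.perf_counter()
    sum(range(10_000))
    return time.perf_counter() - t0

# Repeat the measurement and summarize the distribution, not just one run.
samples = [run_once() for _ in range(30)]
print(f"mean:   {statistics.mean(samples):.6f} s")
print(f"median: {statistics.median(samples):.6f} s")
print(f"stdev:  {statistics.stdev(samples):.6f} s")
```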


You could pull methodological inspiration from prior publications[3] 
which, while outdated in terms of software versions and hardware, 
demonstrate the process of creating and comparing benchmarks.


It would also be useful to identify how your application and tracing 
setup works, and to understand which parts of the system you are 
interested in measuring.


For example, the startup time when tracing rapidly spawning processes 
will depend on the type of buffering scheme in use, whether the tracing 
infrastructure is loaded before or after forking, etc.


Your case might be a long-running application where you aren't 
interested in startup-time performance, but rather in the impact of the 
static instrumentation on one of your hot paths.


If you're not sure what kind of tracing setup works best in your case, 
or would like us to characterize a certain aspect of the tool-set's 
performance, EfficiOS[4] offers consulting and support for 
instrumentation and performance in applications.


I have come across the lttng/lttng-ust-benchmarks (github.com) 
<https://github.com/lttng/lttng-ust-benchmarks> repository which has 
no documentation on how to run it, apart from one commit message on 
how to run the benchmark script.




To run those benchmarks when you have babeltrace2, lttng-tools, urcu, 
lttng-ust, and optionally lttng-modules installed:


```
$ make
$ python3 ./benchmark.py
```

This should produce a file, `benchmarks.json`.

You can also inspect how the CI job runs it: 
https://ci.lttng.org/view/LTTng-ust/job/lttng-ust-benchmarks_master_linuxbuild/
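A quick way to look at the results afterwards — assuming only that 
`benchmarks.json` is plain JSON (its exact schema isn't documented 
here, so this just dumps whatever structure it contains):

```python
import json
from pathlib import Path

def summarize(path="benchmarks.json"):
    """Pretty-print the benchmark results file, whatever its layout."""
    p = Path(path)
    if not p.exists():
        return f"{path} not found; run `make && python3 ./benchmark.py` first"
    return json.dumps(json.loads(p.read_text()), indent=2, sort_keys=True)

if __name__ == "__main__":
    print(summarize())
```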



Any help is really appreciated. Thank you.

Regards,
Aditya

___
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev


[1]: 
https://tracingsummit.org/ts/2022/files/Tracing_Summit_2022-LTTng_Beyond_Ring-Buffer_Based_Tracing_Jeremie_Galarneau_.pdf

[2]: https://github.com/jgalar/LinuxCon2022-Benchmarks
[3]: https://www.dorsal.polymtl.ca/files/publications/desnoyers.pdf
[4]: https://www.efficios.com/contact/

thanks,
kienan


[lttng-dev] LTTng UST Benchmarks

2024-04-24 Thread Aditya Kurdunkar via lttng-dev
Hello everyone, I am working on enabling LTTng on an embedded ARM device
running the OpenBMC Linux distribution. I have enabled the LTTng Yocto
recipe and I am able to trace my code. The one thing I am concerned about
is the performance overhead. Although the documentation mentions that
LTTng has the lowest overhead amongst the available solutions, I am
concerned about the overhead of LTTng-UST in comparison to other
available tracers/profilers. I have used the benchmarking setup from
lttng-ust/tests/benchmark
<https://github.com/lttng/lttng-ust/tree/master/tests/benchmark> to
benchmark the overhead of the tracepoints (on the device). The benchmark,
please correct me if I am wrong, gives the overhead of a single tracepoint
in your code. Although this might be fine for now, I was just wondering if
there are any published benchmarks comparing LTTng with the available
tracing/profiling solutions. If not, how can I go about benchmarking the
overhead of the applications?

I have come across the lttng/lttng-ust-benchmarks
<https://github.com/lttng/lttng-ust-benchmarks> repository, which has no
documentation on how to run it apart from one commit message on how to run
the benchmark script.

Any help is really appreciated. Thank you.

Regards,
Aditya


Re: [lttng-dev] Lttng: display active threads in multiple cores.

2024-04-03 Thread Erica Bugden via lttng-dev

Hello Zvika,

On 2024-04-02 23:52, Zvi Vered wrote:

Hi Erica,

Thank you very much for your answer.
Can you please tell me what the added value of ftrace is (compared to 
using only lttng)?


I don't think I understand the intention behind your question. I'll make 
some guesses below and you're welcome to clarify if you wish.


ftrace and LTTng are different tools that have some overlap in the 
tracing use cases they can address. ftrace is a Linux kernel tracer that 
is included in the kernel; it isn't an LTTng add-on.


Both ftrace and LTTng can trace the Linux kernel if the tracepoints have 
been included. LTTng doesn't use ftrace, but most kernels that are 
configured to include the tracepoints typically also include ftrace.


That being said, if you only want to trace userspace applications with 
LTTng and don't also want kernel traces, then you don't need an 
ftrace-enabled kernel.


Best,
Erica



Best regards,
Zvika


On Tue, Apr 2, 2024 at 5:11 PM Erica Bugden <ebug...@efficios.com> wrote:


Hello Zvika,

On 2024-03-29 01:09, Zvi Vered via lttng-dev wrote:
 > Hi Christopher,
 >
 > Thank you very much for your reply.
 > Can you please explain what you mean by an ftrace-enabled kernel?

I believe what Christopher means by "ftrace-enabled" kernel is that the
Linux kernel has been configured to include ftrace. Both the ftrace
tracer and the LTTng tracer use the same kernel tracepoints to extract
execution information and these tracepoints are included in the kernel
if ftrace is included.

Most Linux distributions will include ftrace by default. However, you
can check whether this is the case by searching for `tracefs` in
`/proc/filesystems` (assuming it's already mounted) or by trying to
mount `tracefs`. `tracefs` is the filesystem ftrace uses to communicate
with users.

More details about how to check if ftrace is enabled and how to enable
it if not:
https://wiki.linuxfoundation.org/realtime/documentation/howto/tools/ftrace 
<https://wiki.linuxfoundation.org/realtime/documentation/howto/tools/ftrace>

The "More Information" section points to the primary sources (Linux
kernel documentation), but I find this page to be a good starting point.

Best,
Erica

 >
 > Best regards,
 > Zvika
 >
 > On Wed, Mar 27, 2024 at 7:32 PM Christopher Harvey via lttng-dev
 > <lttng-dev@lists.lttng.org> wrote:
 >
 >     you can use an ftrace-enabled kernel with lttng (maybe even just
 >     tracecompass) or perfetto to get that kind of trace
 >
 >

https://archive.eclipse.org/tracecompass.incubator/doc/org.eclipse.tracecompass.incubator.ftrace.doc.user/User-Guide.html
 >
 >     or
 >
 > https://ui.perfetto.dev/
 >
 >     On Wed, Mar 27, 2024, at 5:26 AM, Zvi Vered via lttng-dev wrote:
 >      > Hello,
 >      >
 >      > I have an application with 4 threads.
 >      > I'm required to display on the graph when thread starts
working
 >     till it
 >      > blocks for the next semaphore.
 >      >
 >      > But without using the lttng userspace library.
 >      >
 >      > Is it possible ?
 >      >
 >      > Thank you,
 >      > Zvika

Re: [lttng-dev] Lttng: display active threads in multiple cores.

2024-04-02 Thread Zvi Vered via lttng-dev
Hi Erica,

Thank you very much for your answer.
Can you please tell me what the added value of ftrace is (compared to
using only lttng)?

Best regards,
Zvika


On Tue, Apr 2, 2024 at 5:11 PM Erica Bugden wrote:

> Hello Zvika,
>
> On 2024-03-29 01:09, Zvi Vered via lttng-dev wrote:
> > Hi Christopher,
> >
> > Thank you very much for your reply.
> > Can you please explain what you mean by an ftrace-enabled kernel?
>
> I believe what Christopher means by "ftrace-enabled" kernel is that the
> Linux kernel has been configured to include ftrace. Both the ftrace
> tracer and the LTTng tracer use the same kernel tracepoints to extract
> execution information and these tracepoints are included in the kernel
> if ftrace is included.
>
> Most Linux distributions will include ftrace by default. However, you
> can check whether this is the case by searching for `tracefs` in
> `/proc/filesystems` (assuming it's already mounted) or by trying to
> mount `tracefs`. `tracefs` is the filesystem ftrace uses to communicate
> with users.
>
> More details about how to check if ftrace is enabled and how to enable
> it if not:
> https://wiki.linuxfoundation.org/realtime/documentation/howto/tools/ftrace
>
> The "More Information" section points to the primary sources (Linux
> kernel documentation), but I find this page to be a good starting point.
>
> Best,
> Erica
>
> >
> > Best regards,
> > Zvika
> >
> > On Wed, Mar 27, 2024 at 7:32 PM Christopher Harvey via lttng-dev
> > <lttng-dev@lists.lttng.org> wrote:
> >
> > you can use an ftrace-enabled kernel with lttng (maybe even just
> > tracecompass) or perfetto to get that kind of trace
> >
> >
> https://archive.eclipse.org/tracecompass.incubator/doc/org.eclipse.tracecompass.incubator.ftrace.doc.user/User-Guide.html
> >
> > or
> >
> > https://ui.perfetto.dev/
> >
> > On Wed, Mar 27, 2024, at 5:26 AM, Zvi Vered via lttng-dev wrote:
> >  > Hello,
> >  >
> >  > I have an application with 4 threads.
> >  > I'm required to display on the graph when thread starts working
> > till it
> >  > blocks for the next semaphore.
> >  >
> >  > But without using the lttng userspace library.
> >  >
> >  > Is it possible ?
> >  >
> >  > Thank you,
> >  > Zvika


Re: [lttng-dev] lttng-modules(13.9) : insmod order

2024-04-02 Thread Zvi Vered via lttng-dev
Hi Kienan,

Thank you for your help!

Best regards,
Zvika

On Tue, Apr 2, 2024 at 4:17 PM Kienan Stewart wrote:

> Hi Zvika,
>
> a sessiond launched as root will automatically try to load the
> appropriate kernel modules as mentioned here[1], using either modprobe
> or libkmod depending on how the build was configured.
>
> thanks,
> kienan
>
> [1]: https://lttng.org/docs/v2.13/#doc-lttng-modules
>
>
> On 3/30/24 10:52 PM, Zvi Vered via lttng-dev wrote:
> > Hello,
> >
> > For some reason, I failed to integrate v13.9 into buildroot 2023.02.2
> > But I compiled all the modules with my kernel source: 5.4.249
> >
> > Can you please tell what is the right order of insmod for the kernel
> > modules?
> >
> > All I need is to record IRQ events and user space events.
> >
> > Thank you,
> > Zvika
> >


Re: [lttng-dev] Lttng: display active threads in multiple cores.

2024-04-02 Thread Erica Bugden via lttng-dev

Hello Zvika,

On 2024-03-29 01:09, Zvi Vered via lttng-dev wrote:

Hi Christopher,

Thank you very much for your reply.
Can you please explain what you mean by an ftrace-enabled kernel?


I believe what Christopher means by "ftrace-enabled" kernel is that the 
Linux kernel has been configured to include ftrace. Both the ftrace 
tracer and the LTTng tracer use the same kernel tracepoints to extract 
execution information and these tracepoints are included in the kernel 
if ftrace is included.


Most Linux distributions will include ftrace by default. However, you 
can check whether this is the case by searching for `tracefs` in 
`/proc/filesystems` (assuming it's already mounted) or by trying to 
mount `tracefs`. `tracefs` is the filesystem ftrace uses to communicate 
with users.
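That check is easy to script; a minimal sketch assuming the standard 
`/proc/filesystems` format (one filesystem name per line, optionally 
preceded by `nodev`):

```python
def tracefs_supported(path="/proc/filesystems"):
    """Return True if the kernel lists tracefs among its filesystems."""
    try:
        with open(path) as f:
            return any(line.split()[-1] == "tracefs"
                       for line in f if line.strip())
    except OSError:
        # /proc unavailable (or unreadable): can't tell, assume no.
        return False

if __name__ == "__main__":
    print("tracefs supported:", tracefs_supported())
```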


More details about how to check if ftrace is enabled and how to enable 
it if not: 
https://wiki.linuxfoundation.org/realtime/documentation/howto/tools/ftrace


The "More Information" section points to the primary sources (Linux 
kernel documentation), but I find this page to be a good starting point.


Best,
Erica



Best regards,
Zvika

On Wed, Mar 27, 2024 at 7:32 PM Christopher Harvey via lttng-dev 
<lttng-dev@lists.lttng.org> wrote:


you can use an ftrace-enabled kernel with lttng (maybe even just
tracecompass) or perfetto to get that kind of trace


https://archive.eclipse.org/tracecompass.incubator/doc/org.eclipse.tracecompass.incubator.ftrace.doc.user/User-Guide.html

or

https://ui.perfetto.dev/

On Wed, Mar 27, 2024, at 5:26 AM, Zvi Vered via lttng-dev wrote:
 > Hello,
 >
 > I have an application with 4 threads.
 > I'm required to display on the graph when thread starts working
till it
 > blocks for the next semaphore.
 >
 > But without using the lttng userspace library.
 >
 > Is it possible ?
 >
 > Thank you,
 > Zvika


Re: [lttng-dev] lttng-modules(13.9) : insmod order

2024-04-02 Thread Kienan Stewart via lttng-dev

Hi Zvika,

A sessiond launched as root will automatically try to load the 
appropriate kernel modules, as mentioned here[1], using either modprobe 
or libkmod depending on how the build was configured.


thanks,
kienan

[1]: https://lttng.org/docs/v2.13/#doc-lttng-modules


On 3/30/24 10:52 PM, Zvi Vered via lttng-dev wrote:

Hello,

For some reason, I failed to integrate v13.9 into buildroot 2023.02.2,
but I compiled all the modules with my kernel source: 5.4.249.

Can you please tell what is the right order of insmod for the kernel
modules?


All I need is to record IRQ events and user space events.

Thank you,
Zvika



[lttng-dev] lttng-modules(13.9) : insmod order

2024-03-30 Thread Zvi Vered via lttng-dev
Hello,

For some reason, I failed to integrate v13.9 into buildroot 2023.02.2,
but I compiled all the modules with my kernel source: 5.4.249.

Can you please tell what is the right order of insmod for the kernel
modules?

All I need is to record IRQ events and user space events.

Thank you,
Zvika


Re: [lttng-dev] Lttng: display active threads in multiple cores.

2024-03-28 Thread Zvi Vered via lttng-dev
Hi Christopher,

Thank you very much for your reply.
Can you please explain what you mean by an ftrace-enabled kernel?

Best regards,
Zvika

On Wed, Mar 27, 2024 at 7:32 PM Christopher Harvey via lttng-dev <
lttng-dev@lists.lttng.org> wrote:

> you can use an ftrace-enabled kernel with lttng (maybe even just
> tracecompass) or perfetto to get that kind of trace
>
>
> https://archive.eclipse.org/tracecompass.incubator/doc/org.eclipse.tracecompass.incubator.ftrace.doc.user/User-Guide.html
>
> or
>
> https://ui.perfetto.dev/
>
> On Wed, Mar 27, 2024, at 5:26 AM, Zvi Vered via lttng-dev wrote:
> > Hello,
> >
> > I have an application with 4 threads.
> > I'm required to display on the graph when thread starts working till it
> > blocks for the next semaphore.
> >
> > But without using the lttng userspace library.
> >
> > Is it possible ?
> >
> > Thank you,
> > Zvika


Re: [lttng-dev] Lttng: display active threads in multiple cores.

2024-03-27 Thread Christopher Harvey via lttng-dev
you can use an ftrace-enabled kernel with lttng (maybe even just tracecompass) 
or perfetto to get that kind of trace

https://archive.eclipse.org/tracecompass.incubator/doc/org.eclipse.tracecompass.incubator.ftrace.doc.user/User-Guide.html

or

https://ui.perfetto.dev/

On Wed, Mar 27, 2024, at 5:26 AM, Zvi Vered via lttng-dev wrote:
> Hello,
>
> I have an application with 4 threads. 
> I'm required to display on the graph when thread starts working till it 
> blocks for the next semaphore. 
>
> But without using the lttng userspace library.
>
> Is it possible ?
>
> Thank you,
> Zvika


[lttng-dev] Lttng: display active threads in multiple cores.

2024-03-27 Thread Zvi Vered via lttng-dev
Hello,

I have an application with 4 threads.
I'm required to display on a graph when a thread starts working until it
blocks on the next semaphore.

But without using the lttng userspace library.

Is it possible?

Thank you,
Zvika


Re: [lttng-dev] LTTNG Continuously Crashing on Debian-12

2024-03-12 Thread Kienan Stewart via lttng-dev

Hi Lakshmi,

I'd like to encourage you once again to review the bug reporting 
guidelines at https://lttng.org/community/ and to include all the 
necessary information, including steps to reproduce the issue. Thank you.


On 3/12/24 1:51 PM, Lakshmi Deverkonda wrote:

Hi,

We see that the python3-based lttng is continuously crashing on 
Debian 12. The kernel version is 6.1.0. Is there some special handling 
that has to be taken care of for Debian 12?




To the best of my knowledge there are no special cases for Debian 12 in 
upstream lttng-ust. The Debian source package carries two minor patches 
from what I can see, neither of which seems directly related to the 
python agent.



Following is the core decode.


Could you install the relevant "-dbgsym" packages for libc and 
liblttng-ust, or fetch the debug symbols from debuginfod, so that the 
addresses are resolved into meaningful symbols?



Program terminated with signal SIGABRT, Aborted.
#0  0x7fb95dac9e2c in ?? () from /lib/x86_64-linux-gnu/libc.so.6
[Current thread is 1 (Thread 0x7fb95d96c040 (LWP 19426))]
(gdb) bt
#0  0x7fb95dac9e2c in ?? () from /lib/x86_64-linux-gnu/libc.so.6
#1  0x7fb95da7afb2 in raise () from /lib/x86_64-linux-gnu/libc.so.6
#2  0x7fb95da65472 in abort () from /lib/x86_64-linux-gnu/libc.so.6
#3  0x7fb95da65395 in ?? () from /lib/x86_64-linux-gnu/libc.so.6
#4  0x7fb95da73eb2 in __assert_fail () from 
/lib/x86_64-linux-gnu/libc.so.6

#5  0x7fb95d9efbbf in ?? () from /lib/x86_64-linux-gnu/liblttng-ust.so.1
#6  0x7fb95d9f0f23 in ?? () from /lib/x86_64-linux-gnu/liblttng-ust.so.1
#7  0x7fb95d9ece2f in ?? () from /lib/x86_64-linux-gnu/liblttng-ust.so.1
#8  0x7fb95d9dc537 in ?? () from /lib/x86_64-linux-gnu/liblttng-ust.so.1
#9  0x7fb95d9c73c2 in ?? () from /lib/x86_64-linux-gnu/liblttng-ust.so.1
#10 0x7fb95d9c8003 in ?? () from /lib/x86_64-linux-gnu/liblttng-ust.so.1
#11 0x7fb95d9c8003 in ?? () from /lib/x86_64-linux-gnu/liblttng-ust.so.1
#12 0x7fb95d9c8003 in ?? () from /lib/x86_64-linux-gnu/liblttng-ust.so.1
#13 0x7fb95d9c8fce in ?? () from /lib/x86_64-linux-gnu/liblttng-ust.so.1
#14 0x7fb95d9c2408 in ?? () from /lib/x86_64-linux-gnu/liblttng-ust.so.1
#15 0x7fb95dd6812a in ?? () from /lib64/ld-linux-x86-64.so.2
#16 0x7fb95dd6b764 in ?? () from /lib64/ld-linux-x86-64.so.2
#17 0x7fb95da7d55d in ?? () from /lib/x86_64-linux-gnu/libc.so.6
#18 0x7fb95da7d69a in exit () from /lib/x86_64-linux-gnu/libc.so.6
#19 0x7fb95da66251 in ?? () from /lib/x86_64-linux-gnu/libc.so.6
#20 0x7fb95da66305 in __libc_start_main () from 
/lib/x86_64-linux-gnu/libc.so.6

#21 0x00627461 in _start ()

Regards,
Lakshmi


thanks,
kienan


[lttng-dev] LTTNG Continuously Crashing on Debian-12

2024-03-12 Thread Lakshmi Deverkonda via lttng-dev
Hi,

We see that the python3-based lttng is continuously crashing on Debian 12.
The kernel version is 6.1.0. Is there some special handling that has to be
taken care of for Debian 12?

Following is the core decode.
Program terminated with signal SIGABRT, Aborted.
#0  0x7fb95dac9e2c in ?? () from /lib/x86_64-linux-gnu/libc.so.6
[Current thread is 1 (Thread 0x7fb95d96c040 (LWP 19426))]
(gdb) bt
#0  0x7fb95dac9e2c in ?? () from /lib/x86_64-linux-gnu/libc.so.6
#1  0x7fb95da7afb2 in raise () from /lib/x86_64-linux-gnu/libc.so.6
#2  0x7fb95da65472 in abort () from /lib/x86_64-linux-gnu/libc.so.6
#3  0x7fb95da65395 in ?? () from /lib/x86_64-linux-gnu/libc.so.6
#4  0x7fb95da73eb2 in __assert_fail () from /lib/x86_64-linux-gnu/libc.so.6
#5  0x7fb95d9efbbf in ?? () from /lib/x86_64-linux-gnu/liblttng-ust.so.1
#6  0x7fb95d9f0f23 in ?? () from /lib/x86_64-linux-gnu/liblttng-ust.so.1
#7  0x7fb95d9ece2f in ?? () from /lib/x86_64-linux-gnu/liblttng-ust.so.1
#8  0x7fb95d9dc537 in ?? () from /lib/x86_64-linux-gnu/liblttng-ust.so.1
#9  0x7fb95d9c73c2 in ?? () from /lib/x86_64-linux-gnu/liblttng-ust.so.1
#10 0x7fb95d9c8003 in ?? () from /lib/x86_64-linux-gnu/liblttng-ust.so.1
#11 0x7fb95d9c8003 in ?? () from /lib/x86_64-linux-gnu/liblttng-ust.so.1
#12 0x7fb95d9c8003 in ?? () from /lib/x86_64-linux-gnu/liblttng-ust.so.1
#13 0x7fb95d9c8fce in ?? () from /lib/x86_64-linux-gnu/liblttng-ust.so.1
#14 0x7fb95d9c2408 in ?? () from /lib/x86_64-linux-gnu/liblttng-ust.so.1
#15 0x7fb95dd6812a in ?? () from /lib64/ld-linux-x86-64.so.2
#16 0x7fb95dd6b764 in ?? () from /lib64/ld-linux-x86-64.so.2
#17 0x7fb95da7d55d in ?? () from /lib/x86_64-linux-gnu/libc.so.6
#18 0x7fb95da7d69a in exit () from /lib/x86_64-linux-gnu/libc.so.6
#19 0x7fb95da66251 in ?? () from /lib/x86_64-linux-gnu/libc.so.6
#20 0x7fb95da66305 in __libc_start_main () from 
/lib/x86_64-linux-gnu/libc.so.6
#21 0x00627461 in _start ()

Regards,
Lakshmi


Re: [lttng-dev] LTTng sessiond daemon Assertion `buf' failed and killed

2024-01-12 Thread Kienan Stewart via lttng-dev

Hi Yonghong,

thanks for the extra info. As you say, without symbol resolution the 
backtrace isn't super useful.


Based on the build-id in the file info for lttng-sessiond, it looks like 
you might be using the lttng/stable-2.13 ppa.


In that case, you could try using debuginfod with gdb to automatically 
download the symbols. See 
https://ubuntu.com/server/docs/service-debuginfod for the instructions 
enabling the symbol download from Ubuntu's debuginfod server.


thanks,
kienan

On 1/12/24 15:38, Yonghong Yan wrote:
The coredump is about 124MB and I am copying the backtrace of the 
coredump below. Unfortunately, the sessiond is stripped and has no 
debugging info, so I am not able to see the call stack that led to the 
dump. I guess I need to build LTTng with debugging info to get symbols 
and an informative backtrace.


From the log itself, it asserted right after:

> DBG1 - 00:24:25.241138386 [Client management]: Setting relayd for
> session auto-20240112-002417 (in cmd_setup_relayd() at cmd.c:1004)
>
> lttng-sessiond: unix.c:185: lttcomm_recv_unix_sock: Assertion `buf' failed.


yyan7@CCI13SZWP3LWS:/var/log$ tail apport.log


ERROR: apport (pid 3787) Fri Jan 12 10:52:37 2024: debug: session gdbus 
call:


ERROR: apport (pid 3787) Fri Jan 12 10:52:37 2024: this executable 
already crashed 2 times, ignoring


ERROR: apport (pid 10543) Fri Jan 12 15:18:07 2024: called for pid 
10470, signal 6, core limit 18446744073709551615, dump mode 1


ERROR: apport (pid 10543) Fri Jan 12 15:18:07 2024: ignoring implausibly 
big core limit, treating as unlimited


ERROR: apport (pid 10543) Fri Jan 12 15:18:07 2024: executable: 
/usr/bin/lttng-sessiond (command line "lttng-sessiond -vvv 
--verbose-consumer -b")


ERROR: apport (pid 10543) Fri Jan 12 15:18:08 2024: debug: session gdbus 
call: (true,)



ERROR: apport (pid 10543) Fri Jan 12 15:18:08 2024: writing core dump to 
core._usr_bin_lttng-sessiond.1000.8f22faa6-512c-49ca-9089-8b3f0801da8d.10470.2023109 (limit: -1)


ERROR: apport (pid 10543) Fri Jan 12 15:18:08 2024: this executable 
already crashed 2 times, ignoring


yyan7@CCI13SZWP3LWS:/var/log$ cd /var/lib/apport/coredump

yyan7@CCI13SZWP3LWS:/var/lib/apport/coredump$ ls

core._usr_bin_lttng-sessiond.1000.8f22faa6-512c-49ca-9089-8b3f0801da8d.10470.2023109

yyan7@CCI13SZWP3LWS:/var/lib/apport/coredump$ file 
core._usr_bin_lttng-sessiond.1000.8f22faa6-512c-49ca-9089-8b3f0801da8d.10470.2023109


core._usr_bin_lttng-sessiond.1000.8f22faa6-512c-49ca-9089-8b3f0801da8d.10470.2023109:
 ELF 64-bit LSB core file, x86-64, version 1 (SYSV), SVR4-style, from 
'lttng-sessiond -vvv --verbose-consumer -b', real uid: 1000, effective uid: 
1000, real gid: 1000, effective gid: 1000, execfn: '/usr/bin/lttng-sessiond', 
platform: 'x86_64'

yyan7@CCI13SZWP3LWS:/var/lib/apport/coredump$ du -h 
core._usr_bin_lttng-sessiond.1000.8f22faa6-512c-49ca-9089-8b3f0801da8d.10470.2023109


124M	core._usr_bin_lttng-sessiond.1000.8f22faa6-512c-49ca-9089-8b3f0801da8d.10470.2023109

yyan7@CCI13SZWP3LWS:/var/lib/apport/coredump$ gdb 
/usr/bin/lttng-sessiond 
core._usr_bin_lttng-sessiond.1000.8f22faa6-512c-49ca-9089-8b3f0801da8d.10470.2023109


GNU gdb (Ubuntu 12.1-0ubuntu1~22.04) 12.1

Copyright (C) 2022 Free Software Foundation, Inc.

License GPLv3+: GNU GPL version 3 or later 
<http://gnu.org/licenses/gpl.html>


This is free software: you are free to change and redistribute it.

There is NO WARRANTY, to the extent permitted by law.

Type "show copying" and "show warranty" for details.

This GDB was configured as "x86_64-linux-gnu".

Type "show configuration" for configuration details.

For bug reporting instructions, please see:
<https://www.gnu.org/software/gdb/bugs/>.


Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.



For help, type "help".

Type "apropos word" to search for commands related to "word"...

Reading symbols from /usr/bin/lttng-sessiond...

(No debugging symbols found in /usr/bin/lttng-sessiond)

[New LWP 10479]

[New LWP 10470]

[New LWP 10476]

[New LWP 10475]

[New LWP 10472]

[New LWP 10478]

[New LWP 10480]

[New LWP 10473]

[New LWP 10477]

[New LWP 10482]

[New LWP 10542]

[New LWP 10533]

[New LWP 10484]

[New LWP 10474]

[New LWP 10481]

[New LWP 10483]

[Thread debugging using libthread_db enabled]

Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".

--Type <RET> for more, q to quit, c to continue without paging--

Core was generated by `lttng-sessiond -vvv --verbose-consumer -b'.

Program terminated with signal SIGABRT, Aborted.

#0  __pthread_kill_implementation (no_tid=0, signo=6, 
threadid=140614729455104) at ./nptl/pthread_kill.c:44


44	./nptl/pthread_kill.c: No such file or directory.

[Current thread is 1 (Thread 0x7fe36affd600 (LWP 10479))]


Re: [lttng-dev] LTTng sessiond daemon Assertion `buf' failed and killed

2024-01-12 Thread Kienan Stewart via lttng-dev

Hi Yonghong,

thanks for the additional information. Would you be willing and able to 
share a coredump and/or backtrace of the crash?


I was still unable to reproduce the issue using the commands you 
provided, but am interested in understanding what is happening here.


Are there any configuration options other than LTTNG_UST_DEBUG=1 set in 
your environment?


Are you using urcu/lttng from packages (if so, which repo), or built 
from source?


The test case I am using now:

```
# As root
$ lttng-sessiond -b

# As a non-root user which is not a member of the tracing group
$ export LTTNG_UST_DEBUG=1
$ lttng-relayd -v -b
$ lttng-sessiond -vvv --verbose-consumer -b
$ lttng create
$ lttng enable-event -u -a
```

thanks,
kienan

P.S. In the future, could you keep the lttng-dev in CC? thanks!

On 1/12/24 10:36, Yonghong Yan wrote:

Hi Kienan,

Thank you for checking. It might just be a problem with my setup; I will 
send this to you before we are sure it is an issue with lttng.


Below are the output and the steps I followed from the guidelines. Since 
I started the relayd and sessiond in the same terminal, the verbose 
messages are mixed together; I can regenerate them using two separate 
terminals. It is a Dell Precision Tower box that worked well until 
yesterday, when I rebuilt the system (because of an NVIDIA GPU driver 
issue); it has had this issue since.



yyan7@CCI13SZWP3LWS:~$ uname -a

Linux CCI13SZWP3LWS 6.5.0-14-generic #14~22.04.1-Ubuntu SMP 
PREEMPT_DYNAMIC Mon Nov 20 18:15:30 UTC 2 x86_64 x86_64 x86_64 GNU/Linux


yyan7@CCI13SZWP3LWS:~$ cat /etc/lsb-release

DISTRIB_ID=Ubuntu

DISTRIB_RELEASE=22.04

DISTRIB_CODENAME=jammy

DISTRIB_DESCRIPTION="Ubuntu 22.04.3 LTS"

yyan7@CCI13SZWP3LWS:~$ lttng --version

lttng (LTTng Trace Control) 2.13.10 - Nordicité

yyan7@CCI13SZWP3LWS:~$ lsmod | grep lttng

lttng_probe_writeback                      61440  0
lttng_probe_workqueue                      20480  0
lttng_probe_vmscan                         45056  0
lttng_probe_udp                            12288  0
lttng_probe_timer                          32768  0
lttng_probe_sunrpc                         20480  0
lttng_probe_statedump                      57344  0
lttng_probe_sock                           16384  0
lttng_probe_skb                            16384  0
lttng_probe_signal                         16384  0
lttng_probe_scsi                           20480  0
lttng_probe_sched                          45056  0
lttng_probe_regulator                      20480  0
lttng_probe_rcu                            12288  0
lttng_probe_printk                         12288  0
lttng_probe_power                          20480  0
lttng_probe_net                            32768  0
lttng_probe_napi                           12288  0
lttng_probe_module                         20480  0
lttng_probe_kvm                            36864  0
lttng_probe_jbd2                           32768  0
lttng_probe_irq                            16384  0
lttng_probe_gpio                           12288  0
lttng_probe_block                          40960  0
lttng_probe_asoc                           36864  0
lttng_counter_client_percpu_32_modular     12288  0
lttng_counter_client_percpu_64_modular     12288  1
lttng_counter                              16384  2 lttng_counter_client_percpu_64_modular,lttng_counter_client_percpu_32_modular
lttng_ring_buffer_event_notifier_client    20480  2
lttng_ring_buffer_metadata_mmap_client     20480  0
lttng_ring_buffer_client_mmap_overwrite    24576  0
lttng_ring_buffer_client_mmap_discard      24576  0
lttng_ring_buffer_metadata_client          20480  0
lttng_ring_buffer_client_overwrite         24576  0
lttng_ring_buffer_client_discard           24576  0
lttng_tracer                             3035136  37 lttng_probe_udp,lttng_probe_scsi,lttng_probe_sched,lttng_probe_net,lttng_probe_vmscan,lttng_probe_writeback,lttng_probe_power,lttng_probe_rcu,lttng_probe_module,lttng_ring_buffer_client_mmap_overwrite,lttng_probe_statedump,lttng_ring_buffer_client_discard,lttng_probe_printk,lttng_probe_sock,lttng_probe_asoc,lttng_counter_client_percpu_64_modular,lttng_probe_irq,lttng_ring_buffer_client_mmap_discard,lttng_probe_kvm,lttng_probe_timer,lttng_ring_buffer_event_notifier_client,lttng_counter_client_percpu_32_modular,lttng_probe_workqueue,lttng_probe_jbd2,lttng_probe_signal,lttng_probe_skb,lttng_probe_block,lttng_probe_napi,lttng_ring_buffer_metadata_client,lttng_ring_buffer_metadata_mmap_client,lttng_probe_gpio,lttng_ring_buffer_client_overwrite,lttng_probe_regulator,lttng_probe_sunrpc
lttng_statedump                           753664  1 lttng_tracer
lttng_wrapper                              16384  7 lttng_statedump,lttng_probe_writeback,lttng_ring_buffer_client_mmap_overwrite,lttng_ring_buffer_client_discard,lttng_tracer,lttng_ring_buffer_client_mmap_discard,lttng_ring_buffer_client_overwrite
lttng_uprobes                              16384  1 lttng_tracer
lttng_clock                                12288  5 lttng_ring_buffer_client_mmap_overwrite,lttng_ring_buffer_client_discard,lttng_tracer,lttng_ring_buffer_client_mmap_discard,lttng_ring_buffer_client_overwrite
lttng_kprobes                              16384  1 lttng_tracer
lttng_lib_ring_buffer                      94208  8 lttng_ring_buffer_client_mmap_overwrite,lttng_ring_buffer_client_discard,lttng_tracer,lttng_ring_buffer_client_mmap_discard,lttng_ring_buffer_event_notifier_client,lttng_ring_buffer_metadata_client,lttng_ring_buffer_metadata_mmap_client,lttng_ring_buffer_client_overwrite
lttng_kretprobes                           16384  1 lttng_tracer

yyan7@CCI13SZWP3LWS:~$ export LTTNG_UST_DEBUG=1

yyan7@CCI13SZWP3LWS:~$ lttng-relayd -v -b

DBG1 - 10:24:56.244818657 [3495/3495]: File 

Re: [lttng-dev] LTTng sessiond daemon Assertion `buf' failed and killed

2024-01-12 Thread Kienan Stewart via lttng-dev

Hi Yonghong,

in a brief test I'm unable to reproduce the error you see by running the 
following commands on an Ubuntu 22.04 installation with lttng-tools 
2.13.10, lttng-ust 2.13.6, urcu stable-0.12, and babeltrace stable-2.0.

```
$ uname -r -v
6.5.0-14-generic #14~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Mon Nov 20 
18:15:30 UTC 2


$ lttng-relayd -v -b
$ lttng-sessiond -v -b
$ lttng create
$ lttng enable-event -u --all
```

Could you please review the bug reporting guidelines at 
https://lttng.org/community/ and elaborate on the steps taken to 
reproduce the issue?


thanks,
kienan

On 1/12/24 08:12, Yonghong Yan via lttng-dev wrote:
I am not sure whether this is a problem with my setup or a bug in a 
more recent kernel. lttng-sessiond was killed when I tried to "enable 
event" after a session was created. See below for part of the verbose 
output of the sessiond. This was observed on Ubuntu 22.04, kernel 
6.5.0-14-generic #14~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC, lttng (LTTng 
Trace Control) 2.13.10 - Nordicité.



The same version of LTTng works on another Ubuntu 22.04 machine, but 
with kernel 6.2.0-33-generic. Any suggestion on what I should try?



Thank you

Yonghong



c:1016)

DBG1 - 00:24:25.134138433 [Client management]: Getting session 
auto-20240112-002417 by name (in process_client_msg() at client.c:1133)


DBG1 - 00:24:25.134148894 [Client management]: Creating UST session (in 
create_ust_session() at client.c:510)


DBG1 - 00:24:25.134267958 [Client management]: Spawning consumerd (in 
spawn_consumerd() at client.c:204)


DBG1 - 00:24:25.135155823 [Client management]: Waiting for consumer 
management thread to be ready (in wait_until_thread_is_ready() at 
manage-consumer.c:46)


DBG1 - 00:24:25.135247552 [Consumer management]: Entering thread entry 
point (in launch_thread() at thread.c:65)


DBG1 - 00:24:25.135293542 [Consumer management]: [thread] Manage 
consumer started (in thread_consumer_management() at manage-consumer.c:65)


DBG1 - 00:24:25.135335776 [Client management]: Using 64-bit UST consumer 
at: /usr/lib/x86_64-linux-gnu/lttng/libexec/lttng-consumerd (in 
spawn_consumerd() at client.c:284)


DBG1 - 00:24:25.240725883 [Consumer management]: Consumer command socket 
ready (fd: 61) (in thread_consumer_management() at manage-consumer.c:204)


DBG1 - 00:24:25.240746802 [Consumer management]: Consumer metadata 
socket ready (fd: 62) (in thread_consumer_management() at 
manage-consumer.c:205)


DBG1 - 00:24:25.240775318 [Consumer management]: Sending consumer 
initialization command (in consumer_init() at consumer.c:1791)


DBG1 - 00:24:25.241066762 [Consumer management]: Marking consumer 
management thread as ready (in mark_thread_as_ready() at 
manage-consumer.c:31)


DBG1 - 00:24:25.241100481 [Client management]: Consumer management 
thread is ready (in wait_until_thread_is_ready() at manage-consumer.c:48)


DBG1 - 00:24:25.241138386 [Client management]: Setting relayd for 
session auto-20240112-002417 (in cmd_setup_relayd() at cmd.c:1004)


lttng-sessiond: unix.c:185: lttcomm_recv_unix_sock: Assertion `buf' failed.

Error: Events: No session daemon is available (channel channel0, session 
auto-20240112-002417)


[1]+Aborted (core dumped) lttng-sessiond --verbose

DBG1 - 00:24:26.357769106 [Run-as worker]: run_as worker exiting (ret = 
0) (in run_as_create_worker_no_lock() at runas.c:1526)




___
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev



[lttng-dev] LTTng sessiond daemon Assertion `buf' failed and killed

2024-01-12 Thread Yonghong Yan via lttng-dev
I am not sure whether this is a problem with my setup or a bug in a
more recent kernel. lttng-sessiond was killed when I tried to "enable
event" after a session was created. See below for part of the verbose output
of the sessiond. This was observed on Ubuntu 22.04, kernel 6.5.0-14-generic
#14~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC, lttng (LTTng Trace Control) 2.13.10
- Nordicité.


The same version of LTTng works on another Ubuntu 22.04 machine, but with
kernel 6.2.0-33-generic. Any suggestion on what I should try?


Thank you

Yonghong



c:1016)

DBG1 - 00:24:25.134138433 [Client management]: Getting session
auto-20240112-002417 by name (in process_client_msg() at client.c:1133)

DBG1 - 00:24:25.134148894 [Client management]: Creating UST session (in
create_ust_session() at client.c:510)

DBG1 - 00:24:25.134267958 [Client management]: Spawning consumerd (in
spawn_consumerd() at client.c:204)

DBG1 - 00:24:25.135155823 [Client management]: Waiting for consumer
management thread to be ready (in wait_until_thread_is_ready() at
manage-consumer.c:46)

DBG1 - 00:24:25.135247552 [Consumer management]: Entering thread entry
point (in launch_thread() at thread.c:65)

DBG1 - 00:24:25.135293542 [Consumer management]: [thread] Manage consumer
started (in thread_consumer_management() at manage-consumer.c:65)

DBG1 - 00:24:25.135335776 [Client management]: Using 64-bit UST consumer
at: /usr/lib/x86_64-linux-gnu/lttng/libexec/lttng-consumerd (in
spawn_consumerd() at client.c:284)

DBG1 - 00:24:25.240725883 [Consumer management]: Consumer command socket
ready (fd: 61) (in thread_consumer_management() at manage-consumer.c:204)

DBG1 - 00:24:25.240746802 [Consumer management]: Consumer metadata socket
ready (fd: 62) (in thread_consumer_management() at manage-consumer.c:205)

DBG1 - 00:24:25.240775318 [Consumer management]: Sending consumer
initialization command (in consumer_init() at consumer.c:1791)

DBG1 - 00:24:25.241066762 [Consumer management]: Marking consumer
management thread as ready (in mark_thread_as_ready() at
manage-consumer.c:31)

DBG1 - 00:24:25.241100481 [Client management]: Consumer management thread
is ready (in wait_until_thread_is_ready() at manage-consumer.c:48)

DBG1 - 00:24:25.241138386 [Client management]: Setting relayd for session
auto-20240112-002417 (in cmd_setup_relayd() at cmd.c:1004)

lttng-sessiond: unix.c:185: lttcomm_recv_unix_sock: Assertion `buf' failed.

Error: Events: No session daemon is available (channel channel0, session
auto-20240112-002417)

[1]+  Aborted (core dumped) lttng-sessiond --verbose

DBG1 - 00:24:26.357769106 [Run-as worker]: run_as worker exiting (ret = 0)
(in run_as_create_worker_no_lock() at runas.c:1526)
___
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev


Re: [lttng-dev] lttng-analysis error

2023-10-31 Thread Kienan Stewart via lttng-dev

Hi Tao,

based on a quick test I performed, there may be a work-around that will 
work for you.


The issue that I see is that at least the 
`lttng_statedump_file_descriptor` events no longer have a `pid` field, 
which the older lttng-analyses code assumes is present.


When running the manual recording steps, could you add the following 
command before running `lttng start`:


```
lttng add-context --kernel --type=pid
```

To pinpoint the source of the error, I used the `--debug` parameter for 
`lttng-analyses` as follows:


```
# lttng-schedtop --debug lttng-traces/lttng-analysis-27010-20231031-182735/
Checking the trace for lost events...
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/lttnganalyses/cli/command.py", line 73, in _run_step
    fn()
  File "/usr/local/lib/python3.7/dist-packages/lttnganalyses/cli/command.py", line 365, in _run_analysis
    self._automaton.process_event(event)
  File "/usr/local/lib/python3.7/dist-packages/lttnganalyses/linuxautomaton/automaton.py", line 81, in process_event
    sp.process_event(ev)
  File "/usr/local/lib/python3.7/dist-packages/lttnganalyses/linuxautomaton/sp.py", line 33, in process_event
    self._cbs[name](ev)
  File "/usr/local/lib/python3.7/dist-packages/lttnganalyses/linuxautomaton/statedump.py", line 91, in _process_lttng_statedump_file_descriptor
    pid = event['pid']
  File "/usr/local/lib/python3.7/dist-packages/babeltrace/babeltrace.py", line 912, in __getitem__
    raise KeyError(field_name)
KeyError: 'pid'

Error: Cannot run analysis: 'pid'
```
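Pending an upstream fix, a defensive field lookup could paper over the missing `pid`. The sketch below is hypothetical (it is not the actual lttng-analyses code, and the `context.pid` fallback key is an assumption about how a context field added with `lttng add-context` might be exposed):

```python
def get_event_field(event, name, default=None):
    """Fetch a field from a dict-like babeltrace event.

    Falls back to a context field of the same name (e.g. one added with
    `lttng add-context --kernel --type=pid`) before returning `default`.
    """
    for key in (name, 'context.' + name):
        try:
            return event[key]
        except KeyError:
            continue
    return default

# Hypothetical use in _process_lttng_statedump_file_descriptor:
#     pid = get_event_field(event, 'pid')
```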

thanks,
kienan

On 2023-10-31 07:44, 姜涛 Tao(AD) via lttng-dev wrote:
Hi everyone, I used the lttng-analyses tools to parse the trace files 
but got the error below:


I tried two methods to capture the trace data: "automatic", using 
lttng-analysis-record, and "manual", using lttng 
create/enable-event/start/stop/destroy, but I always get the errors when 
using the lttng-analyses tools (versions 0.6.0 and 0.6.1).


I loaded the lttng modules (version 2.12.14) successfully:

[screenshot: lsmod output showing the loaded lttng modules]

The environment is: Ubuntu 20.04 aarch64, kernel version 5.10.120 with 
the RT patch.


Any help will be appreciated! Thanks!

Disclaimer: The information transmitted is intended only for the person 
or entity to which it is addressed and may contain confidential and/or 
privileged material. Any review, retransmission, dissemination or other 
use of, or taking of any action in reliance upon, this information by 
persons or entities other than the intended recipient is prohibited. If 
you received this in error, please contact the sender and delete the 
material from any computer.


___
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev



Re: [lttng-dev] lttng list -k is missing tracepoints

2023-10-10 Thread Kienan Stewart via lttng-dev
int)
   writeback_global_dirty_state (loglevel: TRACE_EMERG (0)) (type: 
tracepoint)
   writeback_bdi_dirty_ratelimit (loglevel: TRACE_EMERG (0)) (type: 
tracepoint)
   writeback_balance_dirty_pages (loglevel: TRACE_EMERG (0)) (type: 
tracepoint)
   writeback_sb_inodes_requeue (loglevel: TRACE_EMERG (0)) (type: 
tracepoint)
   writeback_congestion_wait (loglevel: TRACE_EMERG (0)) (type: tracepoint)
   writeback_wait_iff_congested (loglevel: TRACE_EMERG (0)) (type: 
tracepoint)
   writeback_single_inode (loglevel: TRACE_EMERG (0)) (type: tracepoint)
   x86_irq_vectors_local_timer_entry (loglevel: TRACE_EMERG (0)) (type: 
tracepoint)
   x86_irq_vectors_local_timer_exit (loglevel: TRACE_EMERG (0)) (type: 
tracepoint)
   x86_irq_vectors_reschedule_entry (loglevel: TRACE_EMERG (0)) (type: 
tracepoint)
   x86_irq_vectors_reschedule_exit (loglevel: TRACE_EMERG (0)) (type: 
tracepoint)
   x86_irq_vectors_spurious_apic_entry (loglevel: TRACE_EMERG (0)) (type: 
tracepoint)
   x86_irq_vectors_spurious_apic_exit (loglevel: TRACE_EMERG (0)) (type: 
tracepoint)
   x86_irq_vectors_error_apic_entry (loglevel: TRACE_EMERG (0)) (type: 
tracepoint)
   x86_irq_vectors_error_apic_exit (loglevel: TRACE_EMERG (0)) (type: 
tracepoint)
   x86_irq_vectors_ipi_entry (loglevel: TRACE_EMERG (0)) (type: tracepoint)
   x86_irq_vectors_ipi_exit (loglevel: TRACE_EMERG (0)) (type: tracepoint)
   x86_irq_vectors_irq_work_entry (loglevel: TRACE_EMERG (0)) (type: 
tracepoint)
   x86_irq_vectors_irq_work_exit (loglevel: TRACE_EMERG (0)) (type: 
tracepoint)
   x86_irq_vectors_call_function_entry (loglevel: TRACE_EMERG (0)) (type: 
tracepoint)
   x86_irq_vectors_call_function_exit (loglevel: TRACE_EMERG (0)) (type: 
tracepoint)
   x86_irq_vectors_call_function_single_entry (loglevel: TRACE_EMERG (0)) 
(type: tracepoint)
   x86_irq_vectors_call_function_single_exit (loglevel: TRACE_EMERG (0)) 
(type: tracepoint)
   x86_irq_vectors_threshold_apic_entry (loglevel: TRACE_EMERG (0)) (type: 
tracepoint)
   x86_irq_vectors_threshold_apic_exit (loglevel: TRACE_EMERG (0)) (type: 
tracepoint)
   x86_irq_vectors_deferred_error_apic_entry (loglevel: TRACE_EMERG (0)) 
(type: tracepoint)
   x86_irq_vectors_deferred_error_apic_exit (loglevel: TRACE_EMERG (0)) 
(type: tracepoint)
   x86_irq_vectors_thermal_apic_entry (loglevel: TRACE_EMERG (0)) (type: 
tracepoint)
   x86_irq_vectors_thermal_apic_exit (loglevel: TRACE_EMERG (0)) (type: 
tracepoint)
___
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev



[lttng-dev] lttng list -k is missing tracepoints

2023-10-10 Thread Uday Shankar via lttng-dev
Hi,

I'm trying to use LTTNG to trace io_uring tracepoints, of which there
are many in the 6.5.4 kernel I am running:

# grep io_uring /sys/kernel/tracing/available_events
syscalls:sys_exit_io_uring_register
syscalls:sys_enter_io_uring_register
syscalls:sys_exit_io_uring_setup
syscalls:sys_enter_io_uring_setup
syscalls:sys_exit_io_uring_enter
syscalls:sys_enter_io_uring_enter
io_uring:io_uring_local_work_run
io_uring:io_uring_short_write
io_uring:io_uring_task_work_run
io_uring:io_uring_cqe_overflow
io_uring:io_uring_req_failed
io_uring:io_uring_task_add
io_uring:io_uring_poll_arm
io_uring:io_uring_submit_req
io_uring:io_uring_complete
io_uring:io_uring_fail_link
io_uring:io_uring_cqring_wait
io_uring:io_uring_link
io_uring:io_uring_defer
io_uring:io_uring_queue_async_work
io_uring:io_uring_file_get
io_uring:io_uring_register
io_uring:io_uring_create

However, lttng list -k does not show any of these tracepoints. I'm using
an lttng-modules built from the repo which I cloned earlier today:

$ modinfo lttng-wrapper | grep extra_version_git
extra_version_git:  v2.13.0-rc1-316-ga62977ca

(lttng-tools and lttng-ust are older; whatever version came with my
distro, but I don't think that should matter). Am I missing something?

Here's the full output of lttng list -k, in case it's helpful:

Kernel events:
-
  lttng_logger (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  compaction_isolate_migratepages (loglevel: TRACE_EMERG (0)) (type: 
tracepoint)
  compaction_isolate_freepages (loglevel: TRACE_EMERG (0)) (type: 
tracepoint)
  compaction_migratepages (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  asoc_snd_soc_bias_level_start (loglevel: TRACE_EMERG (0)) (type: 
tracepoint)
  asoc_snd_soc_bias_level_done (loglevel: TRACE_EMERG (0)) (type: 
tracepoint)
  asoc_snd_soc_dapm_start (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  asoc_snd_soc_dapm_done (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  asoc_snd_soc_dapm_widget_power (loglevel: TRACE_EMERG (0)) (type: 
tracepoint)
  asoc_snd_soc_dapm_widget_event_start (loglevel: TRACE_EMERG (0)) (type: 
tracepoint)
  asoc_snd_soc_dapm_widget_event_done (loglevel: TRACE_EMERG (0)) (type: 
tracepoint)
  asoc_snd_soc_dapm_walk_done (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  asoc_snd_soc_dapm_path (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  asoc_snd_soc_dapm_connected (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  asoc_snd_soc_jack_irq (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  asoc_snd_soc_jack_report (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  asoc_snd_soc_jack_notify (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  gpio_direction (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  gpio_value (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  block_touch_buffer (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  block_dirty_buffer (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  block_rq_requeue (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  block_rq_complete (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  block_rq_insert (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  block_rq_issue (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  block_rq_merge (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  block_bio_complete (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  block_bio_bounce (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  block_bio_backmerge (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  block_bio_frontmerge (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  block_bio_queue (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  block_getrq (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  block_plug (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  block_unplug (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  block_split (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  block_bio_remap (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  block_rq_remap (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  ext4_free_inode (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  ext4_request_inode (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  ext4_allocate_inode (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  ext4_evict_inode (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  ext4_drop_inode (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  ext4_mark_inode_dirty (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  ext4_begin_ordered_truncate (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  ext4_write_begin (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  ext4_da_write_begin (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  ext4_ordered_write_end (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  ext4_writeback_write_end (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  ext4_journalled_write_end (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  ext4_da_write_end (loglevel: 

[lttng-dev] lttng list -k is missing tracepoints

2023-10-09 Thread Uday Shankar via lttng-dev
Hi,

I'm trying to use LTTNG to trace io_uring tracepoints, of which there
are many in the 6.5.4 kernel I am running:

# grep io_uring /sys/kernel/tracing/available_events
syscalls:sys_exit_io_uring_register
syscalls:sys_enter_io_uring_register
syscalls:sys_exit_io_uring_setup
syscalls:sys_enter_io_uring_setup
syscalls:sys_exit_io_uring_enter
syscalls:sys_enter_io_uring_enter
io_uring:io_uring_local_work_run
io_uring:io_uring_short_write
io_uring:io_uring_task_work_run
io_uring:io_uring_cqe_overflow
io_uring:io_uring_req_failed
io_uring:io_uring_task_add
io_uring:io_uring_poll_arm
io_uring:io_uring_submit_req
io_uring:io_uring_complete
io_uring:io_uring_fail_link
io_uring:io_uring_cqring_wait
io_uring:io_uring_link
io_uring:io_uring_defer
io_uring:io_uring_queue_async_work
io_uring:io_uring_file_get
io_uring:io_uring_register
io_uring:io_uring_create

However, lttng list -k does not show any of these tracepoints. I'm using
an lttng-modules built from the repo which I cloned last Friday:

$ modinfo lttng-wrapper | grep extra_version_git
extra_version_git:  v2.13.0-rc1-316-ga62977ca

(lttng-tools and lttng-ust are older - whatever version came with my
distro - but I don't think that should matter). Am I missing something?

Here's the full output of lttng list -k, in case it's helpful:

Kernel events:
-
  lttng_logger (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  compaction_isolate_migratepages (loglevel: TRACE_EMERG (0)) (type: 
tracepoint)
  compaction_isolate_freepages (loglevel: TRACE_EMERG (0)) (type: 
tracepoint)
  compaction_migratepages (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  asoc_snd_soc_bias_level_start (loglevel: TRACE_EMERG (0)) (type: 
tracepoint)
  asoc_snd_soc_bias_level_done (loglevel: TRACE_EMERG (0)) (type: 
tracepoint)
  asoc_snd_soc_dapm_start (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  asoc_snd_soc_dapm_done (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  asoc_snd_soc_dapm_widget_power (loglevel: TRACE_EMERG (0)) (type: 
tracepoint)
  asoc_snd_soc_dapm_widget_event_start (loglevel: TRACE_EMERG (0)) (type: 
tracepoint)
  asoc_snd_soc_dapm_widget_event_done (loglevel: TRACE_EMERG (0)) (type: 
tracepoint)
  asoc_snd_soc_dapm_walk_done (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  asoc_snd_soc_dapm_path (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  asoc_snd_soc_dapm_connected (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  asoc_snd_soc_jack_irq (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  asoc_snd_soc_jack_report (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  asoc_snd_soc_jack_notify (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  gpio_direction (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  gpio_value (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  block_touch_buffer (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  block_dirty_buffer (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  block_rq_requeue (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  block_rq_complete (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  block_rq_insert (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  block_rq_issue (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  block_rq_merge (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  block_bio_complete (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  block_bio_bounce (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  block_bio_backmerge (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  block_bio_frontmerge (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  block_bio_queue (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  block_getrq (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  block_plug (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  block_unplug (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  block_split (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  block_bio_remap (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  block_rq_remap (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  ext4_free_inode (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  ext4_request_inode (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  ext4_allocate_inode (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  ext4_evict_inode (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  ext4_drop_inode (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  ext4_mark_inode_dirty (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  ext4_begin_ordered_truncate (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  ext4_write_begin (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  ext4_da_write_begin (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  ext4_ordered_write_end (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  ext4_writeback_write_end (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  ext4_journalled_write_end (loglevel: TRACE_EMERG (0)) (type: tracepoint)
  ext4_da_write_end (loglevel: 

Re: [lttng-dev] LTTng enable-event performance issue during active session

2023-10-06 Thread Kienan Stewart via lttng-dev

Hi Zach,

there's a proposed change to lttng-ust which addresses the issue you are 
seeing, but it hasn't finished clearing review & testing yet. 
https://review.lttng.org/c/lttng-ust/+/11006


thanks,
kienan

On 2023-10-05 18:04, Kienan Stewart via lttng-dev wrote:

Hi Zach

apologies for the delay in replying to your question.

On 2023-09-26 08:27, Kramer, Zach via lttng-dev wrote:

Hi,

I am observing a performance issue with regards to enabling events 
while a session is active and was wondering if this is expected.


LTTng versions:

  * lttng-tools: 2.13.9
  * lttng-ust: 2.13.6
Steps to reproduce:

 1. Ensure many userspace tracepoints are available in `lttng list -u`
    e.g. 100
 2. Create a new session
 3. Start session
 4. Enable new events on session

The time it takes to enable each new event has increasing cost e.g.

 1. Event 1: 1ms
 2. Event 100: 15ms
 3. Event 1000: 150ms
 4. → in total, about 1.5 minutes to enable 1000 events

If either:

 1. No userspace tracepoints are available
 2. Or the session is not started until /after/ the events are enabled

Then the time it takes to enable new events is constant (e.g. 1ms).

Below is a bash script demonstrating this behavior:

# Pre-requisite: have many userspace tracepoints available

lttng create foo
lttng enable-channel -u -s foo bar
lttng start foo

total_t1=$(date +%s%3N);

for i in {1..100}
do
    t1=$(date +%s%3N);
    lttng enable-event -u lttng_iter_$i -s foo -c bar > /dev/null
    t2=$(date +%s%3N);
    echo "Event #$i took $((t2-t1)) ms"
done

total_t2=$(date +%s%3N);

echo ""
echo "Enabling events on active session took $((total_t2-total_t1)) ms"
echo ""

lttng destroy foo

lttng create foo
lttng enable-channel -u -s foo bar

total_t1=$(date +%s%3N);

for i in {1..100}
do
    t1=$(date +%s%3N);
    lttng enable-event -u lttng_iter_$i -s foo -c bar > /dev/null
    t2=$(date +%s%3N);
    echo "Event #$i took $((t2-t1)) ms"
done

total_t2=$(date +%s%3N);

echo ""
echo "Enabling events on inactive session took $((total_t2-total_t1)) ms"
echo ""

lttng destroy foo

Is this reproducible for you? Any insight is appreciated.



I'm able to reproduce what you're seeing, and it's not completely 
unexpected based on my understanding of the architecture of lttng.


When a session is active and an event rule is enabled/disabled, the 
sessiond notifies each registered application of the event rule state 
changes. This communication goes through either Unix or TCP sockets, and 
that portion of the protocol struggles with many small changes because 
it doesn't support batching.


Enabling multiple events in a single call reduces some of the overhead, 
but the changes are still communicated serially. This means a single 
call activating thousands of events can still take a long time to 
process (e.g. `lttng enable-event -s foo -c bar -u tp0,tp1,...,tpN`).


Using glob patterns or `--all` will be significantly faster as the UST 
applications can digest those types of event rule changes with a single 
set of messages. In cases where you want most but not all events, 
flipping the logic to enable "*" but add events or patterns to omit with 
"--exclude" is going to be a better strategy.
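As a concrete sketch of those strategies (the session `foo` and channel `bar` mirror the script above; the provider name `my_provider` is hypothetical):

```shell
# One event rule with a glob pattern covers many tracepoints in a
# single set of sessiond -> application messages:
lttng enable-event -u -s foo -c bar 'my_provider:*'

# Most-but-not-all: enable everything, then name patterns to omit:
lttng enable-event -u -s foo -c bar '*' --exclude 'my_provider:debug_*'
```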


Do you have cases where you need to activate many events that couldn't 
be covered by using glob patterns and/or exclusions?


thanks,
kienan


Many thanks,

Zach


_______
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev



Re: [lttng-dev] LTTng enable-event performance issue during active session

2023-10-05 Thread Kienan Stewart via lttng-dev

Hi Zach

apologies for the delay in replying to your question.

On 2023-09-26 08:27, Kramer, Zach via lttng-dev wrote:

Hi,

I am observing a performance issue with regards to enabling events while 
a session is active and was wondering if this is expected.


LTTng versions:

  * lttng-tools: 2.13.9
  * lttng-ust: 2.13.6 


Steps to reproduce:

 1. Ensure many userspace tracepoints are available in `lttng list -u`
e.g. 100
 2. Create a new session
 3. Start session
 4. Enable new events on session

The time it takes to enable each new event has increasing cost e.g.

 1. Event 1: 1ms
 2. Event 100: 15ms
 3. Event 1000: 150ms
 4. → in total about 1.5 minutes to enable 1000 events

If either:

 1. No userspace tracepoints are available
 2. Or session is not started until /after /the events are enabled

Then the time it takes to enable new events is constant (e.g. 1ms).

Below is a bash script demonstrating this behavior:

# Pre-requisite: have many userspace tracepoints available

lttng create foo
lttng enable-channel -u -s foo bar
lttng start foo

total_t1=$(date +%s%3N);

for i in {1..100}
do
    t1=$(date +%s%3N);
    lttng enable-event -u lttng_iter_$i -s foo -c bar > /dev/null
    t2=$(date +%s%3N);
    echo "Event #$i took $((t2-t1)) ms"
done

total_t2=$(date +%s%3N);

echo ""
echo "Enabling events on active session took $((total_t2-total_t1)) ms"
echo ""

lttng destroy foo

lttng create foo
lttng enable-channel -u -s foo bar

total_t1=$(date +%s%3N);

for i in {1..100}
do
    t1=$(date +%s%3N);
    lttng enable-event -u lttng_iter_$i -s foo -c bar > /dev/null
    t2=$(date +%s%3N);
    echo "Event #$i took $((t2-t1)) ms"
done

total_t2=$(date +%s%3N);

echo ""
echo "Enabling events on inactive session took $((total_t2-total_t1)) ms"
echo ""

lttng destroy foo

Is this reproducible for you? Any insight is appreciated.



I'm able to reproduce what you're seeing, and it's not completely 
unexpected based on my understanding of the architecture of lttng.


When a session is active and an event rule is enabled/disabled, the 
sessiond notifies each registered application of the event rule state 
changes. This communication goes through either Unix or TCP sockets, and 
that portion of the protocol struggles with many small changes because 
it doesn't support batching.


Enabling multiple events in a single call reduces some of the overhead, 
but the changes are still communicated serially. This means a single 
call activating thousands of events can still take a long time to 
process (e.g. `lttng enable-event -s foo -c bar -u tp0,tp1,...,tpN`).


Using glob patterns or `--all` will be significantly faster as the UST 
applications can digest those types of event rule changes with a single 
set of messages. In cases where you want most but not all events, 
flipping the logic to enable "*" but add events or patterns to omit with 
"--exclude" is going to be a better strategy.


Do you have cases where you need to activate many events that couldn't 
be covered by using glob patterns and/or exclusions?


thanks,
kienan


Many thanks,

Zach


___
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev



[lttng-dev] LTTng enable-event performance issue during active session

2023-09-26 Thread Kramer, Zach via lttng-dev
Hi,

I am observing a performance issue with regards to enabling events while a 
session is active and was wondering if this is expected.

LTTng versions:

  *   lttng-tools: 2.13.9
  *   lttng-ust: 2.13.6

Steps to reproduce:

  1.  Ensure many userspace tracepoints are available in `lttng list -u` e.g. 
100
  2.  Create a new session
  3.  Start session
  4.  Enable new events on session

The time it takes to enable each new event has increasing cost e.g.

  1.  Event 1: 1ms
  2.  Event 100: 15ms
  3.  Event 1000: 150ms
  4.  --> in total about 1.5 minutes to enable 1000 events

If either:

  1.  No userspace tracepoints are available
  2.  Or session is not started until after the events are enabled

Then the time it takes to enable new events is constant (e.g. 1ms).


Below is a bash script demonstrating this behavior:
# Pre-requisite: have many userspace tracepoints available

lttng create foo
lttng enable-channel -u -s foo bar
lttng start foo

total_t1=$(date +%s%3N);

for i in {1..100}
do
t1=$(date +%s%3N);
lttng enable-event -u lttng_iter_$i -s foo -c bar > /dev/null
t2=$(date +%s%3N);
echo "Event #$i took $((t2-t1)) ms"
done

total_t2=$(date +%s%3N);

echo ""
echo "Enabling events on active session took $((total_t2-total_t1)) ms"
echo ""

lttng destroy foo

lttng create foo
lttng enable-channel -u -s foo bar

total_t1=$(date +%s%3N);

for i in {1..100}
do
t1=$(date +%s%3N);
lttng enable-event -u lttng_iter_$i -s foo -c bar > /dev/null
t2=$(date +%s%3N);
echo "Event #$i took $((t2-t1)) ms"
done

total_t2=$(date +%s%3N);

echo ""
echo "Enabling events on inactive session took $((total_t2-total_t1)) ms"
echo ""

lttng destroy foo

Is this reproducible for you? Any insight is appreciated.

Many thanks,
Zach
___
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev


Re: [lttng-dev] LTTNG LIB UST Crash

2023-09-07 Thread Kienan Stewart via lttng-dev

Hi Lakshmi,

On 2023-09-07 07:46, Lakshmi Deverkonda via lttng-dev wrote:

Thanks for replying.

Basically, in our python3 application we already have a logger which 
redirects the logs to a log file. By default, only info logs get logged 
unless the user explicitly turns on debug logging via the CLI.


For LTTNG Tracing, we would want to log all the events, that is, both 
info and debug.

 >>Do you think there would be any overhead on the application?


Yes, but I'm not in position to quantify its effect on your application.

When lttngust is imported, one or two threads are spawned to register 
and communicate with the lttng-sessiond (see `lttngust.agent`). Some 
debug information about this process is available when you run the 
application with `LTTNG_UST_PYTHON_DEBUG=1` set in the environment.


In terms of logging, the default behaviour is to add an instance of 
`lttngust.loghandler._Handler` to the root logger with level 
`logging.NOTSET`.


Instances of this handler load the liblttng-ust-python-agent.so dynamic 
library, and the `emit()` method encodes the record data and calls the 
library's `py_tracepoint` function. This is a thin wrapper of 
liblttng-ust, which will ultimately send the encoded event to the 
running consumerd.
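The flow described above can be mimicked with the standard `logging` module alone. This is a minimal stand-in (the class name `RecordingHandler` is invented for illustration): it attaches to the root logger at level `NOTSET` and captures each record in `emit()`, where the real handler would encode the record and call `py_tracepoint()`:

```python
import logging

class RecordingHandler(logging.Handler):
    """Stand-in for lttngust.loghandler._Handler: the real handler
    encodes each record and passes it to py_tracepoint() in
    liblttng-ust-python-agent.so; here we just capture the message."""
    def __init__(self):
        super().__init__(level=logging.NOTSET)
        self.messages = []

    def emit(self, record):
        self.messages.append(self.format(record))

handler = RecordingHandler()
handler.setFormatter(logging.Formatter("%(name)s: %(message)s"))
# Like lttngust, attach to the root logger so records from every
# application logger propagate into the handler.
logging.getLogger().addHandler(handler)

logging.getLogger("my-logger").warning("hello %s", "world")
```

After this runs, `handler.messages` holds `["my-logger: hello world"]`, which is why an application logger needs no handlers of its own for LTTng tracing to see its records.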


One of the goals of the tracing libraries is to have a minimal footprint 
and impact on the running applications, but it is non-zero. In any case, 
if you're interested in the overhead, you can profile your application 
with and without tracing and measure the effect.
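For instance, a rough timing harness (a pure-`logging` sketch; the logger names are invented, and the `StreamHandler` merely stands in for the LTTng-UST agent handler, so the absolute numbers say nothing about real tracing overhead):

```python
import io
import logging
import time

def time_logging(logger, n=10000):
    """Return seconds spent emitting n debug records through the logger."""
    start = time.perf_counter()
    for i in range(n):
        logger.debug("event %d", i)
    return time.perf_counter() - start

# Baseline: logger with no handlers attached (records go nowhere).
baseline = logging.getLogger("bench-baseline")
baseline.setLevel(logging.DEBUG)
baseline.propagate = False

# Instrumented: a StreamHandler writing to an in-memory buffer stands in
# for the LTTng-UST handler, which only exists when lttngust is imported
# under a running sessiond.
traced = logging.getLogger("bench-traced")
traced.setLevel(logging.DEBUG)
traced.propagate = False
traced.addHandler(logging.StreamHandler(io.StringIO()))

print(f"no handler:   {time_logging(baseline):.4f}s")
print(f"with handler: {time_logging(traced):.4f}s")
```

The same pattern, run once with `import lttngust` and once without, gives a first-order estimate of the handler's cost in your own application.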


 >> I cannot use the existing logger, which does file logging, so for 
lttng only I have created a new logger without any handler. Should 
this be fine?


I think this question is more about how you're writing your python 
application and not about LTTng / tracing userspace applications, so I 
don't think I can answer it for you.


 >>Also, I see the default channel created for the python logging in 
lttng, "lttng_python_channel". From the documentation, I see that we 
cannot create another channel for python logging.
I would want to modify some of the attributes of the default channel, 
such as making the event-loss mode "overwrite" and increasing the 
trace_file_count. How can I do it? This is a necessary requirement for 
our application. Can you please guide us on this?


You can create the channel explicitly with your desired configuration 
before running `lttng enable-event --python my-logger`


For example:

```
$ lttng create
Session auto-20230907-103141 created.

$ lttng enable-channel --userspace  --overwrite --tracefile-count 10 
--tracefile-size 1073741824 lttng_python_channel

ust channel lttng_python_channel enabled for session auto-20230907-103141

$ lttng enable-event --python my-logger

$ lttng list auto-20230907-103141

...
- lttng_python_channel: [enabled]

  Attributes:
Event-loss mode:  overwrite
Sub-buffer size:  524288 bytes
...
Trace file count: 10 per stream
Trace file size:  1073741824  bytes
...
```

thanks,
kienan




Regards,
Lakshmi

*From:* Kienan Stewart 
*Sent:* 06 September 2023 21:01
*To:* lttng-dev@lists.lttng.org ; Lakshmi 
Deverkonda 

*Subject:* Re: [lttng-dev] LTTNG LIB UST Crash
External email: Use caution opening links or attachments


Hi Lakshmi,

On 2023-09-06 06:02, Lakshmi Deverkonda via lttng-dev wrote:

Thanks for the reply. Issue is fixed after loading the tracing helpers.

I have one query about logging with lttng in a python3 application. Is
there any way I can avoid the file logging and only trace via lttng?



In the example python application at
https://lttng.org/docs/v2.13/#doc-python-application 
<https://lttng.org/docs/v2.13/#doc-python-application> the log messages
are not written to disk or to stderr.

I lack the details of your application to give you a more precise answer.

Hope this helps,
kienan


Regards,
Lakshmi



*From:* Kienan Stewart 
*Sent:* 05 September 2023 21:20
*To:* Lakshmi Deverkonda 
*Subject:* Re: [lttng-dev] LTTNG LIB UST Crash
External email: Use caution opening links or attachments


Hi Lakshmi,

could you please provide us with the system details and version
information for LTTng tools and UST?

The bug reporting guidelines which cover the type of information
required to respond adequately to questions can be found here:
https://lttng.org/community/#bug-reporting-guidelines 
<https://lttng.org/community/#bug-reporting-guidelines> 
<https://lttng.org/community/#bug-reporting-guidelines 
<https://lttng.org/community/#bug-reporting-guidelines>>

Given that you are instrumenting a user space application, do you have a
minimal reproducer of the crash including the details of how the
application is invoked that you would be able to share?

Some types of user space applications require tracing helpers loaded 
via LD_PRELOAD.

Re: [lttng-dev] LTTNG LIB UST Crash

2023-09-07 Thread Lakshmi Deverkonda via lttng-dev
Thanks for replying.

Basically, in our python3 application we already have a logger which 
redirects the logs to a log file. By default, only info logs get logged 
unless the user explicitly turns on debug logging via the CLI.

For LTTNG Tracing, we would want to log all the events, that is, both info and debug.
>>Do you think there would be any overhead on the application?
>> I cannot use the existing logger, which does file logging, so for lttng only 
>> I have created a new logger without any handler. Should this be fine?
>>Also, I see the default channel created for the python logging in lttng, 
>>"lttng_python_channel". From the documentation, I see that we cannot create 
>>another channel for python logging.
I would want to modify some of the attributes of the default channel, such as 
making the event-loss mode "overwrite" and increasing the trace_file_count. How 
can I do it? This is a necessary requirement for our application. Can you 
please guide us on this?


Regards,
Lakshmi

From: Kienan Stewart 
Sent: 06 September 2023 21:01
To: lttng-dev@lists.lttng.org ; Lakshmi Deverkonda 

Subject: Re: [lttng-dev] LTTNG LIB UST Crash

External email: Use caution opening links or attachments


Hi Lakshmi,

On 2023-09-06 06:02, Lakshmi Deverkonda via lttng-dev wrote:
> Thanks for the reply. Issue is fixed after loading the tracing helpers.
>
> I have one query about logging with lttng in a python3 application. Is
> there any way I can avoid the file logging and only trace via lttng?
>

In the example python application at
https://lttng.org/docs/v2.13/#doc-python-application the log messages
are not written to disk or to stderr.

I lack the details of your application to give you a more precise answer.

Hope this helps,
kienan

> Regards,
> Lakshmi
>
>
> 
> *From:* Kienan Stewart 
> *Sent:* 05 September 2023 21:20
> *To:* Lakshmi Deverkonda 
> *Subject:* Re: [lttng-dev] LTTNG LIB UST Crash
> External email: Use caution opening links or attachments
>
>
> Hi Lakshmi,
>
> could you please provide us with the system details and version
> information for LTTng tools and UST?
>
> The bug reporting guidelines which cover the type of information
> required to respond adequately to questions can be found here:
> https://lttng.org/community/#bug-reporting-guidelines
>
> Given that you are instrumenting a user space application, do you have a
> minimal reproducer of the crash including the details of how the
> application is invoked that you would be able to share?
>
> Some types of user space applications require tracing helpers loaded
> via LD_PRELOAD. More information can be found here
> https://lttng.org/docs/v2.13/#doc-prebuilt-ust-helpers

Re: [lttng-dev] LTTNG LIB UST Crash

2023-09-06 Thread Kienan Stewart via lttng-dev

Hi Lakshmi,

On 2023-09-06 06:02, Lakshmi Deverkonda via lttng-dev wrote:

Thanks for the reply. Issue is fixed after loading the tracing helpers.

I have one query about logging with lttng in a python3 application. Is 
there any way I can avoid the file logging and only trace via lttng?




In the example python application at 
https://lttng.org/docs/v2.13/#doc-python-application the log messages 
are not written to disk or to stderr.


I lack the details of your application to give you a more precise answer.

Hope this helps,
kienan


Regards,
Lakshmi



*From:* Kienan Stewart 
*Sent:* 05 September 2023 21:20
*To:* Lakshmi Deverkonda 
*Subject:* Re: [lttng-dev] LTTNG LIB UST Crash
External email: Use caution opening links or attachments


Hi Lakshmi,

could you please provide us with the system details and version
information for LTTng tools and UST?

The bug reporting guidelines which cover the type of information
required to respond adequately to questions can be found here:
https://lttng.org/community/#bug-reporting-guidelines 
<https://lttng.org/community/#bug-reporting-guidelines>

Given that you are instrumenting a user space application, do you have a
minimal reproducer of the crash including the details of how the
application is invoked that you would be able to share?

Some types of user space applications require tracing helpers loaded
via LD_PRELOAD. More information can be found here
https://lttng.org/docs/v2.13/#doc-prebuilt-ust-helpers 
<https://lttng.org/docs/v2.13/#doc-prebuilt-ust-helpers>

If you're unable to share code or other log files due to company policy,
or require responses within a guaranteed time frame, EfficiOS offers
commercial support services: 
https://www.efficios.com/services/ <https://www.efficios.com/services/>


thanks,
kienan

p.s. Sorry for forgetting to CC you in my earlier reply to the list!

On 2023-09-05 02:30, Lakshmi Deverkonda via lttng-dev wrote:

Hi All,

I am observing an lttng crash while trying to interface LTTNG with one of my
python3 applications.
I just tried the following things,

Added "import lttngust " in the code.

# lttng create clagd
Session clagd created.
Traces will be written in /root/lttng-traces/clagd-20230905-062210

# lttng enable-event --python clagd
#lttng start

#service clagd start
cumulus-core: Running cl-support for core files
"clagd-ust.95410.1693895062.core"

#0  0x7f134fb938eb in raise () from /lib/x86_64-linux-gnu/libc.so.6
#1  0x7f134fb7e535 in abort () from /lib/x86_64-linux-gnu/libc.so.6
#2  0x7f134fb7e40f in ?? () from /lib/x86_64-linux-gnu/libc.so.6
#3  0x7f134fb8c1a2 in __assert_fail () from
/lib/x86_64-linux-gnu/libc.so.6
#4  0x7f134f1a9677 in lttng_ust_add_fd_to_tracker () from
/lib/x86_64-linux-gnu/liblttng-ust.so.0
#5  0x7f134f1bdcf4 in lttng_ust_elf_create () from
/lib/x86_64-linux-gnu/liblttng-ust.so.0
#6  0x7f134f1bf8de in ?? () from /lib/x86_64-linux-gnu/liblttng-ust.so.0
#7  0x7f134fc8f957 in dl_iterate_phdr () from
/lib/x86_64-linux-gnu/libc.so.6
#8  0x7f134f1bff6b in lttng_ust_dl_update () from
/lib/x86_64-linux-gnu/liblttng-ust.so.0
#9  0x7f134f1c061a in do_lttng_ust_statedump () from
/lib/x86_64-linux-gnu/liblttng-ust.so.0
#10 0x7f134f1b5ca9 in lttng_handle_pending_statedump () from
/lib/x86_64-linux-gnu/liblttng-ust.so.0
#11 0x7f134f1ab6d1 in ?? () from /lib/x86_64-linux-gnu/liblttng-ust.so.0
#12 0x7f134f1ad7eb in ?? () from /lib/x86_64-linux-gnu/liblttng-ust.so.0
#13 0x7f134ff0dfa3 in start_thread () from
/lib/x86_64-linux-gnu/libpthread.so.0
#14 0x7f134fc5506f in clone () from /lib/x86_64-linux-gnu/libc.so.6


When I stop the lttng session, I see another core

#0  0x7fad951988eb in raise () from /lib/x86_64-linux-gnu/libc.so.6
[Current thread is 1 (Thread 0x7fad9331f700 (LWP 2221103))]
(gdb) bt
#0  0x7fad951988eb in raise () from /lib/x86_64-linux-gnu/libc.so.6
#1  0x7fad95183535 in abort () from /lib/x86_64-linux-gnu/libc.so.6
#2  0x7fad9518340f in ?? () from /lib/x86_64-linux-gnu/libc.so.6
#3  0x7fad951911a2 in __assert_fail () from
/lib/x86_64-linux-gnu/libc.so.6
#4  0x7fad947e8d9f in ?? () from /lib/x86_64-linux-gnu/liblttng-ust.so.0
#5  0x7fad947e98e3 in shm_object_table_destroy () from
/lib/x86_64-linux-gnu/liblttng-ust.so.0
#6  0x7fad947e4d9a in channel_destroy () from
/lib/x86_64-linux-gnu/liblttng-ust.so.0
#7  0x7fad947ba6b5 in lttng_session_destroy () from
/lib/x86_64-linux-gnu/liblttng-ust.so.0
#8  0x7fad947b47c6 in ?? () from /lib/x86_64-linux-gnu/liblttng-ust.so.0
#9  0x7fad947b4bac in lttng_ust_objd_unref () from
/lib/x86_64-linux-gnu/liblttng-ust.so.0
#10 0x7fad947b4bac in lttng_ust_objd_unref () from
/lib/x86_64-linux-gnu/liblttng-ust.so.0
#11 0x7fad947b4bac in lttng_ust_objd_unref () from
/lib/x86_64-linux-gnu/liblttng-ust.so.0
#12 0x7fad947b5304 in lttng_ust_objd_t

Re: [lttng-dev] LTTNG LIB UST Crash

2023-09-06 Thread Lakshmi Deverkonda via lttng-dev
Thanks for the reply. Issue is fixed after loading the tracing helpers.

I have one query about logging with lttng in a python3 application. Is there any 
way I can avoid the file logging and only trace via lttng?

Regards,
Lakshmi



From: Kienan Stewart 
Sent: 05 September 2023 21:20
To: Lakshmi Deverkonda 
Subject: Re: [lttng-dev] LTTNG LIB UST Crash

External email: Use caution opening links or attachments


Hi Lakshmi,

could you please provide us with the system details and version
information for LTTng tools and UST?

The bug reporting guidelines which cover the type of information
required to respond adequately to questions can be found here:
https://lttng.org/community/#bug-reporting-guidelines

Given that you are instrumenting a user space application, do you have a
minimal reproducer of the crash including the details of how the
application is invoked that you would be able to share?

Some types of user space applications require tracing helpers loaded
via LD_PRELOAD. More information can be found here
https://lttng.org/docs/v2.13/#doc-prebuilt-ust-helpers

If you're unable to share code or other log files due to company policy,
or require responses within a guaranteed time frame, EfficiOS offers
commercial support services: 
https://www.efficios.com/services/

thanks,
kienan

p.s. Sorry for forgetting to CC you in my earlier reply to the list!

On 2023-09-05 02:30, Lakshmi Deverkonda via lttng-dev wrote:
> Hi All,
>
> I am observing an lttng crash while trying to interface LTTNG with one of my
> python3 applications.
> I just tried the following things,
>
> Added "import lttngust " in the code.
>
> # lttng create clagd
> Session clagd created.
> Traces will be written in /root/lttng-traces/clagd-20230905-062210
>
> # lttng enable-event --python clagd
> #lttng start
>
> #service clagd start
> cumulus-core: Running cl-support for core files
> "clagd-ust.95410.1693895062.core"
>
> #0  0x7f134fb938eb in raise () from /lib/x86_64-linux-gnu/libc.so.6
> #1  0x7f134fb7e535 in abort () from /lib/x86_64-linux-gnu/libc.so.6
> #2  0x7f134fb7e40f in ?? () from /lib/x86_64-linux-gnu/libc.so.6
> #3  0x7f134fb8c1a2 in __assert_fail () from
> /lib/x86_64-linux-gnu/libc.so.6
> #4  0x7f134f1a9677 in lttng_ust_add_fd_to_tracker () from
> /lib/x86_64-linux-gnu/liblttng-ust.so.0
> #5  0x7f134f1bdcf4 in lttng_ust_elf_create () from
> /lib/x86_64-linux-gnu/liblttng-ust.so.0
> #6  0x7f134f1bf8de in ?? () from /lib/x86_64-linux-gnu/liblttng-ust.so.0
> #7  0x7f134fc8f957 in dl_iterate_phdr () from
> /lib/x86_64-linux-gnu/libc.so.6
> #8  0x7f134f1bff6b in lttng_ust_dl_update () from
> /lib/x86_64-linux-gnu/liblttng-ust.so.0
> #9  0x7f134f1c061a in do_lttng_ust_statedump () from
> /lib/x86_64-linux-gnu/liblttng-ust.so.0
> #10 0x7f134f1b5ca9 in lttng_handle_pending_statedump () from
> /lib/x86_64-linux-gnu/liblttng-ust.so.0
> #11 0x7f134f1ab6d1 in ?? () from /lib/x86_64-linux-gnu/liblttng-ust.so.0
> #12 0x7f134f1ad7eb in ?? () from /lib/x86_64-linux-gnu/liblttng-ust.so.0
> #13 0x7f134ff0dfa3 in start_thread () from
> /lib/x86_64-linux-gnu/libpthread.so.0
> #14 0x7f134fc5506f in clone () from /lib/x86_64-linux-gnu/libc.so.6
>
>
> When I stop the lttng session, I see another core
>
> #0  0x7fad951988eb in raise () from /lib/x86_64-linux-gnu/libc.so.6
> [Current thread is 1 (Thread 0x7fad9331f700 (LWP 2221103))]
> (gdb) bt
> #0  0x7fad951988eb in raise () from /lib/x86_64-linux-gnu/libc.so.6
> #1  0x7fad95183535 in abort () from /lib/x86_64-linux-gnu/libc.s

Re: [lttng-dev] LTTNG LIB UST Crash

2023-09-05 Thread Kienan Stewart via lttng-dev

Hi Lakshmi,

could you please provide us with the system details and version 
information for LTTng tools and UST?


The bug reporting guidelines which cover the type of information 
required to respond adequately to questions can be found here: 
https://lttng.org/community/#bug-reporting-guidelines


Given that you are instrumenting a user space application, do you have a 
minimal reproducer of the crash including the details of how the 
application is invoked that you would be able to share?


Some types of user space applications require tracing helpers loaded 
via LD_PRELOAD. More information can be found here 
https://lttng.org/docs/v2.13/#doc-prebuilt-ust-helpers


If you're unable to share code or other log files due to company policy, 
or require responses within a guaranteed time frame, EfficiOS offers 
commercial support services: https://www.efficios.com/services/


thanks,
kienan

On 2023-09-05 02:30, Lakshmi Deverkonda via lttng-dev wrote:

Hi All,

I am observing an lttng crash while trying to interface LTTNG with one of my 
python3 applications.

I just tried the following things,

Added "import lttngust " in the code.

# lttng create clagd
Session clagd created.
Traces will be written in /root/lttng-traces/clagd-20230905-062210

# lttng enable-event --python clagd
#lttng start

#service clagd start
cumulus-core: Running cl-support for core files 
"clagd-ust.95410.1693895062.core"


#0  0x7f134fb938eb in raise () from /lib/x86_64-linux-gnu/libc.so.6
#1  0x7f134fb7e535 in abort () from /lib/x86_64-linux-gnu/libc.so.6
#2  0x7f134fb7e40f in ?? () from /lib/x86_64-linux-gnu/libc.so.6
#3  0x7f134fb8c1a2 in __assert_fail () from 
/lib/x86_64-linux-gnu/libc.so.6
#4  0x7f134f1a9677 in lttng_ust_add_fd_to_tracker () from 
/lib/x86_64-linux-gnu/liblttng-ust.so.0
#5  0x7f134f1bdcf4 in lttng_ust_elf_create () from 
/lib/x86_64-linux-gnu/liblttng-ust.so.0

#6  0x7f134f1bf8de in ?? () from /lib/x86_64-linux-gnu/liblttng-ust.so.0
#7  0x7f134fc8f957 in dl_iterate_phdr () from 
/lib/x86_64-linux-gnu/libc.so.6
#8  0x7f134f1bff6b in lttng_ust_dl_update () from 
/lib/x86_64-linux-gnu/liblttng-ust.so.0
#9  0x7f134f1c061a in do_lttng_ust_statedump () from 
/lib/x86_64-linux-gnu/liblttng-ust.so.0
#10 0x7f134f1b5ca9 in lttng_handle_pending_statedump () from 
/lib/x86_64-linux-gnu/liblttng-ust.so.0

#11 0x7f134f1ab6d1 in ?? () from /lib/x86_64-linux-gnu/liblttng-ust.so.0
#12 0x7f134f1ad7eb in ?? () from /lib/x86_64-linux-gnu/liblttng-ust.so.0
#13 0x7f134ff0dfa3 in start_thread () from 
/lib/x86_64-linux-gnu/libpthread.so.0

#14 0x7f134fc5506f in clone () from /lib/x86_64-linux-gnu/libc.so.6


When I stop the lttng session, I see another core

#0  0x7fad951988eb in raise () from /lib/x86_64-linux-gnu/libc.so.6
[Current thread is 1 (Thread 0x7fad9331f700 (LWP 2221103))]
(gdb) bt
#0  0x7fad951988eb in raise () from /lib/x86_64-linux-gnu/libc.so.6
#1  0x7fad95183535 in abort () from /lib/x86_64-linux-gnu/libc.so.6
#2  0x7fad9518340f in ?? () from /lib/x86_64-linux-gnu/libc.so.6
#3  0x7fad951911a2 in __assert_fail () from 
/lib/x86_64-linux-gnu/libc.so.6

#4  0x7fad947e8d9f in ?? () from /lib/x86_64-linux-gnu/liblttng-ust.so.0
#5  0x7fad947e98e3 in shm_object_table_destroy () from 
/lib/x86_64-linux-gnu/liblttng-ust.so.0
#6  0x7fad947e4d9a in channel_destroy () from 
/lib/x86_64-linux-gnu/liblttng-ust.so.0
#7  0x7fad947ba6b5 in lttng_session_destroy () from 
/lib/x86_64-linux-gnu/liblttng-ust.so.0

#8  0x7fad947b47c6 in ?? () from /lib/x86_64-linux-gnu/liblttng-ust.so.0
#9  0x7fad947b4bac in lttng_ust_objd_unref () from 
/lib/x86_64-linux-gnu/liblttng-ust.so.0
#10 0x7fad947b4bac in lttng_ust_objd_unref () from 
/lib/x86_64-linux-gnu/liblttng-ust.so.0
#11 0x7fad947b4bac in lttng_ust_objd_unref () from 
/lib/x86_64-linux-gnu/liblttng-ust.so.0
#12 0x7fad947b5304 in lttng_ust_objd_table_owner_cleanup () from 
/lib/x86_64-linux-gnu/liblttng-ust.so.0

#13 0x7fad947b2b75 in ?? () from /lib/x86_64-linux-gnu/liblttng-ust.so.0
#14 0x7fad95512fa3 in start_thread () from 
/lib/x86_64-linux-gnu/libpthread.so.0

#15 0x7fad9525a06f in clone () from /lib/x86_64-linux-gnu/libc.so.6


Can you please help here if I'm missing something? This is a critical 
task item for us, but we are currently stuck with multiple lttng crashes.


Regards,
Lakshmi

___
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev



[lttng-dev] LTTNG LIB UST Crash

2023-09-05 Thread Lakshmi Deverkonda via lttng-dev
Hi All,

I am observing an lttng crash while trying to interface LTTNG with one of my 
python3 applications.
I just tried the following things,

Added "import lttngust " in the code.

# lttng create clagd
Session clagd created.
Traces will be written in /root/lttng-traces/clagd-20230905-062210

# lttng enable-event --python clagd
# lttng start

# service clagd start

cumulus-core: Running cl-support for core files 
"clagd-ust.95410.1693895062.core"

#0  0x7f134fb938eb in raise () from /lib/x86_64-linux-gnu/libc.so.6
#1  0x7f134fb7e535 in abort () from /lib/x86_64-linux-gnu/libc.so.6
#2  0x7f134fb7e40f in ?? () from /lib/x86_64-linux-gnu/libc.so.6
#3  0x7f134fb8c1a2 in __assert_fail () from /lib/x86_64-linux-gnu/libc.so.6
#4  0x7f134f1a9677 in lttng_ust_add_fd_to_tracker () from 
/lib/x86_64-linux-gnu/liblttng-ust.so.0
#5  0x7f134f1bdcf4 in lttng_ust_elf_create () from 
/lib/x86_64-linux-gnu/liblttng-ust.so.0
#6  0x7f134f1bf8de in ?? () from /lib/x86_64-linux-gnu/liblttng-ust.so.0
#7  0x7f134fc8f957 in dl_iterate_phdr () from 
/lib/x86_64-linux-gnu/libc.so.6
#8  0x7f134f1bff6b in lttng_ust_dl_update () from 
/lib/x86_64-linux-gnu/liblttng-ust.so.0
#9  0x7f134f1c061a in do_lttng_ust_statedump () from 
/lib/x86_64-linux-gnu/liblttng-ust.so.0
#10 0x7f134f1b5ca9 in lttng_handle_pending_statedump () from 
/lib/x86_64-linux-gnu/liblttng-ust.so.0
#11 0x7f134f1ab6d1 in ?? () from /lib/x86_64-linux-gnu/liblttng-ust.so.0
#12 0x7f134f1ad7eb in ?? () from /lib/x86_64-linux-gnu/liblttng-ust.so.0
#13 0x7f134ff0dfa3 in start_thread () from 
/lib/x86_64-linux-gnu/libpthread.so.0
#14 0x7f134fc5506f in clone () from /lib/x86_64-linux-gnu/libc.so.6


When I stop the lttng session, I see another core

#0  0x7fad951988eb in raise () from /lib/x86_64-linux-gnu/libc.so.6
[Current thread is 1 (Thread 0x7fad9331f700 (LWP 2221103))]
(gdb) bt
#0  0x7fad951988eb in raise () from /lib/x86_64-linux-gnu/libc.so.6
#1  0x7fad95183535 in abort () from /lib/x86_64-linux-gnu/libc.so.6
#2  0x7fad9518340f in ?? () from /lib/x86_64-linux-gnu/libc.so.6
#3  0x7fad951911a2 in __assert_fail () from /lib/x86_64-linux-gnu/libc.so.6
#4  0x7fad947e8d9f in ?? () from /lib/x86_64-linux-gnu/liblttng-ust.so.0
#5  0x7fad947e98e3 in shm_object_table_destroy () from 
/lib/x86_64-linux-gnu/liblttng-ust.so.0
#6  0x7fad947e4d9a in channel_destroy () from 
/lib/x86_64-linux-gnu/liblttng-ust.so.0
#7  0x7fad947ba6b5 in lttng_session_destroy () from 
/lib/x86_64-linux-gnu/liblttng-ust.so.0
#8  0x7fad947b47c6 in ?? () from /lib/x86_64-linux-gnu/liblttng-ust.so.0
#9  0x7fad947b4bac in lttng_ust_objd_unref () from 
/lib/x86_64-linux-gnu/liblttng-ust.so.0
#10 0x7fad947b4bac in lttng_ust_objd_unref () from 
/lib/x86_64-linux-gnu/liblttng-ust.so.0
#11 0x7fad947b4bac in lttng_ust_objd_unref () from 
/lib/x86_64-linux-gnu/liblttng-ust.so.0
#12 0x7fad947b5304 in lttng_ust_objd_table_owner_cleanup () from 
/lib/x86_64-linux-gnu/liblttng-ust.so.0
#13 0x7fad947b2b75 in ?? () from /lib/x86_64-linux-gnu/liblttng-ust.so.0
#14 0x7fad95512fa3 in start_thread () from 
/lib/x86_64-linux-gnu/libpthread.so.0
#15 0x7fad9525a06f in clone () from /lib/x86_64-linux-gnu/libc.so.6


Can you please help if I'm missing something? This is a critical task item 
for us, but we are currently stuck with multiple LTTng crashes.

Regards,
Lakshmi
___
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev


[lttng-dev] LTTng-UST and SystemTap/DTrace

2023-08-14 Thread Ondřej Surý via lttng-dev
Hi,

we are in the process of adding USDT probes to BIND 9, and the obvious 
question is whether it's possible to use the probes with the LTTng tools.

Alternatively, whether there's an easy way to use one or the other. The USDT 
probes are provided by SystemTap on Linux and by DTrace on the BSDs and Solaris, 
so they are more universal for our use, but using perf record feels more 
complicated to us.

Ideally, we would like to give users a choice or at least have a backup plan to 
switch if we pick the wrong technology and it doesn’t work for us or our users.

Any thoughts or experiences?
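
For reference, a USDT probe site on Linux typically uses the `DTRACE_PROBEn` macros from SystemTap's `<sys/sdt.h>`. The sketch below is illustrative only: the provider/probe names are made up, the `HAVE_SYS_SDT_H` guard is an assumption added so the snippet builds without the SDT headers, and the return value exists only to make the sketch testable.

```c
#include <stddef.h>

#ifdef HAVE_SYS_SDT_H
#include <sys/sdt.h>    /* provides DTRACE_PROBEn on Linux (SystemTap SDT) */
#else
/* No-op fallback so the sketch builds without systemtap-sdt-dev (assumption). */
#define DTRACE_PROBE2(provider, name, a1, a2) \
	do { (void)(a1); (void)(a2); } while (0)
#endif

/* Hypothetical instrumentation point: fire a probe when a query arrives.
 * Probe sites compile to a NOP until a tracer attaches to them. */
static size_t on_query(const char *qname, size_t qlen)
{
	DTRACE_PROBE2(bind9, query_received, qname, qlen);
	return qlen;	/* returned only so the sketch is easy to test */
}
```

Tools such as perf and bpftrace can attach to SDT markers like this at runtime, which is one reason they are attractive as a portable instrumentation layer.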

Ondrej
--
Ondřej Surý  (He/Him)


Re: [lttng-dev] LTTng bugs repository - pending administrator approval

2023-02-15 Thread Michael Jeanson via lttng-dev

On 2023-02-15 09:56, Rus, Sani via lttng-dev wrote:

Hi,

a couple of days ago I registered at LTTng bugs repository - 
https://bugs.lttng.org/ , but I still can not sign 
in. At the login I get the "Your account was created and is now pending 
administrator approval." message.


Do you know when the registration request will be approved? Should I contact 
someone in addition?


Sani


I approved your account.

Cheers,

Michael



[lttng-dev] LTTng bugs repository - pending administrator approval

2023-02-15 Thread Rus, Sani via lttng-dev
Hi,

a couple of days ago I registered at LTTng bugs repository - 
https://bugs.lttng.org/, but I still can not sign in. At the login I get the 
"Your account was created and is now pending administrator approval." message.

Do you know when the registration request will be approved? Should I contact 
someone in addition?

Sani




Re: [lttng-dev] LTTng UST structure support

2023-02-10 Thread chafraysse--- via lttng-dev

Hi Jérémie,

Right, I missed the graph C api, my bad
Thanks for the example offer but I don't want to take up more of your 
time :)


Best regards,

Charles


Hi Charles,

I'm not sure why you can't embed a graph in your application. The idea 
is to make new events visible to your source component (or the iterator 
it provides) by exposing some form of "user data" as a shared context 
[1][2]. When the event is made available to the source through that 
context, your "write" function can then pass it down the graph by 
invoking the graph's "run" method.


I'll see if I can find time to produce an example if you think it would 
be helpful.


Regards,
Jérémie

[1] 
https://babeltrace.org/docs/v2.0/libbabeltrace2/group__api-self-msg-iter.html#gaa0795c9c18725a844df5b5c705255977
[2] 
https://babeltrace.org/docs/v2.0/libbabeltrace2/group__api-self-comp.html#gae01283b1ee0c3945c4f54205a1528169
[3] 
https://babeltrace.org/docs/v2.0/libbabeltrace2/group__api-graph.html#gad2e1c1ab20d1400af1b552e70b3d567c


--
Jérémie Galarneau
EfficiOS Inc.
https://www.efficios.com

Le 2023-02-09 10:15, chafray...@free.fr a écrit :
Hi Jérémie,

Thx for your reply,
I'm doing a mockup on an Ubuntu host for now
Ok for the graph, I cannot embed it in my application though
I wanted that application to directly output CTF (with compound types,
excluding LTTng) but I guess I'll output it in my format and then
post-process it in a bt2 graph with a custom source and the ctf sink

Best regards,

Charles


Hi Charles,

I can't really comment on what the packages do, but babeltrace2 
provides those headers under "include/babeltrace2-ctf-writer" since 
that library is somewhat "grafted" to the project. Which distro are 
you using?


As for where to "plug" your bindings, that library is there to 
maintain compatibility with the ctf-writer library that was provided 
by Babeltrace 1.x as it had a number of external users. I don't expect 
it to keep up with new CTF versions.


A more "future proof" integration point is to write a source 
component, and instantiate it in a graph configured with the CTF 
filesystem sink, and feed your events through the graph.


Let me know if you want more information,
Jérémie

--
Jérémie Galarneau
EfficiOS Inc.
https://www.efficios.com

Le 2023-02-02 17:06, chafray...@free.fr a écrit :
Hi,

So I wrote a draft of Rust lib above ctf-writer, using the apis as
demonstrated in the ctf-writer test
For deployment I wanted to use "libbabeltrace2-ctf-writer.so" in the
"libbabeltrace2-dev" package but I could not locate the matching
includes in there or in the other babeltrace 2 packages
Did I miss them somewhere ? Should I have plugged in at another level
in babeltrace2 ?

Best regards,

Charles

- Mail original -
De: chafray...@free.fr
À: "Mathieu Desnoyers" 
Cc: lttng-dev@lists.lttng.org
Envoyé: Lundi 16 Janvier 2023 10:38:28
Objet: Re: [lttng-dev] LTTng UST structure support

Hi Mathieu,

Thanks for your reply :)
I'll stick to bt2 modules in the meantime then
I'll already be saving a ton of time with those and the CTF spec which
is great !

Best regards,

Charles

- Mail original -
De: "Mathieu Desnoyers" 
À: chafray...@free.fr, lttng-dev@lists.lttng.org
Envoyé: Jeudi 12 Janvier 2023 21:10:57
Objet: Re: [lttng-dev] LTTng UST structure support

On 2023-01-09 09:02, chafraysse--- via lttng-dev wrote:

Hi,

I'm looking for a CTF writer to serialize instrumentations in an
embedded Linux/Rust framework
LTTng UST looked like a very strong option, but I want to serialize
structures as CTF compound type structures and I did not see those
supported in the doc or api


This is correct. I am currently working on a new project called
"libside" (see https://git.efficios.com/?p=libside.git;a=summary) 
which

features support for compound types.

However, we still need to do the heavy-lifting implementation work of
integrating this with LTTng-UST. This is the plan towards supporting
compound types in LTTng-UST.


I'd love to have confirmation that I did not just miss something :)
If LTTng UST is out for me I will probably try to use the ctf-writer
module of babeltrace 2 instead


For now the ctf-writer modules of bt2 would be an alternative to
consider, but remember that it is not designed for low-impact tracing
such as lttng-ust. So it depends on how much tracer overhead/runtime
impact you can afford in your use-case.

Thanks,

Mathieu



Best regards,

Charles


--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com



Re: [lttng-dev] LTTng UST structure support

2023-02-09 Thread Jérémie Galarneau via lttng-dev
Hi Charles,

I'm not sure why you can't embed a graph in your application. The idea is to 
make new events visible to your source component (or the iterator it provides) 
by exposing some form of "user data" as a shared context [1][2]. When the event 
is made available to the source through that context, your "write" function can 
then pass it down the graph by invoking the graph's "run" method.

I'll see if I can find time to produce an example if you think it would be 
helpful.

Regards,
Jérémie

[1] 
https://babeltrace.org/docs/v2.0/libbabeltrace2/group__api-self-msg-iter.html#gaa0795c9c18725a844df5b5c705255977
[2] 
https://babeltrace.org/docs/v2.0/libbabeltrace2/group__api-self-comp.html#gae01283b1ee0c3945c4f54205a1528169
[3] 
https://babeltrace.org/docs/v2.0/libbabeltrace2/group__api-graph.html#gad2e1c1ab20d1400af1b552e70b3d567c

--
Jérémie Galarneau
EfficiOS Inc.
https://www.efficios.com

From: lttng-dev  on behalf of chafraysse--- 
via lttng-dev 
Sent: February 9, 2023 04:15
To: lttng-dev@lists.lttng.org 
Subject: Re: [lttng-dev] LTTng UST structure support

Hi Jérémie,

Thx for your reply,
I'm doing a mockup on an Ubuntu host for now
Ok for the graph, I cannot embed it in my application though
I wanted that application to directly output CTF (with compound types,
excluding LTTng) but I guess I'll output it in my format and then
post-process it in a bt2 graph with a custom source and the ctf sink

Best regards,

Charles

> Hi Charles,
>
> I can't really comment on what the packages do, but babeltrace2
> provides those headers under "include/babeltrace2-ctf-writer" since
> that library is somewhat "grafted" to the project. Which distro are you
> using?
>
> As for where to "plug" your bindings, that library is there to maintain
> compatibility with the ctf-writer library that was provided by
> Babeltrace 1.x as it had a number of external users. I don't expect it
> to keep up with new CTF versions.
>
> A more "future proof" integration point is to write a source component,
> and instantiate it in a graph configured with the CTF filesystem sink,
> and feed your events through the graph.
>
> Let me know if you want more information,
> Jérémie
>
> --
> Jérémie Galarneau
> EfficiOS Inc.
> https://www.efficios.com
>
> Le 2023-02-02 17:06, chafray...@free.fr a écrit :
> Hi,
>
> So I wrote a draft of Rust lib above ctf-writer, using the apis as
> demonstrated in the ctf-writer test
> For deployment I wanted to use "libbabeltrace2-ctf-writer.so" in the
> "libbabeltrace2-dev" package but I could not locate the matching
> includes in there or in the other babeltrace 2 packages
> Did I miss them somewhere ? Should I have plugged in at another level
> in babeltrace2 ?
>
> Best regards,
>
> Charles
>
> - Mail original -
> De: chafray...@free.fr
> À: "Mathieu Desnoyers" 
> Cc: lttng-dev@lists.lttng.org
> Envoyé: Lundi 16 Janvier 2023 10:38:28
> Objet: Re: [lttng-dev] LTTng UST structure support
>
> Hi Mathieu,
>
> Thanks for your reply :)
> I'll stick to bt2 modules in the meantime then
> I'll already be saving a ton of time with those and the CTF spec which
> is great !
>
> Best regards,
>
> Charles
>
> - Mail original -
> De: "Mathieu Desnoyers" 
> À: chafray...@free.fr, lttng-dev@lists.lttng.org
> Envoyé: Jeudi 12 Janvier 2023 21:10:57
> Objet: Re: [lttng-dev] LTTng UST structure support
>
> On 2023-01-09 09:02, chafraysse--- via lttng-dev wrote:
>> Hi,
>>
>> I'm looking for a CTF writer to serialize instrumentations in an
>> embedded Linux/Rust framework
>> LTTng UST looked like a very strong option, but I want to serialize
>> structures as CTF compound type structures and I did not see those
>> supported in the doc or api
>
> This is correct. I am currently working on a new project called
> "libside" (see https://git.efficios.com/?p=libside.git;a=summary) which
> features support for compound types.
>
> However, we still need to do the heavy-lifting implementation work of
> integrating this with LTTng-UST. This is the plan towards supporting
> compound types in LTTng-UST.
>
>> I'd love to have confirmation that I did not just miss something :)
>> If LTTng UST is out for me I will probably try to use the ctf-writer
>> module of babeltrace 2 instead
>
> For now the ctf-writer modules of bt2 would be an alternative to
> consider, but remember that it is not designed for low-impact tracing
> such as lttng-ust. So it depends on how much tracer overhead/runtime
> impact you can afford in your use-case.
>
> Thanks,
>
> Mathieu
>

Re: [lttng-dev] LTTng UST structure support

2023-02-09 Thread chafraysse--- via lttng-dev

Hi Jérémie,

Thx for your reply,
I'm doing a mockup on an Ubuntu host for now
Ok for the graph, I cannot embed it in my application though
I wanted that application to directly output CTF (with compound types, 
excluding LTTng) but I guess I'll output it in my format and then 
post-process it in a bt2 graph with a custom source and the ctf sink


Best regards,

Charles


Hi Charles,

I can't really comment on what the packages do, but babeltrace2 
provides those headers under "include/babeltrace2-ctf-writer" since 
that library is somewhat "grafted" to the project. Which distro are you 
using?


As for where to "plug" your bindings, that library is there to maintain 
compatibility with the ctf-writer library that was provided by 
Babeltrace 1.x as it had a number of external users. I don't expect it 
to keep up with new CTF versions.


A more "future proof" integration point is to write a source component, 
and instantiate it in a graph configured with the CTF filesystem sink, 
and feed your events through the graph.


Let me know if you want more information,
Jérémie

--
Jérémie Galarneau
EfficiOS Inc.
https://www.efficios.com

Le 2023-02-02 17:06, chafray...@free.fr a écrit :
Hi,

So I wrote a draft of Rust lib above ctf-writer, using the apis as
demonstrated in the ctf-writer test
For deployment I wanted to use "libbabeltrace2-ctf-writer.so" in the
"libbabeltrace2-dev" package but I could not locate the matching
includes in there or in the other babeltrace 2 packages
Did I miss them somewhere ? Should I have plugged in at another level
in babeltrace2 ?

Best regards,

Charles

- Mail original -
De: chafray...@free.fr
À: "Mathieu Desnoyers" 
Cc: lttng-dev@lists.lttng.org
Envoyé: Lundi 16 Janvier 2023 10:38:28
Objet: Re: [lttng-dev] LTTng UST structure support

Hi Mathieu,

Thanks for your reply :)
I'll stick to bt2 modules in the meantime then
I'll already be saving a ton of time with those and the CTF spec which
is great !

Best regards,

Charles

- Mail original -
De: "Mathieu Desnoyers" 
À: chafray...@free.fr, lttng-dev@lists.lttng.org
Envoyé: Jeudi 12 Janvier 2023 21:10:57
Objet: Re: [lttng-dev] LTTng UST structure support

On 2023-01-09 09:02, chafraysse--- via lttng-dev wrote:

Hi,

I'm looking for a CTF writer to serialize instrumentations in an
embedded Linux/Rust framework
LTTng UST looked like a very strong option, but I want to serialize
structures as CTF compound type structures and I did not see those
supported in the doc or api


This is correct. I am currently working on a new project called
"libside" (see https://git.efficios.com/?p=libside.git;a=summary) which
features support for compound types.

However, we still need to do the heavy-lifting implementation work of
integrating this with LTTng-UST. This is the plan towards supporting
compound types in LTTng-UST.


I'd love to have confirmation that I did not just miss something :)
If LTTng UST is out for me I will probably try to use the ctf-writer
module of babeltrace 2 instead


For now the ctf-writer modules of bt2 would be an alternative to
consider, but remember that it is not designed for low-impact tracing
such as lttng-ust. So it depends on how much tracer overhead/runtime
impact you can afford in your use-case.

Thanks,

Mathieu



Best regards,

Charles


--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com



Re: [lttng-dev] lttng-consumerd crash on aarch64 due to x86 arch specific optimization

2023-02-06 Thread Mathieu Desnoyers via lttng-dev

Hi Micke,

I did tweaks to make the code C++ compatible even though it's currently 
only built in C. It makes it more future-proof.


I've merged the resulting patch into lttng-ust 
master/stable-2.13/stable-2.12. Thanks for testing !


Mathieu

On 2023-02-06 11:15, Beckius, Mikael wrote:

Hello Mathieu!

I added your latest implementation to my test and it seems to perform well on 
both arm and arm64. Since the test was written in C++ I had to make a small 
change to the cast in order for the test to compile.

Micke


-Ursprungligt meddelande-
Från: Mathieu Desnoyers 
Skickat: den 2 februari 2023 17:26
Till: Beckius, Mikael ; lttng-
d...@lists.lttng.org
Ämne: Re: [lttng-dev] lttng-consumerd crash on aarch64 due to x86 arch
specific optimization

CAUTION: This email comes from a non Wind River email account!
Do not click links or open attachments unless you recognize the sender and
know the content is safe.

Hi  Mikael,

I just tried another approach to fix this issue, see:

https://review.lttng.org/c/lttng-ust/+/9413 Fix: use unaligned pointer
accesses for lttng_inline_memcpy

It is less intrusive than other approaches, and does not change the generated
code on the
most relevant architectures.

Feedback is welcome,

Thanks,

Mathieu


--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com




--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com



Re: [lttng-dev] lttng-consumerd crash on aarch64 due to x86 arch specific optimization

2023-02-06 Thread Beckius, Mikael via lttng-dev
Hello Mathieu!

I added your latest implementation to my test and it seems to perform well on 
both arm and arm64. Since the test was written in C++ I had to make a small 
change to the cast in order for the test to compile.

Micke

> -Ursprungligt meddelande-
> Från: Mathieu Desnoyers 
> Skickat: den 2 februari 2023 17:26
> Till: Beckius, Mikael ; lttng-
> d...@lists.lttng.org
> Ämne: Re: [lttng-dev] lttng-consumerd crash on aarch64 due to x86 arch
> specific optimization
> 
> CAUTION: This email comes from a non Wind River email account!
> Do not click links or open attachments unless you recognize the sender and
> know the content is safe.
> 
> Hi  Mikael,
> 
> I just tried another approach to fix this issue, see:
> 
> https://review.lttng.org/c/lttng-ust/+/9413 Fix: use unaligned pointer
> accesses for lttng_inline_memcpy
> 
> It is less intrusive than other approaches, and does not change the generated
> code on the
> most relevant architectures.
> 
> Feedback is welcome,
> 
> Thanks,
> 
> Mathieu
> 
> 
> --
> Mathieu Desnoyers
> EfficiOS Inc.
> https://www.efficios.com



Re: [lttng-dev] lttng-consumerd crash on aarch64 due to x86 arch specific optimization

2023-02-06 Thread Beckius, Mikael via lttng-dev
Hello Mathieu!

Sorry for the late reply. I was away for a few days.

I will have a look at your updated approach and get back to you on your other 
replies if still relevant, but in short:
- With __ARM_FEATURE_UNALIGNED defined 32-bit arm appears to support 2 and 4 
bytes unaligned access
- Regarding 8 bytes I found this wording:
   "__ARM_FEATURE_UNALIGNED is defined if the target supports unaligned access 
in hardware, at least to the extent of being able
   to load or store an integer word at any alignment with a single instruction. 
(There may be restrictions on load-multiple and
   floating-point accesses.)" on 
https://developer.arm.com/documentation/101028/0012/5--Feature-test-macros
   and I think all the crash reports were about 8 bytes unaligned access on arm 
32-bit
- Performance seems to improve for both aligned and unaligned access compared 
to using memcpy, but you are right that a test needs to be carefully constructed
- 64-bit arm appears to support 2, 4 and 8 bytes unaligned access

Micke


Re: [lttng-dev] LTTng UST structure support

2023-02-02 Thread Jérémie Galarneau via lttng-dev
Hi Charles,

I can't really comment on what the packages do, but babeltrace2 provides those 
headers under "include/babeltrace2-ctf-writer" since that library is somewhat 
"grafted" to the project. Which distro are you using?

As for where to "plug" your bindings, that library is there to maintain 
compatibility with the ctf-writer library that was provided by Babeltrace 1.x 
as it had a number of external users. I don't expect it to keep up with new CTF 
versions.

A more "future proof" integration point is to write a source component, and 
instantiate it in a graph configured with the CTF filesystem sink, and 
your events through the graph.

Let me know if you want more information,
Jérémie

--
Jérémie Galarneau
EfficiOS Inc.
https://www.efficios.com


From: lttng-dev  on behalf of chafraysse--- 
via lttng-dev 
Sent: February 2, 2023 11:06
To: Mathieu Desnoyers 
Cc: lttng-dev@lists.lttng.org 
Subject: Re: [lttng-dev] LTTng UST structure support

Hi,

So I wrote a draft of Rust lib above ctf-writer, using the apis as demonstrated 
in the ctf-writer test
For deployment I wanted to use "libbabeltrace2-ctf-writer.so" in the 
"libbabeltrace2-dev" package but I could not locate the matching includes in 
there or in the other babeltrace 2 packages
Did I miss them somewhere ? Should I have plugged in at another level in 
babeltrace2 ?

Best regards,

Charles

- Mail original -
De: chafray...@free.fr
À: "Mathieu Desnoyers" 
Cc: lttng-dev@lists.lttng.org
Envoyé: Lundi 16 Janvier 2023 10:38:28
Objet: Re: [lttng-dev] LTTng UST structure support

Hi Mathieu,

Thanks for your reply :)
I'll stick to bt2 modules in the meantime then
I'll already be saving a ton of time with those and the CTF spec which is great 
!

Best regards,

Charles

- Mail original -
De: "Mathieu Desnoyers" 
À: chafray...@free.fr, lttng-dev@lists.lttng.org
Envoyé: Jeudi 12 Janvier 2023 21:10:57
Objet: Re: [lttng-dev] LTTng UST structure support

On 2023-01-09 09:02, chafraysse--- via lttng-dev wrote:
> Hi,
>
> I'm looking for a CTF writer to serialize instrumentations in an
> embedded Linux/Rust framework
> LTTng UST looked like a very strong option, but I want to serialize
> structures as CTF compound type structures and I did not see those
> supported in the doc or api

This is correct. I am currently working on a new project called
"libside" (see https://git.efficios.com/?p=libside.git;a=summary) which
features support for compound types.

However, we still need to do the heavy-lifting implementation work of
integrating this with LTTng-UST. This is the plan towards supporting
compound types in LTTng-UST.

> I'd love to have confirmation that I did not just miss something :)
> If LTTng UST is out for me I will probably try to use the ctf-writer
> module of babeltrace 2 instead

For now the ctf-writer modules of bt2 would be an alternative to
consider, but remember that it is not designed for low-impact tracing
such as lttng-ust. So it depends on how much tracer overhead/runtime
impact you can afford in your use-case.

Thanks,

Mathieu

>
> Best regards,
>
> Charles
> ___
> lttng-dev mailing list
> lttng-dev@lists.lttng.org
> https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev

--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com



Re: [lttng-dev] lttng-consumerd crash on aarch64 due to x86 arch specific optimization

2023-02-02 Thread Mathieu Desnoyers via lttng-dev

Hi  Mikael,

I just tried another approach to fix this issue, see:

https://review.lttng.org/c/lttng-ust/+/9413 Fix: use unaligned pointer accesses 
for lttng_inline_memcpy

It is less intrusive than other approaches, and does not change the generated 
code on the
most relevant architectures.
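
The general idea — going through memcpy() for the small fixed sizes so the compiler emits whatever unaligned load/store the target supports, instead of dereferencing a possibly misaligned pointer (undefined behaviour in C) — can be sketched as follows. This is a simplification for illustration, not the actual patch under review:

```c
#include <string.h>
#include <stddef.h>

/* Simplified sketch, not the actual lttng-ust change: memcpy() with a
 * constant size typically compiles to a single (possibly unaligned)
 * load/store on targets that allow it, and to byte copies elsewhere,
 * without casting through misaligned pointers. */
static inline void inline_copy(void *dst, const void *src, size_t len)
{
	switch (len) {
	case 1:	memcpy(dst, src, 1); break;
	case 2:	memcpy(dst, src, 2); break;
	case 4:	memcpy(dst, src, 4); break;
	case 8:	memcpy(dst, src, 8); break;
	default: memcpy(dst, src, len); break;
	}
}
```

The switch on common sizes keeps the fast paths inlinable while remaining correct for arbitrary lengths and alignments.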

Feedback is welcome,

Thanks,

Mathieu


--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com



Re: [lttng-dev] LTTng UST structure support

2023-02-02 Thread chafraysse--- via lttng-dev
Hi,

So I wrote a draft of Rust lib above ctf-writer, using the apis as demonstrated 
in the ctf-writer test
For deployment I wanted to use "libbabeltrace2-ctf-writer.so" in the 
"libbabeltrace2-dev" package but I could not locate the matching includes in 
there or in the other babeltrace 2 packages
Did I miss them somewhere ? Should I have plugged in at another level in 
babeltrace2 ?

Best regards,

Charles

- Mail original -
De: chafray...@free.fr
À: "Mathieu Desnoyers" 
Cc: lttng-dev@lists.lttng.org
Envoyé: Lundi 16 Janvier 2023 10:38:28
Objet: Re: [lttng-dev] LTTng UST structure support

Hi Mathieu,

Thanks for your reply :)
I'll stick to bt2 modules in the meantime then
I'll already be saving a ton of time with those and the CTF spec which is great 
!

Best regards,

Charles

- Mail original -
De: "Mathieu Desnoyers" 
À: chafray...@free.fr, lttng-dev@lists.lttng.org
Envoyé: Jeudi 12 Janvier 2023 21:10:57
Objet: Re: [lttng-dev] LTTng UST structure support

On 2023-01-09 09:02, chafraysse--- via lttng-dev wrote:
> Hi,
> 
> I'm looking for a CTF writer to serialize instrumentations in an 
> embedded Linux/Rust framework
> LTTng UST looked like a very strong option, but I want to serialize 
> structures as CTF compound type structures and I did not see those 
> supported in the doc or api

This is correct. I am currently working on a new project called 
"libside" (see https://git.efficios.com/?p=libside.git;a=summary) which 
features support for compound types.

However, we still need to do the heavy-lifting implementation work of 
integrating this with LTTng-UST. This is the plan towards supporting 
compound types in LTTng-UST.

> I'd love to have confirmation that I did not just miss something :)
> If LTTng UST is out for me I will probably try to use the ctf-writer 
> module of babeltrace 2 instead

For now the ctf-writer modules of bt2 would be an alternative to 
consider, but remember that it is not designed for low-impact tracing 
such as lttng-ust. So it depends on how much tracer overhead/runtime 
impact you can afford in your use-case.

Thanks,

Mathieu

> 
> Best regards,
> 
> Charles
> ___
> lttng-dev mailing list
> lttng-dev@lists.lttng.org
> https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev

-- 
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com

___
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev


Re: [lttng-dev] lttng-consumerd crash on aarch64 due to x86 arch specific optimization

2023-01-31 Thread Mathieu Desnoyers via lttng-dev

On 2023-01-31 11:18, Mathieu Desnoyers wrote:

On 2023-01-31 11:08, Mathieu Desnoyers wrote:

On 2023-01-30 01:50, Beckius, Mikael via lttng-dev wrote:

Hello Mathieu!

I have looked at this in place of Anders and as far as I can tell 
this is not an arm64 issue but an arm issue. And even on arm 
__ARM_FEATURE_UNALIGNED is 1 so it seems the problem only occurs if 
size equals 8.


So for ARM, perhaps we should do the following in 
include/lttng/ust-arch.h:


#if defined(LTTNG_UST_ARCH_ARM) && defined(__ARM_FEATURE_UNALIGNED)
#define LTTNG_UST_ARCH_HAS_EFFICIENT_UNALIGNED_ACCESS 1
#endif

And refer to 
https://gcc.gnu.org/onlinedocs/gcc/ARM-Options.html#ARM-Options


Based on that documentation, it is possible to build with 
-mno-unaligned-access,
and for all pre-ARMv6, all ARMv6-M and for ARMv8-M Baseline 
architectures,

unaligned accesses are not enabled.

I would only push this kind of change into the master branch though, 
due to

its impact and the fact that this is only a performance improvement.


But setting LTTNG_UST_ARCH_HAS_EFFICIENT_UNALIGNED_ACCESS 1 for arm32
when __ARM_FEATURE_UNALIGNED is defined would still cause issues for
8-byte lttng_inline_memcpy with my proposed patch right ?

AFAIU 32-bit arm with __ARM_FEATURE_UNALIGNED has unaligned accesses for
2 and 4 bytes accesses, but somehow traps for unaligned 8-bytes
accesses ?


Re-reading your analysis, I may have mistakenly concluded that using the
lttng ust ring buffer in "packed" mode would be faster than aligned mode 
on arm32 and aarch64, but that's not really what you have benchmarked there.


So forget what I said about setting 
LTTNG_UST_ARCH_HAS_EFFICIENT_UNALIGNED_ACCESS to 1 for arm32 and aarch64.


There is a distinction between having efficient unaligned access and
supporting unaligned accesses at all.

For aarch64, it appears to support unaligned accesses, but it may be
slower than aligned accesses AFAIU.

For arm32, it supports unaligned accesses for 2 and 4 bytes when 
__ARM_FEATURE_UNALIGNED is set, but not for 8 bytes (it traps). Then 
it's not clear whether a 2 or 4 bytes access is slower when unaligned 
compared to aligned.


At the end of the day, it's a question of compactness of the generated 
trace data (added throughput overhead) vs cpu time required to perform 
an unaligned access vs aligned.


Thoughts ?

Thanks,

Mathieu



Thanks,

Mathieu





In addition I did some performance testing of lttng_inline_memcpy by 
extracting it and adding it to a simple test program. It appears that 
the general performance increases on arm, arm64, arm on arm64 
hardware and x86-64. But it also appears that on arm if you end up in 
memcpy the old code where you call memcpy directly is actually 
slightly faster.


Nothing unexpected here. Just make sure that your test program does 
not call lttng_inline_memcpy
with constant size values which end up optimizing away branches. In 
the context where lttng_inline_memcpy

is used, most of the time its arguments are not constants.



Skipping the memcpy fallback on arm for unaligned copies of sizes 2 
and 4 further improves the performance


This would be naturally done on your board if we conditionally
set LTTNG_UST_ARCH_HAS_EFFICIENT_UNALIGNED_ACCESS 1 for 
__ARM_FEATURE_UNALIGNED

right ?

and setting LTTNG_UST_ARCH_HAS_EFFICIENT_UNALIGNED_ACCESS 1 yields the 
best performance on arm64.


This could go into lttng-ust master branch as well, e.g.:

#if defined(LTTNG_UST_ARCH_AARCH64)
#define LTTNG_UST_ARCH_HAS_EFFICIENT_UNALIGNED_ACCESS 1
#endif

Thanks!

Mathieu



Micke
___
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev






--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com



Re: [lttng-dev] lttng-consumerd crash on aarch64 due to x86 arch specific optimization

2023-01-31 Thread Mathieu Desnoyers via lttng-dev

On 2023-01-31 11:08, Mathieu Desnoyers wrote:

On 2023-01-30 01:50, Beckius, Mikael via lttng-dev wrote:

Hello Mathieu!

I have looked at this in place of Anders and as far as I can tell this 
is not an arm64 issue but an arm issue. And even on arm 
__ARM_FEATURE_UNALIGNED is 1 so it seems the problem only occurs if 
size equals 8.


So for ARM, perhaps we should do the following in include/lttng/ust-arch.h:

#if defined(LTTNG_UST_ARCH_ARM) && defined(__ARM_FEATURE_UNALIGNED)
#define LTTNG_UST_ARCH_HAS_EFFICIENT_UNALIGNED_ACCESS 1
#endif

And refer to 
https://gcc.gnu.org/onlinedocs/gcc/ARM-Options.html#ARM-Options


Based on that documentation, it is possible to build with 
-mno-unaligned-access,

and for all pre-ARMv6, all ARMv6-M and for ARMv8-M Baseline architectures,
unaligned accesses are not enabled.

I would only push this kind of change into the master branch though, due to
its impact and the fact that this is only a performance improvement.


But setting LTTNG_UST_ARCH_HAS_EFFICIENT_UNALIGNED_ACCESS 1 for arm32
when __ARM_FEATURE_UNALIGNED is defined would still cause issues for
8-byte lttng_inline_memcpy with my proposed patch right ?

AFAIU 32-bit arm with __ARM_FEATURE_UNALIGNED has unaligned accesses for
2- and 4-byte accesses, but somehow traps on unaligned 8-byte
accesses?

Thanks,

Mathieu





In addition I did some performance testing of lttng_inline_memcpy by 
extracting it and adding it to a simple test program. It appears that 
the general performance increases on arm, arm64, arm on arm64 hardware 
and x86-64. But it also appears that on arm if you end up in memcpy 
the old code where you call memcpy directly is actually slightly faster.


Nothing unexpected here. Just make sure that your test program does not 
call lttng_inline_memcpy
with constant size values which end up optimizing away branches. In the 
context where lttng_inline_memcpy

is used, most of the time its arguments are not constants.



Skipping the memcpy fallback on arm for unaligned copies of sizes 2 
and 4 further improves the performance


This would be naturally done on your board if we conditionally
set LTTNG_UST_ARCH_HAS_EFFICIENT_UNALIGNED_ACCESS 1 for 
__ARM_FEATURE_UNALIGNED

right ?

and setting LTTNG_UST_ARCH_HAS_EFFICIENT_UNALIGNED_ACCESS 1 yields the 
best performance on arm64.


This could go into lttng-ust master branch as well, e.g.:

#if defined(LTTNG_UST_ARCH_AARCH64)
#define LTTNG_UST_ARCH_HAS_EFFICIENT_UNALIGNED_ACCESS 1
#endif

Thanks!

Mathieu



Micke




--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com



Re: [lttng-dev] lttng-consumerd crash on aarch64 due to x86 arch specific optimization

2023-01-31 Thread Mathieu Desnoyers via lttng-dev

On 2023-01-30 01:50, Beckius, Mikael via lttng-dev wrote:

Hello Mathieu!

I have looked at this in place of Anders and as far as I can tell this is not 
an arm64 issue but an arm issue. And even on arm __ARM_FEATURE_UNALIGNED is 1 
so it seems the problem only occurs if size equals 8.


So for ARM, perhaps we should do the following in include/lttng/ust-arch.h:

#if defined(LTTNG_UST_ARCH_ARM) && defined(__ARM_FEATURE_UNALIGNED)
#define LTTNG_UST_ARCH_HAS_EFFICIENT_UNALIGNED_ACCESS 1
#endif

And refer to https://gcc.gnu.org/onlinedocs/gcc/ARM-Options.html#ARM-Options

Based on that documentation, it is possible to build with -mno-unaligned-access,
and for all pre-ARMv6, all ARMv6-M and for ARMv8-M Baseline architectures,
unaligned accesses are not enabled.

I would only push this kind of change into the master branch though, due to
its impact and the fact that this is only a performance improvement.



In addition I did some performance testing of lttng_inline_memcpy by extracting 
it and adding it to a simple test program. It appears that the general 
performance increases on arm, arm64, arm on arm64 hardware and x86-64. But it 
also appears that on arm if you end up in memcpy the old code where you call 
memcpy directly is actually slightly faster.


Nothing unexpected here. Just make sure that your test program does not call 
lttng_inline_memcpy
with constant size values which end up optimizing away branches. In the context 
where lttng_inline_memcpy
is used, most of the time its arguments are not constants.



Skipping the memcpy fallback on arm for unaligned copies of sizes 2 and 4 
further improves the performance


This would be naturally done on your board if we conditionally
set LTTNG_UST_ARCH_HAS_EFFICIENT_UNALIGNED_ACCESS 1 for __ARM_FEATURE_UNALIGNED
right ?

and setting LTTNG_UST_ARCH_HAS_EFFICIENT_UNALIGNED_ACCESS 1 yields the best 
performance on arm64.

This could go into lttng-ust master branch as well, e.g.:

#if defined(LTTNG_UST_ARCH_AARCH64)
#define LTTNG_UST_ARCH_HAS_EFFICIENT_UNALIGNED_ACCESS 1
#endif

Thanks!

Mathieu



Micke


--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com



Re: [lttng-dev] lttng-consumerd crash on aarch64 due to x86 arch specific optimization

2023-01-30 Thread Beckius, Mikael via lttng-dev
Hello Mathieu!

I have looked at this in place of Anders and as far as I can tell this is not 
an arm64 issue but an arm issue. And even on arm __ARM_FEATURE_UNALIGNED is 1 
so it seems the problem only occurs if size equals 8.

In addition I did some performance testing of lttng_inline_memcpy by extracting 
it and adding it to a simple test program. It appears that the general 
performance increases on arm, arm64, arm on arm64 hardware and x86-64. But it 
also appears that on arm if you end up in memcpy the old code where you call 
memcpy directly is actually slightly faster.

Skipping the memcpy fallback on arm for unaligned copies of sizes 2 and 4 
further improves the performance and setting 
LTTNG_UST_ARCH_HAS_EFFICIENT_UNALIGNED_ACCESS 1 yields the best performance on 
arm64.

Micke


Re: [lttng-dev] lttng-consumerd crash on aarch64 due to x86 arch specific optimization

2023-01-26 Thread Anders Wallin via lttng-dev
Hi Mathieu,

I've retired and no longer have access to any aarch64 target to test it on.

Regards
Anders


On Wed, Jan 25, 2023 at 13:25, Mathieu Desnoyers wrote:

> Hi Anders,
>
> Sorry for the long delay on this one, can you have a look at the following
> fix ?
>
> https://review.lttng.org/c/lttng-ust/+/9319 Fix: aarch64: do not perform
> unaligned stores
>
> If it passes your testing, I'll merge this into lttng-ust.
>
> Thanks,
>
> Mathieu
>
> On 2017-12-28 09:13, Anders Wallin wrote:
> > Hi Mathieu,
> >
> > I finally got some time to dig into this issue. The crash only happens
> > when metadata is written AND the size of the metadata will end up in a
> > write that is 8,4,2 or 1 bytes long AND
> > that the source or destination is not aligned correctly according to HW
> > limitation. I have not found any simple way to keep the performance
> > enhancement code that is run most of the time.
> > Maybe the metadata writes should have its own write function instead.
> >
> > Here is an example of a crash (code is from lttng-ust 2.9.1 and
> > lttng-tools 2.9.6) where the size is 8 bytes and the src address is
> > unaligned at 0xf3b7eeb2;
> >
> > #0  lttng_inline_memcpy (len=8, src=0xf3b7eeb2, dest=<optimized out>) at
> > /usr/src/debug/lttng-ust/2.9.1/git/libringbuffer/backend_internal.h:610
> > No locals.
> > #1  lib_ring_buffer_write (len=8, src=0xf3b7eeb2, ctx=0xf57c47d0,
> > config=0xf737c560 ) at
> > /usr/src/debug/lttng-ust/2.9.1/git/libringbuffer/backend.h:100
> >  __len = 8
> >  handle = 0xf3b2e0c0
> >  backend_pages = <optimized out>
> >  chanb = 0xf3b2e2e0
> >  offset = <optimized out>
> >
> > #2  lttng_event_write (ctx=0xf57c47d0, src=0xf3b7eeb2, len=8) at
> > /usr/src/debug/lttng-ust/2.9.1/git/liblttng-ust/lttng-ring-buffer-metadata-client.h:267
> > No locals.
> >
> > #3  0xf7337ef8 in ustctl_write_one_packet_to_channel (channel=<optimized out>, metadata_str=0xf3b7eeb2 "", len=<optimized out>) at
> > /usr/src/debug/lttng-ust/2.9.1/git/liblttng-ust-ctl/ustctl.c:1183
> >  ctx = {chan = 0xf3b2e290, priv = 0x0, handle = 0xf3b2e0c0,
> > data_size = 8, largest_align = 1, cpu = -1, buf = 0xf6909000, slot_size
> > = 8, buf_offset = 163877, pre_offset = 163877, tsc = 0, rflags = 0,
> > ctx_len = 80, ip = 0x0, priv2 = 0x0, padding2 = '\000' <repeats ... times>, backend_pages = 0xf690c000}
> >  chan = 0xf3b2e4d8
> >  str = 0xf3b7eeb2 ""
> >  reserve_len = 8
> >  ret = <optimized out>
> >  __func__ = '\000' <repeats ... times>
> >  __PRETTY_FUNCTION__ = '\000' <repeats ... times>
> >
> > #4  0x000344cc in commit_one_metadata_packet
> > (stream=stream@entry=0xf3b2e560) at ust-consumer.c:2206
> >  write_len = <optimized out>
> >  ret = <optimized out>
> >  __PRETTY_FUNCTION__ = "commit_one_metadata_packet"
> >
> > #5  0x00036538 in lttng_ustconsumer_read_subbuffer
> > (stream=stream@entry=0xf3b2e560, ctx=ctx@entry=0x25e6e8) at
> > ust-consumer.c:2452
> >  len = 4096
> >  subbuf_size = 4093
> >  padding = <optimized out>
> >  err = -11
> >  write_index = 1
> >  ret = <optimized out>
> >  ustream = <optimized out>
> >  index = {offset = 0, packet_size = 575697416355872,
> > content_size = 17564043391468256584, timestamp_begin =
> > 17564043425827782792, timestamp_end = 34359738496,
> > Regards
> > Anders
> >
> > On Fri, Nov 24, 2017 at 20:18, Mathieu Desnoyers
> > <mailto:mathieu.desnoy...@efficios.com> wrote:
> >
> > - On Nov 24, 2017, at 3:23 AM, Anders Wallin  > > wrote:
> >
> > Hi,
> > architectures that has memory alignment restrictions may/will
> > fail with the
> > optimization done in 51b8f2fa2b972e62117caa946dd3e3565b6ca4a3.
> > Please revert the patch or make it X86 specific.
> >
> >
> > Hi Anders,
> >
> > This was added in the development cycle of lttng-ust 2.9. We could
> > perhaps
> > add a test on the pointer alignment for architectures that care
> > about it, and
> > fallback to memcpy in those cases.
> >
> > The revert approach would have been justified if this commit had
> > been backported
> > as a "fix" to a stable branch, which is not the case here. We should
> > work on
> > finding an acceptable solution that takes care of dealing with
> > unaligned pointers
> > on architectures that care about the difference.
> >
> > Thanks,
> >
> > Mathieu
> >
> >
> >
> > Regards
> >
> > Anders Wallin
> >
> > commit 51b8f2fa2b972e62117caa946dd3e3565b6ca4a3
> > Author: Mathieu Desnoyers  > >
> > Date:   Sun Sep 25 12:31:11 2016 -0400
> >
> >  Performance: implement lttng_inline_memcpy
> >  Because all length parameters received for serializing data
> > coming from
> >  applications go through a callback, they 

Re: [lttng-dev] lttng-consumerd crash on aarch64 due to x86 arch specific optimization

2023-01-26 Thread Mathieu Desnoyers via lttng-dev

On 2023-01-26 14:32, Anders Wallin wrote:

Hi Mathieu,

I've retired and no longer have access to any aarch64 target to test it on.



Thanks for your reply Anders,

I've talked to Henrik and Pär today and they are already testing it out.

Enjoy your retirement :)

Best regards,

Mathieu

--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com



Re: [lttng-dev] lttng-consumerd crash on aarch64 due to x86 arch specific optimization

2023-01-25 Thread Mathieu Desnoyers via lttng-dev

Hi Anders,

Sorry for the long delay on this one, can you have a look at the following fix ?

https://review.lttng.org/c/lttng-ust/+/9319 Fix: aarch64: do not perform 
unaligned stores

If it passes your testing, I'll merge this into lttng-ust.

Thanks,

Mathieu

On 2017-12-28 09:13, Anders Wallin wrote:

Hi Mathieu,

I finally got some time to dig into this issue. The crash only happens 
when metadata is written AND the size of the metadata will end up in a 
write that is 8,4,2 or 1 bytes long AND
that the source or destination is not aligned correctly according to HW 
limitation. I have not found any simple way to keep the performance 
enhancement code that is run most of the time.

Maybe the metadata writes should have its own write function instead.

Here is an example of a crash (code is from lttng-ust 2.9.1 and 
lttng-tools 2.9.6) where the size is 8 bytes and the src address is 
unaligned at 0xf3b7eeb2;


#0  lttng_inline_memcpy (len=8, src=0xf3b7eeb2, dest=<optimized out>) at 
/usr/src/debug/lttng-ust/2.9.1/git/libringbuffer/backend_internal.h:610

No locals.
#1  lib_ring_buffer_write (len=8, src=0xf3b7eeb2, ctx=0xf57c47d0, 
config=0xf737c560 ) at 
/usr/src/debug/lttng-ust/2.9.1/git/libringbuffer/backend.h:100

         __len = 8
         handle = 0xf3b2e0c0
         backend_pages = <optimized out>
         chanb = 0xf3b2e2e0
         offset = <optimized out>

#2  lttng_event_write (ctx=0xf57c47d0, src=0xf3b7eeb2, len=8) at 
/usr/src/debug/lttng-ust/2.9.1/git/liblttng-ust/lttng-ring-buffer-metadata-client.h:267

No locals.

#3  0xf7337ef8 in ustctl_write_one_packet_to_channel (channel=<optimized out>, metadata_str=0xf3b7eeb2 "", len=<optimized out>) at 
/usr/src/debug/lttng-ust/2.9.1/git/liblttng-ust-ctl/ustctl.c:1183
         ctx = {chan = 0xf3b2e290, priv = 0x0, handle = 0xf3b2e0c0, 
data_size = 8, largest_align = 1, cpu = -1, buf = 0xf6909000, slot_size 
= 8, buf_offset = 163877, pre_offset = 163877, tsc = 0, rflags = 0, 
ctx_len = 80, ip = 0x0, priv2 = 0x0, padding2 = '\000' <repeats ... times>, backend_pages = 0xf690c000}

         chan = 0xf3b2e4d8
         str = 0xf3b7eeb2 ""
         reserve_len = 8
         ret = <optimized out>
         __func__ = '\000' <repeats ... times>
         __PRETTY_FUNCTION__ = '\000' <repeats ... times>

#4  0x000344cc in commit_one_metadata_packet 
(stream=stream@entry=0xf3b2e560) at ust-consumer.c:2206

         write_len = <optimized out>
         ret = <optimized out>
         __PRETTY_FUNCTION__ = "commit_one_metadata_packet"

#5  0x00036538 in lttng_ustconsumer_read_subbuffer 
(stream=stream@entry=0xf3b2e560, ctx=ctx@entry=0x25e6e8) at 
ust-consumer.c:2452

         len = 4096
         subbuf_size = 4093
         padding = <optimized out>
         err = -11
         write_index = 1
         ret = <optimized out>
         ustream = <optimized out>
         index = {offset = 0, packet_size = 575697416355872, 
content_size = 17564043391468256584, timestamp_begin = 
17564043425827782792, timestamp_end = 34359738496,

Regards
Anders

On Fri, Nov 24, 2017 at 20:18, Mathieu Desnoyers 
<mailto:mathieu.desnoy...@efficios.com> wrote:


- On Nov 24, 2017, at 3:23 AM, Anders Wallin mailto:walli...@gmail.com>> wrote:

Hi,
architectures that has memory alignment restrictions may/will
fail with the
optimization done in 51b8f2fa2b972e62117caa946dd3e3565b6ca4a3.
Please revert the patch or make it X86 specific.


Hi Anders,

This was added in the development cycle of lttng-ust 2.9. We could
perhaps
add a test on the pointer alignment for architectures that care
about it, and
fallback to memcpy in those cases.

The revert approach would have been justified if this commit had
been backported
as a "fix" to a stable branch, which is not the case here. We should
work on
finding an acceptable solution that takes care of dealing with
unaligned pointers
on architectures that care about the difference.

Thanks,

Mathieu



Regards

Anders Wallin


commit 51b8f2fa2b972e62117caa946dd3e3565b6ca4a3
Author: Mathieu Desnoyers mailto:mathieu.desnoy...@efficios.com>>
Date:   Sun Sep 25 12:31:11 2016 -0400

     Performance: implement lttng_inline_memcpy
     Because all length parameters received for serializing data
coming from
     applications go through a callback, they are never
constant, and it
     hurts performance to perform a call to memcpy each time.
     Signed-off-by: Mathieu Desnoyers
mailto:mathieu.desnoy...@efficios.com>>

diff --git a/libringbuffer/backend_internal.h
b/libringbuffer/backend_internal.h
index 90088b89..e597cf4d 100644
--- a/libringbuffer/backend_internal.h
+++ b/libringbuffer/backend_internal.h
@@ -592,6 +592,28 @@ int update_read_sb_index(const struct
lttng_ust_lib_ring_buffer_config *config,
  #define inline_memcpy(dest, src, n)    

Re: [lttng-dev] LTTng UST structure support

2023-01-16 Thread chafraysse--- via lttng-dev
Hi Mathieu,

Thanks for your reply :)
I'll stick to bt2 modules in the meantime, then.
I'll already be saving a ton of time with those and the CTF spec, which is 
great!

Best regards,

Charles

- Original Message -
From: "Mathieu Desnoyers" 
To: chafray...@free.fr, lttng-dev@lists.lttng.org
Sent: Thursday, January 12, 2023 21:10:57
Subject: Re: [lttng-dev] LTTng UST structure support

On 2023-01-09 09:02, chafraysse--- via lttng-dev wrote:
> Hi,
> 
> I'm looking for a CTF writer to serialize instrumentations in an 
> embedded Linux/Rust framework
> LTTng UST looked like a very strong option, but I want to serialize 
> structures as CTF compound type structures and I did not see those 
> supported in the doc or api

This is correct. I am currently working on a new project called 
"libside" (see https://git.efficios.com/?p=libside.git;a=summary) which 
features support for compound types.

However, we still need to do the heavy-lifting implementation work of 
integrating this with LTTng-UST. This is the plan towards supporting 
compound types in LTTng-UST.

> I'd love to have confirmation that I did not just miss something :)
> If LTTng UST is out for me I will probably try to use the ctf-writer 
> module of babeltrace 2 instead

For now the ctf-writer modules of bt2 would be an alternative to 
consider, but remember that it is not designed for low-impact tracing 
such as lttng-ust. So it depends on how much tracer overhead/runtime 
impact you can afford in your use-case.

Thanks,

Mathieu

> 
> Best regards,
> 
> Charles

-- 
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com



Re: [lttng-dev] LTTng UST structure support

2023-01-12 Thread Mathieu Desnoyers via lttng-dev

On 2023-01-09 09:02, chafraysse--- via lttng-dev wrote:

Hi,

I'm looking for a CTF writer to serialize instrumentations in an 
embedded Linux/Rust framework
LTTng UST looked like a very strong option, but I want to serialize 
structures as CTF compound type structures and I did not see those 
supported in the doc or api


This is correct. I am currently working on a new project called 
"libside" (see https://git.efficios.com/?p=libside.git;a=summary) which 
features support for compound types.


However, we still need to do the heavy-lifting implementation work of 
integrating this with LTTng-UST. This is the plan towards supporting 
compound types in LTTng-UST.



I'd love to have confirmation that I did not just miss something :)
If LTTng UST is out for me I will probably try to use the ctf-writer 
module of babeltrace 2 instead


For now the ctf-writer modules of bt2 would be an alternative to 
consider, but remember that it is not designed for low-impact tracing 
such as lttng-ust. So it depends on how much tracer overhead/runtime 
impact you can afford in your use-case.


Thanks,

Mathieu



Best regards,

Charles


--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com



[lttng-dev] LTTng UST structure support

2023-01-09 Thread chafraysse--- via lttng-dev

Hi,

I'm looking for a CTF writer to serialize instrumentations in an 
embedded Linux/Rust framework
LTTng UST looked like a very strong option, but I want to serialize 
structures as CTF compound type structures and I did not see those 
supported in the doc or api

I'd love to have confirmation that I did not just miss something :)
If LTTng UST is out for me I will probably try to use the ctf-writer 
module of babeltrace 2 instead


Best regards,

Charles


Re: [lttng-dev] LTTng and containers.

2022-10-12 Thread Michael Jeanson via lttng-dev

On 2022-10-10 17:12, Maksim Khmelevskiy via lttng-dev wrote:

Hi,

I would like to ask regarding the hot topic - container tracing. I've seen a 
youtube video <https://www.youtube.com/watch?v=hra-eu6EOpY>, have read a 
message <https://lists.lttng.org/pipermail/lttng-dev/2021-May/029952.html> 
from the LTTng mailing list and tried to google more about this topic but 
didn't find much info. Could you please direct me where I should continue 
digging?

My problem:
I would like to have multiple containers where traces are generated by apps 
with compiled-in tracepoints. Traces could be stored in these containers as well.
Besides these containers I would have a trace processor container, a master 
container which could address a container(or a trace session) and fetch traces 
from it and read with babeltrace or similar tool.
So far, intuitively, remote tracing comes to my mind but before continuing 
with the task I would be happy to hear advice from the LTTng devs.


Thank you!


Hi,

You can either use lttng-relayd to send the traces over the network to your 
trace processor container, or just trace to disk on shared directories. You 
should have a look at trace rotation, https://doc.lttng.org is probably the 
best place to start.


Michael



Re: [lttng-dev] LTTng and containers.

2022-10-12 Thread Jérémie Galarneau via lttng-dev
Hi Maksim,

It's hard to give general advice without knowing more about your constraints.

I can see three main approaches that are easy to reach:

  1.  Store the traces in each container and fetch them,
  2.  Trace to a shared volume (make sure sessions are stopped or rotated 
before you read them),
  3.  Or stream traces to a "collector" container (perhaps the one doing the 
analyses) using network streaming

If you want to continuously consume traces, I would suggest you look into 
"trace rotations". You could setup periodic trace rotations (based on size or 
time) and ship traces to your processor node as they become available.
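A rough sketch of option 3 combined with size-based rotations (the collector
host name, session name, event pattern, and 64M threshold are all made up for
illustration; check the lttng(1) and lttng-relayd(8) manual pages for the
exact options on your version):

```shell
# On the collector container: receive streamed traces.
lttng-relayd --daemonize

# On each application container: stream to the collector and rotate
# the session every 64 MiB so completed trace chunks can be analyzed
# while tracing continues.
lttng create my-session --set-url=net://collector-host
lttng enable-event --userspace 'my_app:*'
lttng enable-rotation --session=my-session --size=64M
lttng start
```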

Although my demo wasn't running in a container, the basic idea is what I 
presented at FOSDEM in 2019:

https://archive.fosdem.org/2019/schedule/event/lttng/
FOSDEM 2019 - Fine-grained Distributed Application Monitoring Using LTTng

I hope that helps a bit,
Jérémie

--
Jérémie Galarneau
EfficiOS Inc.
https://www.efficios.com

From: lttng-dev  on behalf of Maksim 
Khmelevskiy via lttng-dev 
Sent: October 10, 2022 12:12
To: lttng-dev@lists.lttng.org 
Subject: [lttng-dev] LTTng and containers.

Hi,

I would like to ask regarding the hot topic - container tracing. I've seen a 
youtube video <https://www.youtube.com/watch?v=hra-eu6EOpY>, have read a 
message <https://lists.lttng.org/pipermail/lttng-dev/2021-May/029952.html> 
from the LTTng mailing list and tried to google more about this topic but 
didn't find much info. Could you please direct me where I should continue 
digging?
My problem:
I would like to have multiple containers where traces are generated by apps 
with compiled-in tracepoints. Traces could be stored in these containers as 
well.
Besides these containers I would have a trace processor container, a master 
container which could address a container(or a trace session) and fetch traces 
from it and read with babeltrace or similar tool.
So far, intuitively, remote tracing comes to my mind but before continuing with 
the task I would be happy to hear advice from the LTTng devs.

Thank you!


Re: [lttng-dev] lttng-ust on arm64 getting bogged down by the getcpu syscall (taking more than 600ns per call)

2022-10-12 Thread Jérémie Galarneau via lttng-dev
Hi,

I think the first problem you will run into is that rseq is only available on 
kernels 4.18+.

As far as I know, starting with glibc 2.35, getcpu() defers to using rseq when 
available.

Is there a way you can upgrade these two components and see if the overhead is 
reduced?

Jérémie

--
Jérémie Galarneau
EfficiOS Inc.
https://www.efficios.com

From: lttng-dev  on behalf of Akhil 
Veeraghanta via lttng-dev 
Sent: October 11, 2022 16:57
To: lttng-dev@lists.lttng.org 
Subject: [lttng-dev] lttng-ust on arm64 getting bogged down by the getcpu 
syscall (taking more than 600ns per call)

Hello!

I've run into an issue with getcpu not having a vDSO implementation and taking 
anywhere from 600ns to 80us (avg 1 us) when using lttng-ust tracepoints.
I am on lttng v2.13 and kernel version 4.9.253-l4t, running on a Jetson 
(arm64). I was digging around and found that rseq might be the recommended 
next step.

I am wondering:

  1.  Are there examples of using rseq system call to replace getcpu
  2.  Are there any existing patches that I can apply to get better getcpu 
performance

Thanks in advance,
Akhil


[lttng-dev] lttng-ust on arm64 getting bogged down by the getcpu syscall (taking more than 600ns per call)

2022-10-11 Thread Akhil Veeraghanta via lttng-dev
Hello!

I've run into an issue with getcpu not having a vDSO implementation and taking 
anywhere from 600ns to 80us (avg 1 us) when using lttng-ust tracepoints.
I am on lttng v2.13 and kernel version 4.9.253-l4t, running on a Jetson 
(arm64). I was digging around and found that rseq might be the recommended 
next step.

I am wondering:

  1.  Are there examples of using rseq system call to replace getcpu
  2.  Are there any existing patches that I can apply to get better getcpu 
performance

Thanks in advance,
Akhil


[lttng-dev] LTTng and containers.

2022-10-11 Thread Maksim Khmelevskiy via lttng-dev
Hi,

I would like to ask regarding the hot topic - container tracing. I've seen
a youtube video <https://www.youtube.com/watch?v=hra-eu6EOpY>, have read a
message <https://lists.lttng.org/pipermail/lttng-dev/2021-May/029952.html>
from the LTTng mailing list and tried to google more about this topic but
didn't find much info. Could you please direct me where I should
continue digging?
My problem:
I would like to have multiple containers where traces are generated by apps
with compiled-in tracepoints. Traces could be stored in these containers as
well.
Besides these containers I would have a trace processor container, a master
container which could address a container(or a trace session) and fetch
traces from it and read with babeltrace or similar tool.
So far, intuitively, remote tracing comes to my mind but before continuing
with the task I would be happy to hear advice from the LTTng devs.

Thank you!


Re: [lttng-dev] LTTng on RHEL 7.9 with real time kernel patch

2022-07-18 Thread Mathieu Desnoyers via lttng-dev
- On Jul 15, 2022, at 9:19 AM, Pennese Marco via lttng-dev 
 wrote: 

> Hi,

> I’m trying to use LTTng on a Red Hat Enterprise Linux 7.9 (with the real time
> kernel patch).

> I installed LTTng on the machine following the EfficiOS guide linked by lttng
> official website.

> In my configuration, lttng manages to record user space events.

> I’d like to record also kernel space events, so I’m following the quickstart
> guide on the website.
> As soon as I run “lttng list --kernel”, I obtain the error:
> Error: Unable to list kernel events: Kernel tracer not available
> Error: Command error

> If I repeat the procedure on a machine without the real time kernel patch, it
> works as it should.

> The real time kernel I’m using is the following

> 3.10.0-1160.62.1.rt56.1203.el7.x86_64

> Do you think the error is due to the real time patch? Can LTTng work with a 
> real
> time kernel?

> Do I need to follow a different procedure for installing LTTng on a RHEL 7.9
> real time machine?

Hi Marco, 

Specific packaging needs around RHEL 7, including the real-time variant, are 
part of 
EfficiOS packaging, support and feature development services, which allow us to 
develop and maintain the entire LTTng open source ecosystem. Please contact us 
in private to discuss the available options. 

Thanks for reaching out! 

Mathieu 

> Thank you,

> Marco


> The contents of this email message and any attachments are intended solely for
> the addressee(s) and contain confidential and/or privileged information.
> If you are not the intended recipient of this message, or if this message has
> been addressed to you in error, please immediately notify the sender and then
> delete this message and any attachments from your system. If you are not the
> intended recipient, you are hereby notified that any use, dissemination,
> copying, or storage of this message or its attachments is strictly prohibited.
> Unauthorized disclosure and/or use of information contained in this email
> message may result in civil and criminal liability. “
> This e-mail has legal value according to the applicable laws only if it is
> digitally signed by the sender

> ___
> lttng-dev mailing list
> lttng-dev@lists.lttng.org
> https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev

-- 
Mathieu Desnoyers 
EfficiOS Inc. 
http://www.efficios.com 
___
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev


[lttng-dev] LTTng on RHEL 7.9 with real time kernel patch

2022-07-15 Thread Pennese Marco via lttng-dev
Hi,

I’m trying to use LTTng on a Red Hat Enterprise Linux 7.9 (with the real time 
kernel patch).
I installed LTTng on the machine following the EfficiOS guide linked by lttng 
official website.

In my configuration, lttng manages to record user space events.
I’d like to record also kernel space events, so I’m following the quickstart 
guide on the website.

As soon as I run "lttng list --kernel", I obtain the error:



Error: Unable to list kernel events: Kernel tracer not available

Error: Command error

If I repeat the procedure on a machine without the real time kernel patch, it 
works as it should.
The real-time kernel I'm using is the following:
3.10.0-1160.62.1.rt56.1203.el7.x86_64

Do you think the error is due to the real time patch? Can LTTng work with a 
real time kernel?
Do I need to follow a different procedure for installing LTTng on a RHEL 7.9 
real time machine?

Thank you,
Marco

___
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev


Re: [lttng-dev] [lttng-tools] Removal of root_regression tests

2022-06-14 Thread Marcel Hamer via lttng-dev
Hello Jonathan,

On Mon, Jun 13, 2022 at 11:21:49AM -0400, Jonathan Rajotte-Julien wrote:
> Hi Marcel,
> 
> - Original Message -
> > From: "Marcel Hamer via lttng-dev" 
> > To: "lttng-dev" 
> > Sent: Monday, 13 June, 2022 07:49:39
> > Subject: [lttng-dev] [lttng-tools] Removal of root_regression tests
> 
> > Hello,
> >
> > Since version v2.12.9 of lttng-tools the root_regression file has been 
> > emptied
> > to make the tests part of the 'make check' sequence instead.
> >
> > We were always actively using that test file as part of our regression 
> > testing.
> > In our case we are working in a cross-compilation environment, where the 
> > run.sh
> > script was used on target for testing and as such not at compile time. It 
> > is not
> > easy to run a make check sequence on a target.
> 
> I would suggest that you take a look at how OpenEmbedded does it with ptest;
> AFAIK it matches your requirements:
> 
> https://github.com/openembedded/openembedded-core/blob/c7e2901eacf3dcbd0c5bb91d2cc1d467b4a9aaf7/meta/recipes-kernel/lttng/lttng-tools_2.13.7.bb#L75
> 

That is a very good suggestion. I guess we were a bit too focused on our 
existing
solution of using run.sh. We will look into this.

> >
> > It is now also a bit unclear which tests actually require root access and 
> > which
> > tests do not. I understood this was the reason the file was called
> > 'root_regression'?
> 
> Yes when the tests suites primarily used `prove` via run.sh.
> 
> We have been slowly moving away from it for quite some time and now mostly use
> the Automake test harness as much as possible.
> 
> The worst that will happen if you run a test that requires root as a non-root
> user is that `skip` TAP output will be emitted.
> 
> >
> > Some questions that get raised because of this:
> >
> > - Is there now an alternative way to run regressions on target in case of a
> >  cross-compilation environment?
> 
> AFAIU, this is out of scope of the lttng project. Still, I would recommend 
> that you see how yocto/oe do it with ptest.
> 
> > - Would there be a possibility to fill the 'root_regression' file again and
> >  possibly revert this change?
> 
> Feel free to do it out-of-tree. I doubt that we are the only project that
> WindRiver handles that uses the Automake test harness and does not provide
> an easy way to run on-target for cross-compilation testing.

Yes, you are right and that is a fair point. We will look into the ptest
solution.

> 
> A quick grep with "isroot" should get you 95% there.
> 
> > - How are tests now identified that require root access?
> 
> All tests that require root access test for it at runtime
> 
> Something along:
> 
> regression/tools/streaming/test_high_throughput_limits:
> 
>     if [ "$(id -u)" == "0" ]; then
>         isroot=1
>     else
>         isroot=0
>     fi
> 
>     skip $isroot "Root access is needed to set bandwidth limits. Skipping all tests." $NUM_TESTS ||
>     {
>         # ... tests are done here.
>     }
> 
> Cheers

Thanks for the tip on how to identify test cases that require root privileges.

Kind regards,

Marcel
___
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev


Re: [lttng-dev] [lttng-tools] Removal of root_regression tests

2022-06-13 Thread Jonathan Rajotte-Julien via lttng-dev
Hi Marcel,

- Original Message -
> From: "Marcel Hamer via lttng-dev" 
> To: "lttng-dev" 
> Sent: Monday, 13 June, 2022 07:49:39
> Subject: [lttng-dev] [lttng-tools] Removal of root_regression tests

> Hello,
> 
> Since version v2.12.9 of lttng-tools the root_regression file has been emptied
> to make the tests part of the 'make check' sequence instead.
> 
> We were always actively using that test file as part of our regression 
> testing.
> In our case we are working in a cross-compilation environment, where the 
> run.sh
> script was used on target for testing and as such not at compile time. It is 
> not
> easy to run a make check sequence on a target.

I would suggest that you take a look at how OpenEmbedded does it with ptest; 
AFAIK it matches your requirements:

https://github.com/openembedded/openembedded-core/blob/c7e2901eacf3dcbd0c5bb91d2cc1d467b4a9aaf7/meta/recipes-kernel/lttng/lttng-tools_2.13.7.bb#L75

> 
> It is now also a bit unclear which tests actually require root access and 
> which
> tests do not. I understood this was the reason the file was called
> 'root_regression'?

Yes when the tests suites primarily used `prove` via run.sh.

We have been slowly moving away from it for quite some time and now mostly use 
the Automake test harness as much as possible.

The worst that will happen if you run a test that requires root as a non-root 
user is that `skip` TAP output will be emitted.

> 
> Some questions that get raised because of this:
> 
> - Is there now an alternative way to run regressions on target in case of a
>  cross-compilation environment?

AFAIU, this is out of scope of the lttng project. Still, I would recommend that 
you see how yocto/oe do it with ptest.

> - Would there be a possibility to fill the 'root_regression' file again and
>  possibly revert this change?

Feel free to do it out-of-tree. I doubt that we are the only project that 
WindRiver handles that uses the Automake test harness and does not provide 
an easy way to run on-target for cross-compilation testing.

A quick grep with "isroot" should get you 95% there.

> - How are tests now identified that require root access?

All tests that require root access test for it at runtime

Something along:

regression/tools/streaming/test_high_throughput_limits:

    if [ "$(id -u)" == "0" ]; then
        isroot=1
    else
        isroot=0
    fi

    skip $isroot "Root access is needed to set bandwidth limits. Skipping all tests." $NUM_TESTS ||
    {
        # ... tests are done here.
    }
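The pattern above can be exercised standalone; here is a minimal self-contained sketch (assumption: `skip` is normally provided by the test suite's TAP helper scripts and is stubbed here for illustration):

```shell
#!/bin/sh
# Stub of the suite's `skip` helper (assumption: simplified for illustration;
# the real helper lives in the suite's TAP utilities). When <ok> is 0 it emits
# a TAP "skip" line and returns 0, so the `|| { ... }` test body is skipped.
skip() {
    ok=$1
    msg=$2
    num_tests=$3
    if [ "$ok" -eq 0 ]; then
        echo "ok 1 # skip $msg"
        return 0
    fi
    return 1
}

# Runtime root check, as quoted from test_high_throughput_limits.
if [ "$(id -u)" = "0" ]; then
    isroot=1
else
    isroot=0
fi

skip $isroot "Root access is needed. Skipping all tests." 1 ||
{
    echo "running root-only tests"
}
```

Run as a regular user it emits one TAP `skip` line; run as root it falls through to the test body.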

Cheers
___
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev


[lttng-dev] [lttng-tools] Removal of root_regression tests

2022-06-13 Thread Marcel Hamer via lttng-dev
Hello,

Since version v2.12.9 of lttng-tools the root_regression file has been emptied
to make the tests part of the 'make check' sequence instead. 

We were always actively using that test file as part of our regression testing. 
In our case we are working in a cross-compilation environment, where the run.sh
script was used on target for testing and as such not at compile time. It is not
easy to run a make check sequence on a target.

It is now also a bit unclear which tests actually require root access and which
tests do not. I understood this was the reason the file was called
'root_regression'?

Some questions that get raised because of this:

- Is there now an alternative way to run regressions on target in case of a
  cross-compilation environment? 
- Would there be a possibility to fill the 'root_regression' file again and
  possibly revert this change?
- How are tests now identified that require root access?

This has been done as part of commit 9e2d9d2bae015e6748ebf8575ea602ee0fe65c62 in
lttng-tools.

Thank you in advance!

Kind regards,

Marcel
___
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev


Re: [lttng-dev] lttng-consumerd can NOT get notification to consume ring buffer data

2021-12-07 Thread Mathieu Desnoyers via lttng-dev
Hi, 

Can you try: 

- Reproducing with a more recent LTTng-UST/LTTng-Tools (e.g. 2.13)? LTTng 2.7 
  is not supported anymore. 
- Try to reproduce with "per-pid" UST buffers rather than "per-uid", 
- Try to reproduce without the "tracefile rotation" mode (without the 
  --tracefile-count and --tracefile-size options on the channel). 
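A sketch of a reproduction setup along those lines (assumption: the session and channel names are placeholders, and a supported lttng-sessiond is running):

```shell
#!/bin/sh
# Hedged reproduction sketch: per-pid UST buffers, no tracefile rotation.
# (Assumption: session/channel names and the output path are placeholders.)
if ! command -v lttng >/dev/null 2>&1; then
    echo "lttng not installed; skipping"
    exit 0
fi
lttng create repro-session --output=/tmp/repro-trace
lttng enable-channel --userspace --buffers-pid chan0   # per-pid rather than per-uid;
                                                       # note: no --tracefile-size/--tracefile-count
lttng enable-event --userspace --channel chan0 --all
lttng start
```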

Also, do you happen to have traced applications that are frequently killed 
asynchronously while some of their threads are actively tracing? 

Thanks, 

Mathieu 

- On Dec 7, 2021, at 4:03 AM, lttng-dev  wrote: 

> Hi, Lttng-dev:

> I found a strange problem related to lttng-consumerd and CTF files recently.
> The CTF files belonging to some CPUs have stopped rotating, but the CTF files
> belonging to other CPUs kept working as usual. I am very sure all CPUs were
> producing spans all the time.

> #date; ls -ltrh channel0_0_* |tail -n 3; date;ls -ltrh channel0_1_* |tail -n 3
> Tue Dec 7 16:25:45 CST 2021
> -rw-rw 1 root root 1.8M Dec 7 16:20 channel0_0_17
> -rw-rw 1 root root 2.0M Dec 7 16:23 channel0_0_18
> -rw-rw 1 root root 916K Dec 7 16:24 channel0_0_19
> Tue Dec 7 16:25:45 CST 2021
> -rw-rw 1 root root 1.7M Dec 6 00:30 channel0_1_8
> -rw-rw 1 root root 1.9M Dec 6 00:31 channel0_1_9
> -rw-rw 1 root root 388K Dec 6 00:32 channel0_1_10

> Notice that the CTF file for CPU0 (channel0_0_19) was modified at the time
> "now", but the CTF file for CPU1 (channel0_1_10) has not been updated since
> Dec 6.
> I attached gdb to lttng-consumerd (breakpoints at lib_ring_buffer_channel_do_read() and
> lib_ring_buffer_poll_deliver() -- I configured a read timer on the channels).
> See the following (lib_ring_buffer_poll_deliver()):

> 0x7f0f2418a445 <+213>: mov 0x0(%r13),%rcx 
> rcx:0x7f0f0402e610(handle->table)
> 0x7f0f2418a449 <+217>: mov 0x98(%rsi),%r9 rsi:0x7f0efb86f000(buf)
> r9:0x86c40(consumed_old = buf->consumed)
> 0x7f0f2418a450 <+224>: lea 0x150(%rsi),%rdi rdi:0x7f0efda75150
> 0x7f0f2418a457 <+231>: mov 0x150(%rsi),%rax rax:3(cpu?)
> 0x7f0f2418a45e <+238>: mov 0x78(%rbp),%rdx rbp:chan
> rdx:0x100(chan->backend.buf_size)
> 0x7f0f2418a462 <+242>: mov 0x88(%rbp),%r8d
> r8d:0x14(20)(chan->backend.buf_size_order)
> 0x7f0f2418a469 <+249>: cmp %rax,0x8(%rcx)
> 0x7f0f2418a46d <+253>: jbe 0x7f0f2418a80c
> 
> 0x7f0f2418a473 <+259>: shl $0x6,%rax rax:192
> 0x7f0f2418a477 <+263>: sub $0x1,%rdx rdx:0xff 16777215
> (chan->backend.buf_size - 1)
> 0x7f0f2418a47b <+267>: lea 0x10(%rax,%rcx,1),%r10 rax:192
> rcx:0x7f0f0402e610(handle->table) r10:0x7f0f0402e6e0
> 0x7f0f2418a480 <+272>: and %r9,%rdx r9:0x8fdc0 rdx:0xc0 12582912
> buf_trunc(consume_old, chan->backend.buf_size - 1)
> 0x7f0f2418a483 <+275>: mov 0x8(%rdi),%rax rdi:0x7f0efda75150 rax:2688
> ref_offset = (size_t) ref->offset
> 0x7f0f2418a487 <+279>: mov %r8d,%ecx ecx:0x14(20)
> 0x7f0f2418a48a <+282>: shr %cl,%rdx ()
> 0x7f0f2418a48d <+285>: shl $0x7,%rdx
> 0x7f0f2418a491 <+289>: add %rax,%rdx
> 0x7f0f2418a494 <+292>: lea 0x80(%rdx),%rax
> 0x7f0f2418a49b <+299>: cmp 0x28(%r10),%rax
> 0x7f0f2418a49f <+303>: ja 0x7f0f2418a80c
> 
> 0x7f0f2418a4a5 <+309>: mov 0x20(%r10),%rax r10:0x7f0f0402e6e0
> rax:0x7f0efb86f000(buf)
> 0x7f0f2418a4a9 <+313>: add %rdx,%rax rdx:3200 rax:0x7f0efb86fc80
> 0x7f0f2418a4ac <+316>: mov (%rax),%rax
> rax:0x86c0(commit_count!!Incremented _once_ at sb switch cc_sb)
> 0x7f0f2418a4af <+319>: mov 0x80(%rbp),%r8
> r8:0x10(chan->backend.subbuf_size)
> 0x7f0f2418a4b6 <+326>: mov 0x78(%rbp),%rdx
> rdx:0x100(chan->backend.buf_size)
> 0x7f0f2418a4ba <+330>: mov 0x8c(%rbp),%ecx ecx:4
> 0x7f0f2418a4c0 <+336>: mov 0x80(%rsi),%rdi rdi:0x86d40
> 0x7f0f2418a4c7 <+343>: sub %r8,%rax rax:0x86b0(commit_count -
> chan->backend.subbuf_size)
> 0x7f0f2418a4ca <+346>: and 0x8(%rbp),%rax rax & chan->commit_count_mask =
> rax:0x86b0
> 0x7f0f2418a4ce <+350>: neg %rdx rdx:0x100(16M chan->backend.buf_size)
> --> 0xff00(-16777216)
> 0x7f0f2418a4d1 <+353>: and %r9,%rdx r9:0x86c40(consume_old)
> rdx:0x86c00
> 0x7f0f2418a4d4 <+356>: shr %cl,%rdx cl:4 rdx:0x86c0
> 0x7f0f2418a4d7 <+359>: cmp %rdx,%rax rax:0x86b rdx:0x86c0
> 0x7f0f2418a4da <+362>: jne 0x7f0f2418a411
> 
> 0x7f0f2418a4e0 <+368>: mov %r8,%rax
> 209 static inline
> 210 int lib_ring_buffer_poll_deliver(const struct lttng_ust_lib_ring_buffer_config *config,
> 211         struct lttng_ust_lib_ring_buffer *buf,
> 212         struct channel *chan,
> 213         struct lttng_ust_shm_handle *handle)
> 214 {
> 215         unsigned long consumed_old, consumed_idx, commit_count, write_offset;
> 216
> 217         consumed_old = uatomic_read(&buf->consumed);
> 218         consumed_idx = subbuf_index(consumed_old, chan);
> 219         commit_count = v_read(config, &shmp_index(handle, buf->commit_cold, consumed_idx)->cc_sb);
> 220         /*
> 221          * No memory barrier here, since we are only interested
> 222          * in a statistically correct polling result

[lttng-dev] lttng-consumerd can NOT get notification to consume ring buffer data

2021-12-07 Thread zhenyu.ren via lttng-dev
Hi, Lttng-dev:

  I found a strange problem related to lttng-consumerd and CTF files recently. 
The CTF files belonging to some CPUs have stopped rotating, but the CTF files 
belonging to other CPUs kept working as usual. I am very sure all CPUs were 
producing spans all the time.

#date; ls -ltrh channel0_0_* |tail -n 3; date;ls -ltrh channel0_1_* |tail -n 3
Tue Dec  7 16:25:45 CST 2021
-rw-rw 1 root root 1.8M Dec  7 16:20 channel0_0_17
-rw-rw 1 root root 2.0M Dec  7 16:23 channel0_0_18
-rw-rw 1 root root 916K Dec  7 16:24 channel0_0_19
Tue Dec  7 16:25:45 CST 2021
-rw-rw 1 root root 1.7M Dec  6 00:30 channel0_1_8
-rw-rw 1 root root 1.9M Dec  6 00:31 channel0_1_9
-rw-rw 1 root root 388K Dec  6 00:32 channel0_1_10

Notice that the CTF file for CPU0 (channel0_0_19) was modified at the time 
"now", but the CTF file for CPU1 (channel0_1_10) has not been updated since 
Dec 6.
I attached gdb to lttng-consumerd (breakpoints at lib_ring_buffer_channel_do_read() and 
lib_ring_buffer_poll_deliver() -- I configured a read timer on the channels). 
See the following (lib_ring_buffer_poll_deliver()):

   0x7f0f2418a445 <+213>: mov0x0(%r13),%rcx
rcx:0x7f0f0402e610(handle->table)
   0x7f0f2418a449 <+217>: mov0x98(%rsi),%r9
rsi:0x7f0efb86f000(buf) r9:0x86c40(consumed_old = buf->consumed)
   0x7f0f2418a450 <+224>: lea0x150(%rsi),%rdi  
rdi:0x7f0efda75150
   0x7f0f2418a457 <+231>: mov0x150(%rsi),%rax  rax:3(cpu?)
   0x7f0f2418a45e <+238>: mov0x78(%rbp),%rdx   rbp:chan 
 rdx:0x100(chan->backend.buf_size)
   0x7f0f2418a462 <+242>: mov0x88(%rbp),%r8d   
r8d:0x14(20)(chan->backend.buf_size_order)
   0x7f0f2418a469 <+249>: cmp%rax,0x8(%rcx)
   0x7f0f2418a46d <+253>: jbe0x7f0f2418a80c 

   0x7f0f2418a473 <+259>: shl$0x6,%rax rax:192
   0x7f0f2418a477 <+263>: sub$0x1,%rdx rdx:0xff 
16777215 (chan->backend.buf_size - 1)
   0x7f0f2418a47b <+267>: lea0x10(%rax,%rcx,1),%r10rax:192 
rcx:0x7f0f0402e610(handle->table) r10:0x7f0f0402e6e0
   0x7f0f2418a480 <+272>: and%r9,%rdx  
r9:0x8fdc0 rdx:0xc0 12582912  buf_trunc(consume_old, 
chan->backend.buf_size - 1)
   0x7f0f2418a483 <+275>: mov0x8(%rdi),%rax
rdi:0x7f0efda75150 rax:2688 ref_offset = (size_t) ref->offset
   0x7f0f2418a487 <+279>: mov%r8d,%ecx ecx:0x14(20)
   0x7f0f2418a48a <+282>: shr%cl,%rdx   () 
   0x7f0f2418a48d <+285>: shl$0x7,%rdx
   0x7f0f2418a491 <+289>: add%rax,%rdx
   0x7f0f2418a494 <+292>: lea0x80(%rdx),%rax
   0x7f0f2418a49b <+299>: cmp0x28(%r10),%rax
   0x7f0f2418a49f <+303>: ja 0x7f0f2418a80c 

   0x7f0f2418a4a5 <+309>: mov0x20(%r10),%rax   
r10:0x7f0f0402e6e0 rax:0x7f0efb86f000(buf)
   0x7f0f2418a4a9 <+313>: add%rdx,%rax rdx:3200  
rax:0x7f0efb86fc80
   0x7f0f2418a4ac <+316>: mov(%rax),%rax   
rax:0x86c0(commit_count!!Incremented _once_ at sb switch cc_sb)
   0x7f0f2418a4af <+319>: mov0x80(%rbp),%r8
r8:0x10(chan->backend.subbuf_size)
   0x7f0f2418a4b6 <+326>: mov0x78(%rbp),%rdx   
rdx:0x100(chan->backend.buf_size)
   0x7f0f2418a4ba <+330>: mov0x8c(%rbp),%ecx   ecx:4
   0x7f0f2418a4c0 <+336>: mov0x80(%rsi),%rdi   
rdi:0x86d40
   0x7f0f2418a4c7 <+343>: sub%r8,%rax  
rax:0x86b0(commit_count - chan->backend.subbuf_size)
   0x7f0f2418a4ca <+346>: and0x8(%rbp),%raxrax & 
chan->commit_count_mask = rax:0x86b0
   0x7f0f2418a4ce <+350>: neg%rdx  
rdx:0x100(16M chan->backend.buf_size)  --> 0xff00(-16777216)
   0x7f0f2418a4d1 <+353>: and%r9,%rdx  
r9:0x86c40(consume_old)   rdx:0x86c00
   0x7f0f2418a4d4 <+356>: shr%cl,%rdx  cl:4 
 rdx:0x86c0
   0x7f0f2418a4d7 <+359>: cmp%rdx,%rax 
rax:0x86b rdx:0x86c0
   0x7f0f2418a4da <+362>: jne0x7f0f2418a411 

   0x7f0f2418a4e0 <+368>: mov%r8,%rax
209 static inline
210 int lib_ring_buffer_poll_deliver(const struct lttng_ust_lib_ring_buffer_config *config,
211         struct lttng_ust_lib_ring_buffer *buf,
212         struct channel *chan,
213         struct lttng_ust_shm_handle *handle)
214 {
215         unsigned long consumed_old, consumed_idx, commit_count, write_offset;
216
217         consumed_old = uatomic_read(&buf->consumed);
218         consumed_idx = subbuf_index(consumed_old, chan);
219         commit_count = v_read(config, &shmp_index(handle, buf->commit_cold, consumed_idx)->cc_sb);
220         /*
221          * No memory barrier here, since we are only interested
222          * in a statistically correct polling result
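The masking and shifting that the disassembly above steps through is the subbuf_index() computation. It can be reproduced numerically (assumption: the sizes below are illustrative, not the exact values from this gdb session):

```shell
# subbuf_index(offset) = (offset & (buf_size - 1)) >> subbuf_size_order
# (assumption: illustrative sizes, not the exact ones from the trace above)
buf_size=$((1 << 24))        # 16 MiB ring buffer
subbuf_size_order=20         # 2^20 = 1 MiB sub-buffers
offset=$((0x186c40))         # hypothetical consumed offset, inside sub-buffer 1
echo $(( (offset & (buf_size - 1)) >> subbuf_size_order ))   # -> 1
```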

Re: [lttng-dev] Lttng live protocol

2021-09-16 Thread Mayur Patel via lttng-dev
Thank you for the information!

On Wed, 15 Sept 2021 at 17:00, Mathieu Desnoyers <
mathieu.desnoy...@efficios.com> wrote:

>
> - On Sep 1, 2021, at 1:23 PM, lttng-dev 
> wrote:
>
> Hi there,
>
> I am currently evaluating the use of CTF and lttng tooling for application
> tracing on windows. We are exploring alternatives to ETW that are more
> customisable.
> One thing we would really like to do is real-time monitoring of our
> application from another machine. I have a few questions regarding this:
>
> 1. Is lttng live protocol suitable for this purpose? What kind of latency
> would we expect? (e.g 10s or 100s of milliseconds or more)
>
>
> The lttng live protocol has been designed for extracting a low-throughput
> of events to a live pretty-printer, with delays in the area of
> a few seconds. It's a polling-based mechanism at the moment.
>
> 2. Is the protocol documented?
>
>
> No. There is only an implementation with the lttng project and in
> babeltrace.
>
> 3. Is it possible to use lttng-relayd to read from local CTF log files
> (which are being written to) and stream events to other machines / a viewer
> on the same machine? The reason I ask this is that the documentation seems
> to suggest lttng-relayd can consume CTF files:
> https://lttng.org/docs/v2.13/#doc-lttng-relayd.
>
>
> lttng-relayd needs to control both writing to the CTF log files and
> reading from them. The "writing to"
> cannot be done by an external process.
>
> 4. I see there is a windows cygwin build on jenkins. Would you recommend
> this for production use?
>
>
> We do not recommend Cygwin builds for production use unless there are no
> alternatives. From my own
> past experience, the Cygwin layer is not a solid basis for
> production-quality software.
>
> Thanks for your interest,
>
> Mathieu
>
>
> Any guidance would be much appreciated.
>
> Thanks in advance,
>
> Mayur
>
> --
>
> 
>
> Mayur Patel
>
> Lead Software Engineer
>
> T : +44 20 7234 9840
>
> M : +44 7342 180127
>
> A : 88-89 Blackfriars Road, London, SE1 8HA
> disguise Technologies Limited is a privately owned business registered in
> England and Wales (registered number 07937973), with its registered office
> located at 88-89 Blackfriars Road, London, SE1 8HA. This e-mail, and any
> attachments thereto, is intended only for use by the addressee(s) named
> herein and may contain legally privileged and/or confidential information.
> If you are not the intended recipient of this e-mail, you are hereby
> notified that any dissemination, distribution or copying of this e-mail,
> and any attachments thereto, is strictly prohibited. If you have received
> this e-mail in error, please notify me by replying to this message and
> permanently delete the original and any copy of this e-mail and any
> printout thereof. Although this e-mail and any attachments are believed to
> be free of any virus, or other defect which might affect any computer or
> system into which they are received and opened, it is the responsibility of
> the recipient to ensure that they are virus free and no responsibility is
> accepted by disguise for any loss or damage from receipt or use thereof.
>
> ___
> lttng-dev mailing list
> lttng-dev@lists.lttng.org
> https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev
>
>
> --
> Mathieu Desnoyers
> EfficiOS Inc.
> http://www.efficios.com
>


Mayur Patel
Lead Software Engineer
+44 20 7234 9840
88-89 Blackfriars Road, London, SE1 8HA
 ___
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev


Re: [lttng-dev] Lttng live protocol

2021-09-15 Thread Mathieu Desnoyers via lttng-dev
- On Sep 1, 2021, at 1:23 PM, lttng-dev  wrote: 

> Hi there,

> I am currently evaluating the use of CTF and lttng tooling for application
> tracing on windows. We are exploring alternatives to ETW that are more
> customisable.
> One thing we would really like to do is real-time monitoring of our 
> application
> from another machine. I have a few questions regarding this:

> 1. Is lttng live protocol suitable for this purpose? What kind of latency 
> would
> we expect? (e.g 10s or 100s of milliseconds or more)

The lttng live protocol has been designed for extracting a low-throughput of 
events to a live pretty-printer, with delays in the area of 
a few seconds. It's a polling-based mechanism at the moment. 

> 2. Is the protocol documented?

No. There is only an implementation with the lttng project and in babeltrace. 

> 3. Is it possible to use lttng-relayd to read from local CTF log files (which
> are being written to) and stream events to other machines / a viewer on the
> same machine? The reason I ask this is that the documentation seems to suggest
> lttng-relayd can consume CTF files [
> https://lttng.org/docs/v2.13/#doc-lttng-relayd |
> https://lttng.org/docs/v2.13/#doc-lttng-relayd ] .

lttng-relayd needs to control both writing to the CTF log files and reading 
from them. The "writing to" 
cannot be done by an external process. 

> 4. I see there is a windows cygwin build on jenkins. Would you recommend this
> for production use?

We do not recommend Cygwin builds for production use unless there are no 
alternatives. From my own 
past experience, the Cygwin layer is not a solid basis for production-quality 
software. 

Thanks for your interest, 

Mathieu 

> Any guidance would be much appreciated.

> Thanks in advance,

> Mayur

> --

> [ http://www.disguise.one/ ]

> Mayur Patel

> Lead Software Engineer

> T : +44 20 7234 9840

> M : +44 7342 180127

> A : [
> https://www.google.com/maps/place/88-89%20Blackfriars%20Road,%20London,%20SE1%208HA
> | 88-89 Blackfriars Road, London, SE1 8HA ]



> ___
> lttng-dev mailing list
> lttng-dev@lists.lttng.org
> https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev

-- 
Mathieu Desnoyers 
EfficiOS Inc. 
http://www.efficios.com 
___
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev


[lttng-dev] Lttng live protocol

2021-09-01 Thread Mayur Patel via lttng-dev
Hi there,

I am currently evaluating the use of CTF and lttng tooling for application
tracing on windows. We are exploring alternatives to ETW that are more
customisable.
One thing we would really like to do is real-time monitoring of our
application from another machine. I have a few questions regarding this:

1. Is lttng live protocol suitable for this purpose? What kind of latency
would we expect? (e.g 10s or 100s of milliseconds or more)
2. Is the protocol documented?
3. Is it possible to use lttng-relayd to read from local CTF log files
(which are being written to) and stream events to other machines / a viewer
on the same machine? The reason I ask this is that the documentation seems
to suggest lttng-relayd can consume CTF files:
https://lttng.org/docs/v2.13/#doc-lttng-relayd.
4. I see there is a windows cygwin build on jenkins. Would you recommend
this for production use?

Any guidance would be much appreciated.

Thanks in advance,

Mayur

Mayur Patel
Lead Software Engineer
+44 20 7234 9840
88-89 Blackfriars Road, London, SE1 8HA
 ___
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev


[lttng-dev] LTTng 2.13.0 - Nordicité - Linux kernel and user-space tracer

2021-08-03 Thread Mathieu Desnoyers via lttng-dev
Hi everyone,
  
Today is the official release of LTTng 2.13 - Nordicité! It is the result of
one year of development from most of the EfficiOS team.

The most notable features of this new release are:

  - Event-rule matches condition triggers and new actions, allowing internal
actions or external monitoring applications to quickly react when kernel
or user-space instrumentation is hit,
  - Notification payload capture, allowing external monitoring applications
to read elements of the instrumentation payload when instrumentation is
hit.
  - Instrumentation API: vtracef and vtracelog (LTTng-UST),
  - User space time namespace context (LTTng-UST and LTTng-modules).

This release is named after "Nordicité", the product of a collaboration between
Champ Libre and Boréale. This farmhouse IPA is brewed with Kveik yeast and
Québec-grown barley, oats and juniper branches. The result is a remarkable
fruity hazy golden IPA that offers a balanced touch of resinous and woodsy
bitterness.

Based on the LTTng project's documented stable releases lifetime, this 2.13
release coincides with the end-of-life (EOL) of the LTTng 2.11 release series.

Read on for a short description of each of the new features and the
links to this release.

A prettified version of this announcement will be available soon on GitHub:
https://github.com/lttng/lttng-tools/releases/tag/v2.13.0


Note on LTTng-UST backward compatibility
----------------------------------------

- soname major version change
  This release changes the LTTng-UST soname major from 0 to 1.

  The event notifier (triggers using an event-rule-matches condition)
  functionality required a significant rework of public data structures which
  should never have been made public in the first place.

  Bumping the soname major to 1 will require applications and tracepoint
  providers to be rebuilt against an updated LTTng-UST to use it.

  Old applications and tracepoint providers linked against libraries with
  major soname 0 should be able to co-exist on the same system.

- Building probe providers using a C++ compiler requires C++11

- API namespacing
  The LTTng-UST API is now systematically namespaced under `lttng_ust_*` (e.g.
  `tracepoint()` becomes `lttng_ust_tracepoint()`).

  However, the non-namespaced names are still exposed to maintain API
  compatibility.


Event-rule matches condition and new actions
--------------------------------------------

Expanding the trigger infrastructure and making it usable through the `lttng`
client was the core focus of this release.

A trigger is an association between a condition and one or more actions. When
the condition associated to a trigger is met, the actions associated to that
trigger are executed. The tracing does not have to be active for the conditions
to be met, and triggers are independent from tracing sessions.

Since their introduction as part of LTTng 2.10, new conditions and actions were
added to make this little-known mechanism more flexible.

For instance, before this release, triggers supported the following condition
types:
  - Buffer usage exceeded a given threshold,
  - Buffer usage went under a configurable threshold,
  - A session rotation occurred,
  - A session rotation completed.

A _notify_ action could be used to send a notification to third-party
applications whenever those conditions were met.

This made it possible, for instance, to disable certain event rules if the
tracing buffers were almost full. It could also be used to wait for session
rotations to be completed to start processing the resulting trace chunk
archives as part of various post-processing trace analyses.

This release introduces a new powerful condition type: event-rule matches.

This type of condition is met when the tracer encounters an event matching the
given event rule. The arguments describing the event rule are the same as those
used with the `enable-event` command.

While this is not intended as a general replacement for the existing
high-throughput tracing facilities, it makes it possible for an application
to wait for a very specific event to occur and take action whenever it occurs.
The purpose of event-rule matches triggers is to react quickly to an event,
without the delay introduced by buffering.

For example, the following command creates a trigger that emits a
notification whenever the 'openat' system call is invoked with the
'/etc/passwd' filename argument.

$ lttng add-trigger \
    --condition event-rule-matches \
      --type=kernel:syscall \
      --name='openat' \
      --filter 'filename == "/etc/passwd"' \
    --action notify

New actions were also introduced as part of this release:
  - Start session
This action causes the LTTng session daemon to start tracing for the session
with the given name. If no session with the given name exists at the time the
condition is met, nothing is done.

  - Stop session
This action causes the LTTng session daemon to stop tracing for the session
with the given name. If no session with the given name exists at the time the
condition is met, nothing is done.

Re: [lttng-dev] LTTng 32 bits on 64 bits systems - kernel events... but no user-event

2021-06-07 Thread Jonathan Rajotte-Julien via lttng-dev
Hi Julien,

A fix is on the way for the fd-tracker lib problem:

https://review.lttng.org/c/lttng-tools/+/6045 

> Thank you ! The following modification is working :
> lttng-sessiond --consumerd32-libdir=/usr/local/lib32
> --consumerd32-path=/usr/local/lib32/lttng/libexec/lttng-consumerd --daemonize

Glad we could help.

Cheers
___
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev


Re: [lttng-dev] LTTng 32 bits on 64 bits systems - kernel events... but no user-event

2021-06-07 Thread MONTET Julien via lttng-dev
Hi,

I installed the KeKriek version because it is the one recommended by apt-get on
Debian Buster 10.9 (LTTng 2.10):
Debian -- Details of the lttng-tools package in
buster<https://packages.debian.org/buster/lttng-tools>
I tried multiple times to install LTTng 2.12 without success, so I stayed on
LTTng 2.10.

Find below the output of: lttng-sessiond -vvv --verbose-consumer
https://paste.ubuntu.com/p/NrtyCqhHZc/

(I have got the same with 
LTTNG_CONSUMERD32_BIN=/usr/local/lib32/lttng/libexec/lttng-consumerd 
LTTNG_CONSUMERD32_LIBDIR=/usr/local/lib32 lttng-sessiond -vvv 
--verbose-consumer)

Yes, I had the issue below a few days ago with LTTng 2.12
"For 2.12 there is some problems when compiling only lttng-consumerd 
(fd-tracker lib and libcommon problem)."

Thank you ! The following modification is working :
lttng-sessiond --consumerd32-libdir=/usr/local/lib32 
--consumerd32-path=/usr/local/lib32/lttng/libexec/lttng-consumerd --daemonize

Regards,

Julien


From: Jonathan Rajotte-Julien
Sent: Monday, June 7, 2021 17:24
To: MONTET Julien
Cc: lttng-dev
Subject: Re: [lttng-dev] LTTng 32 bits on 64 bits systems - kernel events... but
no user-event

Hi,

> I am currently trying to add lttng 32bits on a 64bits system with a custom
> kernel 4.19.177+ x86_64 (Debian 10.9)
> My former attempt only with only a 100% arm32 bits was successful (raspberry).
> I am facing now a strange issue : I can record kernel event but not userspace
> event... but I can see these user-event (see below)!

Ok.

> (tuto : [ 
> https://lttng.org/docs/#doc-instrumenting-32-bit-app-on-64-bit-system
> | https://lttng.org/docs/#doc-instrumenting-32-bit-app-on-64-bit-system ] )

> I followed two times the tutorial : on lttng 2.12 and lttng 2.10.
> I am on lttng (LTTng Trace Control) 2.10.11 - KeKriek now

Any particular reasons?

> All installation seemed to install without any error.
> The only thing I added is at the very end where I did a : ln -s
> /usr/local/bin/lttng /usr/bin/lttng (to directly call commands like "lttng
> start")

> Here some interesting outputs, I am using this projet as an example : [
> https://lttng.org/docs/#doc-tracing-your-own-user-application |
> https://lttng.org/docs/#doc-tracing-your-own-user-application ]
> lttng list -u : OK [ https://paste.ubuntu.com/p/hZptnNzySw/ | Ubuntu Pastebin 
> ]
> LTTNG_UST_DEBUG=1 ./hello : seems OK : [ 
> https://paste.ubuntu.com/p/w6xHrJsWJ9/
> | Ubuntu Pastebin ]
> kernel-event output : [ https://paste.ubuntu.com/p/5KfyS8wVdb/ | Ubuntu 
> Pastebin

Re: [lttng-dev] LTTng 32 bits on 64 bits systems - kernel events... but no user-event

2021-06-07 Thread Jonathan Rajotte-Julien via lttng-dev
Hi, 

> I am currently trying to add lttng 32bits on a 64bits system with a custom
> kernel 4.19.177+ x86_64 (Debian 10.9)
> My former attempt only with only a 100% arm32 bits was successful (raspberry).
> I am facing now a strange issue : I can record kernel event but not userspace
> event... but I can see these user-event (see below)!

Ok. 

> (tuto : [ 
> https://lttng.org/docs/#doc-instrumenting-32-bit-app-on-64-bit-system
> | https://lttng.org/docs/#doc-instrumenting-32-bit-app-on-64-bit-system ] )

> I followed two times the tutorial : on lttng 2.12 and lttng 2.10.
> I am on lttng (LTTng Trace Control) 2.10.11 - KeKriek now

Any particular reasons?

> All installation seemed to install without any error.
> The only thing I added is at the very end where I did a : ln -s
> /usr/local/bin/lttng /usr/bin/lttng (to directly call commands like "lttng
> start")

> Here some interesting outputs, I am using this projet as an example : [
> https://lttng.org/docs/#doc-tracing-your-own-user-application |
> https://lttng.org/docs/#doc-tracing-your-own-user-application ]
> lttng list -u : OK [ https://paste.ubuntu.com/p/hZptnNzySw/ | Ubuntu Pastebin 
> ]
> LTTNG_UST_DEBUG=1 ./hello : seems OK : [ 
> https://paste.ubuntu.com/p/w6xHrJsWJ9/
> | Ubuntu Pastebin ]
> kernel-event output : [ https://paste.ubuntu.com/p/5KfyS8wVdb/ | Ubuntu 
> Pastebin
> ]
> user-event ouput : none - the output folder stated with lttng create
> my-user-session --output=/tmp/my-user-session doesn't even appear
> apt-file search lttng : [ https://paste.ubuntu.com/p/xMcCc7bqwk/ | Ubuntu
> Pastebin ]
> lsmod | grep lttng : [ https://paste.ubuntu.com/p/7YNkNsh9Fr/ | Ubuntu 
> Pastebin
> ]

Can you provide the config.log from the configure steps?

We are missing the most important logs: lttng-sessiond. 

$ lttng-sessiond -vvv --verbose-consumer 

> Note : file /usr/local/bin/lttng (the exe I am using) gives :
> /usr/local/bin/lttng: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV),
> dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux
> 3.2.0, BuildID[sha1]=542a108c40e386c17027c2c8f8fcb30e278b7748, with 
> debug_info,
> not stripped

This is normal. Concretely, to support tracing of 32-bit applications, only a
32-bit lttng-consumerd executable is needed; it is launched by the
lttng-sessiond executable.

> I am wondering if the installation didn't pick a 64bits RCU and analyses
> programms only on this.

Hmmm, what indicates this? I seriously doubt that this can happen.

> However, I have currently absolutely no idea how to debug this.

I decided to do the process on a clean Ubuntu 20.04 machine.
For 2.12 there are some problems when compiling only lttng-consumerd
(fd-tracker lib and libcommon problem). I plan on providing a patch shortly.

But the major problem is that the paths passed at configure time for the 32-bit
lttng-consumerd daemon are simply not used at runtime.
See this bug I just opened [1]. This is valid for 2.10 all the way to master
AFAIK.

[1] https://bugs.lttng.org/issues/1318 

In the meantime you will have to pass the lttng-consumerd 32 bit path as 
argument or env variables to lttng-sessiond as such: 

$ lttng-sessiond -vvv --verbose-consumer --consumerd32-libdir=/usr/local/lib32 
--consumerd32-path=/usr/local/lib32/lttng/libexec/lttng-consumerd

or 

$ LTTNG_CONSUMERD32_BIN=/usr/local/lib32/lttng/libexec/lttng-consumerd 
LTTNG_CONSUMERD32_LIBDIR=/usr/local/lib32 lttng-sessiond -vvv --verbose-consumer


From there the 32-bit application is traced correctly and data is gathered
without any problem (at least on 2.12).


```
[15:22:24.500655502] (+?.?) ubuntu2004.localdomain 
lttng_ust_statedump:start: { cpu_id = 3 }, { }
[15:22:24.500661538] (+0.06036) ubuntu2004.localdomain 
lttng_ust_statedump:procname: { cpu_id = 3 }, { procname = "hello" }
[15:22:24.502611224] (+0.001949686) ubuntu2004.localdomain 
lttng_ust_statedump:bin_info: { cpu_id = 3 }, { baddr = 0xF7EC9000, memsz = 
20596, path = "/usr/lib32/libdl-2.31.so", is_pic = 1, has_build_id = 1, 
has_debug_link = 1 }
[15:22:24.502614501] (+0.03277) ubuntu2004.localdomain 
lttng_ust_statedump:build_id: { cpu_id = 3 }, { baddr = 0xF7EC9000, 
_build_id_length = 20, build_id = [ [0] = 0x75, [1] = 0x1E, [2] = 0xDD, [3] = 
0x47, [4] = 0x72, [5] = 0x3, [6] 
```

Cheers


Re: [lttng-dev] LTTng 32 bits on 64 bits systems - kernel events... but no user-event

2021-06-07 Thread Mathieu Desnoyers via lttng-dev



- On Jun 4, 2021, at 9:09 AM, lttng-dev lttng-dev@lists.lttng.org wrote:

> Hi LTTng team,

> I am currently trying to add lttng 32bits on a 64bits system with a custom
> kernel 4.19.177+ x86_64 (Debian 10.9)
> My former attempt only with only a 100% arm32 bits was successful (raspberry).
> I am facing now a strange issue : I can record kernel event but not userspace
> event... but I can see these user-event (see below)!
> (tuto : [ 
> https://lttng.org/docs/#doc-instrumenting-32-bit-app-on-64-bit-system
> | https://lttng.org/docs/#doc-instrumenting-32-bit-app-on-64-bit-system ] )

> I followed two times the tutorial : on lttng 2.12 and lttng 2.10.
> I am on lttng (LTTng Trace Control) 2.10.11 - KeKriek now
> All installation seemed to install without any error.
> The only thing I added is at the very end where I did a : ln -s
> /usr/local/bin/lttng /usr/bin/lttng (to directly call commands like "lttng
> start")

> Here some interesting outputs, I am using this projet as an example : [
> https://lttng.org/docs/#doc-tracing-your-own-user-application |
> https://lttng.org/docs/#doc-tracing-your-own-user-application ]
> lttng list -u : OK [ https://paste.ubuntu.com/p/hZptnNzySw/ | Ubuntu Pastebin 
> ]
> LTTNG_UST_DEBUG=1 ./hello : seems OK : [ 
> https://paste.ubuntu.com/p/w6xHrJsWJ9/
> | Ubuntu Pastebin ]
> kernel-event output : [ https://paste.ubuntu.com/p/5KfyS8wVdb/ | Ubuntu 
> Pastebin
> ]
> user-event ouput : none - the output folder stated with lttng create
> my-user-session --output=/tmp/my-user-session doesn't even appear
> apt-file search lttng : [ https://paste.ubuntu.com/p/xMcCc7bqwk/ | Ubuntu
> Pastebin ]
> lsmod | grep lttng : [ https://paste.ubuntu.com/p/7YNkNsh9Fr/ | Ubuntu 
> Pastebin
> ]

> Note : file /usr/local/bin/lttng (the exe I am using) gives :
> /usr/local/bin/lttng: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV),
> dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux
> 3.2.0, BuildID[sha1]=542a108c40e386c17027c2c8f8fcb30e278b7748, with 
> debug_info,
> not stripped

> I am wondering if the installation didn't pick a 64bits RCU and analyses
> programms only on this.
> However, I have currently absolutely no idea how to debug this.

> Could you advise me some methods ?

Did you follow this section of the documentation ? 

https://lttng.org/docs/#doc-instrumenting-32-bit-app-on-64-bit-system 

Thanks, 

Mathieu 

> Regards,

> ___
> lttng-dev mailing list
> lttng-dev@lists.lttng.org
> https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev
-- 
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com


[lttng-dev] LTTng 32 bits on 64 bits systems - kernel events... but no user-event

2021-06-04 Thread MONTET Julien via lttng-dev
Hi LTTng team,

I am currently trying to add 32-bit LTTng on a 64-bit system with a custom
kernel 4.19.177+ x86_64 (Debian 10.9).
My former attempt on a 100% arm32 system (Raspberry Pi) was successful.
I am now facing a strange issue: I can record kernel events but not userspace
events... yet I can see these user events (see below)!
(tuto : https://lttng.org/docs/#doc-instrumenting-32-bit-app-on-64-bit-system)


I followed the tutorial twice: with lttng 2.12 and lttng 2.10.
I am now on lttng (LTTng Trace Control) 2.10.11 - KeKriek.
Everything seemed to install without any error.
The only thing I added, at the very end, is: ln -s
/usr/local/bin/lttng /usr/bin/lttng (to directly call commands like "lttng
start")

Here are some interesting outputs; I am using this project as an example:
https://lttng.org/docs/#doc-tracing-your-own-user-application
lttng list -u : OK (Ubuntu Pastebin)
LTTNG_UST_DEBUG=1 ./hello : seems OK (Ubuntu Pastebin)
kernel-event output : Ubuntu Pastebin
user-event output : none - the output folder specified with lttng create
my-user-session --output=/tmp/my-user-session doesn't even appear
apt-file search lttng : Ubuntu Pastebin
lsmod | grep lttng : Ubuntu Pastebin


Note : file /usr/local/bin/lttng (the exe I am using) gives :
/usr/local/bin/lttng: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), 
dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 
3.2.0, BuildID[sha1]=542a108c40e386c17027c2c8f8fcb30e278b7748, with debug_info, 
not stripped

I am wondering whether the installation picked a 64-bit RCU and analyzes
programs only against that.
However, I currently have absolutely no idea how to debug this.

Could you advise me on some methods?

Regards,


Re: [lttng-dev] LTTng - Xenomai : different results between timestamp-lttng and rt_time_read()

2021-05-25 Thread Norbert Lange via lttng-dev
Am Fr., 21. Mai 2021 um 12:13 Uhr schrieb MONTET Julien
:
>
> Hello Mathieu, Norbert and Jan,
>
> Thank you for all of your explainations and the overview of the system.
> No I didn't change the ipipe patch for the vDSO, I may try this.
> If I have correctly understood, this patch prevents Cobalt from entering in a 
> deadlock when the kernel is using the vDSO and the program interrupts the 
> kernel at the same time. On which kernel does it word (aroubd 4.19) ?
> I currently try to avoid kernel 5.4 because I remember I faced some boot 
> issues (but it is on another topic).

That patch was for 4.19 AFAIR, but it is specific to x86. The Linux
kernel cleaned up the vDSO handling to have a common base with some
5.x version, but back then it was separate for each arch.

>
> Here the issues i faced (drawn on TraceCompass). Are these the deadlocks we 
> are talking about ?
> https://postimg.cc/BP4G3bF0 (on 11:02:56:380)
> https://postimg.cc/q6wHvrcC

Nope, if you get such a deadlock, then your only hope is Xenomai's
watchdog killing the process.
It should happen very rarely (but that's not an argument if your
software should run for years).

Norbert


[lttng-dev] LTTng container awareness

2021-05-25 Thread 杨海 via lttng-dev
Hi


LTTng had a 2019 plan to decouple tooling for container awareness; what is the
progress on that?
https://archive.fosdem.org/2019/schedule/event/containers_lttng/


As stated on page 18, LTTng is comprised of many components that expect a
“monolithic” system. Are there plans to containerize LTTng in the future?




Regards
Hai






--Original--
From: "Jonathan Rajotte-Julien" [quoted message truncated by the archive;
it referenced https://lttng.org/docs/v2.10/]

Cheers
-- 
Jonathan Rajotte-Julien
EfficiOS


Re: [lttng-dev] LTTng - Xenomai : different results between timestamp-lttng and rt_time_read()

2021-05-21 Thread MONTET Julien via lttng-dev
Hello Mathieu, Norbert and Jan,

Thank you for all of your explanations and the overview of the system.
No, I didn't change the ipipe patch for the vDSO; I may try this.
If I have correctly understood, this patch prevents Cobalt from entering a
deadlock when the kernel is using the vDSO and the program interrupts the
kernel at the same time. On which kernels does it work (around 4.19)?
I currently try to avoid kernel 5.4 because I remember I faced some boot issues
(but that is another topic).

Here are the issues I faced (drawn in TraceCompass). Are these the deadlocks we
are talking about?
https://postimg.cc/BP4G3bF0 (at 11:02:56:380)
https://postimg.cc/q6wHvrcC

Regards,



From: Norbert Lange
Sent: Thursday, May 20, 2021 17:39
To: Mathieu Desnoyers
Cc: MONTET Julien; lttng-dev; Jan Kiszka; Xenomai
Subject: Re: [lttng-dev] LTTng - Xenomai : different results between
timestamp-lttng and rt_time_read()

Am Do., 20. Mai 2021 um 17:09 Uhr schrieb Mathieu Desnoyers
:
>
> - On May 20, 2021, at 9:56 AM, Mathieu Desnoyers 
> mathieu.desnoy...@efficios.com wrote:
>
> > - On May 20, 2021, at 9:54 AM, lttng-dev lttng-dev@lists.lttng.org 
> > wrote:
> >
> >> ----- On May 20, 2021, at 5:11 AM, lttng-dev lttng-dev@lists.lttng.org 
> >> wrote:
> >>
> >>> Am Do., 20. Mai 2021 um 10:28 Uhr schrieb MONTET Julien
> >>> :
> >>>>
> >>>> Hi Norbert,
> >>>>
> >>>> Thank you for your answer !
> >>>>
> >>>> Yes, I am using a Xenomai cobalt - xenomai is 3.1
> >>>> cat /proc/xenomai/version => 3.1
> >>>>
> >>>> After the installation, I tested "test tools" in /proc/xenomai/ and it 
> >>>> worked
> >>>> nice.
> >>>
> >>> Just asked to make sure, thought the scripts usual add some -xeno tag
> >>> to the kernel version.
> >>>
> >>>> What do you mean by "it might deadlock really good" ?
> >>>
> >>> clock_gettime will either use a syscall (kills realtime always) or is
> >>> optimized via VDSO (which very likely is your case).
> >>>
> >>> What happens is that the kernel will take a spinlock, then write new
> >>> values, then releases the spinlock.
> >>> your program will aswell spin (but just to see if the spinlock is
> >>> free), read the values and interpolates them.
> >>>
> >>> But if your program interrupts the kernel while the kernel holds the
> >>> lock (all on the same cpu core), then it will spin forever and the
> >>> kernel will never execute.
> >>
> >> Just one clarification: the specific locking strategy used by the
> >> Linux kernel monotonic clock vDSO is a "seqlock", where the kernel
> >> sets a bit which keeps concurrent readers looping until they observe
> >
> > When I say "sets a bit", I actually mean "increment a sequence counter",
> > and readers observe either odd or even state, thus knowing whether
> > they need to retry, and whether the value read before/after reading
> > the data structure changed.
>
> Looking again at the Linux kernel's kernel/time/vsyscall.c implementation
> of vdso_update_{begin,end}, I notice that interrupts are disabled across
> the entire update. So I understand that the Interrupt pipeline (I-pipe)
> interrupt gets delivered even when the kernel disables interrupts. Did
> you consider modifying the I-pipe kernel patch to change the vdso update so
> it updates the vdso from within an I-pipe virq handler ?

Yes, I did use a non-upstreamed patch for a while to get things in order:
https://www.xenomai.org/pipermail/xenomai/2018-December/040134.html

I would prefer just an NMI-safe source that might jump back a bit, no matter how.

> AFAIU this would allow Xenomai userspace to use the Linux kernel vDSO
> clock sources.

The Xenomai folks are trying to get their next-gen abstraction "dovetail" more
closely coupled to the kernel; AFAIR there will be VDSO support and
unification of the clock sources.

Still need to get stuff running today =)

Norbert


Re: [lttng-dev] LTTng - Xenomai : different results between timestamp-lttng and rt_time_read()

2021-05-20 Thread Norbert Lange via lttng-dev
Am Do., 20. Mai 2021 um 17:09 Uhr schrieb Mathieu Desnoyers
:
>
> - On May 20, 2021, at 9:56 AM, Mathieu Desnoyers 
> mathieu.desnoy...@efficios.com wrote:
>
> > - On May 20, 2021, at 9:54 AM, lttng-dev lttng-dev@lists.lttng.org 
> > wrote:
> >
> >> ----- On May 20, 2021, at 5:11 AM, lttng-dev lttng-dev@lists.lttng.org 
> >> wrote:
> >>
> >>> Am Do., 20. Mai 2021 um 10:28 Uhr schrieb MONTET Julien
> >>> :
> >>>>
> >>>> Hi Norbert,
> >>>>
> >>>> Thank you for your answer !
> >>>>
> >>>> Yes, I am using a Xenomai cobalt - xenomai is 3.1
> >>>> cat /proc/xenomai/version => 3.1
> >>>>
> >>>> After the installation, I tested "test tools" in /proc/xenomai/ and it 
> >>>> worked
> >>>> nice.
> >>>
> >>> Just asked to make sure, thought the scripts usual add some -xeno tag
> >>> to the kernel version.
> >>>
> >>>> What do you mean by "it might deadlock really good" ?
> >>>
> >>> clock_gettime will either use a syscall (kills realtime always) or is
> >>> optimized via VDSO (which very likely is your case).
> >>>
> >>> What happens is that the kernel will take a spinlock, then write new
> >>> values, then releases the spinlock.
> >>> your program will aswell spin (but just to see if the spinlock is
> >>> free), read the values and interpolates them.
> >>>
> >>> But if your program interrupts the kernel while the kernel holds the
> >>> lock (all on the same cpu core), then it will spin forever and the
> >>> kernel will never execute.
> >>
> >> Just one clarification: the specific locking strategy used by the
> >> Linux kernel monotonic clock vDSO is a "seqlock", where the kernel
> >> sets a bit which keeps concurrent readers looping until they observe
> >
> > When I say "sets a bit", I actually mean "increment a sequence counter",
> > and readers observe either odd or even state, thus knowing whether
> > they need to retry, and whether the value read before/after reading
> > the data structure changed.
>
> Looking again at the Linux kernel's kernel/time/vsyscall.c implementation
> of vdso_update_{begin,end}, I notice that interrupts are disabled across
> the entire update. So I understand that the Interrupt pipeline (I-pipe)
> interrupt gets delivered even when the kernel disables interrupts. Did
> you consider modifying the I-pipe kernel patch to change the vdso update so
> it updates the vdso from within an I-pipe virq handler ?

Yes, I did use a non-upstreamed patch for a while to get things in order:
https://www.xenomai.org/pipermail/xenomai/2018-December/040134.html

I would prefer just an NMI-safe source that might jump back a bit, no matter how.

> AFAIU this would allow Xenomai userspace to use the Linux kernel vDSO
> clock sources.

The Xenomai folks are trying to get their next-gen abstraction "dovetail" more
closely coupled to the kernel; AFAIR there will be VDSO support and
unification of the clock sources.

Still need to get stuff running today =)

Norbert


Re: [lttng-dev] LTTng - Xenomai : different results between timestamp-lttng and rt_time_read()

2021-05-20 Thread Jan Kiszka via lttng-dev
On 20.05.21 17:09, Mathieu Desnoyers wrote:
> - On May 20, 2021, at 9:56 AM, Mathieu Desnoyers 
> mathieu.desnoy...@efficios.com wrote:
> 
>> - On May 20, 2021, at 9:54 AM, lttng-dev lttng-dev@lists.lttng.org wrote:
>>
>>> - On May 20, 2021, at 5:11 AM, lttng-dev lttng-dev@lists.lttng.org 
>>> wrote:
>>>
>>>> Am Do., 20. Mai 2021 um 10:28 Uhr schrieb MONTET Julien
>>>> :
>>>>>
>>>>> Hi Norbert,
>>>>>
>>>>> Thank you for your answer !
>>>>>
>>>>> Yes, I am using a Xenomai cobalt - xenomai is 3.1
>>>>> cat /proc/xenomai/version => 3.1
>>>>>
>>>>> After the installation, I tested "test tools" in /proc/xenomai/ and it 
>>>>> worked
>>>>> nice.
>>>>
>>>> Just asked to make sure, thought the scripts usual add some -xeno tag
>>>> to the kernel version.
>>>>
>>>>> What do you mean by "it might deadlock really good" ?
>>>>
>>>> clock_gettime will either use a syscall (kills realtime always) or is
>>>> optimized via VDSO (which very likely is your case).
>>>>
>>>> What happens is that the kernel will take a spinlock, then write new
>>>> values, then releases the spinlock.
>>>> your program will aswell spin (but just to see if the spinlock is
>>>> free), read the values and interpolates them.
>>>>
>>>> But if your program interrupts the kernel while the kernel holds the
>>>> lock (all on the same cpu core), then it will spin forever and the
>>>> kernel will never execute.
>>>
>>> Just one clarification: the specific locking strategy used by the
>>> Linux kernel monotonic clock vDSO is a "seqlock", where the kernel
>>> sets a bit which keeps concurrent readers looping until they observe
>>
>> When I say "sets a bit", I actually mean "increment a sequence counter",
>> and readers observe either odd or even state, thus knowing whether
>> they need to retry, and whether the value read before/after reading
>> the data structure changed.
> 
> Looking again at the Linux kernel's kernel/time/vsyscall.c implementation
> of vdso_update_{begin,end}, I notice that interrupts are disabled across
> the entire update. So I understand that the Interrupt pipeline (I-pipe)
> interrupt gets delivered even when the kernel disables interrupts. Did
> you consider modifying the I-pipe kernel patch to change the vdso update so
> it updates the vdso from within an I-pipe virq handler ?
> 
> AFAIU this would allow Xenomai userspace to use the Linux kernel vDSO
> clock sources.

In fact, this is what happens with upcoming Xenomai 3.2, based on the
Dovetail kernel patch (replacement of I-pipe). Implies kernel 5.10.

For I-pipe, we have the CLOCK_HOST_REALTIME infrastructure to obtain the
kernel's view on CLOCK_REALTIME from within an Xenomai task. That is
available up to kernel 5.4.

HTH,
Jan

-- 
Siemens AG, T RDA IOT
Corporate Competence Center Embedded Linux


Re: [lttng-dev] LTTng - Xenomai : different results between timestamp-lttng and rt_time_read()

2021-05-20 Thread Mathieu Desnoyers via lttng-dev
- On May 20, 2021, at 9:56 AM, Mathieu Desnoyers 
mathieu.desnoy...@efficios.com wrote:

> - On May 20, 2021, at 9:54 AM, lttng-dev lttng-dev@lists.lttng.org wrote:
> 
>> - On May 20, 2021, at 5:11 AM, lttng-dev lttng-dev@lists.lttng.org wrote:
>> 
>>> Am Do., 20. Mai 2021 um 10:28 Uhr schrieb MONTET Julien
>>> :
>>>>
>>>> Hi Norbert,
>>>>
>>>> Thank you for your answer !
>>>>
>>>> Yes, I am using a Xenomai cobalt - xenomai is 3.1
>>>> cat /proc/xenomai/version => 3.1
>>>>
>>>> After the installation, I tested "test tools" in /proc/xenomai/ and it 
>>>> worked
>>>> nice.
>>> 
>>> Just asked to make sure, thought the scripts usual add some -xeno tag
>>> to the kernel version.
>>> 
>>>> What do you mean by "it might deadlock really good" ?
>>> 
>>> clock_gettime will either use a syscall (kills realtime always) or is
>>> optimized via VDSO (which very likely is your case).
>>> 
>>> What happens is that the kernel will take a spinlock, then write new
>>> values, then releases the spinlock.
>>> your program will aswell spin (but just to see if the spinlock is
>>> free), read the values and interpolates them.
>>> 
>>> But if your program interrupts the kernel while the kernel holds the
>>> lock (all on the same cpu core), then it will spin forever and the
>>> kernel will never execute.
>> 
>> Just one clarification: the specific locking strategy used by the
>> Linux kernel monotonic clock vDSO is a "seqlock", where the kernel
>> sets a bit which keeps concurrent readers looping until they observe
> 
> When I say "sets a bit", I actually mean "increment a sequence counter",
> and readers observe either odd or even state, thus knowing whether
> they need to retry, and whether the value read before/after reading
> the data structure changed.

Looking again at the Linux kernel's kernel/time/vsyscall.c implementation
of vdso_update_{begin,end}, I notice that interrupts are disabled across
the entire update. So I understand that the Interrupt pipeline (I-pipe)
interrupt gets delivered even when the kernel disables interrupts. Did
you consider modifying the I-pipe kernel patch to change the vdso update so
it updates the vdso from within an I-pipe virq handler ?

AFAIU this would allow Xenomai userspace to use the Linux kernel vDSO
clock sources.

Thanks,

Mathieu

> 
> Thanks,
> 
> Mathieu
> 
>> a consistent value. With Xenomai it indeed appears to be prone to
>> deadlock if a high priority Xenomai thread interrupts the kernel
>> while the write seqlock is held, and then proceeds to loop forever
>> on the read-side of the seqlock.
>> 
>> Note that for the in-kernel tracer clock read use-case, which
>> needs to be able to happen from NMI context, I've contributed a
>> modified version of the seqlock to the Linux kernel:
>> 
>> https://lwn.net/Articles/831540/ The seqcount latch lock type
>> 
>> It basically keeps two copies of the clock data structures, so the
>> read-side never has to loop waiting for the updater: it simply gets
>> redirected to the "stable" copy of the data.
>> 
>> The trade-off here is that with the latch lock used for clocks, a
>> reader may observe time going slightly backwards between two clock
>> reads when reading while specific clock rate adjustments are made
>> by an updater. The clock user needs to be aware of this.
>> 
>> Thanks,
>> 
>> Mathieu
>> 
>> --
>> Mathieu Desnoyers
>> EfficiOS Inc.
>> http://www.efficios.com
>> ___
>> lttng-dev mailing list
>> lttng-dev@lists.lttng.org
>> https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev
> 
> --
> Mathieu Desnoyers
> EfficiOS Inc.
> http://www.efficios.com

-- 
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com
___
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev


Re: [lttng-dev] LTTng - Xenomai : different results between timestamp-lttng and rt_time_read()

2021-05-20 Thread Mathieu Desnoyers via lttng-dev
- On May 20, 2021, at 9:54 AM, lttng-dev lttng-dev@lists.lttng.org wrote:

> - On May 20, 2021, at 5:11 AM, lttng-dev lttng-dev@lists.lttng.org wrote:
> 
>> On Thu, May 20, 2021 at 10:28 AM MONTET Julien wrote:
>>>
>>> Hi Norbert,
>>>
>>> Thank you for your answer !
>>>
>>> Yes, I am using a Xenomai cobalt - xenomai is 3.1
>>> cat /proc/xenomai/version => 3.1
>>>
>>> After the installation, I tested "test tools" in /proc/xenomai/ and it 
>>> worked
>>> nice.
>> 
>> Just asked to make sure, thought the scripts usual add some -xeno tag
>> to the kernel version.
>> 
>>> What do you mean by "it might deadlock really good" ?
>> 
>> clock_gettime will either use a syscall (kills realtime always) or is
>> optimized via VDSO (which very likely is your case).
>> 
>> What happens is that the kernel will take a spinlock, then write new
>> values, then releases the spinlock.
>> your program will aswell spin (but just to see if the spinlock is
>> free), read the values and interpolates them.
>> 
>> But if your program interrupts the kernel while the kernel holds the
>> lock (all on the same cpu core), then it will spin forever and the
>> kernel will never execute.
> 
> Just one clarification: the specific locking strategy used by the
> Linux kernel monotonic clock vDSO is a "seqlock", where the kernel
> sets a bit which keeps concurrent readers looping until they observe

When I say "sets a bit", I actually mean "increments a sequence counter":
readers observe either an odd or an even state, which tells them whether
they need to retry, and whether the value read before and after reading
the data structure has changed.

Thanks,

Mathieu

> a consistent value. With Xenomai it indeed appears to be prone to
> deadlock if a high priority Xenomai thread interrupts the kernel
> while the write seqlock is held, and then proceeds to loop forever
> on the read-side of the seqlock.
> 
> Note that for the in-kernel tracer clock read use-case, which
> needs to be able to happen from NMI context, I've contributed a
> modified version of the seqlock to the Linux kernel:
> 
> https://lwn.net/Articles/831540/ The seqcount latch lock type
> 
> It basically keeps two copies of the clock data structures, so the
> read-side never has to loop waiting for the updater: it simply gets
> redirected to the "stable" copy of the data.
> 
> The trade-off here is that with the latch lock used for clocks, a
> reader may observe time going slightly backwards between two clock
> reads when reading while specific clock rate adjustments are made
> by an updater. The clock user needs to be aware of this.
> 
> Thanks,
> 
> Mathieu
> 
> --
> Mathieu Desnoyers
> EfficiOS Inc.
> http://www.efficios.com
> ___
> lttng-dev mailing list
> lttng-dev@lists.lttng.org
> https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev

-- 
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com
___
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev


Re: [lttng-dev] LTTng - Xenomai : different results between timestamp-lttng and rt_time_read()

2021-05-20 Thread Mathieu Desnoyers via lttng-dev
- On May 20, 2021, at 5:11 AM, lttng-dev lttng-dev@lists.lttng.org wrote:

> On Thu, May 20, 2021 at 10:28 AM MONTET Julien wrote:
>>
>> Hi Norbert,
>>
>> Thank you for your answer !
>>
>> Yes, I am using a Xenomai cobalt - xenomai is 3.1
>> cat /proc/xenomai/version => 3.1
>>
>> After the installation, I tested "test tools" in /proc/xenomai/ and it worked
>> nice.
> 
> Just asked to make sure, thought the scripts usual add some -xeno tag
> to the kernel version.
> 
>> What do you mean by "it might deadlock really good" ?
> 
> clock_gettime will either use a syscall (kills realtime always) or is
> optimized via VDSO (which very likely is your case).
> 
> What happens is that the kernel will take a spinlock, then write new
> values, then releases the spinlock.
> your program will aswell spin (but just to see if the spinlock is
> free), read the values and interpolates them.
> 
> But if your program interrupts the kernel while the kernel holds the
> lock (all on the same cpu core), then it will spin forever and the
> kernel will never execute.

Just one clarification: the specific locking strategy used by the
Linux kernel monotonic clock vDSO is a "seqlock", where the kernel
sets a bit which keeps concurrent readers looping until they observe
a consistent value. With Xenomai it indeed appears to be prone to
deadlock if a high priority Xenomai thread interrupts the kernel
while the write seqlock is held, and then proceeds to loop forever
on the read-side of the seqlock.

Note that for the in-kernel tracer clock read use-case, which
needs to be able to happen from NMI context, I've contributed a
modified version of the seqlock to the Linux kernel:

https://lwn.net/Articles/831540/ The seqcount latch lock type

It basically keeps two copies of the clock data structures, so the
read-side never has to loop waiting for the updater: it simply gets
redirected to the "stable" copy of the data.

The trade-off here is that with the latch lock used for clocks, a
reader may observe time going slightly backwards between two clock
reads when reading while specific clock rate adjustments are made
by an updater. The clock user needs to be aware of this.

Thanks,

Mathieu

-- 
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com
___
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev


Re: [lttng-dev] LTTng - Xenomai : different results between timestamp-lttng and rt_time_read()

2021-05-20 Thread Norbert Lange via lttng-dev
On Thu, May 20, 2021 at 10:28 AM MONTET Julien wrote:
>
> Hi Norbert,
>
> Thank you for your answer !
>
> Yes, I am using a Xenomai cobalt - xenomai is 3.1
> cat /proc/xenomai/version => 3.1
>
> After the installation, I tested "test tools" in /proc/xenomai/ and it worked 
> nice.

Just asked to make sure; I thought the scripts usually add a -xeno tag
to the kernel version.

> What do you mean by "it might deadlock really good" ?

clock_gettime will either use a syscall (which always kills realtime) or
be optimized via the vDSO (very likely your case).

What happens is that the kernel takes a spinlock, writes the new values,
then releases the spinlock. Your program spins as well (but just to see
whether the spinlock is free), reads the values and interpolates them.

But if your program interrupts the kernel while the kernel holds the
lock (all on the same CPU core), it will spin forever and the kernel
will never execute.

The ugly truth is that any library you use could call those functions,
so you need to look very closely at your realtime path.
Some checkers can help: https://github.com/nolange/preload_checkers



Norbert.
___
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev


Re: [lttng-dev] LTTng - Xenomai : different results between timestamp-lttng and rt_time_read()

2021-05-20 Thread MONTET Julien via lttng-dev
Hi Norbert,

Thank you for your answer !

Yes, I am using a Xenomai cobalt - xenomai is 3.1
cat /proc/xenomai/version => 3.1

After the installation, I tested "test tools" in /proc/xenomai/ and it worked 
nice.

What do you mean by "it might deadlock really good" ?

Cheers,



From: Norbert Lange
Sent: Thursday, May 20, 2021 10:20 AM
To: MONTET Julien
Cc: lttng-dev@lists.lttng.org
Subject: Re: [lttng-dev] LTTng - Xenomai : different results between
timestamp-lttng and rt_time_read()

On Thu, May 20, 2021 at 9:58 AM MONTET Julien via lttng-dev wrote:
>
> Hi the developers !
>
> CONTEXT
> I am currently working on a Raspberry pi 3B with Xenomai and LTTng tools.
> Raspbian 10.9 Buster - kernel 4.19.85
> uname -a : Linux raspberrypi 4.19.85-v7+ #5 SMP PREEMPT Wed May 12 10:13:37
> Both tools are working, but I wonder about the accuracy of LTTng libraries.
>
>
> METHOD
> The code used is quite simple, it is written with the alchemy skin.
> A rt_task_spawn calls a function that has rt_task_set_periodic(NULL, TM_NOW, 
> period) and rt_task_wait_period(NULL).
> ->The rt_task_set_periodic is based on 1ms.
> ->The  rt_task_wait_period(NULL) is of course inside a while loop (see below 
> at the very end).
>
> My goal is to get accurate traces from Xenomai.
> I took two methods to do so :
> -> lttng
> -> basic calculation based on  rt_timer_read()
>
> What a surprise when I found both method have two different results.
> -> LTTng shows me traces [0.870;1.13] ms (or even less precise)
> -> rt_time_read shows me traces [0.980;1.020] ms
>
> Thing to note :
> -> The use of LTTng has no influence on rt_time_read(), you can use both 
> methods at the same time.
>
> Then, I saved the output of rt_time_read inside a tracepoint.
> It appeared the LTTng is always called at the right time because the value 
> got by rt_time_read () is really good.
>
>
> QUESTIONS
> These are now my questions :
> - What is the method I should trust ?
> - I have searched on the forum and I found LTTng uses a MONOTONIC clock for 
> the timestamp. Can/Should I modify it ?
>
>
> CODE
> ---
> A small part of my function called by rt_task_spawn :
> [...]
> RTIME period = 1000*1000; // in ns
> RTIME now;
> RTIME previous = 0;
> RTIME duration;
> [...]
>  while(1)
> {
> overruns = 0;
> err = rt_task_wait_period();
> now = rt_timer_read();
> tracepoint(tp_provider, tracepoint_tick_ms, now, "tick");
>
> if (previous != 0)
> {
> duration=now-previous;
> rt_printf("%llu\n \n", duration/1000);
> }
>previous=now;
>[...]
> }

Are you using the Xenomai kernel ("Cobalt"), or just skins via
copperplate ("Mercury")?
You have some file /proc/xenomai/version?

The Xenomai kernel has his own clock, which in general is not
correlated to the linux monotonic clock.
(Under some circumstances it might be identical).

My plan is to use a clock plugin for Lttng, particularly because if
lttng uses the linux monotonic clock from a realtime thread
it might deadlock really good ;)

Norbert
___
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev


Re: [lttng-dev] LTTng - Xenomai : different results between timestamp-lttng and rt_time_read()

2021-05-20 Thread Norbert Lange via lttng-dev
On Thu, May 20, 2021 at 9:58 AM MONTET Julien via lttng-dev wrote:
>
> Hi the developers !
>
> CONTEXT
> I am currently working on a Raspberry pi 3B with Xenomai and LTTng tools.
> Raspbian 10.9 Buster - kernel 4.19.85
> uname -a : Linux raspberrypi 4.19.85-v7+ #5 SMP PREEMPT Wed May 12 10:13:37
> Both tools are working, but I wonder about the accuracy of LTTng libraries.
>
>
> METHOD
> The code used is quite simple, it is written with the alchemy skin.
> A rt_task_spawn calls a function that has rt_task_set_periodic(NULL, TM_NOW, 
> period) and rt_task_wait_period(NULL).
> ->The rt_task_set_periodic is based on 1ms.
> ->The  rt_task_wait_period(NULL) is of course inside a while loop (see below 
> at the very end).
>
> My goal is to get accurate traces from Xenomai.
> I took two methods to do so :
> -> lttng
> -> basic calculation based on  rt_timer_read()
>
> What a surprise when I found both method have two different results.
> -> LTTng shows me traces [0.870;1.13] ms (or even less precise)
> -> rt_time_read shows me traces [0.980;1.020] ms
>
> Thing to note :
> -> The use of LTTng has no influence on rt_time_read(), you can use both 
> methods at the same time.
>
> Then, I saved the output of rt_time_read inside a tracepoint.
> It appeared the LTTng is always called at the right time because the value 
> got by rt_time_read () is really good.
>
>
> QUESTIONS
> These are now my questions :
> - What is the method I should trust ?
> - I have searched on the forum and I found LTTng uses a MONOTONIC clock for 
> the timestamp. Can/Should I modify it ?
>
>
> CODE
> ---
> A small part of my function called by rt_task_spawn :
> [...]
> RTIME period = 1000*1000; // in ns
> RTIME now;
> RTIME previous = 0;
> RTIME duration;
> [...]
>  while(1)
> {
> overruns = 0;
> err = rt_task_wait_period();
> now = rt_timer_read();
> tracepoint(tp_provider, tracepoint_tick_ms, now, "tick");
>
> if (previous != 0)
> {
> duration=now-previous;
> rt_printf("%llu\n \n", duration/1000);
> }
>previous=now;
>[...]
> }

Are you using the Xenomai kernel ("Cobalt"), or just skins via
copperplate ("Mercury")?
You have some file /proc/xenomai/version?

The Xenomai kernel has its own clock, which in general is not
correlated with the Linux monotonic clock.
(Under some circumstances they might be identical.)

My plan is to use a clock plugin for LTTng, particularly because if
lttng uses the Linux monotonic clock from a realtime thread,
it might deadlock really well ;)

Norbert
___
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev


[lttng-dev] LTTng - Xenomai : different results between timestamp-lttng and rt_time_read()

2021-05-20 Thread MONTET Julien via lttng-dev
Hi the developers !

CONTEXT
I am currently working on a Raspberry pi 3B with Xenomai and LTTng tools.
Raspbian 10.9 Buster - kernel 4.19.85
uname -a : Linux raspberrypi 4.19.85-v7+ #5 SMP PREEMPT Wed May 12 10:13:37
Both tools are working, but I wonder about the accuracy of LTTng libraries.


METHOD
The code used is quite simple, it is written with the alchemy skin.
A rt_task_spawn calls a function that has rt_task_set_periodic(NULL, TM_NOW, 
period) and rt_task_wait_period(NULL).
->The rt_task_set_periodic is based on 1ms.
->The  rt_task_wait_period(NULL) is of course inside a while loop (see below at 
the very end).

My goal is to get accurate traces from Xenomai.
I took two methods to do so :
-> lttng
-> basic calculation based on  rt_timer_read()

To my surprise, the two methods give different results.
-> LTTng shows me periods in [0.870;1.13] ms (or even less precise)
-> rt_timer_read() shows me periods in [0.980;1.020] ms

Thing to note:
-> The use of LTTng has no influence on rt_timer_read(); you can use both
methods at the same time.

Then, I saved the output of rt_timer_read() inside a tracepoint.
It appears LTTng is always called at the right time, because the value
obtained from rt_timer_read() is really good.


QUESTIONS
These are now my questions:
- Which method should I trust?
- I searched the forum and found that LTTng uses a MONOTONIC clock for the
timestamps. Can/Should I modify it?


CODE
---
A small part of my function called by rt_task_spawn :
[...]
RTIME period = 1000*1000; // in ns
RTIME now;
RTIME previous = 0;
RTIME duration;
[...]
while (1)
{
    overruns = 0;
    err = rt_task_wait_period(&overruns);
    now = rt_timer_read();
    tracepoint(tp_provider, tracepoint_tick_ms, now, "tick");

    if (previous != 0)
    {
        duration = now - previous;
        rt_printf("%llu\n\n", duration / 1000);
    }
    previous = now;
    [...]
}


Regards,

Julien
___
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev


Re: [lttng-dev] lttng live event loss with babeltrace2

2021-05-10 Thread Jonathan Rajotte-Julien via lttng-dev
Hi, 

> I ran my test with the new changes and it seems to improve the lossiness but 
> not
> eliminate it. For example it seems in the hello world example that if the
> application exits too quickly after emitting the tracepoint, we might still
> lose some events in the consumer even though they are present in the relayd
> output folder.

Here, by "consumer" you mean a live reader, right? If so, I would suggest
using the term "live consumer reader" or "live reader", since "consumer"
alone can cause confusion with lttng-consumerd.

I would need a reproducer for this; you can base yourself on the
reproducer in the commit message of the previous fix.
AFAIK there is no valid reason for this to happen, at least based on my
review of the code for the fix we just merged.
(This is only valid for per-uid tracing; per-pid tracing and lttng-live
have a not-so-minor limitation inherent to the current lttng-live
implementation.)

> I don't quite understand why that is. Would love some explanations on the
> various timing windows where we have a potential to lose the events on the
> consumer side.

From a live reader perspective, in per-uid mode, there should be no "loss"
of events from the moment the viewer attaches to the moment it detaches.
Here "loss" means that events are present in the trace gathered by
lttng-relayd but missing for the live reader over the same time window.
This is only valid if the viewer is not "late" and has consumed everything
at the moment it detaches.


Cheers

___
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev


Re: [lttng-dev] lttng live event loss with babeltrace2

2021-05-10 Thread Eqbal via lttng-dev
Hi,

I ran my test with the new changes and it seems to improve the lossiness
but not eliminate it. For example it seems in the hello world example that
if the application exits too quickly after emitting the tracepoint, we
might still lose some events in the consumer even though they are present
in the relayd output folder. I don't quite understand why that is. Would
love some explanations on the various timing windows where we have a
potential to lose the events on the consumer side.

Thanks,
Eqbal

On Fri, May 7, 2021 at 11:05 AM Jonathan Rajotte-Julien <
jonathan.rajotte-jul...@efficios.com> wrote:

> Hi,
>
> See
> https://github.com/lttng/lttng-tools/commit/c876d657fb08a91ca7c907b92f1b7604aee664ee
> .
>
> We would appreciate your feedback on this if possible based on your use
> case.
>
> Cheers
>
> On Mon, May 03, 2021 at 05:05:10PM -0700, Eqbal via lttng-dev wrote:
> > Hi,
> >
> > I have a lttng live session trace consumer application using
> libbabeltrace2
> > where I create a graph to consume lttng live session traces and output to
> > another sink. I am running the graph in a loop at some polling interval
> as
> > long as I get BT_GRAPH_RUN_STATUS_AGAIN status. What I am noticing is
> that
> > if my polling interval is large enough I tend to lose either all or some
> of
> > the events. I experimented with various polling intervals and it seems if
> > the polling interval is less than *DELAYUS *from "lttng-create
> > --live=DELAYUS" option then I am able to get all the events, otherwise I
> > tend to lose events.
> >
> > Here are the steps I follow:
> > 1. start session daemon and relay daemon
> > 2. create a live session (with default delay of 1s), enable events and
> start
> > 3. Start my application (hello world example from lttng docs)
> > 4. Start the consumer application built using libbabeltrace that connects
> > to the live session
> >
> > I noticed that the events are actually persisted in the ~/lttng-traces by
> > the relay daemon, but it does not reach babeltrace consumer application.
> I
> > have noticed the same behavior with babeltrace2 cli.
> >
> > I would like to understand what is the reason for such behavior and if
> > playing with the polling interval in relation to the DELAYUS value is the
> > right thing to do.
> >
> > Thanks,
> > Eqbal
>
> > ___
> > lttng-dev mailing list
> > lttng-dev@lists.lttng.org
> > https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev
>
>
> --
> Jonathan Rajotte-Julien
> EfficiOS
>
___
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev


Re: [lttng-dev] lttng live event loss with babeltrace2

2021-05-07 Thread Jonathan Rajotte-Julien via lttng-dev
Hi,

See 
https://github.com/lttng/lttng-tools/commit/c876d657fb08a91ca7c907b92f1b7604aee664ee
 . 

We would appreciate your feedback on this if possible based on your use case.

Cheers

On Mon, May 03, 2021 at 05:05:10PM -0700, Eqbal via lttng-dev wrote:
> Hi,
> 
> I have a lttng live session trace consumer application using libbabeltrace2
> where I create a graph to consume lttng live session traces and output to
> another sink. I am running the graph in a loop at some polling interval as
> long as I get BT_GRAPH_RUN_STATUS_AGAIN status. What I am noticing is that
> if my polling interval is large enough I tend to lose either all or some of
> the events. I experimented with various polling intervals and it seems if
> the polling interval is less than *DELAYUS *from "lttng-create
> --live=DELAYUS" option then I am able to get all the events, otherwise I
> tend to lose events.
> 
> Here are the steps I follow:
> 1. start session daemon and relay daemon
> 2. create a live session (with default delay of 1s), enable events and start
> 3. Start my application (hello world example from lttng docs)
> 4. Start the consumer application built using libbabeltrace that connects
> to the live session
> 
> I noticed that the events are actually persisted in the ~/lttng-traces by
> the relay daemon, but it does not reach babeltrace consumer application. I
> have noticed the same behavior with babeltrace2 cli.
> 
> I would like to understand what is the reason for such behavior and if
> playing with the polling interval in relation to the DELAYUS value is the
> right thing to do.
> 
> Thanks,
> Eqbal

> ___
> lttng-dev mailing list
> lttng-dev@lists.lttng.org
> https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev


-- 
Jonathan Rajotte-Julien
EfficiOS
___
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev


Re: [lttng-dev] lttng live event loss with babeltrace2

2021-05-04 Thread Jonathan Rajotte-Julien via lttng-dev
On Mon, May 03, 2021 at 05:05:10PM -0700, Eqbal via lttng-dev wrote:
> Hi,
> 
> I have a lttng live session trace consumer application using libbabeltrace2
> where I create a graph to consume lttng live session traces and output to
> another sink. I am running the graph in a loop at some polling interval as
> long as I get BT_GRAPH_RUN_STATUS_AGAIN status. What I am noticing is that
> if my polling interval is large enough I tend to lose either all or some of
> the events. I experimented with various polling intervals and it seems if
> the polling interval is less than *DELAYUS *from "lttng-create
> --live=DELAYUS" option then I am able to get all the events, otherwise I
> tend to lose events.
> 
> Here are the steps I follow:
> 1. start session daemon and relay daemon
> 2. create a live session (with default delay of 1s), enable events and start
> 3. Start my application (hello world example from lttng docs)

Not sure if you modified it in any way, but be careful with short-lived
apps, since an app can terminate before lttng-ust has a chance to register.

> 4. Start the consumer application built using libbabeltrace that connects
> to the live session

Hmm. Note that when attaching to a session, the viewer does not start at
the beginning of the trace collected by lttng-relayd; it starts at the
last data received by lttng-relayd from lttng-consumerd
(LTTNG_VIEWER_SEEK_LAST).

Hence I would recommend that these steps be inverted:

4. Start the consumer application built using libbabeltrace that connects
to the live session
3. Start my application (hello world example from lttng docs)


> 
> I noticed that the events are actually persisted in the ~/lttng-traces by
> the relay daemon, but it does not reach babeltrace consumer application. I
> have noticed the same behavior with babeltrace2 cli.
> 
> I would like to understand what is the reason for such behavior and if
> playing with the polling interval in relation to the DELAYUS value is the
> right thing to do.

I think I reproduced the issue, but I'm not completely sure it is the same
problem. Please file an issue on the bug tracker [1] with as much
information as possible: the exact lttng commands used, the current
behaviour and the expected behaviour. I'll add my findings if relevant.

But I think it might be a weird handling of the first "empty" retry and
the subsequent get phase. After the initial phase everything seems to
work as expected.

[1] https://bugs.lttng.org/


-- 
Jonathan Rajotte-Julien
EfficiOS
___
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev


Re: [lttng-dev] lttng live event loss with babeltrace2

2021-05-04 Thread Jonathan Rajotte-Julien via lttng-dev
Hi,

On Mon, May 03, 2021 at 05:05:10PM -0700, Eqbal via lttng-dev wrote:
> Hi,
> 
> I have a lttng live session trace consumer application using libbabeltrace2
> where I create a graph to consume lttng live session traces and output to
> another sink. I am running the graph in a loop at some polling interval as
> long as I get BT_GRAPH_RUN_STATUS_AGAIN status. What I am noticing is that
> if my polling interval is large enough I tend to lose either all or some of
> the events. I experimented with various polling intervals and it seems if
> the polling interval is less than *DELAYUS *from "lttng-create
> --live=DELAYUS" option then I am able to get all the events, otherwise I
> tend to lose events.
> 
> Here are the steps I follow:
> 1. start session daemon and relay daemon
> 2. create a live session (with default delay of 1s), enable events and start
> 3. Start my application (hello world example from lttng docs)
> 4. Start the consumer application built using libbabeltrace that connects
> to the live session
> 
> I noticed that the events are actually persisted in the ~/lttng-traces by
> the relay daemon, but it does not reach babeltrace consumer application. I
> have noticed the same behavior with babeltrace2 cli.

Could you also test against Babeltrace 1.5? It might give us a bit of a
head start in debugging this if it turns out to be unexpected behaviour.

-- 
Jonathan Rajotte-Julien
EfficiOS
___
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev


[lttng-dev] lttng live event loss with babeltrace2

2021-05-03 Thread Eqbal via lttng-dev
Hi,

I have an LTTng live session trace consumer application using
libbabeltrace2, in which I create a graph to consume LTTng live session
traces and output to another sink. I run the graph in a loop at some
polling interval as long as I get BT_GRAPH_RUN_STATUS_AGAIN status. What
I notice is that if my polling interval is large enough, I tend to lose
either all or some of the events. I experimented with various polling
intervals, and it seems that if the polling interval is less than DELAYUS
from the "lttng create --live=DELAYUS" option, then I get all the events;
otherwise I tend to lose events.

Here are the steps I follow:
1. start session daemon and relay daemon
2. create a live session (with default delay of 1s), enable events and start
3. Start my application (hello world example from lttng docs)
4. Start the consumer application built using libbabeltrace that connects
to the live session

I noticed that the events are actually persisted in ~/lttng-traces by
the relay daemon, but they do not reach the babeltrace consumer
application. I have noticed the same behavior with the babeltrace2 CLI.

I would like to understand what is the reason for such behavior and if
playing with the polling interval in relation to the DELAYUS value is the
right thing to do.

Thanks,
Eqbal
___
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev


Re: [lttng-dev] lttng-dev Digest, Vol 156, Issue 8

2021-04-07 Thread Ramesh Errabolu via lttng-dev
Is there a dummy test application (and libraries) one can download and
build? I could then validate my setup against it.

Regards,
Ramesh


On Wed, Apr 7, 2021 at 11:00 AM  wrote:

> Send lttng-dev mailing list submissions to
> lttng-dev@lists.lttng.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev
> or, via email, send a message with subject or body 'help' to
> lttng-dev-requ...@lists.lttng.org
>
> You can reach the person managing the list at
> lttng-dev-ow...@lists.lttng.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of lttng-dev digest..."
>
>
> Today's Topics:
>
>1. Re: lttng-dev Digest, Vol 156, Issue 3 (Jonathan Rajotte-Julien)
>2. Re: lttng-dev Digest, Vol 156, Issue 3 (Ramesh Errabolu)
>3. Re: [PATCH lttng-tools] testapp: gen-ust-events: added help
>   and sync-before-first-event (Mathieu Desnoyers)
>4. Re: [PATCH lttng-tools] Fix: test code assumes that child
>   process is schedule to run before parent (Mathieu Desnoyers)
>
>

Re: [lttng-dev] lttng-dev Digest, Vol 156, Issue 3

2021-04-06 Thread Ramesh Errabolu via lttng-dev
Will try the dummy app to check if the basic setup is good. Wondering if
being able to trace shared libraries is something not supported.

Regards,
Ramesh


___
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev


Re: [lttng-dev] lttng-dev Digest, Vol 156, Issue 3

2021-04-06 Thread Jonathan Rajotte-Julien via lttng-dev
> I followed your command sequence and noticed a bunch of files being
> created. However when I tried to run babeltrace on these outputs I don't
> see any of the functions that should have been called.  Including below the
> search for symbols.

At least userspace tracing seems to work.

I would recommend, again, that you first try tracing on a dummy application, a
really simple one. Then move on to using cyg_profile on that dummy app, and then
on your application.

The cyg profile events are the following:

  lttng_ust_cyg_profile_fast:func_entry
  lttng_ust_cyg_profile_fast:func_exit

and, when using liblttng-ust-cyg-profile.so:

  lttng_ust_cyg_profile:func_entry
  lttng_ust_cyg_profile:func_exit

I would recommend that you first start grep-ing for this (lttng_ust_cyg) in the
trace to see if any of these events is getting hit and recorded. If it is not the
case, take a step back and try with a dummy app; if nothing works with the dummy
app, we can at least try to help you from there and remove all other variables,
since you will be able to share the dummy app with us.

Cheers


Re: [lttng-dev] lttng-dev Digest, Vol 156, Issue 3

2021-04-06 Thread Jonathan Rajotte-Julien via lttng-dev
Hi,

I think you need to take a step back and figure out how lttng is deployed and,
more importantly, the overall architecture of lttng and its principal moving pieces.

First, you need a functioning and running lttng-sessiond process.

From lttng-console-log.txt:

   root@RocrLnx23:~/git/compute/out/ubuntu-18.04/18.04/bin#  lttng list -u
   Error: Unable to list UST events: No session daemon is available

The error is clear here, no lttng-sessiond is running or at least, the lttng CLI
cannot communicate with it.

Here the sessiond was killed before performing this operation, based on
CmdSequence.txt. You seem to have forgotten to start a lttng-sessiond before
performing the call to your application and the `lttng` commands.

My previous email stated the following:
  With the app running and having the LD_PRELOAD correctly set, and a sessiond
  (sessiond should have read lttng-sessiond) running.

Then, if we look at kernel modules loading.

From lttng-sessiond.txt:

DEBUG1 - 19:50:34.293777619 [27093/27093]: libkmod: could not find module by 
name='lttng_ring_buffer_client_discard'
 (in log_kmod() at modprobe.c:108)
Error: Unable to load required module lttng-ring-buffer-client-discard
Warning: No kernel tracer available

libkmod expects the presence of the lttng-modules kernel modules in
/lib/modules/$(uname -r)/.

From lttng-system-env.txt:

  root@RocrLnx23:~# find /lib/modules/$(uname -r)/ | grep lttng

This yields nothing. I suspect that your modules are installed under:

  ~/git/compute/out/ubuntu-18.04/18.04/lib/modules/$(uname -r)/

Currently, lttng-sessiond built with kmod support does not support defining the
module location (as far as I know), since we pass (NULL, NULL) to the creation
context (kmod_new). The default behaviour is to look into /lib/modules/.

Albeit it would be nice to support such scenarios, this will not help you in the
short term. I would recommend using a symbolic link here to provide
"/lib/modules/$(uname -r)/".

Note that this is overly complicated by the deployment scenario you are
currently using, which does not represent a typical deployment for most of our
users.

In any case, again, this is only useful if you plan on doing kernel tracing.

I would suggest that you put your effort into making sure userspace tracing
works.

Here I think the only thing missing is to have a running lttng-sessiond process
while you launch ./rocminfo.

Concretely:
$ sudo -s
# systemctl stop lttng-sessiond.service
# pkill lttng-sessiond
# cd ~/git/compute/out/ubuntu-18.04/18.04/lib
# export LD_LIBRARY_PATH=`pwd`:$LD_LIBRARY_PATH
# lttng-sessiond -vvv --verbose-consumer -b > /tmp/lttng-sessiond.log 2>&1
# lttng create my_userspace_tracing_session
# lttng enable-event -u -a
# lttng start
# LD_PRELOAD=liblttng-ust-cyg-profile-fast.so LTTNG_UST_DEBUG=1 ./rocminfo
# lttng destroy
# babeltrace 
# pkill lttng-sessiond


Note: here the manual starting and termination of lttng-sessiond is only for the
purpose of testing. In a normal deployment lttng-sessiond is a daemon.

Note that the sequence of `lttng` commands is important here, since I expect
./rocminfo not to have a long lifetime. In other words, `rocminfo` will have
finished and unregistered, from lttng-sessiond's perspective, before you are
able to manually list its tracepoints or set up a tracing session after you
started the application.

Please read the following and try to perform it under your deployment scenario:

https://lttng.org/docs/v2.12/#doc-tracing-your-own-user-application


Hope this helps.

Cheers



Re: [lttng-dev] lttng-dev Digest, Vol 156, Issue 3

2021-04-05 Thread Ramesh Errabolu via lttng-dev
Jonathan, et al

Attaching here with logs of the run.

1. Command sequence I ran
2. Console log of commands as I ran them
3. Log of lttng-service daemon *lttng-sessiond.txt*
4. Info about the system, things such as kernel, lttng kernel modules, etc
5. Lastly, I tried to build lttng-modules for my kernel 5.9.x. This failed, as
support for that kernel is lacking.

Any help info is appreciated.

Regards,
Ramesh


On Fri, Apr 2, 2021 at 11:00 AM  wrote:

> Send lttng-dev mailing list submissions to
> lttng-dev@lists.lttng.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev
> or, via email, send a message with subject or body 'help' to
> lttng-dev-requ...@lists.lttng.org
>
> You can reach the person managing the list at
> lttng-dev-ow...@lists.lttng.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of lttng-dev digest..."
>
>
> Today's Topics:
>
>1. Re: Can't trace function calls (Jonathan Rajotte-Julien)
>
>
> --
>
> Message: 1
> Date: Fri, 2 Apr 2021 11:07:19 -0400
> From: Jonathan Rajotte-Julien 
> To: Ramesh Errabolu 
> Cc: lttng-dev@lists.lttng.org
> Subject: Re: [lttng-dev] Can't trace function calls
> Message-ID: <20210402150719.GB79283@joraj-alpa>
> Content-Type: text/plain; charset=us-ascii
>
> On Wed, Mar 31, 2021 at 12:55:53PM -0500, Ramesh Errabolu wrote:
> > root@RocrLnx23:~/git/compute/out/ubuntu-18.04/18.04/bin# ls ~/ | grep
> -i ltt
> > root@RocrLnx23:~/git/compute/out/ubuntu-18.04/18.04/bin# lttng create
> > my-kernel-session --output=~/my-kernel-trace
> > Session my-kernel-session created.
> > Traces will be output to
> > /home/user1/git/compute/out/ubuntu-18.04/18.04/bin/~/my-kernel-trace
> > root@RocrLnx23:~/git/compute/out/ubuntu-18.04/18.04/bin# *lttng list
> > --kerne*l
> > *Error: Unable to list kernel events: Kernel tracer not available*
>
> Well this would be the first thing to look at.
>
> First let's deactivate the lttng-sessiond.service installed by the
> packages.
>
>systemctl stop lttng-sessiond.service
>
> You might want to re-enable it later.
>
> >
> > root@RocrLnx23:~/git/compute/out/ubuntu-18.04/18.04/bin# ps -ef | grep
> ltt
> > root  1002 1  0 12:16 ?00:00:00 /usr/bin/lttng-sessiond
> > root  1054  1002  0 12:16 ?00:00:00 */usr/bin/lttng-sessiond*
> > root  3145  2861  0 12:51 pts/000:00:00 grep --color=auto ltt
> > root@RocrLnx23:~/git/compute/out/ubuntu-18.04/18.04/bin#
>
> Make sure after the systemctl call that no lttng-sessiond process is
> running.
>
> Now let's launch a lttng-sessiond by hand with a bit more verbosity. For
> now
> let's stick to the root user.
>
>   # lttng-sessiond -vvv --verbose-consumer -b > /tmp/lttng-sessiond.log
> 2>&1
>   # pkill lttng-sessiond
>
> Please share the content of /tmp/lttng-sessiond.log using a pasting service
> (paste.ubuntu.com).
>
> Also please provide the output of:
>
>  # uname -a
>  # find /lib/modules/$(uname -r)/ | grep lttng
>  # dmesg | grep lttng
>
>
> But again, the cyg-profile helper library is meant for Userspace tracing.
>
> With the app running and having the LD_PRELOAD correctly set, and a
> sessiond
> running.
>
>  # lttng list -u
>
> If there is nothing, well you can start the application with the following
> and
> share the output of it (make sure to remove any output from your
> application if
> sensitive data is present)
>
>  # LD_PRELOAD= LTTNG_UST_DEBUG=1 your_application_here
>
> Note that debug log will be outputted on stderr.
>
> Cheers
>
> --
> Jonathan Rajotte-Julien
> EfficiOS
>
>
> --
>
> Subject: Digest Footer
>
> ___
> lttng-dev mailing list
> lttng-dev@lists.lttng.org
> https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev
>
>
> --
>
> End of lttng-dev Digest, Vol 156, Issue 3
> *
>

$ sudo -s
#
# systemctl stop lttng-sessiond.service
# ps -ef | grep -i lttng
# lttng-sessiond -vvv --verbose-consumer -b > /tmp/lttng-sessiond.log 2>&1
# pkill lttng-sessiond
# uname -a
# find /lib/modules/$(uname -r)/ | grep lttng
# dmesg | grep lttng
#
# cd ~/git/compute/out/ubuntu-18.04/18.04/lib
# export LD_LIBRARY_PATH=`pwd`:$LD_LIBRARY_PATH
# echo $LD_LIBRARY_PATH
#
# cd ../bin
# LD_PRELOAD=liblttng-ust-cyg-profile-fast.so  ./rocminfo
# lttng list -u
#
#  LD_PRELOAD=liblttng-ust-cyg-profile-fast.so  LTTNG_UST_DEBUG=1 ./rocminfo
#




root@RocrLnx23:~/git/compute/out/ubuntu-18.04/18.04/lib#
root@RocrLnx23:~/git/compute/out/ubuntu-18.04/18.04/lib# export LD_LIBRARY_PATH=`pwd`:$LD_LIBRARY_PATH
root@RocrLnx23:~/git/compute/out/ubuntu-18.04/18.04/lib#
root@RocrLnx23:~/git/compute/out/ubuntu-18.04/18.04/lib# echo $LD_LIBRARY_PATH
/home/user1/git/compute/out/ubuntu-18.04/18.04/lib:

Re: [lttng-dev] LTTng 32 bits : Cannot find liburcu-bp 0.11 or newer

2021-03-29 Thread Jonathan Rajotte-Julien via lttng-dev
Hi Julien,

Yeah, the urcu version is wrong here; it should be >= 0.11. We will get that
fixed.

Thanks

On Mon, Mar 29, 2021 at 01:24:48PM +, MONTET Julien via lttng-dev wrote:
> Oh, my bad!
> You just have to modify the lines of the tutorial (rcu).
> I chose userspace-rcu-0.12.2.tar.bz2 and it installed well.
> 
> From: MONTET Julien
> Sent: Monday, March 29, 2021 11:43
> To: lttng-dev@lists.lttng.org
> Subject: LTTng 32 bits: Cannot find liburcu-bp 0.11 or newer
> 
> Hi everyone,
> 
> I successfully installed and used LTTng on x86 and x64 (kernel event, 
> userspace, and other tools like TraceCompass).
> 
> Now, I am looking for a way to use LTTng on other processors (arm32, ...).
> 
> I tried to follow the tutorial and, to be honest, I am quite not lost on step 
> 2. (libuuid, popt, libxml2)
> https://lttng.org/docs/v2.12/#doc-instrumenting-32-bit-app-on-64-bit-system
> 
> My current issue is on the step 3.
> During the installation (./configure), I have the following error :
> error: Cannot find liburcu-bp 0.11 or newer. Use LDFLAGS=-Ldir to specify its 
> location.
> 
> This file - liburcu - is in /usr/local/lib32 : Ubuntu 
> Pastebin
> 
> This topic was started a long time ago here :
> https://lists.lttng.org/pipermail/lttng-dev/2013-February/019641.html
> 
> Would you mind helping me ?

> ___
> lttng-dev mailing list
> lttng-dev@lists.lttng.org
> https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev


-- 
Jonathan Rajotte-Julien
EfficiOS


Re: [lttng-dev] LTTng 32 bits : Cannot find liburcu-bp 0.11 or newer

2021-03-29 Thread MONTET Julien via lttng-dev
Oh, my bad!
You just have to modify the lines of the tutorial (rcu).
I chose userspace-rcu-0.12.2.tar.bz2 and it installed well.

From: MONTET Julien
Sent: Monday, March 29, 2021 11:43
To: lttng-dev@lists.lttng.org
Subject: LTTng 32 bits: Cannot find liburcu-bp 0.11 or newer

Hi everyone,

I successfully installed and used LTTng on x86 and x64 (kernel event, 
userspace, and other tools like TraceCompass).

Now, I am looking for a way to use LTTng on other processors (arm32, ...).

I tried to follow the tutorial and, to be honest, I was not lost on step 2
(libuuid, popt, libxml2):
https://lttng.org/docs/v2.12/#doc-instrumenting-32-bit-app-on-64-bit-system

My current issue is on step 3.
During the installation (./configure), I have the following error:
error: Cannot find liburcu-bp 0.11 or newer. Use LDFLAGS=-Ldir to specify its
location.

This file - liburcu - is in /usr/local/lib32 (see the Ubuntu Pastebin link).

This topic was started a long time ago here :
https://lists.lttng.org/pipermail/lttng-dev/2013-February/019641.html

Would you mind helping me?


[lttng-dev] LTTng 32 bits : Cannot find liburcu-bp 0.11 or newer

2021-03-29 Thread MONTET Julien via lttng-dev
Hi everyone,

I successfully installed and used LTTng on x86 and x64 (kernel event, 
userspace, and other tools like TraceCompass).

Now, I am looking for a way to use LTTng on other processors (arm32, ...).

I tried to follow the tutorial and, to be honest, I was not lost on step 2
(libuuid, popt, libxml2):
https://lttng.org/docs/v2.12/#doc-instrumenting-32-bit-app-on-64-bit-system

My current issue is on step 3.
During the installation (./configure), I have the following error:
error: Cannot find liburcu-bp 0.11 or newer. Use LDFLAGS=-Ldir to specify its
location.

This file - liburcu - is in /usr/local/lib32 (see the Ubuntu Pastebin link).

This topic was started a long time ago here :
https://lists.lttng.org/pipermail/lttng-dev/2013-February/019641.html

Would you mind helping me?

