Re: [lttng-dev] LTTng UST Benchmarks

2024-04-25 Thread Kienan Stewart via lttng-dev

Hi Aditya,

It has been suggested to me that the following publication[1] would also 
be of interest. It gives a good micro-benchmark comparison of tracers.


[1]: https://dl.acm.org/doi/10.1145/3158644

thanks,
kienan

On 4/25/24 1:53 PM, Kienan Stewart via lttng-dev wrote:

Hi Aditya,

On 4/24/24 11:25 AM, Aditya Kurdunkar via lttng-dev wrote:
Hello everyone, I was working on a use case where I am working on 
enabling LTTng on an embedded ARM device running the OpenBMC linux 
distribution. I have enabled the lttng yocto recipe and I am able to 
trace my code. The one thing I am concerned about is the performance 
overhead. Although the documentation mentions that LTTng has the 
lowest overhead amongst all the available solutions, I am concerned 
about the overhead of the LTTng UST in comparison to 
other available tracers/profilers. I have used the benchmarking setup 
from lttng-ust/tests/benchmark at master · lttng/lttng-ust 
(github.com) 
 to 
benchmark the overhead of the tracepoints (on the device). The 
benchmark, please correct me if I am wrong, gives the overhead of a 
single tracepoint in your code.


This seems to be what it does.

Although this might be fine for now, I
was just wondering if there are any published benchmarks comparing 
LTTng with the available tracing/profiling solutions. 


I don't know of any published ones that do an exhaustive comparison.

There is this one[1], which includes a comparison with some parts of 
eBPF. The source for the benchmarking is also available[2].


If not, how can I go about benchmarking the overhead of the applications?



I'm not really sure how to answer you here.

I guess the most pertinent approach for your use case is to test your 
application with and without tracing to see the complete effect?


It would be good to use a dedicated system, disable CPU frequency 
scaling, and perform the tests repeatedly, measuring the mean, median, 
and standard deviation.
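
As a rough illustration of that kind of with/without comparison, here is 
a minimal Python sketch. The application command, the number of runs, and 
how the LTTng session gets set up are placeholders to adapt to your own 
setup; it is not part of the lttng-ust-benchmarks tooling.

```
#!/usr/bin/env python3
# Minimal sketch: time repeated runs of an application and report
# mean/median/standard deviation. Run it once with no tracing session
# active and once with your LTTng session started, then compare.
import statistics
import subprocess
import time

APP_CMD = ["./my_app"]  # hypothetical application under test
RUNS = 30

def time_runs(cmd, runs):
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        samples.append(time.perf_counter() - start)
    return samples

samples = time_runs(APP_CMD, RUNS)
print(f"mean={statistics.mean(samples):.6f}s "
      f"median={statistics.median(samples):.6f}s "
      f"stdev={statistics.stdev(samples):.6f}s")
```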


You could pull methodological inspiration from prior publications[3], 
which, while outdated in terms of software versions and hardware, 
demonstrate the process of creating and comparing benchmarks.


It would also be useful to identify how your application and tracing 
setup work, and to understand which parts of the system you are 
interested in measuring.


For example, the startup time when tracing rapidly spawning processes 
will depend on the type of buffering scheme in use, whether the tracing 
infrastructure is loaded before or after forking, etc.


Your case might instead be a long-running application where you aren't 
interested in startup-time performance but more concretely in the impact 
of the static instrumentation on one of your hot paths.


If you're not sure what kind of tracing setup works best in your case, 
or would like us to characterize a certain aspect of the tool-set's 
performance, EfficiOS[4] offers consultation and support for 
instrumentation and performance in applications.


I have come across the lttng/lttng-ust-benchmarks repository 
(github.com), which has no documentation on how to run it, apart from 
one commit message on how to run the benchmark script.




To run those benchmarks when you have babeltrace2, lttng-tools, urcu, 
lttng-ust, and optionally lttng-modules installed:


```
$ make
$ python3 ./benchmark.py
```

This should produce a file, `benchmarks.json`.
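
If you just want a quick look at the results, something along these 
lines is enough; it assumes only that the file is valid JSON and makes 
no assumption about its exact structure:

```
#!/usr/bin/env python3
# Pretty-print the benchmark results produced by benchmark.py.
import json
from pathlib import Path

data = json.loads(Path("benchmarks.json").read_text())
print(json.dumps(data, indent=2, sort_keys=True))
```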

You can also inspect how the CI job runs it: 
https://ci.lttng.org/view/LTTng-ust/job/lttng-ust-benchmarks_master_linuxbuild/



Any help is really appreciated. Thank you.

Regards,
Aditya



[1]: 
https://tracingsummit.org/ts/2022/files/Tracing_Summit_2022-LTTng_Beyond_Ring-Buffer_Based_Tracing_Jeremie_Galarneau_.pdf

[2]: https://github.com/jgalar/LinuxCon2022-Benchmarks
[3]: https://www.dorsal.polymtl.ca/files/publications/desnoyers.pdf
[4]: https://www.efficios.com/contact/

thanks,
kienan
___
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev


Re: [lttng-dev] [babeltrace2] about python self-defined plugin loading

2024-04-25 Thread Kienan Stewart via lttng-dev

My apologies,

there is a typo in my previous e-mail; the library directory should be:

`/babeltrace2/plugin-providers/`

thanks,
kienan

On 4/25/24 11:41 AM, Kienan Stewart via lttng-dev wrote:

Hi Amanda,

could you double-check to ensure that babeltrace2 was built with 
`--enable-python-plugins`, and that `import bt2` works?
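
For the `import bt2` part, a small check like the following (just an 
illustration, not something shipped with babeltrace2) also shows which 
installation the bindings are loaded from, which helps catch a mismatch 
between interpreters:

```
#!/usr/bin/env python3
# Sanity check for the babeltrace2 Python bindings.
try:
    import bt2
except ImportError as exc:
    raise SystemExit(f"bt2 bindings are not importable: {exc}")

# Show where the bindings come from, to catch mismatched installations.
print("bt2 loaded from:", bt2.__file__)
```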


There should be a babeltrace2-python-plugin-provider.so in 
`/babeltrace2/plugin-provides`


thanks,
kienan


On 4/24/24 11:28 PM, Wu, Yannan via lttng-dev wrote:

Hi, There,

I am trying to construct a customized filter and sink based on the 
babeltrace2 Python bindings. However, neither my plugin nor the plugin 
samples I could find on the internet work.


For example, 
https://github.com/simark/babeltrace-fun-plugins/tree/master/my-first-components 


I just downloaded the .py file and ran the exact command, but it failed. 
The log is as follows:



babeltrace2 --plugin-path . -c source.demo.MyFirstSource -c 
sink.demo.MyFirstSink
04-24 16:52:04.349 919805 919805 E CLI 
add_descriptor_to_component_descriptor_set@babeltrace2.c:1720 Cannot 
find component class: plugin-name="demo", 
comp-cls-name="MyFirstSource", comp-cls-type=1
04-24 16:52:04.349 919805 919805 E CLI 
cmd_run_ctx_init@babeltrace2.c:1882 Cannot find an operative message 
interchange protocol version to use to create the `run` command's 
graph: status=ERROR
04-24 16:52:04.349 919805 919805 E CLI cmd_run@babeltrace2.c:2465 
Cannot initialize the command's context.


ERROR:    [Babeltrace CLI] (babeltrace2.c:2465)
   Cannot initialize the command's context.
CAUSED BY [Babeltrace CLI] (babeltrace2.c:1882)
   Cannot find an operative message interchange protocol version to 
use to create the `run` command's graph: status=ERROR

CAUSED BY [Babeltrace CLI] (babeltrace2.c:1720)
   Cannot find component class: plugin-name="demo", 
comp-cls-name="MyFirstSource", comp-cls-type=1


babeltrace2 --version
Babeltrace 2.0.7 "Amqui" [v2.0.6-1-g825a0ed6d]

Amqui (/ɒmkwiː/) is a town in eastern Québec, Canada, at the base of 
the Gaspé peninsula in Bas-Saint-Laurent. Located at the
confluence of the Humqui and Matapédia Rivers, its proximity to 
woodlands makes it a great destination for outdoor activities such as

camping, hiking, and mountain biking.
yannanwu@ue91e96f2951b5c:~/trees/lttng_test_run$

Has the CLI changed or something? How can I make it work?

Besides, is it possible to create a pipeline in Python and make use of 
the plugin drafted in Python? Can you advise me how?


Amanda



___
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev

