Re: [lttng-dev] Lttng: display active threads in multiple cores.

2024-04-03 Thread Erica Bugden via lttng-dev

Hello Zvika,

On 2024-04-02 23:52, Zvi Vered wrote:

Hi Erica,

Thank you very much for your answer.
Can you please tell me what the added value of ftrace is (compared to 
using only LTTng)?


I don't think I understand the intention behind your question. I'll make 
some guesses below and you're welcome to clarify if you wish.


ftrace and LTTng are different tools that have some overlap in the 
tracing use cases they can address. ftrace is a Linux kernel tracer that 
is included in the kernel; it isn't an LTTng add-on.


Both ftrace and LTTng can trace the Linux kernel if the tracepoints have 
been included. LTTng doesn't use ftrace, but most kernels that are 
configured to include the tracepoints typically also include ftrace.


That being said, if you only want to trace userspace applications with 
LTTng and don't also want kernel traces, then you don't need an 
ftrace-enabled kernel.
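
For example, a userspace-only session could look something like the 
sketch below (the session name and the 'my_app:*' event name pattern 
are only placeholders for your own application's tracepoints):

  lttng create ust-only-session
  lttng enable-event --userspace 'my_app:*'
  lttng start
  # ... run the instrumented application ...
  lttng stop
  lttng destroy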


Best,
Erica



Best regards,
Zvika


On Tue, Apr 2, 2024 at 5:11 PM Erica Bugden wrote:


Hello Zvika,

On 2024-03-29 01:09, Zvi Vered via lttng-dev wrote:
 > Hi Christopher,
 >
 > Thank you very much for your reply.
 > Can you please explain what you mean by an ftrace-enabled kernel?

I believe what Christopher means by "ftrace-enabled" kernel is that the
Linux kernel has been configured to include ftrace. Both the ftrace
tracer and the LTTng tracer use the same kernel tracepoints to extract
execution information and these tracepoints are included in the kernel
if ftrace is included.

Most Linux distributions will include ftrace by default. However, you
can check whether this is the case by searching for `tracefs` in
`/proc/filesystems` (assuming it's already mounted) or by trying to
mount `tracefs`. `tracefs` is the filesystem ftrace uses to communicate
with users.

More details about how to check if ftrace is enabled and how to enable
it if not:
https://wiki.linuxfoundation.org/realtime/documentation/howto/tools/ftrace 


The "More Information" section points to the primary sources (Linux
kernel documentation), but I find this page to be a good starting point.

Best,
Erica

 >
 > Best regards,
 > Zvika
 >
 > On Wed, Mar 27, 2024 at 7:32 PM Christopher Harvey via lttng-dev
 > <lttng-dev@lists.lttng.org> wrote:
 >
 >     you can use an ftrace-enabled kernel with lttng (maybe even just
 >     tracecompass) or perfetto to get that kind of trace
 >
 >     https://archive.eclipse.org/tracecompass.incubator/doc/org.eclipse.tracecompass.incubator.ftrace.doc.user/User-Guide.html
 >
 >     or
 >
 >     https://ui.perfetto.dev/
 >
 >     On Wed, Mar 27, 2024, at 5:26 AM, Zvi Vered via lttng-dev wrote:
 >      > Hello,
 >      >
 >      > I have an application with 4 threads.
 >      > I'm required to display on the graph when a thread starts working
 >      > till it blocks for the next semaphore.
 >      >
 >      > But without using the lttng userspace library.
 >      >
 >      > Is it possible ?
 >      >
 >      > Thank you,
 >      > Zvika

Re: [lttng-dev] Lttng: display active threads in multiple cores.

2024-04-02 Thread Erica Bugden via lttng-dev

Hello Zvika,

On 2024-03-29 01:09, Zvi Vered via lttng-dev wrote:

Hi Christopher,

Thank you very much for your reply.
Can you please explain what you mean by an ftrace-enabled kernel?


I believe what Christopher means by "ftrace-enabled" kernel is that the 
Linux kernel has been configured to include ftrace. Both the ftrace 
tracer and the LTTng tracer use the same kernel tracepoints to extract 
execution information and these tracepoints are included in the kernel 
if ftrace is included.


Most Linux distributions will include ftrace by default. However, you 
can check whether this is the case by searching for `tracefs` in 
`/proc/filesystems` (assuming it's already mounted) or by trying to 
mount `tracefs`. `tracefs` is the filesystem ftrace uses to communicate 
with users.
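
For example, a quick check could look like this (the mount point below 
is the usual one, but it may differ on your distribution):

  # Does the kernel know about tracefs?
  grep tracefs /proc/filesystems

  # Is tracefs already mounted somewhere?
  mount | grep tracefs

  # If not, try mounting it manually (requires root).
  sudo mount -t tracefs nodev /sys/kernel/tracing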


More details about how to check if ftrace is enabled and how to enable 
it if not: 
https://wiki.linuxfoundation.org/realtime/documentation/howto/tools/ftrace


The "More Information" section points to the primary sources (Linux 
kernel documentation), but I find this page to be a good starting point.


Best,
Erica



Best regards,
Zvika

On Wed, Mar 27, 2024 at 7:32 PM Christopher Harvey via lttng-dev 
<lttng-dev@lists.lttng.org> wrote:


you can use an ftrace-enabled kernel with lttng (maybe even just
tracecompass) or perfetto to get that kind of trace


https://archive.eclipse.org/tracecompass.incubator/doc/org.eclipse.tracecompass.incubator.ftrace.doc.user/User-Guide.html

or

https://ui.perfetto.dev/ 

On Wed, Mar 27, 2024, at 5:26 AM, Zvi Vered via lttng-dev wrote:
 > Hello,
 >
 > I have an application with 4 threads.
 > I'm required to display on the graph when a thread starts working
 > till it blocks for the next semaphore.
 >
 > But without using the lttng userspace library.
 >
 > Is it possible ?
 >
 > Thank you,
 > Zvika

___
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev


Re: [lttng-dev] Adding a simple "look here" event to the trace

2023-09-27 Thread Erica Bugden via lttng-dev




On 2023-09-08 06:56, Danter, Richard via lttng-dev wrote:

Hi all,

I am investigating an issue that takes some time to reproduce. Finding
the right point in the logs is therefore very difficult.

Since I can detect when the issue happens in the kernel I would like to
be able to emit an event into the trace that I can then search for in
Trace Compass or through Babeltrace. So basically a kind of flag that
says "look here". That way I can jump right to the problem and then
look backwards from there to see what happened just before.

I have looked at the docs for how to add a trace point, but it seems
pretty complicated. I may have missed something though, so I wonder if
there is a trivial way to add such a flag to the log? Up to now I just
put a printk() in, which helps, but it would still be nicer to have
something directly in the log.


Hello Rich,

This is a good question! The easiest way to point directly to the 
relevant part of a trace is to stop capturing trace data immediately 
after the identified issue is encountered. This means you know what 
you're looking for is right at the end of the trace. Stopping the trace 
seems like a good fit in this scenario because you're only interested in 
what happens immediately before the issue and you're able to identify 
when the problem has happened.


Assuming you would like to avoid modifying the kernel code, LTTng 
triggers [1] may be a good fit. Triggers allow you to associate a 
condition (e.g. event X happened) with an action you would like to take 
(e.g. stop tracing). When the condition is encountered, the associated 
action is automatically triggered.


In this scenario we would recommend:

 1. Trace in overwrite mode (flight recorder mode): Since the issue 
takes a while to reproduce and only the events immediately preceding the 
issue are relevant, keeping just a limited amount of the most recent 
data avoids accumulating useless data volume.


 2. Determine when the issue is encountered with a trigger: This will 
focus the trace on the problem area.


 3. When the issue is encountered, take a snapshot: This will give you 
a trace that contains what is relevant. What happened immediately before 
the trigger will be at the end of the trace.


In terms of defining the trigger condition, you can add a trigger [2] 
that matches a kernel event type occurring as close as possible to the 
moment the issue is encountered, and then specify additional details 
for the condition using a capture descriptor [3]. Ideally, you want a 
condition that is only true when the issue is encountered, to avoid 
having to manually sort through the snapshots afterwards. The 
add-trigger man page provides several examples [4] that illustrate the 
condition and action syntax.
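
As a rough sketch only (this assumes the LTTng 2.13 command-line 
syntax; the session name and the 'my_marker_event' event name are 
placeholders for whatever kernel event you pick as your "look here" 
marker):

  # 1. Create a snapshot-mode session; its channels record in overwrite
  #    (flight recorder) mode, keeping only the most recent events.
  lttng create my-session --snapshot

  # 2. Enable the kernel event types you want to see in the snapshot.
  lttng enable-event --kernel 'sched_*'

  # 3. Register a trigger: when the chosen kernel event matches, take a
  #    snapshot of the session automatically (--filter could narrow the
  #    condition further).
  lttng add-trigger \
      --condition=event-rule-matches --type=kernel --name='my_marker_event' \
      --action=snapshot-session my-session

  lttng start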


Hope this helps!

Best,
Erica

[1] LTTng triggers - https://lttng.org/docs/v2.13/#doc-trigger
[2] Add trigger - https://lttng.org/man/1/lttng-add-trigger/v2.13/
[3] Trigger capture descriptor - 
https://lttng.org/man/1/lttng-add-trigger/v2.13/#doc-capture-descr
[4] Trigger examples - 
https://lttng.org/man/1/lttng-add-trigger/v2.13/#doc-examples




If there isn't such a thing already, then would it be a reasonable
enhancement request to be able to add such a feature?

Thanks
Rich


___
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev


Re: [lttng-dev] Discarded events

2023-08-23 Thread Erica Bugden via lttng-dev

Hello Bala,

If you'd like more information about which event types (tracepoints) are 
practical for investigating which areas, the `lttng-utils` [1] project 
could be useful! The repository contains scripts that record LTTng 
traces. Each script is focused on a different investigation area (e.g. 
network, interrupts, disk, etc).


You could use the scripts themselves and/or read the `*.profile` files 
[2] to see the event types enabled by the different profiles. For 
example, in the disk analysis profile (disk.profile), the following 
event types are enabled:


- block_rq_complete
- block_rq_insert
- block_rq_issue
- block_bio_frontmerge
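
For example, enabling just those event types by hand in your own 
session could look roughly like this (the session name is illustrative):

  lttng create disk-session
  lttng enable-event --kernel \
      block_rq_complete,block_rq_insert,block_rq_issue,block_bio_frontmerge
  lttng start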

As Milian suggested, it is often best to only trace the event types you 
need for the type of issue you are investigating since this increases 
the accuracy of the data collected (less tracing overhead, fewer dropped 
events) and also makes the trace easier to understand (less data volume, 
less data noise).


Best,

Erica



LINKS

[1] lttng-utils: https://github.com/tahini/lttng-utils/tree/master
[2] Tracing Profiles: 
https://github.com/tahini/lttng-utils/tree/master/lttngutils/profiles


On 2023-08-15 01:24, Milian Wolff via lttng-dev wrote:

On Sunday, 13 August 2023 16:58:17 CEST Bala Gundeboina via lttng-dev wrote:

Hi,

I am using LTTng in my project. I am recording a kernel session for a long
duration (e.g. more than 15 minutes) with all events enabled, but some events
are discarded after stopping the session; even if I run for more than 5 minutes,
some events are discarded. I have also tried increasing the buffer size in the
.lttngrc file.


While not directly related to your question, I strongly encourage you to
review the list of events and only enable those that are actually required for
the specific issue you are investigating. Some kernel events are _extremely_
high frequency but often have no practical use for many people (at least in my
experience and for my own purposes). Examples of the latter are especially the
kmalloc tracepoints.


lttng destroy
Destroying session `my-kernel-session`...
Session `my-kernel-session` destroyed
Warning: 94424667 events were discarded, please refer to the documentation
on channel configuration

Actually, when I am running the command below I am getting the warning below. I
think this is the issue, because sometimes this warning does not appear and then
I don't see discarded events. How can I overcome this problem? Can you provide
some details?

lttng-sessiond --daemonize
Warning: Failed to produce a random seed using getrandom(), falling back to
pseudo-random device seed generation which will block until its pool is
initialized: Failed to get true random data using getre

Thanks & Regards
Bala Gundeboina




___
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev


Re: [lttng-dev] userspace logs cpu analysis and LTTng UST Callstack

2023-08-23 Thread Erica Bugden via lttng-dev


On 2023-08-23 13:14, Matthew Khouzam via lttng-dev wrote:
Here is a youtube video showing how userspace and kernel traces can play 
nicely together.


https://www.youtube.com/watch?v=K1JNQ-HkC6w 




*From:* lttng-dev  on behalf of Bala 
Gundeboina via lttng-dev 

*Sent:* Wednesday, August 23, 2023 2:17 AM
*To:* lttng-dev@lists.lttng.org 
*Subject:* [lttng-dev] userspace logs cpu analysis and LTTng UST Callstack
Hi all,
             I have been capturing kernel and userspace trace logs with LTTng. 
The kernel logs give a lot of useful information, but the userspace logs are 
not giving as much useful information compared to the kernel trace logs. How 
can I get more data from the userspace trace logs? Please find attached 
userspace and kernel screenshots; if you look at the kernel screenshot, you 
will see per-core CPU usage, thread analysis and more.


Hello Bala,

I'm not sure exactly what you mean by "the userspace logs are not giving 
as much useful information compared to the kernel trace logs", but I 
would guess you mean: when reading the _userspace trace by itself_ in 
Trace Compass, there are far fewer visualisations available compared to 
when you view the _kernel trace by itself_ in Trace Compass.


It can be normal for a kernel trace to provide much more data than a 
userspace trace. Each visualisation in Trace Compass requires certain 
kinds of data (event types). The number of events in a trace is a 
function of the number of tracepoints (the number of instrumented 
locations) in the code that are available and enabled. The kernel has a 
very large number of tracepoints that already come with the code. On the 
other hand, userspace applications often need to have the tracepoints 
added manually by the folks interested in tracing it. This means that 
userspace traces could have fewer events (if there are fewer available 
tracepoints).


One purpose of a userspace trace is to provide context for a kernel 
trace. To get the most information, you would capture both a kernel and 
userspace trace simultaneously and then view the traces together as a 
combined trace (rather than viewing them separately). When viewing the 
userspace and kernel information together you get the volume and 
richness of kernel information combined with the more "human-readable" 
abstract information from userspace that makes the kernel information 
easier to navigate.


As Matthew mentioned, the tool Trace Compass allows you to open the 
kernel and userspace traces together as a combined trace. Trace Compass 
uses the term "Experiment" to refer to a set of traces viewed together 
on a single timeline, but in practice this is a combined trace. To get 
the full analytical power of the kernel and userspace traces I would 
recommend creating a Trace Compass Experiment with the kernel and 
userspace traces. The video Matthew shared gives you information about 
how to combine and view the traces using an Experiment (see resources 
section below).


In the resources section below, I also included a link to a set of 
tracing analysis examples. I haven't checked, but there may be an 
example there that shows how to create a combined trace (Trace Compass 
Experiment). The Trace Compass user interface has likely changed since 
the screenshots were taken so you may not be able to follow the guide 
word-for-word, but I think it may still serve as a useful reference.


Best regards,

Erica



RESOURCES

- Creating a combined kernel and userspace trace using a Trace Compass 
Experiment (via Matthew Khouzam):

 https://www.youtube.com/watch?v=K1JNQ-HkC6w
 (A method of trace Experiment creation is shown starting at 0:58)

- Trace viz labs: Analyze a kernel trace in Trace Compass
https://github.com/dorsal-lab/Tracevizlab/tree/master/labs/101-analyze-system-trace-in-tracecompass 



- All Trace viz labs:
https://github.com/dorsal-lab/Tracevizlab/tree/master/labs



I am not able to load userspace trace logs in Tracealyzer. Does 
Tracealyzer not support userspace logs? Is there any way we can enable 
it? Kindly help me sort out this problem.


I'm not familiar with Tracealyzer so I'm not sure about its 
compatibility with these traces. I would suggest you reach out to the 
Tracealyzer folks to ask about compatibility with their tools.


Tracealyzer is a proprietary tool and my understanding is that on this 
mailing list we prefer to focus on free and open-source tools when 
possible (since information about them can be more widely used).




Thanks
Bala Gundeboina
Re: [lttng-dev] [Ext] Re: How to use lttng-live plugin with babeltrace2 python api for live LTTng trace reading?

2023-08-14 Thread Erica Bugden via lttng-dev

You're welcome Ruoxiang!

On 2023-08-10 22:50, LI Ruoxiang wrote:

Hi Erica,

Thank you for your kind help.

It works for my case.

Best,

Ruoxiang

*From: *Erica Bugden 
*Date: *Wednesday, August 9, 2023 at 05:29
*To: *LI Ruoxiang , 
lttng-dev@lists.lttng.org 
*Subject: *[Ext] Re: How to use lttng-live plugin with babeltrace2 
python api for live LTTng trace reading?




Hello Ruoxiang!

Thank you for your question. It's true that there are no Python bindings 
examples specific to lttng live (we've provided a brief example below). 
Unfortunately, the python bindings documentation is currently 
incomplete, but information about using lttng live can be pieced 
together using the various documentation links below (see Useful Links 
section).


Some adjustments typically needed when using lttng live and the Python 
bindings are:


- When referring to a trace source, use a URL (e.g. 
net://localhost/host/luna/my-session) rather than a file path (e.g. 
/path/to/trace)


- When querying, use the source.ctf.lttng-live component class (rather 
than the file system class: source.ctf.fs) - source.ctf.lttng-live docs 
https://babeltrace.org/docs/v2.0/man7/babeltrace2-source.ctf.lttng-live.7/ 


Hope this helps!

Best,

Erica



Useful links

- Python bindings docs (Installation, Examples) - 
https://babeltrace.org/docs/v2.0/python/bt2/index.html 



- LTTng live, General information - 
https://lttng.org/docs/v2.13/#doc-lttng-live 
 (e.g. How to express a 
live trace source: net://localhost/host/HOSTNAME/my-session)


- source.ctf.lttng-live component - 
https://babeltrace.org/docs/v2.0/man7/babeltrace2-source.ctf.lttng-live.7/  (C API doc not Python, but can be used to adapt the source.ctf.fs examples by comparing)




Quick Example: Trace reading with python bindings and lttng live

  Note: This is a local example where the tracing and reading with 
babeltrace are happening on the same machine (luna).


1. REQUIREMENTS

Make sure babeltrace is installed with the python plugins and python 
bindings: https://babeltrace.org/docs/v2.0/python/bt2/installation.html 



2. PROCEDURE

  - Start root session daemon: $sudo lttng-sessiond --daemonize
  - Create live session: $lttng create my-session --live
  - Enable events: $lttng enable-event --kernel sched_switch,sched_process_fork
  - Start tracing: $lttng start
  - Run python script: (see below) $python3 lttng-live.py

3. PYTHON SCRIPT

File name: lttng-live.py

File contents:

  import bt2
  import time

  # The hostname (i.e. machine name) is 'luna'
  msg_iter = bt2.TraceCollectionMessageIterator('net://localhost/host/luna/my-session')

  while True:
      try:
          for msg in msg_iter:
              if type(msg) is bt2._EventMessageConst:
                  print(msg.event.name)
      except bt2.TryAgain:
          print('Try again. The iterator has no events to provide right now.')

      time.sleep(0.5)

Reading trace data using the bt2.TraceCollectionMessageIterator and 
lttng-live: When using the message iterator with live, the iterator 
never ends by default and can be polled for trace data infinitely (hence 
the while loop in the example). When there is available data, the 
iterator will return it. However, there will not always be data 
available to consume. When this is the case, the iterator returns a "Try 
again" exception which must be caught in order to continue polling.


4. OUTPUT SNIPPET (Example)

  erica@luna:~$ python3 lttng-live-example.py
  Try again. The iterator has no events to provide right now.
  Try again. The iterator has no events to provide right now.
  Try again. The iterator has no events to provide right now.
  Try again. The iterator has no events to provide right now.
  Try again. The iterator has no events to provide right now.
  sched_switch
  sched_switch
  sched_switch
  sched_switch
  sched_process_fork
  sched_switch
  sched_switch
  sched_switch
  sched_switch
  sched_switch
  sched_switch
  sched_switch
  sched_switch
  sched_switch
  sched_switch
  sched_switch
  sched_switch
  sched_switch
  [...]



*From:*lttng-dev  on behalf of LI 
Ruoxiang via lttng-dev 

*Sent:* August 3, 2023 12:44 PM
*To:* lttng-dev@lists.lttng.org 
*Subject:* [lttng-dev] How to use lttng-live plugin with babeltrace2 
python api for live 

Re: [lttng-dev] Discarded events

2023-08-14 Thread Erica Bugden via lttng-dev

On 2023-08-13 10:58, Bala Gundeboina via lttng-dev wrote:

Hi,

I am using LTTng in my project. I am recording a kernel session for a 
long duration (e.g. more than 15 minutes) with all events enabled, but 
some events are discarded after stopping the session; even if I run for 
more than 5 minutes, some events are discarded. I have also tried 
increasing the buffer size in the .lttngrc file.


lttng destroy
Destroying session `my-kernel-session`...
Session `my-kernel-session` destroyed
Warning: 94424667 events were discarded, please refer to the 
documentation on channel configuration




Hello Bala,

Based on my understanding of the scenario, discarded events are to be 
expected. Rather than trying to avoid discarded events, I would 
recommend adjusting how much data is being collected. In most tracing 
use cases, it is not relevant to enable all kernel tracepoints for a 
long period of time (e.g. several minutes) as it can quickly generate an 
enormous amount of noisy data that is nearly impossible to sort through. 
As a reference, a busy 8 core machine could generate 100 MB/s of data.


Typically when trying to understand a problem in detail with tracing, 
you would start with a very small number of tracepoints enabled for a 
longer period of time (with the goal of developing a high-level 
understanding of when/where the problem is happening). Then as you 
iteratively narrow down when/where the problem happens you can gradually 
increase the number of relevant tracepoints without being overwhelmed 
with data.


A general rule of thumb is that if you're storing trace data for a long 
period of time (e.g. more than a minute) then very few tracepoints 
should be enabled (e.g. 2-5). Maximum tracing detail would typically 
only be used to trace for a couple seconds (or ideally less if you can 
automate starting and stopping tracing).
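
For illustration only (the session name and the 2-second window are 
arbitrary), a short automated burst of maximum-detail tracing might 
look like:

  lttng create burst-session
  lttng enable-event --kernel --all
  lttng start
  sleep 2
  lttng stop
  lttng destroy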


Here are some general references about tracing strategy:
 - Whether tracing is the appropriate approach: 
https://github.com/tuxology/tracevizlab/tree/master/labs/001-what-is-tracing#when-to-trace
 - Iterative investigation, selecting tracepoints: 
https://wiki.linuxfoundation.org/realtime/documentation/howto/debugging/debug-steps 
(Recommended sections: Isolate the source, Trace detail)


Hope this helps!

Best,
Erica

Actually, when I am running the command below I am getting the warning 
below. I think this is the issue, because sometimes this warning does 
not appear and then I don't see discarded events. How can I overcome 
this problem? Can you provide some details?

lttng-sessiond --daemonize
Warning: Failed to produce a random seed using getrandom(), falling back 
to pseudo-random device seed generation which will block until its pool 
is initialized: Failed to get true random data using getre




At first glance, this error seems unrelated to whether or not events are 
discarded.



Thanks & Regards
Bala Gundeboina


___
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev


Re: [lttng-dev] How to use lttng-live plugin with babeltrace2 python api for live LTTng trace reading?

2023-08-08 Thread Erica Bugden via lttng-dev
Hello Ruoxiang!

Thank you for your question. It's true that there are no Python bindings 
examples specific to lttng live (we've provided a brief example below). 
Unfortunately, the python bindings documentation is currently incomplete, but 
information about using lttng live can be pieced together using the various 
documentation links below (see Useful Links section).

Some adjustments typically needed when using lttng live and the Python bindings 
are:
- When referring to a trace source, use a URL (e.g. 
net://localhost/host/luna/my-session) rather than a file path (e.g. 
/path/to/trace)
- When querying, use the source.ctf.lttng-live component class (rather than the 
file system class: source.ctf.fs) - source.ctf.lttng-live docs 
https://babeltrace.org/docs/v2.0/man7/babeltrace2-source.ctf.lttng-live.7/

Hope this helps!

Best,
Erica



Useful links

- Python bindings docs (Installation, Examples) - 
https://babeltrace.org/docs/v2.0/python/bt2/index.html
- LTTng live, General information - 
https://lttng.org/docs/v2.13/#doc-lttng-live (e.g. How to express a live trace 
source: net://localhost/host/HOSTNAME/my-session)
- source.ctf.lttng-live component - 
https://babeltrace.org/docs/v2.0/man7/babeltrace2-source.ctf.lttng-live.7/ (C 
API doc not Python, but can be used to adapt the source.ctf.fs examples by 
comparing)



Quick Example: Trace reading with python bindings and lttng live

  Note: This is a local example where the tracing and reading with 
babeltrace are happening on the same machine (luna).

1. REQUIREMENTS

Make sure babeltrace is installed with the python plugins and python bindings: 
https://babeltrace.org/docs/v2.0/python/bt2/installation.html

2. PROCEDURE

 - Start root session daemon: $sudo lttng-sessiond --daemonize
 - Create live session: $lttng create my-session --live
 - Enable events: $lttng enable-event --kernel sched_switch,sched_process_fork
 - Start tracing: $lttng start
 - Run python script: (see below) $python3 lttng-live.py

3. PYTHON SCRIPT

File name: lttng-live.py
File contents:

  import bt2
  import time

  # The hostname (i.e. machine name) is 'luna'
  msg_iter = bt2.TraceCollectionMessageIterator('net://localhost/host/luna/my-session')

  while True:
      try:
          for msg in msg_iter:
              if type(msg) is bt2._EventMessageConst:
                  print(msg.event.name)
      except bt2.TryAgain:
          print('Try again. The iterator has no events to provide right now.')

      time.sleep(0.5)

Reading trace data using the bt2.TraceCollectionMessageIterator and lttng-live: 
When using the message iterator with live, the iterator never ends by default 
and can be polled for trace data infinitely (hence the while loop in the 
example). When there is available data, the iterator will return it. However, 
there will not always be data available to consume. When this is the case, the 
iterator returns a "Try again" exception which must be caught in order to 
continue polling.

4. OUTPUT SNIPPET (Example)

  erica@luna:~$ python3 lttng-live-example.py
  Try again. The iterator has no events to provide right now.
  Try again. The iterator has no events to provide right now.
  Try again. The iterator has no events to provide right now.
  Try again. The iterator has no events to provide right now.
  Try again. The iterator has no events to provide right now.
  sched_switch
  sched_switch
  sched_switch
  sched_switch
  sched_process_fork
  sched_switch
  sched_switch
  sched_switch
  sched_switch
  sched_switch
  sched_switch
  sched_switch
  sched_switch
  sched_switch
  sched_switch
  sched_switch
  sched_switch
  sched_switch
  [...]


From: lttng-dev  on behalf of LI Ruoxiang 
via lttng-dev 
Sent: August 3, 2023 12:44 PM
To: lttng-dev@lists.lttng.org 
Subject: [lttng-dev] How to use lttng-live plugin with babeltrace2 python api 
for live LTTng trace reading?

Hi there,

I am currently involved in a project on designing a Python program for LTTng 
trace data analysis online. The following figure illustrates the program with a 
live trace data reader using babeltrace2 Python bindings (the yellow box) 
connected to the LTTng relay daemon. The program will read (such as 
periodically) the trace data from the relay daemon and then process them while 
LTTng keeps tracing. The above “read” and “process” phases repeat in a loop.

[Figure: the live trace data reader (babeltrace2 Python bindings) connected to 
the LTTng relay daemon]


After reading the babeltrace2 documents, examples, and some source code, I 
found the lttng-live plugin may be an option for reading trace data from LTTng 
relay daemon. However, I didn't find any examples for using lttng-live plugin 
with babeltrace2 Python bindings. And I wonder if the Python bindings support 
the mentioned live LTTng trace reading for my case. Is it possible to receive 
any