At 2025-12-16 11:59:14, "Jason Wang" <[email protected]> wrote:
>On Mon, Dec 15, 2025 at 4:12 PM Lange Tang <[email protected]> wrote:
>>
>> At 2025-12-15 14:41:42, "Jason Wang" <[email protected]> wrote:
>> >On Sat, Dec 13, 2025 at 10:41 AM Lange Tang <[email protected]> wrote:
>> >>
>> >> At 2025-12-11 16:32:15, "Jason Wang" <[email protected]> wrote:
>> >> >On Thu, Dec 11, 2025 at 10:52 AM Lange Tang <[email protected]> wrote:
>> >> >>
>> >> >> At 2025-12-10 17:04:04, "Michael S. Tsirkin" <[email protected]> wrote:
>> >> >> >On Thu, Nov 27, 2025 at 11:24:00AM +0800, Longjun Tang wrote:
>> >> >> >> From: Tang Longjun <[email protected]>
>> >> >> >>
>> >> >> >> Hi,
>> >> >> >>
>> >> >> >> virtnet_mon is used to monitor the data packets of the virtio_net
>> >> >> >> driver and the related parameters of the virtqueue; it is useful
>> >> >> >> for tracking the driver's status and troubleshooting faults.
>> >> >> >>
>> >> >> >> Please review. Thanks.
>> >> >> >>
>> >> >> >> Best regards.
>> >> >> >
>> >> >> >what does this achieve that direct use of tracing would not?
>> >> >>
>> >> >> I apologize that my explanation of virtnet_mon was not detailed enough.
>> >> >> virtnet_mon uses kprobes and buffers to monitor virtio_net.
>> >> >> To monitor virtio_net, it is necessary to track the member parameters
>> >> >> of the virtqueue corresponding to each data packet and output them.
>> >> >> When the PPS is very high, other tracing techniques, such as eBPF,
>> >> >> may not be able to keep up, resulting in data loss, because they do
>> >> >> not have sufficiently large buffers to batch-export the log data.
>> >> >
>> >> >Can you expand more on this? For example, what kind of setup is this,
>> >> >what do you want to trace, and why can't eBPF handle it? Note that the
>> >> >most lightweight option is a counter; have you tried that?
>> >>
>> >> For example, when there is occasional latency in data transmission
>> >> between the virtual network frontend (virtio_net) and the backend
>> >> (such as vhost_net), we may need to track the time taken for each
>> >> packet received and sent in the virtio_net driver.
>> >> Typically, we accomplish this using eBPF, such as bpftrace. The
>> >> pseudocode might include the following:
>> >> """
>> >> kprobe:skb_recv_done {
>> >>         printf("%ld skb_recv_done Cpu:%d ...\n",...);
>> >> }
>> >> kprobe:skb_xmit_done {
>> >>         printf("%ld skb_xmit_done Cpu:%d ...\n",...);
>> >> }
>> >> kprobe:virtnet_poll {
>> >>         printf("%ld virtnet_poll Cpu:%d budget:%d ...\n",...);
>> >> }
>> >> kprobe:start_xmit {
>> >>   ...
>> >>   printf("%ld start_xmit Cpu:%d type:%s seq:%ld ...\n",...)
>> >> }
>> >> kprobe:gro_receive_skb {
>> >>   ...
>> >>   printf("%ld gro_receive_skb Cpu:%d type:%s seq:%ld ...\n",...)
>> >> }
>> >> kprobe:receive_buf {
>> >>   ...
>> >>   printf("%ld receive_buf Cpu:%d name:%s avali_idx:%d used_idx:%d 
>> >> ...\n",...);
>> >> }
>> >> """
>> >> Using the above bpftrace code, we can track the timestamps of the data
>> >> as it passes through these functions, along with skb and virtqueue
>> >> information, and output logs via printf for further diagnosis of the
>> >> causes of the latency.
>> >> Interestingly, a significant amount of the log output was found to be
>> >> missing when executing this bpftrace code.
>> >> Below is the testing environment:
>> >> VM: 8 GB RAM, 8 vCPUs, virtio_net mq=4, kernel 6.18-rc7, iperf3 -s -p 1314
>> >> HOST: iperf3 -c 192.168.122.218 -t 100 -p 1314 -P 4
>> >> It was also found that when testing with mq=1, there was no log loss.
>> >>
>> >> Compared to mq=1, the log loss at mq=4 is suspected to be caused by
>> >> data being sent and received on different CPUs. Additionally, under
>> >> the 4-thread iperf test scenario with PPS > 150,000, the log data is
>> >> output asynchronously from different CPUs, creating enough I/O
>> >> pressure to cause log data loss.
>> >
>I think what I don't understand is how the things you introduce here
>would help in this case.
>>
>> The virtnet_mon module introduced here abandons eBPF and uses kprobes
>> plus a kfifo. In the case above, all the information that needs to be
>> tracked first enters the kfifo, is then formatted into log lines and
>> cached in a large buffer, and is finally exported to user space in
>> batches through the virtnet_mon_read function, thereby reducing I/O
>> pressure and preventing log loss.
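>>
>> To make that concrete, here is a minimal sketch of the mechanism
>> (illustrative only: the record layout, the fifo size, and the single
>> probed symbol are simplified stand-ins for what virtnet_mon.c does):
>> """
>> #include <linux/kfifo.h>
>> #include <linux/kprobes.h>
>> #include <linux/ktime.h>
>> #include <linux/miscdevice.h>
>> #include <linux/module.h>
>> #include <linux/smp.h>
>>
>> /* One fixed-size record per probe hit; the real records also carry
>>  * skb and virtqueue fields. */
>> struct mon_rec {
>>         u64 ts_ns;
>>         int cpu;
>> };
>>
>> /* One large fifo shared by all CPUs, so records from different CPUs
>>  * are serialized instead of being written out independently. */
>> static DEFINE_KFIFO(mon_fifo, struct mon_rec, 4096);
>> static DEFINE_SPINLOCK(mon_lock);
>>
>> /* Probe context: only push a raw record; all formatting is deferred.
>>  * kfifo_put() silently drops the record if the fifo is full. */
>> static int mon_pre(struct kprobe *kp, struct pt_regs *regs)
>> {
>>         struct mon_rec rec = {
>>                 .ts_ns = ktime_get_ns(),
>>                 .cpu   = smp_processor_id(),
>>         };
>>         unsigned long flags;
>>
>>         spin_lock_irqsave(&mon_lock, flags);
>>         kfifo_put(&mon_fifo, rec);
>>         spin_unlock_irqrestore(&mon_lock, flags);
>>         return 0;
>> }
>>
>> static struct kprobe mon_kp = {
>>         .symbol_name = "start_xmit",
>>         .pre_handler = mon_pre,
>> };
>>
>> /* User space drains records in large batches, keeping the I/O
>>  * pressure away from the hot path. */
>> static ssize_t mon_read(struct file *file, char __user *buf,
>>                         size_t len, loff_t *off)
>> {
>>         unsigned int copied;
>>         int ret;
>>
>>         ret = kfifo_to_user(&mon_fifo, buf, len, &copied);
>>         return ret ? ret : copied;
>> }
>>
>> static const struct file_operations mon_fops = {
>>         .owner = THIS_MODULE,
>>         .read  = mon_read,
>> };
>>
>> static struct miscdevice mon_dev = {
>>         .minor = MISC_DYNAMIC_MINOR,
>>         .name  = "virtnet_mon",
>>         .fops  = &mon_fops,
>> };
>>
>> static int __init mon_init(void)
>> {
>>         int ret = register_kprobe(&mon_kp);
>>
>>         if (ret)
>>                 return ret;
>>         ret = misc_register(&mon_dev);
>>         if (ret)
>>                 unregister_kprobe(&mon_kp);
>>         return ret;
>> }
>>
>> static void __exit mon_exit(void)
>> {
>>         misc_deregister(&mon_dev);
>>         unregister_kprobe(&mon_kp);
>> }
>>
>> module_init(mon_init);
>> module_exit(mon_exit);
>> MODULE_LICENSE("GPL");
>> """
>> The point is that probe context only appends a small fixed-size record;
>> formatting and I/O happen later, in batches, from read().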
>
>Well, this "problem" does not seem virtio-net specific. Have you tried
>BPF ringbuf or perfbuf?

Concerning BPF ringbuf and perfbuf, I may need some time to verify
whether they can resolve this "problem". I will get back to you with
the results.
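
For reference, the producer side I plan to test looks roughly like the
following (a minimal libbpf-style sketch; the event fields, the probed
symbol, and the 16 MiB size are placeholders for the experiment, not a
final design):
"""
// SPDX-License-Identifier: GPL-2.0
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

struct event {
	__u64 ts_ns;
	__u32 cpu;
};

/* One large shared ringbuf instead of per-probe printf output. */
struct {
	__uint(type, BPF_MAP_TYPE_RINGBUF);
	__uint(max_entries, 1 << 24); /* 16 MiB */
} rb SEC(".maps");

SEC("kprobe/start_xmit")
int BPF_KPROBE(trace_start_xmit)
{
	struct event *e = bpf_ringbuf_reserve(&rb, sizeof(*e), 0);

	if (!e)
		return 0; /* reserve failed: this is where loss would show */

	e->ts_ns = bpf_ktime_get_ns();
	e->cpu = bpf_get_smp_processor_id();
	bpf_ringbuf_submit(e, 0);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";
"""
The user-space side would then drain events in batches via
ring_buffer__poll(), which should avoid the per-packet printf I/O that
appears to be losing data.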

On the other hand, I did not find any tracepoints in virtio_net that
track the virtqueue, such as name, num_free, avail.idx, used.idx,
last_used_idx, etc. Could you consider adding such tracepoints to some
key functions? That would allow direct tracking with perf. For example
(a sketch of what I have in mind follows the list):
start_xmit
receive_buf
skb_xmit_done
skb_recv_done
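
A rough sketch of what one such tracepoint could look like (the event
name virtnet_vq_state and its field set are only a proposal, not an
existing kernel API):
"""
/* Hypothetical include/trace/events/virtio_net.h fragment. */
#undef TRACE_SYSTEM
#define TRACE_SYSTEM virtio_net

#if !defined(_TRACE_VIRTIO_NET_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_VIRTIO_NET_H

#include <linux/tracepoint.h>

TRACE_EVENT(virtnet_vq_state,

	TP_PROTO(const char *name, unsigned int num_free,
		 u16 avail_idx, u16 used_idx, u16 last_used_idx),

	TP_ARGS(name, num_free, avail_idx, used_idx, last_used_idx),

	TP_STRUCT__entry(
		__string(name, name)
		__field(unsigned int, num_free)
		__field(u16, avail_idx)
		__field(u16, used_idx)
		__field(u16, last_used_idx)
	),

	TP_fast_assign(
		__assign_str(name);
		__entry->num_free = num_free;
		__entry->avail_idx = avail_idx;
		__entry->used_idx = used_idx;
		__entry->last_used_idx = last_used_idx;
	),

	TP_printk("%s num_free=%u avail=%u used=%u last_used=%u",
		  __get_str(name), __entry->num_free, __entry->avail_idx,
		  __entry->used_idx, __entry->last_used_idx)
);

#endif /* _TRACE_VIRTIO_NET_H */

#include <trace/define_trace.h>
"""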

>
>Thanks
>
>>
>> Thanks
>> >
>> >Thanks
>> >
>> >>
>> >> The above are some of my personal thoughts, and I would love to hear
>> >> your opinion.
>> >> Best regards.
>> >>
>> >> >
>> >> >>
>> >> >> As for the duplicate code, it exists only to obtain the layout of
>> >> >> the relevant structures, and I have not yet thought of a way to
>> >> >> avoid the duplication. I would love to hear your suggestions.
>> >> >
>> >> >Thanks
>> >> >
>> >> >>
>> >> >> >
>> >> >> >> Tang Longjun (7):
>> >> >> >>   tools/virtio/virtnet_mon: create misc driver for virtnet_mon
>> >> >> >>   tools/virtio/virtnet_mon: add kfifo to virtnet_mon
>> >> >> >>   tools/virtio/virtnet_mon: add kprobe start_xmit
>> >> >> >>   tools/virtio/virtnet_mon: add kprobe gro_receive_skb
>> >> >> >>   tools/virtio/virtnet_mon: add kprobe ip_local_deliver
>> >> >> >>   tools/virtio/virtnet_mon: add kprobe skb_xmit_done and skb_recv_done
>> >> >> >>   tools/virtio/virtnet_mon: add README file for virtnet_mon
>> >> >> >>
>> >> >> >>  tools/virtio/virtnet_mon/Makefile      |   10 +
>> >> >> >>  tools/virtio/virtnet_mon/README        |   35 +
>> >> >> >>  tools/virtio/virtnet_mon/virtnet_mon.c | 1048 ++++++++++++++++++++++++
>> >> >> >>  3 files changed, 1093 insertions(+)
>> >> >> >>  create mode 100644 tools/virtio/virtnet_mon/Makefile
>> >> >> >>  create mode 100644 tools/virtio/virtnet_mon/README
>> >> >> >>  create mode 100644 tools/virtio/virtnet_mon/virtnet_mon.c
>> >> >> >>
>> >> >> >> --
>> >> >> >> 2.43.0
