On Wed, Dec 06, 2017 at 02:07:45PM -0800, Yonghong Song wrote:
> Commit e87c6bc3852b ("bpf: permit multiple bpf attachments
> for a single perf event") added support to attach multiple
> bpf programs to a single perf event.
> Although this provides flexibility, users may want to know
> what other bpf programs are attached to the same tp interface.
> Besides providing visibility into the underlying bpf system,
> such information may also help consolidate multiple bpf programs,
> understand potential performance issues due to a large array,
> and debug (e.g., one bpf program which overwrites the return code
> may impact subsequent program results).
> 
> Commit 2541517c32be ("tracing, perf: Implement BPF programs
> attached to kprobes") utilized the existing perf ioctl
> interface and added the command PERF_EVENT_IOC_SET_BPF
> to attach a bpf program to a tracepoint. This patch adds a new
> ioctl command to query, given a perf event fd, the bpf program
> array attached to the same perf tracepoint event.
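> 
> For illustration, attaching via that existing command is a single
> ioctl call (a minimal sketch; prog_fd is assumed to be the fd of an
> already-loaded bpf program):
> 
>   /* attach the bpf program behind prog_fd to the perf event */
>   err = ioctl(pmu_efd, PERF_EVENT_IOC_SET_BPF, prog_fd);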
> 
> The new uapi ioctl command:
>   PERF_EVENT_IOC_QUERY_BPF
> 
> The new uapi/linux/perf_event.h structure:
>   struct perf_event_query_bpf {
>        __u32  ids_len;
>        __u32  prog_cnt;
>        __u32  ids[0];
>   };
> 
> User space provides the buffer "ids" for the kernel to copy into.
> On return from the kernel, the number of available
> programs in the array is set in "prog_cnt".
> 
> The usage:
>   struct perf_event_query_bpf *query =
>       malloc(sizeof(*query) + sizeof(__u32) * ids_len);
>   query->ids_len = ids_len;
>   err = ioctl(pmu_efd, PERF_EVENT_IOC_QUERY_BPF, query);
>   if (err == 0) {
>     /* query->prog_cnt is the number of available progs,
>      * number of progs copied to ids: (ids_len == 0) ? 0 : query->prog_cnt
>      */
>   } else if (errno == ENOSPC) {
>     /* query->ids_len progs were copied,
>      * query->prog_cnt is the total number of available progs
>      */
>   } else {
>     /* other errors */
>   }
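> 
> For reference, a fuller sketch of the query pattern above as a
> self-contained helper (hypothetical code, assuming pmu_efd is the
> fd of an already-created perf event; the buffer is grown on ENOSPC
> until every id fits):
> 
>   #include <errno.h>
>   #include <stdio.h>
>   #include <stdlib.h>
>   #include <sys/ioctl.h>
>   #include <linux/perf_event.h>
> 
>   static int query_bpf_prog_ids(int pmu_efd)
>   {
>       struct perf_event_query_bpf *query;
>       __u32 ids_len = 4, i;
> 
>       for (;;) {
>           query = calloc(1, sizeof(*query) + sizeof(__u32) * ids_len);
>           if (!query)
>               return -1;
>           query->ids_len = ids_len;
>           if (ioctl(pmu_efd, PERF_EVENT_IOC_QUERY_BPF, query) == 0)
>               break;          /* all prog_cnt ids fit in the buffer */
>           free(query);
>           if (errno != ENOSPC)
>               return -1;      /* real error */
>           ids_len *= 2;       /* buffer too small, retry with more room */
>       }
> 
>       for (i = 0; i < query->prog_cnt; i++)
>           printf("attached bpf prog id: %u\n", query->ids[i]);
>       free(query);
>       return 0;
>   }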
> 
> Signed-off-by: Yonghong Song <y...@fb.com>

Yes, this looks much better, thanks!

Acked-by: Peter Zijlstra (Intel) <pet...@infradead.org>
