On Thu, Oct 01, 2020 at 09:01:11AM -0500, Rob Herring wrote:

SNIP

>  
> +void *perf_evsel__mmap(struct perf_evsel *evsel, int pages)
> +{
> +     int ret;
> +     struct perf_mmap *map;
> +     struct perf_mmap_param mp = {
> +             .prot = PROT_READ | PROT_WRITE,
> +     };
> +
> +     if (FD(evsel, 0, 0) < 0)
> +             return NULL;
> +
> +     mp.mask = (pages * page_size) - 1;
> +
> +     map = zalloc(sizeof(*map));
> +     if (!map)
> +             return NULL;
> +
> +     perf_mmap__init(map, NULL, false, NULL);
> +
> +     ret = perf_mmap__mmap(map, &mp, FD(evsel, 0, 0), 0);

hum, so you map the event only for FD(0,0), but later perf_evsel__read
allows reading any cpu/thread combination, ending up reading
data from the FD(0,0) map:

        int perf_evsel__read(struct perf_evsel *evsel, int cpu, int thread,
                             struct perf_counts_values *count)
        {
                size_t size = perf_evsel__read_size(evsel);

                memset(count, 0, sizeof(*count));

                if (FD(evsel, cpu, thread) < 0)
                        return -EINVAL;

                if (evsel->mmap && !perf_mmap__read_self(evsel->mmap, count))
                        return 0;
I think we should either check for cpu == 0 and thread == 0, or make it
general and store perf_evsel::mmap in an xyarray as we do for fds

thanks,
jirka
