> On Apr 20, 2021, at 10:31 AM, Jiri Olsa <jo...@redhat.com> wrote:
> 
> On Mon, Apr 19, 2021 at 01:36:48PM -0700, Song Liu wrote:
> 
> SNIP
> 
>>      if (stat_config.initial_delay < 0) {
>> @@ -784,11 +790,11 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
>>      if (affinity__setup(&affinity) < 0)
>>              return -1;
>> 
>> -    if (target__has_bpf(&target)) {
>> -            evlist__for_each_entry(evsel_list, counter) {
>> -                    if (bpf_counter__load(counter, &target))
>> -                            return -1;
>> -            }
>> +    evlist__for_each_entry(evsel_list, counter) {
>> +            if (bpf_counter__load(counter, &target))
>> +                    return -1;
>> +            if (!evsel__is_bpf(counter))
>> +                    all_counters_use_bpf = false;
> 
> could be done in bpf_counter__load, check below:
> 
>>      }
>> 
>>      evlist__for_each_cpu (evsel_list, i, cpu) {
>> diff --git a/tools/perf/util/bpf_counter.c b/tools/perf/util/bpf_counter.c
>> index 5de991ab46af9..33b1888103dfa 100644
>> --- a/tools/perf/util/bpf_counter.c
>> +++ b/tools/perf/util/bpf_counter.c
>> @@ -790,7 +790,8 @@ int bpf_counter__load(struct evsel *evsel, struct target *target)
>> {
>>      if (target->bpf_str)
>>              evsel->bpf_counter_ops = &bpf_program_profiler_ops;
>> -    else if (target->use_bpf)
>> +    else if (target->use_bpf ||
>> +             evsel__match_bpf_counter_events(evsel->name))
>>              evsel->bpf_counter_ops = &bperf_ops;
> 
> with:
>       else
>               all_counters_use_bpf = false;
> 
> I was also thinking of moving it to evlist, but it's stat specific,
> so I think it's good as static.. thanks for changing the implementation

Hmm... then we would need to somehow make all_counters_use_bpf visible
in bpf_counter.c, which won't be very clean. Also, since the flag is
stat specific, I guess it is better to keep it inside builtin-stat.c?
The runtime overhead of the extra evsel__is_bpf() check per counter
should be minimal.
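
For example, it would end up looking something like this
(perf_stat__mark_non_bpf_counter() is just a made-up name for
illustration, and its declaration would have to live in a shared
header):

    /* builtin-stat.c */
    static bool all_counters_use_bpf = true;

    /* exported so util/bpf_counter.c can clear the stat-local flag */
    void perf_stat__mark_non_bpf_counter(void)
    {
            all_counters_use_bpf = false;
    }

    /* util/bpf_counter.c */
    int bpf_counter__load(struct evsel *evsel, struct target *target)
    {
            if (target->bpf_str)
                    evsel->bpf_counter_ops = &bpf_program_profiler_ops;
            else if (target->use_bpf ||
                     evsel__match_bpf_counter_events(evsel->name))
                    evsel->bpf_counter_ops = &bperf_ops;
            else
                    perf_stat__mark_non_bpf_counter(); /* stat state leaks in */

            /* ... rest of the function unchanged ... */
            return 0;
    }

So bpf_counter.c would end up reaching back into stat code, while the
current patch keeps both the flag and the check in one place.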

Thanks,
Song
