Hi,

On 02.01.2019 0:41, Jiri Olsa wrote:
> On Mon, Dec 24, 2018 at 03:24:36PM +0300, Alexey Budankov wrote:
> 
> SNIP
> 
>> +static void perf_mmap__aio_free(void **data, size_t len __maybe_unused)
>> +{
>> +    zfree(data);
>> +}
>> +
>> +static void perf_mmap__aio_bind(void *data __maybe_unused, size_t len __maybe_unused,
>> +                                int cpu __maybe_unused, int affinity __maybe_unused)
>> +{
>> +}
>> +#endif
>> +
>>  static int perf_mmap__aio_mmap(struct perf_mmap *map, struct mmap_params *mp)
>>  {
>>      int delta_max, i, prio;
>> @@ -177,11 +220,13 @@ static int perf_mmap__aio_mmap(struct perf_mmap *map, struct mmap_params *mp)
>>              }
>>              delta_max = sysconf(_SC_AIO_PRIO_DELTA_MAX);
>>              for (i = 0; i < map->aio.nr_cblocks; ++i) {
>> -                    map->aio.data[i] = malloc(perf_mmap__mmap_len(map));
>> +                    size_t mmap_len = perf_mmap__mmap_len(map);
>> +                    perf_mmap__aio_alloc(&(map->aio.data[i]), mmap_len);
>>                      if (!map->aio.data[i]) {
>>                              pr_debug2("failed to allocate data buffer area, 
>> error %m");
>>                              return -1;
>>                      }
>> +                    perf_mmap__aio_bind(map->aio.data[i], mmap_len, map->cpu, mp->affinity);
> 
> this all does not work if bind fails.. I think we need to
> propagate the error value here and fail

Proceeding past this point still makes sense even when the bind 
fails: the buffer remains fully usable for AIO operations, and 
thread migration alone can bring performance benefits. So the 
error is not fatal, and v3 emits an explicit warning instead. 
If you still think it is better to propagate the error from 
here, that can be implemented.
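
For reference, a minimal sketch of the non-fatal handling at the
call site, assuming perf_mmap__aio_bind() is changed to return
int (0 on success) and using pr_warning() for the diagnostic:

	size_t mmap_len = perf_mmap__mmap_len(map);

	perf_mmap__aio_alloc(&(map->aio.data[i]), mmap_len);
	if (!map->aio.data[i]) {
		pr_debug2("failed to allocate data buffer area, error %m");
		return -1;
	}
	/*
	 * Node binding is an optimization only: on failure the
	 * buffer is still fully usable, so warn and continue
	 * unbound rather than failing the whole mmap.
	 */
	if (perf_mmap__aio_bind(map->aio.data[i], mmap_len,
				map->cpu, mp->affinity))
		pr_warning("failed to bind aio buffer, proceeding unbound\n");

Propagating instead would just mean returning -1 on a non-zero
result from perf_mmap__aio_bind(), as you suggest.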

Thanks,
Alexey

> 
> jirka
> 
