On Mon, Sep 02, 2019 at 10:57:10AM -0300, Arnaldo Carvalho de Melo wrote:
> On Mon, Sep 02, 2019 at 02:12:54PM +0200, Jiri Olsa wrote:
> > To speed up cpu to node lookups, add a perf_env__numa_node
> > function that creates a cpu array on the first lookup, holding
> > the numa node for each stored cpu.
> > 
> > Link: http://lkml.kernel.org/n/tip-qqwxklhissf3yjyuaszh6...@git.kernel.org
> > Signed-off-by: Jiri Olsa <jo...@kernel.org>
> > ---
> >  tools/perf/util/env.c | 35 +++++++++++++++++++++++++++++++++++
> >  tools/perf/util/env.h |  6 ++++++
> >  2 files changed, 41 insertions(+)
> > 
> > diff --git a/tools/perf/util/env.c b/tools/perf/util/env.c
> > index 3baca06786fb..6385961e45df 100644
> > --- a/tools/perf/util/env.c
> > +++ b/tools/perf/util/env.c
> > @@ -179,6 +179,7 @@ void perf_env__exit(struct perf_env *env)
> >  	zfree(&env->sibling_threads);
> >  	zfree(&env->pmu_mappings);
> >  	zfree(&env->cpu);
> > +	zfree(&env->numa_map);
> >  
> >  	for (i = 0; i < env->nr_numa_nodes; i++)
> >  		perf_cpu_map__put(env->numa_nodes[i].map);
> > @@ -338,3 +339,37 @@ const char *perf_env__arch(struct perf_env *env)
> >  
> >  	return normalize_arch(arch_name);
> >  }
> > +
> > +
> > +int perf_env__numa_node(struct perf_env *env, int cpu)
> > +{
> > +	if (!env->nr_numa_map) {
> > +		struct numa_node *nn;
> > +		int i, nr = 0;
> > +
> > +		for (i = 0; i < env->nr_numa_nodes; i++) {
> > +			nn = &env->numa_nodes[i];
> > +			nr = max(nr, perf_cpu_map__max(nn->map));
> > +		}
> > +
> > +		nr++;
> > +		env->numa_map = zalloc(nr * sizeof(int));
> 
> Why do you use zalloc()...
> 
> > +		if (!env->numa_map)
> > +			return -1;
> 
> Only to right after allocating it set all entries to -1?
> 
> That zalloc() should be downgraded to a plain malloc(), right?
> 
> The setting to -1 is because we may have holes in the array, right? I
> think this deserves a comment here as well.
yea, I added that later on and missed the zalloc above ;-)

I'll send new version

thanks,
jirka