On Thu, Sep 8, 2016 at 8:59 AM, Dave Love <d.l...@liverpool.ac.uk> wrote:

> Brice Goglin <brice.gog...@inria.fr> writes:
>
> > Hello
> > It's not a feature. This should work fine.
> > Random guess: do you have NUMA headers on your build machine? (package
> > libnuma-dev or numactl-devel)
> > (hwloc-info --support also reports whether membinding is supported or not)
> > Brice
>
> Oops, you're right!  Thanks.  I thought what I'm using elsewhere was
> built from the same srpm, but the rpm on the KNL box doesn't actually
> require libnuma.  After a rebuild, it's OK and I'm suitably embarrassed.
>
> By the way, is it expected that binding will be slow on it?  hwloc-bind
> is ~10 times slower (~1s) than on a two-socket Sandy Bridge system, and ~3
> times slower than on a 128-core, 16-socket system.
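
For reference, the check behind hwloc-info --support can also be done from
code. A minimal sketch using hwloc_topology_get_support(), with error
handling omitted:

    /* Minimal sketch: programmatic equivalent of "hwloc-info --support"
     * for a few memory-binding features.  Build with: cc check_membind.c -lhwloc */
    #include <hwloc.h>
    #include <stdio.h>

    int main(void)
    {
        hwloc_topology_t topo;
        hwloc_topology_init(&topo);
        hwloc_topology_load(topo);

        const struct hwloc_topology_support *s = hwloc_topology_get_support(topo);
        printf("set_thisproc_membind: %d\n", s->membind->set_thisproc_membind);
        printf("set_area_membind:     %d\n", s->membind->set_area_membind);
        printf("alloc_membind:        %d\n", s->membind->alloc_membind);

        hwloc_topology_destroy(topo);
        return 0;
    }

On a build without the libnuma headers, those flags should all come back 0,
which matches the symptom above.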
>
Is this a bottleneck in any application?  Are there codes that bind memory
frequently?
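
To make the question concrete, "binding memory" here means calls along the
lines of hwloc_set_area_membind(). A minimal, hypothetical sketch (assuming
hwloc 1.11-or-later object names) that pins one buffer to the first NUMA node:

    /* Minimal sketch: bind a freshly allocated buffer to the first NUMA
     * node.  Assumes at least one NUMA node object is reported; error
     * handling is abbreviated.  Build with: cc bind_area.c -lhwloc */
    #include <hwloc.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        hwloc_topology_t topo;
        hwloc_topology_init(&topo);
        hwloc_topology_load(topo);

        hwloc_obj_t node = hwloc_get_obj_by_type(topo, HWLOC_OBJ_NUMANODE, 0);
        if (!node) {
            fprintf(stderr, "no NUMA node object found\n");
            return 1;
        }

        size_t len = 1 << 20;
        void *buf = malloc(len);

        /* Ask the OS to place these pages on that node's memory. */
        if (hwloc_set_area_membind(topo, buf, len, node->cpuset,
                                   HWLOC_MEMBIND_BIND, HWLOC_MEMBIND_STRICT) < 0)
            perror("hwloc_set_area_membind");

        free(buf);
        hwloc_topology_destroy(topo);
        return 0;
    }

A code that does this once per large allocation is unlikely to notice the
per-call cost; it would only matter for codes that rebind small regions
very frequently.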

Because most things inside the kernel are limited by single-threaded
performance, it is reasonable for them to be slower on KNL than on a Xeon
processor, but I haven't seen slowdowns that large.
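
One way to separate the costs is to time the binding call by itself, since an
hwloc-bind run also pays for topology discovery. A rough sketch, assuming the
usual POSIX clock interfaces:

    /* Rough sketch: average per-call cost of a CPU-binding call, leaving
     * out topology discovery.  Build with: cc time_bind.c -lhwloc */
    #include <hwloc.h>
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        hwloc_topology_t topo;
        hwloc_topology_init(&topo);
        hwloc_topology_load(topo);

        hwloc_obj_t pu = hwloc_get_obj_by_type(topo, HWLOC_OBJ_PU, 0);
        const int iters = 1000;

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < iters; i++)
            hwloc_set_cpubind(topo, pu->cpuset, HWLOC_CPUBIND_THREAD);
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double us = ((t1.tv_sec - t0.tv_sec) * 1e9
                     + (t1.tv_nsec - t0.tv_nsec)) / 1e3;
        printf("average per bind: %.2f us over %d calls\n", us / iters, iters);

        hwloc_topology_destroy(topo);
        return 0;
    }

Comparing that number across the KNL and Xeon machines would show how much of
the ~10x comes from the bind call itself versus the topology load.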

Jeff

-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/