Well,
now it's clearer.
Thanks for the information!
Regards.
2011/8/4 Samuel Thibault
Gabriele Fatigati, Thu 04 Aug 2011 16:56:22 +0200, wrote:
> Are L#0 and L#1 physically near because hwloc considers the shared-cache map when
> building the topology?
Yes. That's the whole point of sorting objects topologically first, and
numbering them afterwards. See the glossary entry for "logical index".
Are L#0 and L#1 physically near because hwloc considers the shared-cache map
when building the topology? Because if not, I don't know how hwloc understands
the physical proximity of cores :(
2011/8/4 Samuel Thibault
Gabriele Fatigati, Thu 04 Aug 2011 16:35:36 +0200, wrote:
> so it is not true that physical OS indexes 0 and 1 are physically near on the die.
They quite often aren't. See the updated glossary of the documentation:
"The index that the operating system (OS) uses to identify the object.
This may be
Ok,
so it is not true that physical OS indexes 0 and 1 are physically near on the die.
Considering that, how can I use cache locality and cache sharing across cores if
I don't know where my threads will be physically bound?
If L#0 and L#1, where I bind my threads, are physically far apart, it may give me bad
Ok,
but I don't understand how lstopo works. Suppose the layout of my cores
(non-SMT) on the physical die is like this:
Socket:
 _________________
| *core* | *core* |
|________|________|
| *core* | *core* |
|________|________|
Gabriele Fatigati, Thu 04 Aug 2011 15:52:09 +0200, wrote:
> How is the topology given by lstopo built? In particular, how are the logical indexes
> P# initialized?
P# are not logical indexes, they are physical indexes, as displayed in
/proc/cpuinfo & such.
The logical indexes, L#, displayed
Hello,
Gabriele Fatigati, Mon 01 Aug 2011 12:32:44 +0200, wrote:
> So, they are not physically near. I expect that with Hyperthreading, and 2 hardware
> threads per core, PU P#0 and PU P#1 are on the same core.
Since these are P#0 and P#1 (physical indexes), they may indeed not be.
That's the
It's just a coincidence. Most modern machines (many of them NUMA)
have non-sequential numbers (to maximize memory bandwidth in the dumb
cases).
Brice
On 01/08/2011 15:29, Gabriele Fatigati wrote:
> Ok,
>
> now it's clearer. Just a little question. Why, in a NUMA machine,
> PU# are
You're confusing object types with index types.
PU is an object type, like Core, Socket, ... "logical processor" is a
generic name for cores when there's no SMT, hardware threads when
there's SMT/Hyperthreading, ... PU is basically the smallest thing that
can run a software thread.
"P#" is just
Hi Brice,
you said:
"PU P#0" means "PU object with physical index 0".
"P#" prefix means "physical index".
But from the hwloc manual, page 58:
HWLOC_OBJ_PU: Processing Unit, or (Logical) Processor...
but that is in conflict with what you said :(
2011/8/1 Brice Goglin
"PU P#0" means "PU object with physical index 0".
"P#" prefix means "physical index".
"L#" prefix means "logical index" (the one you want to use in
get_obj_by_type).
Use -l or -p to switch from one to the other in lstopo.
Brice
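As a minimal sketch of this distinction (assuming hwloc 1.x and a topology that loads successfully; this program is not from the thread and is untested here), one can print both indexes of every PU, matching what lstopo -l and lstopo -p display:

```c
/* Print the logical (L#) and physical/OS (P#) index of every PU. */
#include <stdio.h>
#include <hwloc.h>

int main(void)
{
    hwloc_topology_t topology;
    hwloc_topology_init(&topology);
    hwloc_topology_load(topology);

    unsigned n = hwloc_get_nbobjs_by_type(topology, HWLOC_OBJ_PU);
    for (unsigned i = 0; i < n; i++) {
        /* The third argument of hwloc_get_obj_by_type is the LOGICAL index. */
        hwloc_obj_t pu = hwloc_get_obj_by_type(topology, HWLOC_OBJ_PU, i);
        printf("PU L#%u -> P#%u\n", pu->logical_index, pu->os_index);
    }
    hwloc_topology_destroy(topology);
    return 0;
}
```

On a NUMA machine the P# column is typically non-sequential, while L# always follows the topological order.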
On 01/08/2011 14:47, Gabriele Fatigati wrote:
> Hi Brice,
Gabriele Fatigati, Mon 01 Aug 2011 14:48:11 +0200, wrote:
> so, if I understand correctly, PU P# numbers are not the same as those specified with the
> HWLOC_OBJ_PU flag?
They are, in the os_index (aka physical index) field.
Samuel
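A small sketch of going the other way, from an OS/physical index (as seen in /proc/cpuinfo) back to the hwloc object and its os_index field (assuming hwloc 1.x and its hwloc_get_pu_obj_by_os_index helper; untested here, and P#0 is just an illustrative value):

```c
/* Look up the PU object for physical index P#0 and read both of its indexes. */
#include <stdio.h>
#include <hwloc.h>

int main(void)
{
    hwloc_topology_t topology;
    hwloc_topology_init(&topology);
    hwloc_topology_load(topology);

    hwloc_obj_t pu = hwloc_get_pu_obj_by_os_index(topology, 0); /* P#0 */
    if (pu)
        printf("P#%u is logical L#%u\n", pu->os_index, pu->logical_index);
    hwloc_topology_destroy(topology);
    return 0;
}
```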
Hi Brice,
so, if I understand correctly, PU P# numbers are not the same as those specified with the
HWLOC_OBJ_PU flag?
2011/8/1 Brice Goglin
On 01/08/2011 12:16, Gabriele Fatigati wrote:
> Hi,
>
> reading the hwloc-v1.2-a4 manual, on page 15, I saw an example
> of a 4-socket 2-core machine with hyperthreading.
>
> Core ids are not unique, as said before. PU ids are unique but
> not physically sequential (I suppose)
>
> PU P#0
Sorry,
I forgot to tell you that this code block is inside a parallel OpenMP region.
This is the complete code:
#pragma omp parallel num_threads(6)
{
    int tid = omp_get_thread_num();
    hwloc_obj_t core = hwloc_get_obj_by_type(topology, HWLOC_OBJ_CORE, tid);
    hwloc_cpuset_t set = hwloc_bitmap_dup(core->cpuset);
    hwloc_bitmap_singlify(set);
    hwloc_set_cpubind(topology, set, HWLOC_CPUBIND_THREAD);
    hwloc_bitmap_free(set);
}
Hi Samuel,
thanks for your quick reply!
But I have a little doubt: on a non-SMT machine, is it better to use this:
hwloc_obj_t core = hwloc_get_obj_by_type(topology, HWLOC_OBJ_CORE, tid);
hwloc_cpuset_t set = hwloc_bitmap_dup(core->cpuset);
hwloc_bitmap_singlify(set);
hwloc_set_cpubind(topology, set, HWLOC_CPUBIND_THREAD);
Hello,
Gabriele Fatigati, Fri 29 Jul 2011 12:43:47 +0200, wrote:
> I'm so confused. I see pairs of cores with the same core id! (Core#8, for
> example.) How is that possible?
That's because they are on different sockets. These are physical IDs
(not logical IDs), and are thus not guaranteed
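A minimal sketch of observing this (assuming hwloc 1.x, where the socket object type is HWLOC_OBJ_SOCKET; untested here): iterate the cores and print which socket each belongs to, showing that Core P# values may repeat across sockets while L# values never do.

```c
/* Show that Core P# (os_index) can repeat across sockets,
 * while logical indexes L# are globally unique. */
#include <stdio.h>
#include <hwloc.h>

int main(void)
{
    hwloc_topology_t topology;
    hwloc_topology_init(&topology);
    hwloc_topology_load(topology);

    unsigned n = hwloc_get_nbobjs_by_type(topology, HWLOC_OBJ_CORE);
    for (unsigned i = 0; i < n; i++) {
        hwloc_obj_t core = hwloc_get_obj_by_type(topology, HWLOC_OBJ_CORE, i);
        hwloc_obj_t sock = hwloc_get_ancestor_obj_by_type(topology,
                                                          HWLOC_OBJ_SOCKET,
                                                          core);
        printf("Socket P#%u: Core L#%u P#%u\n",
               sock ? sock->os_index : 0,
               core->logical_index, core->os_index);
    }
    hwloc_topology_destroy(topology);
    return 0;
}
```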
Dear hwloc users,
I have some questions about thread core affinity managed by hwloc.
1) A simple hwloc-hello.c program from the manual gives me the following
results on my machine:
*** Objects at level 0
Index 0: Machine#0(47GB)
*** Objects at level 1
Index 0: NUMANode#0(24GB)
Index 1: