Yes, they share L2 and L1i.
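
Since the two cores of a module share the L2, the L2-cache binding Gilles suggested should give exactly one rank per module, so no two ranks compete for an FPU. An untested sketch along those lines (the l2cache map and bind targets are documented in the 1.10 mpirun man page; ./your_app is just a placeholder for the real executable):

mpirun --map-by l2cache --bind-to l2cache --report-bindings ./your_app

--report-bindings makes mpirun print each rank's binding mask, which is an easy way to confirm that every rank ended up on its own module.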

Brice



On 30/08/2017 02:16, Gilles Gouaillardet wrote:
> Prentice,
>
> could you please run
> lstopo --of=xml
> and post the output ?
>
> a simple workaround could be to bind each task to two consecutive cores
> (assuming two consecutive cores share the same FPU; I will know for sure
> after I check the topology)
> that can be achieved with
> mpirun --map-by socket:span,PE=2 ...
>
> if two cores sharing the same FPU also happen to share the same L2
> cache, another workaround would be to bind to an L2 cache.
>
> Cheers,
>
> Gilles
>
> On Wed, Aug 30, 2017 at 6:00 AM, Prentice Bisbal <pbis...@pppl.gov> wrote:
>> I'd like to follow up to my own e-mail...
>>
>> After playing around with the --bind-to options, it seems there is no way to
>> do this with AMD CMT processors, since the two cores in a module are actual
>> physical cores, not hardware threads that appear as "logical cores" as with
>> Intel processors with hyperthreading. Which, in hindsight, makes perfect sense.
>>
>> In the BIOS, you can reduce the number of cores to match the number of
>> FPUs. On the SuperMicro systems I was testing on, the option is called
>> "Downcore" (or something like that) and I set it to a value of "compute unit".
>>
>> Prentice
>>
>> On 08/24/2017 03:11 PM, Prentice Bisbal wrote:
>>> OpenMPI Users,
>>>
>>> I am using AMD processors with CMT, where two cores constitute a module,
>>> and there is only one FPU per module, so each pair of cores has to share a
>>> single FPU.  I want to use only one core per module so there is no
>>> contention between cores in the same module for the single FPU. Is this
>>> possible from the command-line using mpirun with the correct binding
>>> specifications? If so, how would I do this?
>>>
>>> I am using OpenMPI 1.10.3. I read the man page regarding the bind-to-core
>>> options, and I'm not sure they will do exactly what I want, so I figured I'd
>>> ask the experts here.
>>>
