Re: [OMPI users] [OMPI devel] Change compiler

2016-07-18 Thread Jeff Squyres (jsquyres)
On Jul 18, 2016, at 4:06 PM, Emani, Murali wrote:
> 
> I would like to know if there is Clang support for the Open MPI codebase.
> 
> I am trying to change the underlying compiler from gcc to clang for 
> ‘configure’ and ‘make all install’. I changed these values in the Makefile in 
> the root dir and in another one in the config directory, but the steps during 
> ‘configure’ still reflect gcc instead of clang. Is this the right way, or am I 
> missing something here?

(you don't need to mail to both devel and users -- I'm replying to just the 
users list)

You can set which compiler to use via the configure command line:

$ ./configure CC=clang CXX=clang++ ...
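
For example, a complete build along those lines (the install prefix here is 
just illustrative) would be:

$ ./configure CC=clang CXX=clang++ --prefix=$HOME/openmpi-clang
$ make all install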

> Is the wrapper compiler environment variable ‘OMPI_CC’ intended to replace 
> the underlying compiler when compiling an MPI application?

Setting the compiler via the configure command line will propagate your choice 
of compiler throughout all of Open MPI, including the wrapper compilers.
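
OMPI_CC, by contrast, swaps the underlying compiler only at application-compile 
time. For example, with a hypothetical hello.c:

$ OMPI_CC=clang mpicc hello.c -o hello

Note that the wrapper still passes the flags that were generated for the 
configure-time compiler, so mixing very different compilers this way may fail. 
You can check what the wrapper will actually invoke with mpicc --showme.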

-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to: 
http://www.cisco.com/web/about/doing_business/legal/cri/



[OMPI users] Change compiler

2016-07-18 Thread Emani, Murali
Hi all,

I would like to know if there is Clang support for the Open MPI codebase.

I am trying to change the underlying compiler from gcc to clang for ‘configure’ 
and ‘make all install’. I changed these values in the Makefile in the root dir 
and in another one in the config directory, but the steps during ‘configure’ 
still reflect gcc instead of clang. Is this the right way, or am I missing 
something here?

Is the wrapper compiler environment variable ‘OMPI_CC’ intended to replace the 
underlying compiler when compiling an MPI application?


—
Murali



Re: [OMPI users] Affinity settings for hyperthreading

2016-07-18 Thread Ralph Castain
Yes, sadly the terminology is badly overloaded at this stage :-(

> On Jul 18, 2016, at 9:20 AM, John Hearns wrote:
> 
> Thank you Ralph. I guess the information I did not have in my head was that
> core = physical core (not hyperthreaded core).
> 
> On 18 July 2016 at 14:45, Ralph Castain wrote:
> It sounds like you just want to bind procs to cores, since each core is 
> composed of 2 HTs. So a simple “--map-by core --bind-to core” should do the 
> trick.
> 
> FWIW: the affinity settings are controlled by the --bind-to <object> option. You 
> can use “mpirun -h” to get the list of supported options and a little 
> explanation:
> 
> --bind-to <object>
> Bind processes to the specified object; defaults to core. Supported options 
> include slot, hwthread, core, l1cache, l2cache, l3cache, socket, numa, board, 
> and none.
> 
> https://www.open-mpi.org/doc/current/man1/mpirun.1.php#sect9
> 
>> On Jul 17, 2016, at 11:25 PM, John Hearns wrote:
>> 
>> Please can someone point me towards the affinity settings for:
>> Open MPI 1.10 used with Slurm version 15
>> 
>> I have some nodes with 2630-v4 processors,
>> so 10 cores per socket / 20 hyperthreads.
>> Hyperthreading is enabled.
>> I would like to set affinity for 20 processes per node,
>> so that the processes are pinned to every second HT core, i.e. one process 
>> per physical core.
>> 
>> I'm sure this is quite easy...
>> 
>> Thank you



Re: [OMPI users] Affinity settings for hyperthreading

2016-07-18 Thread John Hearns
Thank you Ralph. I guess the information I did not have in my head was
that core = physical core (not hyperthreaded core).

On 18 July 2016 at 14:45, Ralph Castain wrote:

> It sounds like you just want to bind procs to cores, since each core is
> composed of 2 HTs. So a simple “--map-by core --bind-to core” should do the
> trick.
>
> FWIW: the affinity settings are controlled by the --bind-to <object> option.
> You can use “mpirun -h” to get the list of supported options and a little
> explanation:
>
> --bind-to <object>
> Bind processes to the specified object; defaults to core. Supported options
> include slot, hwthread, core, l1cache, l2cache, l3cache, socket, numa, board,
> and none.
>
> https://www.open-mpi.org/doc/current/man1/mpirun.1.php#sect9
>
> On Jul 17, 2016, at 11:25 PM, John Hearns wrote:
>
> Please can someone point me towards the affinity settings for:
> Open MPI 1.10 used with Slurm version 15
>
> I have some nodes with 2630-v4 processors,
> so 10 cores per socket / 20 hyperthreads.
> Hyperthreading is enabled.
> I would like to set affinity for 20 processes per node,
> so that the processes are pinned to every second HT core, i.e. one process
> per physical core.
>
> I'm sure this is quite easy...
>
> Thank you


Re: [OMPI users] Affinity settings for hyperthreading

2016-07-18 Thread Ralph Castain
It sounds like you just want to bind procs to cores, since each core is composed 
of 2 HTs. So a simple “--map-by core --bind-to core” should do the trick.

FWIW: the affinity settings are controlled by the --bind-to <object> option. You can 
use “mpirun -h” to get the list of supported options and a little explanation:

--bind-to <object>
Bind processes to the specified object; defaults to core. Supported options 
include slot, hwthread, core, l1cache, l2cache, l3cache, socket, numa, board, 
and none.

https://www.open-mpi.org/doc/current/man1/mpirun.1.php#sect9

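Concretely, for the setup described in the quoted message below (20 processes 
per node, one per physical core), something along these lines should work; the 
executable name is illustrative, and --report-bindings makes mpirun print the 
resulting bindings so you can verify:

$ mpirun -np 20 --map-by core --bind-to core --report-bindings ./my_app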

> On Jul 17, 2016, at 11:25 PM, John Hearns wrote:
> 
> Please can someone point me towards the affinity settings for:
> Open MPI 1.10 used with Slurm version 15
> 
> I have some nodes with 2630-v4 processors,
> so 10 cores per socket / 20 hyperthreads.
> Hyperthreading is enabled.
> I would like to set affinity for 20 processes per node,
> so that the processes are pinned to every second HT core, i.e. one process per 
> physical core.
> 
> I'm sure this is quite easy...
> 
> Thank you



Re: [OMPI users] openmpi 1.10.2 and PGI 15.9

2016-07-18 Thread Thomas Jahns

Hello,

On 07/11/2016 02:54 PM, Michael Di Domenico wrote:

pgcc-Error-Unknown switch: -pthread


You want to use -noswitcherror in CFLAGS to work around the problem, or edit the 
.la files as suggested. Alternatively, you can use a wrapper script that strips 
the -pthread option.
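
A minimal sketch of such a wrapper (the file name pgcc-nopthread is 
illustrative; it forwards everything except -pthread to pgcc):

#!/bin/sh
# pgcc-nopthread: invoke pgcc with any -pthread argument filtered out
for a in "$@"; do
  shift
  [ "$a" = "-pthread" ] || set -- "$@" "$a"
done
exec pgcc "$@"

Make it executable and point configure at it, e.g. 
./configure CC=$PWD/pgcc-nopthread ...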


Regards, Thomas
--
Thomas Jahns
HD(CP)^2
Abteilung Anwendungssoftware

Deutsches Klimarechenzentrum GmbH
Bundesstraße 45a • D-20146 Hamburg • Germany

Phone:  +49 40 460094-151
Fax:    +49 40 460094-270
Email:  Thomas Jahns 
URL:www.dkrz.de

Geschäftsführer: Prof. Dr. Thomas Ludwig
Sitz der Gesellschaft: Hamburg
Amtsgericht Hamburg HRB 39784





[OMPI users] Affinity settings for hyperthreading

2016-07-18 Thread John Hearns
Please can someone point me towards the affinity settings for:
Open MPI 1.10 used with Slurm version 15

I have some nodes with 2630-v4 processors,
so 10 cores per socket / 20 hyperthreads.
Hyperthreading is enabled.
I would like to set affinity for 20 processes per node,
so that the processes are pinned to every second HT core, i.e. one process
per physical core.

I'm sure this is quite easy...

Thank you