Re: [OMPI users] One-sided communication, a missing/non-existing API call

2015-04-14 Thread Nick Papior Andersen
> On Tue, Apr 14, 2015 at 02:41:27PM -0400, Andy Riebs wrote: > Nick, > You may have more luck looking into the OSHMEM layer of Open MPI; SHMEM is designed for one-sided communications. > BR, > Andy

Re: [OMPI users] One-sided communication, a missing/non-existing API call

2015-04-14 Thread Nick Papior Andersen
Sorry, never mind. It seems it has been generalized (found on the wiki). Thanks for the help. 2015-04-14 20:50 GMT+02:00 Nick Papior Andersen <nickpap...@gmail.com>: > Thanks Andy! I will discontinue my hunt in openmpi then ;) > > Isn't SHMEM related only to shared-memory nodes?

[OMPI users] One-sided communication, a missing/non-existing API call

2015-04-14 Thread Nick Papior Andersen
Dear all, I am trying to implement some features using a one-sided communication scheme. The problem is that I understand the different one-sided communication schemes as this (basic words): MPI_Get) fetches remote window memory to a local memory space MPI_Get_Accumulate) 1. fetches remote
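A minimal sketch of the two calls being contrasted (not from the thread; buffer sizes, the target rank, and the 4-byte integer assumption are all illustrative):

```fortran
program onesided_sketch
  use mpi
  implicit none
  integer :: ierr, rank, win
  integer :: local(4), buf(4), res(4)
  integer(MPI_ADDRESS_KIND) :: winsize, disp

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)

  local = rank
  winsize = 4 * 4   ! 4 integers, assuming 4-byte default INTEGER
  call MPI_Win_create(local, winsize, 4, MPI_INFO_NULL, &
                      MPI_COMM_WORLD, win, ierr)

  call MPI_Win_fence(0, win, ierr)
  disp = 0
  ! MPI_Get: fetches remote window memory into a local buffer
  call MPI_Get(buf, 4, MPI_INTEGER, 0, disp, 4, MPI_INTEGER, win, ierr)
  ! MPI_Get_accumulate: 1. fetches the remote contents into res,
  ! 2. combines our contribution into the target with MPI_SUM, atomically
  call MPI_Get_accumulate(local, 4, MPI_INTEGER, res, 4, MPI_INTEGER, &
                          0, disp, 4, MPI_INTEGER, MPI_SUM, win, ierr)
  call MPI_Win_fence(0, win, ierr)

  call MPI_Win_free(win, ierr)
  call MPI_Finalize(ierr)
end program onesided_sketch
```

The fence pair is the simplest synchronization model; passive-target locking (MPI_Win_lock/unlock) is the other common choice.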

Re: [OMPI users] Configuration error with external hwloc

2015-03-18 Thread Nick Papior Andersen
As it says, check config.log for any error messages. I have not had any problems using an external hwloc on my Debian boxes. 2015-03-18 1:30 GMT+00:00 Peter Gottesman : > Hey all, > I am trying to compile Open MPI on a 32bit laptop running debian wheezy > 7.8.0. When I >

Re: [OMPI users] error building BLACS with openmpi 1.8.4 and intel 2015

2015-03-06 Thread Nick Papior Andersen
> ...library. If any of these required components is not available, then > the user must build the needed component before proceeding with the > ScaLAPACK installation." > > Thank you, > > On Fri, Mar 6, 2015 at 9:36 AM, Nick Papior Andersen <nickpap...@gmail.com> wr

Re: [OMPI users] error building BLACS with openmpi 1.8.4 and intel 2015

2015-03-06 Thread Nick Papior Andersen
Do you plan to use BLACS for anything other than ScaLAPACK? If not, I would highly recommend you simply compile ScaLAPACK 2.0.2, which ships with BLACS included :) 2015-03-06 15:31 GMT+01:00 Irena Johnson : > Hello, > > I am trying to build BLACS for openmpi-1.8.4 and
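A rough sketch of the recommended route (paths and BLAS/LAPACK locations are assumptions; since 2.0, BLACS is bundled in the ScaLAPACK tarball):

```shell
# Build ScaLAPACK 2.0.2 with the Open MPI wrappers; no separate BLACS needed.
tar xzf scalapack-2.0.2.tgz
cd scalapack-2.0.2
cp SLmake.inc.example SLmake.inc
# Edit SLmake.inc to point at your toolchain and libraries, e.g.:
#   FC        = mpif90
#   CC        = mpicc
#   BLASLIB   = -L/path/to/blas -lblas
#   LAPACKLIB = -L/path/to/lapack -llapack
make lib   # produces libscalapack.a (BLACS symbols included)
```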

Re: [OMPI users] configuring a code with MPI/OPENMPI

2015-02-03 Thread Nick Papior Andersen
> To: us...@open-mpi.org > Subject: Re: [OMPI users] configuring a code with MPI/OPENMPI > > I also concur with Jeff about asking software-specific questions at the > software site; abinit already has a pretty active forum: > http://forum.abinit.org/ > So any q

Re: [OMPI users] configuring a code with MPI/OPENMPI

2015-02-03 Thread Nick Papior Andersen
I also concur with Jeff about asking software-specific questions at the software site; abinit already has a pretty active forum: http://forum.abinit.org/ So any questions can also be directed there. 2015-02-03 19:20 GMT+00:00 Nick Papior Andersen <nickpap...@gmail.com>: > > > 2

Re: [OMPI users] configuring a code with MPI/OPENMPI

2015-02-03 Thread Nick Papior Andersen
2015-02-03 19:12 GMT+00:00 Elio Physics : > Hello, > > thanks for your help. I have tried: > > ./configure --with-mpi-prefix=/usr FC=ifort CC=icc > > but I still get the same error. Mind you, if I compile it serially, that > is, ./configure FC=ifort CC=icc > > it works

Re: [OMPI users] configuring a code with MPI/OPENMPI

2015-02-03 Thread Nick Papior Andersen
First, try to correct your compilation by using the Intel C compiler AND the Intel Fortran compiler; you should not mix compilers: CC=icc FC=ifort. Otherwise the config.log will be necessary to debug it further. PS: You could also try to convince your cluster administrator to provide a more recent
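A sketch of the matched-toolchain invocation (the `--with-mpi-prefix` flag is taken from the thread; `/usr` is the prefix the poster used and may differ on your system):

```shell
# Configure with a consistent Intel toolchain rather than mixing vendors.
./configure CC=icc CXX=icpc FC=ifort --with-mpi-prefix=/usr

# If configure still fails, the actual failure is recorded in config.log:
grep -i "error" config.log | head
```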

Re: [OMPI users] vector type

2015-02-01 Thread Nick Papior Andersen
Because the compiler does not know that you want to send the entire sub-matrix, passing non-contiguous arrays to a function is, at best, dangerous; do not do that unless you know the function can handle it. Use AA(1,1,2) and then it works (in principle you then pass the starting memory location
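A sketch of the distinction (array shape, count, and ranks are illustrative; the mpi module and a 2-rank run are assumed):

```fortran
! Passing AA(1,1,2) hands MPI the address of the first element of the
! third slab; with the F77-style interface MPI then reads `count`
! contiguous elements from there.
real :: AA(10,10,5)
integer :: ierr

! Risky: an array section may be passed via a compiler-generated
! temporary (copy-in/copy-out), which is fatal for nonblocking calls.
! call MPI_Send(AA(:,:,2), 100, MPI_REAL, 1, 0, MPI_COMM_WORLD, ierr)

! Safe for a contiguous slab: starting element plus an element count.
call MPI_Send(AA(1,1,2), 100, MPI_REAL, 1, 0, MPI_COMM_WORLD, ierr)
```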

Re: [OMPI users] [EXTERNAL] Re: How to find MPI ranks located in remote nodes?

2014-12-19 Thread Nick Papior Andersen
I have been following this with great interest; I will create a PR for my branch then. To be clear, I already did the OMPI change before this discussion came up, so this will be the one; however, the change to other naming schemes is easy. 2014-12-19 7:48 GMT+00:00 George Bosilca

Re: [OMPI users] [EXTERNAL] Re: How to find MPI ranks located in remote nodes?

2014-12-02 Thread Nick Papior Andersen
Just to drop in: I can and will provide whatever interface you want (if you want my contribution). However, to remedy my ignorance: 1. Adam Moody's method still requires a way to create a distinguished string per processor, i.e. the split is entirely done via the string/color, which then needs

Re: [OMPI users] Fwd: [EXTERNAL] Re: How to find MPI ranks located in remote nodes?

2014-11-27 Thread Nick Papior Andersen
> On Nov 27, 2014, at 8:12 AM, Nick Papior Andersen <nickpap...@gmail.com> > wrote: > > Sure, I will make the changes and commit to make them OMPI specific. > > I will forward my problems to the devel list. > > I will keep you p

Re: [OMPI users] Fwd: [EXTERNAL] Re: How to find MPI ranks located in remote nodes?

2014-11-27 Thread Nick Papior Andersen
Sure, I will make the changes and commit to make them OMPI specific. I will forward my problems to the devel list. I will keep you posted. :) 2014-11-27 13:58 GMT+01:00 Jeff Squyres (jsquyres) <jsquy...@cisco.com>: > On Nov 26, 2014, at 2:08 PM, Nick Papior Andersen <nickpap.

[OMPI users] Fwd: [EXTERNAL] Re: How to find MPI ranks located in remote nodes?

2014-11-26 Thread Nick Papior Andersen
Dear Ralph (all ;)) Regarding these posts, and since you added it to your todo list: I wanted to do something similar and implemented a "quick fix". I wanted to create a communicator per node, and then create a window to allocate an array in shared memory; however, I came up short in the

Re: [OMPI users] SENDRECV + MPI_TYPE_CREATE_STRUCT

2014-10-07 Thread Nick Papior Andersen
You should redo it in terms of George's suggestion; that way you also circumvent the "manual" alignment of data. George's method is the best generic way of doing it. As for the -r8 thing, just do not use it :) And check the interface of the routines used to see why MPIstatus is used.

Re: [OMPI users] SENDRECV + MPI_TYPE_CREATE_STRUCT

2014-10-03 Thread Nick Papior Andersen
>> ...0.2_dp will "guess" the last digits; not exactly, but you get the point. >>> Am I right? >>> What do you suggest as the next step? >> ??? The example I sent you worked perfectly. >> Good luck! >>> I coul

Re: [OMPI users] SENDRECV + MPI_TYPE_CREATE_STRUCT

2014-10-03 Thread Nick Papior Andersen
> ...you get the point. > Am I right? > What do you suggest as the next step? ??? The example I sent you worked perfectly. Good luck! > I could create a type variable and try to send it from one processor to > another with MPI_SEND and MPI_RECV? > Again, thanks > Diego

Re: [OMPI users] SENDRECV + MPI_TYPE_CREATE_STRUCT

2014-10-03 Thread Nick Papior Andersen
Dear Diego, instead of immediately resorting to cartesian communicators, you should try to create a small test case, something like this: I have successfully run this small snippet on my machine. As I state in the source, the culprit was the integer address size. It is inherently of type

Re: [OMPI users] SENDRECV + MPI_TYPE_CREATE_STRUCT

2014-10-03 Thread Nick Papior Andersen
...be the definite, absolute last thing. Not second to last; really _the_ last thing. :) I hope I made my point clear; if not, I am at a loss... :) > On 3 October 2014 17:03, Nick Papior Andersen <nickpap...@gmail.com> > wrote: >> selected_real_kind

Re: [OMPI users] SENDRECV + MPI_TYPE_CREATE_STRUCT

2014-10-03 Thread Nick Papior Andersen
Might I chip in and ask why, in the name of Fortran, you are using -r8? It seems like you do not really need it; it is more a convenience flag for you (so that you have to type less?). Again, as I stated in my previous mail, I would never do that (and would discourage its use for almost
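The alternative being advocated, declaring precision in the source instead of via a compiler flag, looks roughly like this (a minimal sketch; the kind-parameter name `dp` is the usual convention, not anything from the thread):

```fortran
program explicit_kinds
  implicit none
  ! At least 15 decimal digits of precision, i.e. double precision
  ! on common platforms; no -r8 flag required.
  integer, parameter :: dp = selected_real_kind(p=15)
  real(dp) :: x

  x = 0.2_dp          ! the literal carries the kind explicitly
  print *, x, kind(x)
end program explicit_kinds
```

The point of the thread: with -r8 the meaning of `real` and of literals like `0.2` silently changes with the build flags, which then has to be kept consistent with the MPI datatypes (MPI_REAL vs MPI_DOUBLE_PRECISION) by hand.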

Re: [OMPI users] SENDRECV + MPI_TYPE_CREATE_STRUCT

2014-10-03 Thread Nick Papior Andersen
> ...will now abort, > [diedroLap:16824] *** and potentially your MPI job) > > Do you know something about these errors? > > Thanks again > > Diego > > On 3 October 2014 15:29, Nick Papior Andersen <nickpap...@gmail.com> > wrote: > >> Yes, I guess this is correct. Te

Re: [OMPI users] SENDRECV + MPI_TYPE_CREATE_STRUCT

2014-10-03 Thread Nick Papior Andersen
> ...4 bytes, and then I have > TYPES(1)=MPI_INTEGER > TYPES(2)=MPI_DOUBLE_PRECISION > TYPES(3)=MPI_DOUBLE_PRECISION > nBLOCKS(1)=2 > nBLOCKS(2)=2 > nBLOCKS(3)=4 > > Am I wrong? Have I understood correctly? > Really, really thanks > > Diego > > On 3 Oc

Re: [OMPI users] SENDRECV + MPI_TYPE_CREATE_STRUCT

2014-10-03 Thread Nick Papior Andersen
> ...sizeof(dummy%ip)+sizeof(dummy%fake)+sizeof(dummy%RP(1))+sizeof(dummy%RP(2)) > > CALL > MPI_TYPE_CREATE_STRUCT(3,nBLOCKS,DISPLACEMENTS,TYPES,MPI_PARTICLE_TYPE,MPI%ierr) > > This is how I compile: > > mpif90 -r8 *.f90 > No, that is not what you said! You said you compiled it u

Re: [OMPI users] SENDRECV + MPI_TYPE_CREATE_STRUCT

2014-10-03 Thread Nick Papior Andersen
> [diedroLap:12267] *** MPI_ERR_OTHER: known error not in list > [diedroLap:12267] *** MPI_ERRORS_ARE_FATAL (processes in this communicator > will now abort, > [diedroLap:12267] *** and potentially your MPI job) > > What I can

Re: [OMPI users] SENDRECV + MPI_TYPE_CREATE_STRUCT

2014-10-03 Thread Nick Papior Andersen
If misalignment is the case, then adding "sequence" to the derived type might help. So:

type :: ...
   sequence
   integer :: ...
   real :: ...
end type

Note that you cannot use sequence on types with allocatables and pointers, for obvious reasons. 2014-10-03 0:39 GMT+00:00 Kawashima, Takahiro
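A fuller sketch of the pattern the thread converges on: a sequence type whose MPI description is built from MPI_Get_address displacements rather than hand-computed sizeof sums (the type name `particle` and its fields are illustrative, loosely modeled on the quoted code):

```fortran
! Assumes `use mpi` in scope.
type :: particle
  sequence
  integer :: ip(2)
  real(8) :: rp(4)
end type particle

type(particle) :: dummy
integer :: newtype, ierr
integer :: blocks(2), types(2)
integer(MPI_ADDRESS_KIND) :: disps(2), base

! Let MPI measure where the fields actually live, padding included.
call MPI_Get_address(dummy%ip, disps(1), ierr)
call MPI_Get_address(dummy%rp, disps(2), ierr)
base  = disps(1)
disps = disps - base          ! displacements relative to the struct start

blocks = (/ 2, 4 /)
types  = (/ MPI_INTEGER, MPI_DOUBLE_PRECISION /)
call MPI_Type_create_struct(2, blocks, disps, types, newtype, ierr)
call MPI_Type_commit(newtype, ierr)
```

The displacements must be of kind MPI_ADDRESS_KIND, which was the "integer address size" culprit mentioned earlier in the thread.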

Re: [OMPI users] About debugging and asynchronous communication

2014-09-19 Thread Nick Papior Andersen
>>> ...process would receive a wrong length (say 170 instead of 445) and the >>> process exits abnormally. Anyone have a similar experience? >>> >>> On Thu, Sep 18, 2014 at 10:07 PM, XingFENG <xingf...@cse.unsw.edu.au> wrote:

Re: [OMPI users] About debugging and asynchronous communication

2014-09-18 Thread Nick Papior Andersen
> ...a similar experience? > > On Thu, Sep 18, 2014 at 10:07 PM, XingFENG <xingf...@cse.unsw.edu.au> > wrote: > >> Thank you for your reply! I am still working on my code. I will update >> the post when I fix the bugs. >> >> On Thu, Sep 18, 2014 at 9:48 PM, Nick Papior Ande

Re: [OMPI users] About debugging and asynchronous communication

2014-09-18 Thread Nick Papior Andersen
2014-09-18 13:39 GMT+02:00 Tobias Kloeffel <tobias.kloef...@fau.de>: > ok, I have to wait until tomorrow; they have some problems with the > network... > > On 09/18/2014 01:27 PM, Nick Papior Andersen wrote: > I am not sure whether test will cover this... You

Re: [OMPI users] About debugging and asynchronous communication

2014-09-18 Thread Nick Papior Andersen
> ...ions to process these messages are called. > > I will add the wait function and check the running results. > > On Thu, Sep 18, 2014 at 8:47 PM, Nick Papior Andersen < > nickpap...@gmail.com> wrote: > >> In complement to Jeff, I would add that using asynchronous messa

Re: [OMPI users] About debugging and asynchronous communication

2014-09-18 Thread Nick Papior Andersen
In complement to Jeff, I would add that using asynchronous messages REQUIRES that you wait (MPI_Wait) on all messages at some point. Even though this might not seem obvious, it is due to memory allocated "behind the scenes", which is only de-allocated upon completion through a wait statement.
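A minimal sketch of the rule (buffer size, tag, and destination rank are illustrative; the mpi module and at least two ranks are assumed):

```fortran
! Every nonblocking call returns a request that MUST eventually be
! completed; the wait is also what lets MPI free its internal resources.
integer :: req, ierr
integer :: buf(100)
integer :: status(MPI_STATUS_SIZE)

call MPI_Isend(buf, 100, MPI_INTEGER, 1, 0, MPI_COMM_WORLD, req, ierr)

! ... overlap useful computation here; buf must not be modified yet ...

call MPI_Wait(req, status, ierr)   ! completes the send, releases the request
```

For many outstanding requests, MPI_Waitall on an array of requests serves the same purpose.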

Re: [OMPI users] removed maffinity, paffinity in 1.7+

2014-09-15 Thread Nick Papior Andersen
> ...(data source: default, level: 9 dev/all, type: string) > Comma-separated list of ranges specifying logical cpus allocated to this job [default: none] > MCA hwloc: parameter "hwloc_base

[OMPI users] removed maffinity, paffinity in 1.7+

2014-09-15 Thread Nick Papior Andersen
Dear all, the maffinity and paffinity parameters have been removed since 1.7. For the uninitiated: is this because they have been absorbed into the code, so that it now decides these values automatically? For instance, I have always been using paffinity_alone=1 for single-node jobs with entire
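For context, a sketch of the hwloc-based binding options that replaced the old knobs in the 1.7 series (the process count and executable name are placeholders):

```shell
# Roughly the modern equivalent of paffinity_alone=1 on a dedicated node:
mpirun --bind-to core --map-by core -np 8 ./a.out

# Inspect the hwloc-related MCA parameters that now control this:
ompi_info --param hwloc all
```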

Re: [hwloc-users] hwloc 1.9 and openmpi using intel compiler

2014-07-13 Thread Nick Papior Andersen
> ...8.patch > Brice > > Le 09/07/2014 23:42, Nick Papior Andersen wrote: > Dear Brice > 2014-07-09 21:34 GMT+00:00 Brice Goglin <brice.gog...@inria.fr>: >> Le 09/07/2014 23:30, Nick Papior Andersen wrote: >> Dear Brice

Re: [hwloc-users] hwloc 1.9 and openmpi using intel compiler

2014-07-09 Thread Nick Papior Andersen
Dear Brice, 2014-07-09 21:34 GMT+00:00 Brice Goglin <brice.gog...@inria.fr>: > Le 09/07/2014 23:30, Nick Papior Andersen wrote: > Dear Brice > Here are my findings (apologies for not doing make check beforehand!) > 2014-07-09 20:42 GMT+00:00 Brice Go

Re: [hwloc-users] hwloc 1.9 and openmpi using intel compiler

2014-07-09 Thread Nick Papior Andersen
> ...attached my config.log if that is of any interest? > > thanks > Brice > I tested the same things you mentioned here for the 1.8.1 version; in that case it only fails these: FAIL: test-hwloc-annotate.sh FAIL: test-hwloc-calc.sh I have not attached anything for the 1.8.1 version. Say the wo

[hwloc-users] hwloc 1.9 and openmpi using intel compiler

2014-07-09 Thread Nick Papior Andersen
Dear users, I think this is some kind of bug, but I would like to post here to hear whether that is true. I have only tested this using the Fortran compiler and the Fortran version of openmpi/hwloc. My setup: intel compiler: composer_xe_2013.3.163; ifort --version: 13.1.1 20130313. I am compiling