Re: [OMPI users] Problems compiling 1.2.4 using Intel Compiler 10.1.006 on Leopard

2007-12-11 Thread Warner Yuen
Has anyone gotten Open MPI 1.2.4 to compile with the latest Intel
compilers (10.1.007) on Leopard? I can get Open MPI 1.2.4 to build
with GCC for C/C++ plus IFORT 10.1.007 for Fortran, but I can't get
any configuration to work with Intel's 10.1.007 compilers.


The configuration completes, but the compilation fails fairly early (the tail of the failing make output is below).

My compiler settings are as follows:

For GCC + IFORT (This one works):
export CC=/usr/bin/cc
export CXX=/usr/bin/c++
export FC=/usr/bin/ifort
export F90=/usr/bin/ifort
export F77=/usr/bin/ifort


For using all Intel compilers (configure works, but the compilation fails):


export CC=/usr/bin/icc
export CXX=/usr/bin/icpc
export FC=/usr/bin/ifort
export F90=/usr/bin/ifort
export F77=/usr/bin/ifort
export FFLAGS=-no-multibyte-chars
export CFLAGS=-no-multibyte-chars
export CXXFLAGS=-no-multibyte-chars
export CCASFLAGS=-no-multibyte-chars

_defined,suppress -o libasm.la  asm.lo atomic-asm.lo  -lutil
libtool: link: ar cru .libs/libasm.a .libs/asm.o .libs/atomic-asm.o
ar: .libs/atomic-asm.o: No such file or directory
make[2]: *** [libasm.la] Error 1
make[1]: *** [all-recursive] Error 1
make: *** [all-recursive] Error 1





Warner Yuen
Scientific Computing Consultant
Apple Computer
email: wy...@apple.com
Tel: 408.718.2859
Fax: 408.715.0133


On Nov 22, 2007, at 2:26 AM, users-requ...@open-mpi.org wrote:

--

Date: Wed, 21 Nov 2007 15:15:00 -0500
From: Mark Dobossy
Subject: Re: [OMPI users] Problems compiling 1.2.4 using Intel
Compiler 10.1.006 on Leopard
To: Open MPI Users

Thanks for the suggestion Jeff.

Unfortunately, that didn't fix the issue.

-Mark

On Nov 21, 2007, at 7:55 AM, Jeff Squyres wrote:


Can you try also adding CCASFLAGS=-no-multibyte-chars?







Re: [OMPI users] Problems with GATHERV on one process

2007-12-11 Thread Tim Mattox
Hello Ken,
This is a known bug, which is fixed in the upcoming 1.2.5 release. We
expect 1.2.5 to come out very soon, and should have a new release
candidate posted by tomorrow.

See these tickets about the bug if you care to look:
https://svn.open-mpi.org/trac/ompi/ticket/1166
https://svn.open-mpi.org/trac/ompi/ticket/1157

On Dec 11, 2007 2:48 PM, Moreland, Kenneth  wrote:
> I recently ran into a problem with GATHERV while running some randomized
> tests on my MPI code.  The problem seems to occur when running
> MPI_Gatherv with a displacement on a communicator with a single process.
> The code listed below exercises this errant behavior.  I have tried it
> on OpenMPI 1.1.2 and 1.2.4.
>
> Granted, this is not a situation that one would normally run into in a
> real application, but I just wanted to check to make sure I was not
> doing anything wrong.
>
> -Ken
>
>
>
> #include <mpi.h>
>
> #include <stdio.h>
> #include <stdlib.h>
>
> int main(int argc, char **argv)
> {
>   int rank;
>   MPI_Comm smallComm;
>   int senddata[4], recvdata[4], length, offset;
>
>   MPI_Init(&argc, &argv);
>
>   MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>
>   // Split up into communicators of size 1.
>   MPI_Comm_split(MPI_COMM_WORLD, rank, 0, &smallComm);
>
>   // Now try to do a gatherv.
>   senddata[0] = 5; senddata[1] = 6; senddata[2] = 7; senddata[3] = 8;
>   recvdata[0] = 0; recvdata[1] = 0; recvdata[2] = 0; recvdata[3] = 0;
>   length = 3;
>   offset = 1;
>   MPI_Gatherv(senddata, length, MPI_INT,
>   recvdata, &length, &offset, MPI_INT, 0, smallComm);
>   if (senddata[0] != recvdata[offset])
> {
> printf("%d: %d != %d?\n", rank, senddata[0], recvdata[offset]);
> }
>   else
> {
> printf("%d: Everything OK.\n", rank);
> }
>
>   MPI_Finalize();
>   return 0;
> }
>
> Kenneth Moreland
> Sandia National Laboratories
> email: kmo...@sandia.gov
> phone: (505) 844-8919
> fax:   (505) 845-0833
>
>
>

-- 
Tim Mattox, Ph.D. - http://homepage.mac.com/tmattox/
 tmat...@gmail.com || timat...@open-mpi.org
I'm a bright... http://www.the-brights.net/


Re: [OMPI users] error with Vprotocol pessimist

2007-12-11 Thread Aurelien Bouteiller
I cannot reproduce the error. Please make sure you have the
lib/openmpi/mca_pml_v.so file in your build. If you don't, maybe you
forgot to run autogen.sh at the root of the trunk when you removed
.ompi_ignore.


If this does not fix the problem, please let me know your command line  
options to mpirun.


Aurelien

On Dec 11, 2007, at 2:36 PM, Aurelien Bouteiller wrote:


Mmm, I'll investigate this today.

Aurelien
On Dec 11, 2007, at 8:46 AM, Thomas Ropars wrote:


Hi,

I've tried to test the message logging component vprotocol pessimist
(svn checkout revision 16926).
When I run an MPI application, I get the following error:

mca: base: component_find: unable to open vprotocol pessimist:
/local/openmpi/lib/openmpi/mca_vprotocol_pessimist.so: undefined
symbol: pml_v_output (ignored)


Regards

Thomas




--
Dr. Aurelien Bouteiller, Sr. Research Associate
Innovative Computing Laboratory - MPI group
+1 865 974 6321
1122 Volunteer Boulevard
Claxton Education Building Suite 350
Knoxville, TN 37996







[OMPI users] Problems with GATHERV on one process

2007-12-11 Thread Moreland, Kenneth
I recently ran into a problem with GATHERV while running some randomized
tests on my MPI code.  The problem seems to occur when running
MPI_Gatherv with a displacement on a communicator with a single process.
The code listed below exercises this errant behavior.  I have tried it
on OpenMPI 1.1.2 and 1.2.4.

Granted, this is not a situation that one would normally run into in a
real application, but I just wanted to check to make sure I was not
doing anything wrong.

-Ken



#include <mpi.h>

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
  int rank;
  MPI_Comm smallComm;
  int senddata[4], recvdata[4], length, offset;

  MPI_Init(&argc, &argv);

  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  // Split up into communicators of size 1.
  MPI_Comm_split(MPI_COMM_WORLD, rank, 0, &smallComm);

  // Now try to do a gatherv.
  senddata[0] = 5; senddata[1] = 6; senddata[2] = 7; senddata[3] = 8;
  recvdata[0] = 0; recvdata[1] = 0; recvdata[2] = 0; recvdata[3] = 0;
  length = 3;
  offset = 1;
  MPI_Gatherv(senddata, length, MPI_INT,
  recvdata, &length, &offset, MPI_INT, 0, smallComm);
  if (senddata[0] != recvdata[offset])
{
printf("%d: %d != %d?\n", rank, senddata[0], recvdata[offset]);
}
  else
{
printf("%d: Everything OK.\n", rank);
}

  MPI_Finalize();
  return 0;
}

Kenneth Moreland
Sandia National Laboratories
email: kmo...@sandia.gov
phone: (505) 844-8919
fax:   (505) 845-0833





Re: [OMPI users] error with Vprotocol pessimist

2007-12-11 Thread Aurelien Bouteiller

Mmm, I'll investigate this today.

Aurelien
On Dec 11, 2007, at 8:46 AM, Thomas Ropars wrote:


Hi,

I've tried to test the message logging component vprotocol pessimist
(svn checkout revision 16926).
When I run an MPI application, I get the following error:

mca: base: component_find: unable to open vprotocol pessimist:
/local/openmpi/lib/openmpi/mca_vprotocol_pessimist.so: undefined
symbol: pml_v_output (ignored)


Regards

Thomas




--
Dr. Aurelien Bouteiller, Sr. Research Associate
Innovative Computing Laboratory - MPI group
+1 865 974 6321
1122 Volunteer Boulevard
Claxton Education Building Suite 350
Knoxville, TN 37996




Re: [OMPI users] Does MPI_Bsend always use the buffer?

2007-12-11 Thread George Bosilca


On Dec 11, 2007, at 10:33 AM, Gleb Natapov wrote:

> On Tue, Dec 11, 2007 at 10:27:32AM -0500, Bradley, Peter C. (MIS/CFD) wrote:
> > In OpenMPI, does MPI_Bsend always copy the message to the user-specified
> > buffer, or will it avoid the copy in situations where it knows the send
> > can complete?
>
> If the message size is smaller than the eager limit, Open MPI will not use
> the user-specified buffer for it.

This is an implementation detail. You should avoid relying on such
things in a portable MPI application. The safe assumption is that
MPI_Bsend always copies the buffer, as described in the MPI standard.


  george.





Re: [OMPI users] Does MPI_Bsend always use the buffer?

2007-12-11 Thread Gleb Natapov
On Tue, Dec 11, 2007 at 10:27:32AM -0500, Bradley, Peter C. (MIS/CFD) wrote:
> In OpenMPI, does MPI_Bsend always copy the message to the user-specified
> buffer, or will it avoid the copy in situations where it knows the send can
> complete?
If the message size is smaller than the eager limit, Open MPI will not use
the user-specified buffer for it.

--
Gleb.


[OMPI users] Does MPI_Bsend always use the buffer?

2007-12-11 Thread Bradley, Peter C. (MIS/CFD)
In OpenMPI, does MPI_Bsend always copy the message to the user-specified
buffer, or will it avoid the copy in situations where it knows the send can
complete?

I recognize bsend is generally to be avoided, but I have a need to emulate
an in-house message-passing library that guarantees that writes won't block.

Pete
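
For reference, the guarantee Pete is after comes from MPI_Buffer_attach:
once a buffer large enough for all outstanding buffered sends (plus
MPI_BSEND_OVERHEAD per message) is attached, MPI_Bsend must return without
blocking, whatever the eager limit. A minimal sketch of that pattern
(hypothetical message sizes; run with at least two ranks):

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char **argv)
{
  int rank, data[4] = {5, 6, 7, 8};

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  /* Attach a buffer big enough for one message plus the per-message
   * bookkeeping overhead required by the standard. */
  int bufsize = sizeof(data) + MPI_BSEND_OVERHEAD;
  void *buf = malloc(bufsize);
  MPI_Buffer_attach(buf, bufsize);

  if (rank == 0) {
    /* Guaranteed not to block: conceptually, the message is copied
     * into the attached buffer and transmitted later. */
    MPI_Bsend(data, 4, MPI_INT, 1, 0, MPI_COMM_WORLD);
  } else if (rank == 1) {
    MPI_Recv(data, 4, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    printf("%d: received %d %d %d %d\n", rank,
           data[0], data[1], data[2], data[3]);
  }

  /* Detach blocks until all buffered messages have been delivered. */
  MPI_Buffer_detach(&buf, &bufsize);
  free(buf);
  MPI_Finalize();
  return 0;
}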


Re: [OMPI users] Open MPI 1.2.4 verbosity w.r.t. osc pt2pt

2007-12-11 Thread Lisandro Dalcin
On 12/10/07, Jeff Squyres  wrote:
> Brian / Lisandro --
> I don't think that I heard back from you on this issue.  Would you
> have major heartburn if I remove all linking of our components against
> libmpi (etc.)?
>
> (for a nicely-formatted refresher of the issues, check out 
> https://svn.open-mpi.org/trac/ompi/wiki/Linkers)

Sorry for the late response...

I've finally 'solved' this issue by using RTLD_GLOBAL when loading the
Python extension module that actually calls MPI_Init(). However, I'm
not completely sure that my hackery is portable.

Looking briefly at the end of the linked wiki page, you say that if
the explicit linking of components against libmpi is removed, then
dlopen() has to be called explicitly.

Well, this would be a major headache for me, because of portability
issues. Please note that I developed mpi4py on a rather old 32-bit
Linux box, but it works on many different platforms and OSes. I really
do not have the time to test and figure out how to call dlopen()
appropriately on platforms/OSes that I do not even have access to!

Anyway, perhaps Open MPI could provide an extension: a function call,
say 'ompi_load_dso()' or something like that, that can be called
before MPI_Init() to set up the monster. What do you think about
this? Would it be hard for you?
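
The trick Lisandro describes amounts to promoting libmpi's symbols to
the global namespace before Open MPI dlopen()s its components, so that
the plugins can resolve symbols such as MPI_Init (in Python the same
effect comes from sys.setdlopenflags() before importing the extension).
A minimal C sketch of the idea; the library name is illustrative and
platform-dependent (e.g. libmpi.dylib on Mac OS X), and this is not an
official Open MPI interface:

#include <dlfcn.h>
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
  /* Pre-load libmpi with RTLD_GLOBAL so that components loaded later
   * via dlopen() can resolve MPI symbols. On Linux, link with -ldl. */
  void *handle = dlopen("libmpi.so", RTLD_NOW | RTLD_GLOBAL);
  if (handle == NULL)
    fprintf(stderr, "dlopen failed: %s\n", dlerror());

  MPI_Init(&argc, &argv);
  /* ... application ... */
  MPI_Finalize();
  return 0;
}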


-- 
Lisandro Dalcín
---
Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
PTLC - Güemes 3450, (3000) Santa Fe, Argentina
Tel/Fax: +54-(0)342-451.1594



[OMPI users] error with Vprotocol pessimist

2007-12-11 Thread Thomas Ropars

Hi,

I've tried to test the message logging component vprotocol pessimist
(svn checkout revision 16926).

When I run an MPI application, I get the following error:

mca: base: component_find: unable to open vprotocol pessimist:
/local/openmpi/lib/openmpi/mca_vprotocol_pessimist.so: undefined symbol:
pml_v_output (ignored)



Regards

Thomas


Re: [OMPI users] Re :Re: what is MPI_IN_PLACE

2007-12-11 Thread George Bosilca

Neeraj,

The rationale is clearly explained in the MPI standard. Here is the  
relevant paragraph from section 7.3.2:


The ``in place'' operations are provided to reduce unnecessary memory  
motion by both the MPI implementation and by the user. Note that while  
the simple check of testing whether the send and receive buffers have  
the same address will work for some cases (e.g., MPI_ALLREDUCE), they  
are inadequate in others (e.g., MPI_GATHER, with root not equal to  
zero). Further, Fortran explicitly prohibits aliasing of arguments;  
the approach of using a special value to denote ``in place'' operation  
eliminates that difficulty.


  Thanks,
george.
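
For illustration, the most common use of the special value: at every
rank of an MPI_Allreduce, passing MPI_IN_PLACE as the send buffer tells
the library that the input already sits in the receive buffer. A
minimal sketch:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
  int rank, value;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  value = rank + 1;

  /* Reduce "in place": the input is read from and the result written
   * to the same buffer, so there is no separate send buffer for the
   * library to detect by comparing pointers. */
  MPI_Allreduce(MPI_IN_PLACE, &value, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

  printf("%d: sum = %d\n", rank, value);

  MPI_Finalize();
  return 0;
}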

On Dec 11, 2007, at 6:13 AM, Neeraj Chourasia wrote:


Thanks George,

But what is the need for the user to specify it? The API could check
the addresses of the input and output buffers. Is there some extra
advantage of MPI_IN_PLACE over automatically detecting it using
pointers?


-Neeraj

On Tue, 11 Dec 2007 06:10:06 -0500, Open MPI Users wrote:

Neeraj,

MPI_IN_PLACE is defined by the MPI standard in order to allow the
users to specify that the input and output buffers for the collectives
are the same. Moreover, not all collectives support MPI_IN_PLACE, and
for those that support it some strict rules apply. Please read the
collective section in the MPI standard to see all the restrictions.

Thanks,
george.

On Dec 11, 2007, at 5:56 AM, Neeraj Chourasia wrote:

> Hello everyone,
>
> While going through collective algorithms, I came across the
> preprocessor directive MPI_IN_PLACE, which is (void *)1. It is always
> being compared against the source buffer (sbuf). My question is: when
> would the MPI_IN_PLACE == sbuf condition be true? As far as I
> understand, sbuf is the address of the source buffer, which every node
> has to transfer to the remaining nodes based on recursive doubling or,
> say, the Bruck algorithm, and it can never be equal to (void *)1. Any
> help is appreciated.
>
> Regards
> Neeraj












[OMPI users] Re :Re: what is MPI_IN_PLACE

2007-12-11 Thread Neeraj Chourasia
Thanks George,

But what is the need for the user to specify it? The API could check
the addresses of the input and output buffers. Is there some extra
advantage of MPI_IN_PLACE over automatically detecting it using
pointers?

-Neeraj

On Tue, 11 Dec 2007 06:10:06 -0500, Open MPI Users wrote:

Neeraj,

MPI_IN_PLACE is defined by the MPI standard in order to allow the
users to specify that the input and output buffers for the collectives
are the same. Moreover, not all collectives support MPI_IN_PLACE, and
for those that support it some strict rules apply. Please read the
collective section in the MPI standard to see all the restrictions.

Thanks,
george.

On Dec 11, 2007, at 5:56 AM, Neeraj Chourasia wrote:

> Hello everyone,
>
> While going through collective algorithms, I came across the
> preprocessor directive MPI_IN_PLACE, which is (void *)1. It is always
> being compared against the source buffer (sbuf). My question is: when
> would the MPI_IN_PLACE == sbuf condition be true? As far as I
> understand, sbuf is the address of the source buffer, which every node
> has to transfer to the remaining nodes based on recursive doubling or,
> say, the Bruck algorithm, and it can never be equal to (void *)1. Any
> help is appreciated.
>
> Regards
> Neeraj


[OMPI users] what is MPI_IN_PLACE

2007-12-11 Thread Neeraj Chourasia
Hello everyone,

While going through collective algorithms, I came across the
preprocessor directive MPI_IN_PLACE, which is (void *)1. It is always
being compared against the source buffer (sbuf). My question is: when
would the MPI_IN_PLACE == sbuf condition be true? As far as I
understand, sbuf is the address of the source buffer, which every node
has to transfer to the remaining nodes based on recursive doubling or,
say, the Bruck algorithm, and it can never be equal to (void *)1. Any
help is appreciated.

Regards
Neeraj