[OMPI users] MPI_Finalize runtime error

2006-07-17 Thread Manal Helal
Hi

After I finish execution and all results are reported, just as both
processes are about to call MPI_Finalize, I get this runtime error:

Any help is appreciated, thanks.

Manal


Signal:11 info.si_errno:0(Success) si_code:1(SEGV_MAPERR)
Failing at addr:0xa
[0] func:/usr/local/bin/openmpi/lib/libopal.so.0 [0x3e526c]
[1] func:[0x4bfc7440]
[2] func:/usr/local/bin/openmpi/lib/libopal.so.0(free+0xb4) [0x3e9ff4]
[3] func:/usr/local/bin/openmpi/lib/libmpi.so.0 [0x70484e]
[4] func:/usr/local/bin/openmpi//lib/openmpi/mca_btl_tcp.so(mca_btl_tcp_component_close+0x278) [0xc78a58]
[5] func:/usr/local/bin/openmpi/lib/libopal.so.0(mca_base_components_close+0x6a) [0x3d93fa]
[6] func:/usr/local/bin/openmpi/lib/libmpi.so.0(mca_btl_base_close+0xbd) [0x75154d]
[7] func:/usr/local/bin/openmpi/lib/libmpi.so.0(mca_bml_base_close+0x17) [0x751427]
[8] func:/usr/local/bin/openmpi//lib/openmpi/mca_pml_ob1.so(mca_pml_ob1_component_close+0x3a) [0x625a0a]
[9] func:/usr/local/bin/openmpi/lib/libopal.so.0(mca_base_components_close+0x6a) [0x3d93fa]
[10] func:/usr/local/bin/openmpi/lib/libmpi.so.0(mca_pml_base_close+0x65) [0x7580e5]
[11] func:/usr/local/bin/openmpi/lib/libmpi.so.0(ompi_mpi_finalize+0x1b4) [0x71e984]
[12] func:/usr/local/bin/openmpi/lib/libmpi.so.0(MPI_Finalize+0x4b) [0x73cb5b]
[13] func:master/mmMaster(main+0x3cc) [0x804b2dc]
[14] func:/lib/libc.so.6(__libc_start_main+0xdc) [0x4bffa724]
[15] func:master/mmMaster [0x8049b91]
*** End of error message ***




[OMPI users] Why should the attached code wait on MPI_Bcast

2006-07-17 Thread s anwar

Please see attached source file.

According to my understanding of MPI_Comm_spawn(), the intercommunicator it
returns is the same one that MPI_Comm_get_parent() returns in the spawned
processes. I am assuming that there is a single intercommunicator which
contains all the (spawned) child processes as well as the parent process. If
this is the case, then why does an MPI_Bcast() over such an
intercommunicator wait indefinitely?

Thanks.
#include <stdio.h>   /* the original header names were lost in the archive; */
#include <stdlib.h>  /* these five are assumed from what the code uses       */
#include <string.h>
#include <unistd.h>
#include <mpi.h>

int
main(int ac, char *av[])
{
	int  rank, size;
	char name[MPI_MAX_PROCESSOR_NAME];
	int  nameLen;
	int  n = 1, i;
	int  slave = 0;
	int  *errs;
	char *args[] = { "-W", NULL};
	MPI_Comm intercomm, icomm;
	int  err;
	char *line;
	char  *buff;
	int   buffSize;
	int   one_int;
	char  who[1024];

	memset(name, 0, sizeof(name));

	for(i=1; i
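
For reference, here is a minimal sketch of how a broadcast over a spawn
intercommunicator is usually written. This is not the attached program: the
child binary name "./worker", the single spawned child, and the broadcast of
one int are assumptions for illustration. On an intercommunicator, the
parent-group root passes MPI_ROOT as the root argument, any other process in
the parent group passes MPI_PROC_NULL, and the spawned children pass the rank
of the root within the remote (parent) group; a mismatch in these root
arguments is a common reason for MPI_Bcast to block.

/* Hypothetical sketch, not the attached source: the parent spawns one
 * child ("./worker" is an assumed binary name) and broadcasts one int
 * to it over the resulting intercommunicator. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    MPI_Comm parent, intercomm;
    int value = 42;
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_get_parent(&parent);

    if (parent == MPI_COMM_NULL) {
        /* Parent side: spawn the child, then broadcast as the root.
         * On an intercommunicator the root passes MPI_ROOT; any other
         * process in the parent group would pass MPI_PROC_NULL. */
        MPI_Comm_spawn("./worker", MPI_ARGV_NULL, 1, MPI_INFO_NULL, 0,
                       MPI_COMM_SELF, &intercomm, MPI_ERRCODES_IGNORE);
        MPI_Bcast(&value, 1, MPI_INT, MPI_ROOT, intercomm);
    } else {
        /* Child side: the root argument is the rank of the root in the
         * remote (parent) group, i.e. 0 here. */
        MPI_Bcast(&value, 1, MPI_INT, 0, parent);
        MPI_Comm_rank(parent, &rank);
        printf("child %d received %d\n", rank, value);
    }

    MPI_Finalize();
    return 0;
}
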

Re: [OMPI users] TM fixes on trunk

2006-07-17 Thread Caird, Andrew J
That's excellent, thanks.

--andy


> -Original Message-
> From: users-boun...@open-mpi.org 
> [mailto:users-boun...@open-mpi.org] On Behalf Of Jeff Squyres 
> (jsquyres)
> Sent: Monday, July 17, 2006 2:08 PM
> To: Open MPI Users
> Subject: Re: [OMPI users] TM fixes on trunk
> 
> For lack of a longer explanation, let's call it "internal 
> accounting errors" :-).  In an attempt to speed up the TM 
> launcher, we made some changes in Open MPI 1.1 which ended up 
> using the TM API the wrong way.
> So it was clearly a bug.  It *might* work in 1.1, but I 
> wouldn't recommend it (i.e., it's a timing issue -- sometimes 
> it might work, sometimes it might not).
> 
> More specifically -- if it's working for you, then it will 
> probably continue to work for you.  
> 
> There is a 1.1.1b2 tarball currently available 
> (http://www.open-mpi.org/software/ompi/v1.1/), and there are 
> nightly snapshots of the 1.1 branch available as well 
> (http://www.open-mpi.org/nightly/v1.1/).  
> 
> You can see a full list of the changes in the 1.1 branch in 
> the "1.1.1"
> section of NEWS:
> 
>   http://svn.open-mpi.org/svn/ompi/trunk/NEWS
>  
> 
> > -Original Message-
> > From: users-boun...@open-mpi.org
> > [mailto:users-boun...@open-mpi.org] On Behalf Of Caird, Andrew J
> > Sent: Monday, July 17, 2006 11:10 AM
> > To: Open MPI Users
> > Subject: Re: [OMPI users] TM fixes on trunk
> > 
> > Jeff,
> > 
> > What were the details of the problem/fixes?  
> > 
> > Is it worth us moving to the trunk or using what we have 
> > until 1.1.1 arrives?
> > 
> > Thanks.
> > --andy
> >   
> > 
> > > -Original Message-
> > > From: users-boun...@open-mpi.org
> > > [mailto:users-boun...@open-mpi.org] On Behalf Of Jeff Squyres
> > > (jsquyres)
> > > Sent: Monday, July 17, 2006 10:22 AM
> > > To: Open MPI Users
> > > Subject: [OMPI users] TM fixes on trunk
> > > 
> > > All --
> > > 
> > > Martin Schaffoner reported some TM problems to this list a little 
> > > while ago.  It took a long time for him and me to synch up, but we 
> > > finally identified and fixed the problem.  This only affects Open 
> > > MPI 1.1 installs -- it is not an issue for 1.0.x installs.  The fix 
> > > has been included in both the trunk and the 1.1 branch, and will be 
> > > included in the upcoming 1.1.1 release.
> > > 
> > > --
> > > Jeff Squyres
> > > Server Virtualization Business Unit
> > > Cisco Systems
> > > 
> > > ___
> > > users mailing list
> > > us...@open-mpi.org
> > > http://www.open-mpi.org/mailman/listinfo.cgi/users
> > > 
> > 
> > ___
> > users mailing list
> > us...@open-mpi.org
> > http://www.open-mpi.org/mailman/listinfo.cgi/users
> > 
> 
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
> 



Re: [OMPI users] TM fixes on trunk

2006-07-17 Thread Jeff Squyres (jsquyres)
For lack of a longer explanation, let's call it "internal accounting
errors" :-).  In an attempt to speed up the TM launcher, we made some
changes in Open MPI 1.1 which ended up using the TM API the wrong way.
So it was clearly a bug.  It *might* work in 1.1, but I wouldn't
recommend it (i.e., it's a timing issue -- sometimes it might work,
sometimes it might not).

More specifically -- if it's working for you, then it will probably
continue to work for you.  

There is a 1.1.1b2 tarball currently available
(http://www.open-mpi.org/software/ompi/v1.1/), and there are nightly
snapshots of the 1.1 branch available as well
(http://www.open-mpi.org/nightly/v1.1/).  

You can see a full list of the changes in the 1.1 branch in the "1.1.1"
section of NEWS:

http://svn.open-mpi.org/svn/ompi/trunk/NEWS


> -Original Message-
> From: users-boun...@open-mpi.org 
> [mailto:users-boun...@open-mpi.org] On Behalf Of Caird, Andrew J
> Sent: Monday, July 17, 2006 11:10 AM
> To: Open MPI Users
> Subject: Re: [OMPI users] TM fixes on trunk
> 
> Jeff,
> 
> What were the details of the problem/fixes?  
> 
> Is it worth us moving to the trunk or using what we have until 1.1.1
> arrives?
> 
> Thanks.
> --andy
>   
> 
> > -Original Message-
> > From: users-boun...@open-mpi.org 
> > [mailto:users-boun...@open-mpi.org] On Behalf Of Jeff Squyres 
> > (jsquyres)
> > Sent: Monday, July 17, 2006 10:22 AM
> > To: Open MPI Users
> > Subject: [OMPI users] TM fixes on trunk
> > 
> > All --
> > 
> > Martin Schaffoner reported some TM problems to this list a 
> > little while ago.  It took a long time for him and me to synch 
> > up, but we finally identified and fixed the problem.  This 
> > only affects Open MPI 1.1 installs -- it is not an issue for 
> > 1.0.x installs.  The fix has been included in both the trunk 
> > and the 1.1 branch, and will be included in the upcoming 
> > 1.1.1 release.
> > 
> > --
> > Jeff Squyres
> > Server Virtualization Business Unit
> > Cisco Systems
> > 
> > ___
> > users mailing list
> > us...@open-mpi.org
> > http://www.open-mpi.org/mailman/listinfo.cgi/users
> > 
> 
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
> 



Re: [OMPI users] TM fixes on trunk

2006-07-17 Thread Caird, Andrew J
Jeff,

What were the details of the problem/fixes?  

Is it worth us moving to the trunk or using what we have until 1.1.1
arrives?

Thanks.
--andy


> -Original Message-
> From: users-boun...@open-mpi.org 
> [mailto:users-boun...@open-mpi.org] On Behalf Of Jeff Squyres 
> (jsquyres)
> Sent: Monday, July 17, 2006 10:22 AM
> To: Open MPI Users
> Subject: [OMPI users] TM fixes on trunk
> 
> All --
> 
> Martin Schaffoner reported some TM problems to this list a 
> little while ago.  It took a long time for he and I to synch 
> up, but we finally identified and fixed the problem.  This 
> only affects Open MPI 1.1 installs -- it is not an issue for 
> 1.0.x installs.  The fix has been included in both the trunk 
> and the 1.1 branch, and will be included in the upcoming 
> 1.1.1 release.
> 
> --
> Jeff Squyres
> Server Virtualization Business Unit
> Cisco Systems
> 
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
> 



[OMPI users] TM fixes on trunk

2006-07-17 Thread Jeff Squyres (jsquyres)
All --

Martin Schaffoner reported some TM problems to this list a little while
ago.  It took a long time for him and me to synch up, but we finally
identified and fixed the problem.  This only affects Open MPI 1.1
installs -- it is not an issue for 1.0.x installs.  The fix has been
included in both the trunk and the 1.1 branch, and will be included in
the upcoming 1.1.1 release.

-- 
Jeff Squyres
Server Virtualization Business Unit
Cisco Systems



Re: [MTT users] adding LWP::UserAgent to MTT repository

2006-07-17 Thread Jeff Squyres (jsquyres)
It might be OK to adapt the module to try to "include" it (vs. requiring
it) and, if it's not there, fall back gracefully.

Make sense? 
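
As an aside, here is a minimal sketch of that "try to load it, and if it's
not there, fall back" idea. MTT itself is Perl, where the guard would be
something like eval { require LWP::UserAgent }; the C program below uses
dlopen() purely to illustrate the same pattern, and the library name
"libz.so.1" and the zlibVersion() probe are assumptions chosen for the
example (link with -ldl on Linux).

/* Illustrative only: use an optional library if it is present, otherwise
 * fall back instead of failing hard.  "libz.so.1" / zlibVersion() are
 * stand-ins; the real MTT change would guard a Perl "require" instead. */
#include <stdio.h>
#include <dlfcn.h>

int main(void)
{
    void *handle = dlopen("libz.so.1", RTLD_LAZY);   /* try to "include" it */

    if (handle != NULL) {
        const char *(*version)(void) =
            (const char *(*)(void)) dlsym(handle, "zlibVersion");
        if (version != NULL) {
            printf("optional library available: zlib %s\n", version());
        }
        dlclose(handle);
    } else {
        printf("optional library missing; using the fallback path\n");
    }
    return 0;
}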

> -Original Message-
> From: mtt-users-boun...@open-mpi.org 
> [mailto:mtt-users-boun...@open-mpi.org] On Behalf Of Ethan Mallove
> Sent: Monday, July 17, 2006 9:46 AM
> To: mtt-us...@open-mpi.org
> Subject: [MTT users] adding LWP::UserAgent to MTT repository
> 
> I thought it would be nice to not require users to have LWP::UserAgent
> installed (like we don't require them to have Config::IniFiles - it's
> part of the MTT repository), since LWP::UserAgent isn't listed as a
> standard module (https://www.linuxnotes.net/perlcd/prog/ch32_01.htm).
> However, LWP::UserAgent uses a platform-dependent binary called
> Parser.so, which makes the addition of LWP::UserAgent to the repos
> slightly more involved, e.g., putting Parser.so in sparc, i386, etc.
> directories and getting UserAgent to look for them in the right place.
> For now, LWP::UserAgent can be put in a centralized location that the
> PERLLIB env var can point to.
> 
> -Ethan
> 
> ___
> mtt-users mailing list
> mtt-us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/mtt-users
> 



[OMPI users] What Really Happens During OpenMPI MPI_INIT?

2006-07-17 Thread Mahesh Barve
Hi,
  Can anyone please enlighten us about what really happens in MPI_Init() in
Open MPI?
  More specifically, I am interested in knowing:
1. The functions that need to be accomplished during MPI_Init()
2. What has already been implemented in Open MPI's MPI_Init()
3. The routines called/invoked that perform these functions

 regards,
-Mahesh


