Hello all,
(this _might_ be related to https://svn.open-mpi.org/trac/ompi/ticket/1505)
I just compiled and installed 1.3.3 in a CentOS 5 environment and we noticed the processes would deadlock as soon as they would start using TCP communications. The test program is one that has been
Jeff Squyres wrote:
On Oct 18, 2008, at 9:19 PM, Mostyn Lewis wrote:
Can OpenMPI do like Scali and MVAPICH2 and utilize 2 IB HCAs per machine
to approach double the bandwidth on simple tests such as IMB PingPong?
Yes. OMPI will automatically (and aggressively) use as many active ports as
Hello all,
I am currently profiling a simple case where I replace multiple S/R calls with Allgather calls and it would _seem_ the simple S/R calls are faster. Now, *before* I come to any conclusion on this, one of the pieces I am missing is more details on how/if/when the tuned coll MCA
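For context on what such a replacement involves, here is a hypothetical plain-C model (the name and stub are mine, not from the thread) of the result MPI_Allgather produces: every rank ends up with the concatenation of all ranks' contributions in rank order, which a hand-rolled version obtains through a loop of paired send/receive calls:

```c
#include <string.h>

/* Hypothetical model (not MPI code) of what MPI_Allgather delivers:
 * rank r contributes `count` ints from sendbufs[r]; afterwards every
 * rank's recvbuf holds all contributions in rank order. With real MPI,
 * each rank would make the single collective call
 *   MPI_Allgather(sendbuf, count, MPI_INT, recvbuf, count, MPI_INT, comm);
 * instead of looping over paired MPI_Send/MPI_Recv calls. */
static void model_allgather(const int *const *sendbufs, int nranks,
                            int count, int *recvbuf)
{
    for (int r = 0; r < nranks; r++)
        memcpy(recvbuf + r * count, sendbufs[r], (size_t)count * sizeof(int));
}
```

Whether the collective actually beats hand-rolled S/R depends on message size and on which algorithm the coll component selects, which is what the tuned coll MCA parameters control.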
lengthy since the entire system (or at least all libs openmpi links to) needs to be rebuilt.
Eric
Sorry about that, I had misinterpreted your original post as being the pair of send-receive. The example you give below does indeed seem correct, which means you might have to show us the code that doesn't work. Note that I am in no way a Fortran expert; I'm more versed in C.
The only hint I'd
Enrico Barausse wrote:
Hello,
I apologize in advance if my question is naive, but I started to use
open-mpi only one week ago.
I have a complicated Fortran 90 code which is giving me a segmentation
fault (address not mapped). I tracked down the problem to the
following lines:
Prasanna,
Please send me your /etc/make.conf and the contents of
/var/db/pkg/sys-cluster/openmpi-1.2.7/
You can package this with the following command line:
tar -cjf data.tbz /etc/make.conf /var/db/pkg/sys-cluster/openmpi-1.2.7/
And simply send me the data.tbz file.
Thanks,
Eric
Prasanna,
I opened up a bug report to enable a better control over the
threading options (http://bugs.gentoo.org/show_bug.cgi?id=237435). In
the meanwhile, if your helloWorld isn't too fluffy, could you send it over (off list if you prefer) so I can take a look at it, the Segmentation
Jeff Squyres wrote:
On Sep 11, 2008, at 3:27 PM, Eric Thibodeau wrote:
Ok, in addition to the information from the README, I'm thinking none of the 3 configure options has an impact on the said 'threaded TCP listener', and the MCA option you suggested should still work; is this correct
Jeff Squyres wrote:
On Sep 11, 2008, at 2:38 PM, Eric Thibodeau wrote:
In short:
Which of the 3 options is the one known to be unstable in the following:
--enable-mpi-threads      Enable threads for MPI applications (default: disabled)
--enable-progress-threads
Jeff,
In short:
Which of the 3 options is the one known to be unstable in the following:
--enable-mpi-threads      Enable threads for MPI applications (default: disabled)
--enable-progress-threads Enable threads for asynchronous communication
ed
and logged ;)
On Sep 10, 2008, at 7:52 PM, Eric Thibodeau wrote:
Prasanna, also make sure you try with USE=-threads ...as the ebuild
states, it's _experimental_ ;)
Keep your eye on:
https://svn.open-mpi.org/trac/ompi/wiki/ThreadSafetySupport
Eric
Prasanna Ranganathan wrote:
H
Prasanna Ranganathan wrote:
Hi Eric,
Thanks a lot for the reply.
I am currently working on upgrading to 1.2.7
I do not quite follow your directions; what do you refer to when you say "try with USE=-threads..."?
I am referring to the USE variable which is used to set global package
Prasanna Ranganathan wrote:
Hi,
I have upgraded my openMPI to 1.2.6 (We have gentoo and emerge showed
1.2.6-r1 to be the latest stable version of openMPI).
Prasanna, do a sync; 1.2.7 is in portage. Report back.
Eric
I do still get the following error message when running my test
;
return 0;
}
You should probably check with Intel support for more details.
On Dec 6, 2007, at 11:25 PM, Eric Thibodeau wrote:
Hello all,
I am unable to get past ./configure as ICC fails on C++ tests (see attached ompi-output.tar.gz). Configure was called without and then with sourcing `
(well..intelligible for me that is ;P ) cause of the failure in
config.log. Any help would be appreciated.
Thanks,
Eric Thibodeau
locking send, where the library
> does not return until the data is pushed on the network buffers, i.e.
> the library is the one in control until the send is completed.
>
>Thanks,
> george.
>
> On Oct 15, 2007, at 2:23 PM, Eric Thibodeau wrote:
>
> > Hello Georg
I'm attaching the functional code so that others can maybe see this one as an example ;)
On Wednesday, April 4, 2007 at 11:47, Eric Thibodeau wrote:
> Hello all,
>
> First off, please excuse the attached code as I may be naïve in my
> attempts to implement my own MPI_OP.
>
>
OPAL: 1.2
OPAL SVN revision: r14027
Prefix: /home/kyron/openmpi_i686
Configured architecture: i686-pc-linux-gnu
Configured by: kyron
Configured on: Wed Apr 4 10:21:34 EDT 2007
On Wednesday, April 4, 2007 at 11:47, Eric Thibodeau wrote:
> He
ron:14074] [ 5] /lib/libc.so.6(__libc_start_main+0xe3) [0x6fcbd823]
[kyron:14074] *** End of error message ***
Eric Thibodeau
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <mpi.h>
#define V_LEN 10 // Vector length
#define E_CNT 10 // Element count
MPI_Op MPI_MySum; // Custom sum function
MPI_Datatype MPI_MyTyp
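The fragment above sets up a custom reduction. As an illustration only (this is a sketch, not the poster's original code), the user function an MPI_Op wraps has the signature MPI_Op_create expects; MPI_Datatype is stubbed with a typedef here so the combine logic can be compiled and checked without an MPI installation:

```c
/* Stub for MPI's opaque datatype handle so this sketch compiles and
 * runs without <mpi.h>; with a real MPI build, delete the typedef and
 * include <mpi.h> instead. */
typedef int MPI_Datatype;

#define V_LEN 10 /* vector length, as in the fragment above */

/* User function with the signature MPI_Op_create expects: combine
 * `*len` elements of `in` into `inout`. Each element here is a vector
 * of V_LEN doubles, summed component-wise. */
static void my_vec_sum(void *in, void *inout, int *len, MPI_Datatype *dt)
{
    (void)dt; /* datatype handle unused in this sketch */
    const double *a = (const double *)in;
    double *b = (double *)inout;
    for (int e = 0; e < *len; e++)
        for (int i = 0; i < V_LEN; i++)
            b[e * V_LEN + i] += a[e * V_LEN + i];
}
```

With a real MPI build this would be registered via MPI_Op_create(my_vec_sum, 1, &MPI_MySum) before use in MPI_Reduce; the 1 declares the operation commutative.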
> v1.2 because someone else out in the Linux community uses "libopal".
>
> I typically prefer using "mpicc" as CC and LINKER and therefore
> letting the OMPI wrapper handle everything for exactly this reason.
>
>
> On Feb 21, 2007, at 12:39 PM, Eric Th
Batiment 506
>BP 167
>F - 91403 ORSAY Cedex
> Site Web :http://www.idris.fr
> **
>
> Eric Thibodeau a écrit :
> > Hello all,
> >
> > As we all know, compiling OpenMPI is not a matter of adding -lmpi
>
xed a
> shared memory race condition, for example:
>
> http://www.open-mpi.org/nightly/v1.2/
>
>
> On Feb 16, 2007, at 12:12 AM, Eric Thibodeau wrote:
>
> > Hello devs,
> >
> > Thought I would let you know there seems to be a problem with
> &g
ild process!
Eric
On Thursday, February 15, 2007 at 19:51, Anthony Chan wrote:
>
> As long as mpicc is working, try configuring mpptest as
>
> mpptest/configure MPICC=/bin/mpicc
>
> or
>
> mpptest/configure --with-mpich=
>
> A.Chan
>
> On Thu, 15 Feb 2007, Er
Thanks, now it all makes more sense to me. I'll try the hard way: multiple builds for multiple envs ;)
Eric
On Sunday, July 16, 2006 at 18:21, Brian Barrett wrote:
> On Jul 16, 2006, at 4:13 PM, Eric Thibodeau wrote:
> > Now that I have that out of the way, I'd like to know
14:31, Brian Barrett wrote:
> On Jul 15, 2006, at 2:58 PM, Eric Thibodeau wrote:
> > But, for some reason, on the Athlon node (in their image on the
> > server I should say) OpenMPI still doesn't seem to be built
> > correctly since it crashes as follows:
> &
<-- config log for the Opteron build (works locally)
config.log_node0 <-- config log for the Athlon build (on the node)
ompi_info.i686   <-- ompi_info on the Athlon node
ompi_info.x86_64 <-- ompi_info on the Opteron Master
Thanks,
--
Eric Thibodeau
Neural Bucket Solution
l on open-mpi?
> Thank you ;)
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>
--
Eric Thibodeau
Neural Bucket Solutions Inc.
T. (514) 736-1436
C. (514) 710-0517
pecific points to add,
>
> Thanks again, I appreciate it,
>
> Manal
>
> On Mon, 2006-07-03 at 23:17 -0400, Eric Thibodeau wrote:
> > See comments below:
> >
> > On Monday, July 3, 2006 at 23:01, Manal Helal wrote:
> > > Hi
> > >
> >
> running the
> same thing, as of yet.
>
> I have a cluster of two v440 that have 4 cpus each running Solaris 10.
> The tests I
> am running are np=2 one process on each node.
>
> --td
>
> Eric Thibodeau wrote:
>
> >Terry,
> >
> > I was a
ment issues on Solaris 64 bit platforms, but thought that we might
> > have had a pretty good handle on it in 1.1. Obviously we didn't solve
> > everything. Bonk.
> >
> > Did you get a corefile, perchance? If you could send a stack trace, that
> > woul
Yeah, bummer, but something tells me it might not be OpenMPI's fault. Here's why:
1- The tech that takes care of these machines told me that he gets RTC errors
on bootup (the CPU boards are apparently "out of sync" since the clocks aren't
set correctly).
2- There is also a possibility that the
MCA sds: pipe (MCA v1.0, API v1.0, Component v1.1)
MCA sds: seed (MCA v1.0, API v1.0, Component v1.1)
MCA sds: singleton (MCA v1.0, API v1.0, Component v1.1)
On Tuesday, June 20, 2006 at 17:06, Eric Thibodeau wrote:
> Thanks for the pointer, it WORKS!!
Hello Jeff,
Firstly, don't worry about jumping in late, I'll send you a skid rope ;) Secondly, thanks for your nice little articles on clustermonkey.net (good refresher on MPI). And finally, down to my issues, thanks for clearing up the --prefix LD_LIBRARY_PATH and all. The ebuild I
our prefix set to the lib dir, can you try without the
> lib64 part and rerun?
>
> Eric Thibodeau wrote:
> > Hello everyone,
> >
> > Well, first off, I hope this problem I am reporting is of some validity;
> > I tried finding similar situations via Google and the mai