Open MPI ships with a full set of man pages for all the MPI functions;
you might want to start with those.
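For example, to see how tags and dynamic connections work, you could start
with something like:
  man MPI_Send
  man MPI_Publish_name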
Tim
Alberto Giannetti wrote:
I am looking to use MPI in a publisher/subscriber context. Haven't
found much relevant information online.
Basically I would need to deal with dynamic tag su
Hi Graham,
Have you tried running without the btl_tcp_if_include line in the .conf
file? Open MPI is usually smart enough to auto-detect and choose the
correct interfaces.
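For reference, the line to remove would look something like this in the .conf
file (the interface name here is just an example):
  btl_tcp_if_include = eth0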
Hope this helps,
Tim
Graham Jenkins wrote:
We're moving from using a single (eth0) interface on our execute nodes
to u
Hi Joao,
Thanks for the bug report! You do not have to call free/disconnect
before MPI_Finalize. If you do not, they will be called automatically.
Unfortunately, there was a bug in the code that did the free/disconnect
automatically. This is fixed in r18079.
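In other words, a parent can legally end like this (a minimal sketch; the
child binary name is made up):
  #include <mpi.h>
  int main(int argc, char **argv)
  {
      MPI_Comm child;
      MPI_Init(&argc, &argv);
      MPI_Comm_spawn("./worker", MPI_ARGV_NULL, 2, MPI_INFO_NULL, 0,
                     MPI_COMM_SELF, &child, MPI_ERRCODES_IGNORE);
      /* ... communicate with the children ... */
      /* MPI_Comm_disconnect(&child);  optional: MPI_Finalize does it */
      MPI_Finalize();
      return 0;
  }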
Thanks again,
Tim
Joao Vicente
Hi Werner,
Open MPI does things a little bit differently than other MPIs when it
comes to supporting SLURM. See
http://www.open-mpi.org/faq/?category=slurm
for general information about running with Open MPI on SLURM.
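In short, you allocate nodes with SLURM and run mpirun inside the allocation,
along these lines (node count and program name are just examples):
  salloc -N 2 mpirun ./a.out
Open MPI picks up the allocation from the SLURM environment automatically.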
After trying the commands you sent, I am actually a bit surprised by the
re
loaded and unloaded with the modules command.
Ashley,
On Tue, 2008-03-04 at 09:37 -0500, Tim Prins wrote:
Hi Ashley,
Yes, you can have this done automatically. Just use the
'--enable-mpirun-prefix-by-default' option to configure.
I'm actually a bit surprised this is not in the FA
Hi Ashley,
Yes, you can have this done automatically. Just use the
'--enable-mpirun-prefix-by-default' option to configure.
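For example (the install prefix is just an example):
  ./configure --prefix=/opt/openmpi --enable-mpirun-prefix-by-default
After that, mpirun behaves as if '--prefix /opt/openmpi' were always given on
the command line.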
I'm actually a bit surprised this is not in the FAQ. I'll have to add it.
Hope this helps,
Tim
Ashley Pittman wrote:
Hello,
I work for a medium-sized, UK-based ISV and
To clean this up for the web archives, we were able to get it to work by
using '--disable-dlopen'.
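That is, reconfiguring with something like (the prefix is just an example):
  ./configure --prefix=/opt/openmpi --disable-dlopen
which builds the components into the libraries rather than opening them at
run time.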
Tim
Tim Prins wrote:
Scott,
I can replicate this on big red. Seems to be a libtool problem. I'll
investigate...
Thanks,
Tim
Teige, Scott W wrote:
Hi all,
Attempting a buil
Scott,
I can replicate this on big red. Seems to be a libtool problem. I'll
investigate...
Thanks,
Tim
Teige, Scott W wrote:
Hi all,
Attempting a build of 1.2.5 on a ppc machine, particulars:
uname -a
Linux s10c2b2 2.6.5-7.286-pseries64-lustre-1.4.10.1 #2 SMP Tue Jun 26
11:36:04 EDT 200
Hi Joao,
Unfortunately, spawn is broken on the development trunk right now. We
are working on a major revamp of the runtime system which should fix
these problems, but it is not ready yet.
Sorry about that :(
Tim
Joao Vicente Lima wrote:
Hi all,
I'm getting errors with spawn in the situat
The fix I previously sent to the list has been committed in r17400.
Thanks,
Tim
Tim Prins wrote:
Hi Stefan,
I was able to verify the problem. Turns out this is a problem with other
onesided operations as well. Attached is a simple test case I made in C
using MPI_Put that also fails.
The
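For reference, a minimal MPI_Put test of the kind described (a sketch written
for this archive, not the actual attached test case) might look like:
  #include <mpi.h>
  #include <stdio.h>
  int main(int argc, char **argv)
  {
      int rank, buf = 0, val = 42;
      MPI_Win win;
      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      /* every rank exposes one int; rank 0 writes into rank 1's window */
      MPI_Win_create(&buf, sizeof(int), sizeof(int), MPI_INFO_NULL,
                     MPI_COMM_WORLD, &win);
      MPI_Win_fence(0, win);
      if (rank == 0)
          MPI_Put(&val, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
      MPI_Win_fence(0, win);
      if (rank == 1)
          printf("buf = %d\n", buf);
      MPI_Win_free(&win);
      MPI_Finalize();
      return 0;
  }
Run it with 'mpirun -np 2 ./put_test' (the binary name is made up).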
Hi Brock,
As far as I know there is no way to do this with Open MPI and torque. I
believe people usually use hostfiles to do this sort of thing, but
hostfiles do not work with torque.
You may want to look into the launcher commands to see if torque will do
it for you. Slurm has an option '--
version that fixes
this for me. I am also copying Brian Barrett, who did all the work on
the onesided code.
Brian: if possible, please take a look at the attached patch and test case.
Thanks for the report!
Tim Prins
Stefan Knecht wrote:
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
Hi all
Jody,
If you want to forward X connections through ssh, you should NOT set the
DISPLAY variable. ssh will set the proper one for you.
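That is, just log in with forwarding enabled and leave DISPLAY alone (the
hostname is an example):
  ssh -X user@remotehost
ssh then sets DISPLAY to something like localhost:10.0 for you.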
Tim
jody wrote:
Tim
Thank you for your explanation on how OpenMPI uses ssh.
There is a way to force the ssh sessions to stay open. However doing so
will r
Jody,
jody wrote:
Hi Tim
Your desktop is plankton, and you want
to run a job on both plankton and nano, and have xterms show up on nano.
Not on nano, but on plankton, but I think this was just a typo :)
Correct.
It looks like you are already doing this, but to make sure, the way I
would us
Hi Jody,
Just to make sure I understand. Your desktop is plankton, and you want
to run a job on both plankton and nano, and have xterms show up on nano.
It looks like you are already doing this, but to make sure, the way I
would use xhost is:
plankton$ xhost +nano_00
plankton$ mpirun -np 4 -
Hi Kay,
Sorry for the delay in replying; it looks like this one slipped through.
The dynamic process management should work fine on GM.
Hope this helps,
Tim
kay kay wrote:
I am looking for dynamic process management support (e.g. MPI_Comm_spawn)
on Myrinet platform. From the Myricom website, i
Open MPI v1.2 had some problems with the TM configuration code which was fixed
in v1.2.1. So any version v1.2.1 or later should work fine (and, as you
indicate, 1.2.4 works fine).
Tim
On Tuesday 18 December 2007 12:48:40 pm pat.o'bry...@exxonmobil.com wrote:
> Jeff,
> Here is the result of
-mpi.org]
On Behalf Of Scott Atchley
Sent: Tuesday, July 10, 2007 3:31 PM
To: Open MPI Users
Subject: Re: [OMPI users] warning:regcache incompatible with malloc
On Jul 10, 2007, at 3:24 PM, Tim Prins wrote:
On Tuesday 10 July 2007 03:11:45 pm Scott Atchley wrote:
On Jul 10, 2007, at 2:58 PM
Or you can follow the advice in this faq:
http://www.open-mpi.org/faq/?category=tcp#tcp-connection-errors
and run:
perl -e 'die$!=131'
Tim
On Sunday 18 November 2007 09:29:25 pm George Bosilca wrote:
> There is a good reason for this. The errno is system dependent. As an
> example on my Debian c
I have seen situations where after installing Open MPI, the wrapper
compilers did not create any executables, and seemed to do nothing.
I was never able to figure out why the wrappers were broken, and
reinstalling Open MPI always seemed to make it work.
If I recall correctly, when this happen
Hi Clement,
I seem to recall (though this may have changed) that if a system supports
IPv6, we may open both IPv4 and IPv6 sockets. This can be worked around by
configuring Open MPI with --disable-ipv6
Other than that, I don't know of anything else to do except raise the limit
for the number o
Hi Jon,
Just to make sure, running 'ompi_info' shows that you have the udapl btl
installed?
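Something like the following should show it (the output line is from memory,
so treat it as approximate):
  ompi_info | grep udapl
  MCA btl: udapl (MCA v1.0, API v1.0, Component v1.2)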
Tim
On Wednesday 31 October 2007 06:11:39 pm Jon Mason wrote:
> I am having a bit of a problem getting udapl to work via mpirun (over
> open-mpi, obviously). I am running a basic pingpong test and I get
me that orted runs with --num_proc 3 when mpirun was executed with -np
2. Does this sound correct to you? I might open a new case for it
though...
Thank you for your help,
Jorge
On Mon, 22 Oct 2007, Tim Prins wrote:
Sorry to reply to my own mail.
Just browsing through the logs you sent,
Hi Ides,
Thanks for the report and reminder. I have filed a ticket on this
(https://svn.open-mpi.org/trac/ompi/ticket/1173) and you should receive email
as it is updated.
I do not know of any more elegant way to work around this at the moment.
Thanks,
Tim
On Friday 19 October 2007 06:31:53 a
not being maintained anymore).
Tim
On Monday 22 October 2007 08:41:30 pm Tim Prins wrote:
> Hi Jorge,
>
> This is interesting. The problem is the universe name:
> root@(none):default-universe
>
> The "(none)" part is supposed to be the hostname where mpirun is executed.
>
Hi Jorge,
This is interesting. The problem is the universe name:
root@(none):default-universe
The "(none)" part is supposed to be the hostname where mpirun is executed. Try
running:
hostname
and:
uname -n
These should both return valid hostnames for your machine.
Open MPI pretty much assumes
Hi Neeraj,
The GPR is maintained in the mpirun (orterun) process. The data is then
distributed via the RML/OOB.
Hope this helps,
Tim
Neeraj Chourasia wrote:
Hi everybody,
I have a doubt regarding ORTE. One of the major functions of
ORTE is to maintain the GPR, which subscribes and pub
So you did:
ssh which orted
and it found the orted?
Tim
Amit Kumar Saha wrote:
Hi sebi!
On 10/2/07, Sebastian Schulz wrote:
Amit Kumar Saha wrote:
what I find bizarre is that I used Open MPI 1.2.3 to install on all my
4 machines. whereas, 'orted' is installed in /usr/local/bin on all the
Marco,
Thanks for the report, and sorry for the delayed response. I can
replicate a problem using your test code, but it does not segfault for
me (although I am using a different version of Open MPI).
I filed a bug on this so (hopefully) our collective gurus will look at
it soon. You will re
Thanks for the report!
I have reproduced this bug and have filed a ticket on this
(https://svn.open-mpi.org/trac/ompi/ticket/1157). You should receive
updates as this bug is worked on.
Thanks,
Tim
Chris Johnson wrote:
Hi, I'm trying to run an MPI program of mine under OpenMPI 1.2 using
jus
this job.
Returned value Timeout instead of ORTE_SUCCESS.
--
I'm using version 1.2.3, got it from openmpi.org. I'm using the same
version of openmpi on all nodes.
Thanks
dino
Tim Prins wrote:
This is very odd. The d
Hi,
Miguel Figueiredo Mascarenhas Sousa Filipe wrote:
Hi there,
I have a 2-cpu system (linux/x86-64), running openmpi-1.1. I do not
specify a hostfile.
Lately I'm having performance problems when running my mpi-app this way:
mpiexec -n 2 ./mpi-app config.ini
Both mpi-app processes are running
-c, -np, --np, -n, --n all do exactly the same thing.
Tim
Miguel Figueiredo Mascarenhas Sousa Filipe wrote:
Hi,
On 10/3/07, jody wrote:
Hi Miguel
I don't know if it's a typo - but actually it should be
mpiexec -np 2 ./mpi-app config.ini
and not
mpiexec -n 2 ./mpi-app config.ini
thanks fo
try:
mpirun --hostfile hostfile hostname
Thanks,
Tim
Dino Rossegger wrote:
Hi again,
Tim Prins wrote:
Hi,
On Monday 01 October 2007 03:56:16 pm Dino Rossegger wrote:
Hi again,
Yes the error output is the same:
root@sun:~# mpirun --hostfile hostfile main
[sun:23748] [0,0,0] ORTE_ERROR_LOG
Hi,
On Monday 01 October 2007 03:08:04 am Hammad Siddiqi wrote:
> One more thing to add: -mca mtl mx uses ethernet and IP emulation of
> Myrinet to my knowledge. I want to use Myrinet(not its IP Emulation)
> and shared memory simultaneously.
This is not true (as far as I know...). Open MPI has 2 di
Hi,
On Monday 01 October 2007 03:56:16 pm Dino Rossegger wrote:
> Hi again,
>
> Yes the error output is the same:
> root@sun:~# mpirun --hostfile hostfile main
> [sun:23748] [0,0,0] ORTE_ERROR_LOG: Timeout in file
> base/pls_base_orted_cmds.c at line 275
> [sun:23748] [0,0,0] ORTE_ERROR_LOG: Timeo
Hi Joao,
Unfortunately Comm_spawn is a bit broken right now on the Open MPI trunk. We
are currently working on some major changes to the runtime system, so I would
rather not dig into this until these changes have made it onto the trunk.
I do not know of a timeline for when these changes w
So you know this is something that we are working on for the next major
release of Open MPI (v 1.3). More details on some of the discussion can
be found here:
https://svn.open-mpi.org/trac/ompi/ticket/1023
Tim
Torje Henriksen wrote:
Specifying nodes several times in the hostfile or with the --
I would recommend trying a few things:
1. Set some debugging flags and see if that helps. So, I would try something
like:
/opt/SUNWhpc/HPC7.0/bin/mpirun -np 2 -mca btl
mx,self -host "indus1,indus2" -mca btl_base_debug 1000 ./hello
This will output information as each btl is loaded, and whethe
Mostyn,
It looks like the documentation is wrong (and has been wrong for years).
I assume you were looking at the FAQ? I will update it tonight or tomorrow.
Thanks for the report!
Tim
Mostyn Lewis wrote:
I see docs for this like:
--enable-mca-no-build=btl:mvapi,btl:openib,btl:gm,btl:mx,mtl
Murat,
Thanks for the bug report. I have fixed this (slightly differently than you
suggested) in the Open MPI trunk in r16265 and it should be
available in the nightly trunk tarball tonight.
I will ask to have this moved into the next release of Open MPI.
Thanks,
Tim
Murat Knecht wrote:
C
Hi Teng,
Teng Lin wrote:
Hi,
We would like to distribute OpenMPI along with our software to
customers, is there any legal issue we need to know about?
Not that I know of (disclaimer: IANAL). Open MPI is licensed under the
new BSD license. Open MPI's license is here:
http://www.open-mpi.or
Note that you may be able to get some more error output by
adding --debug-daemons to the mpirun command line.
Tim
On Thursday 27 September 2007 05:12:53 pm Dino Rossegger wrote:
> Hi Jody,
>
> Thanks for your help, it really is the case that either in PATH nor in
> LD_LIBRARY_PATH the path to th
Åke Sandgren wrote:
On Thu, 2007-09-27 at 09:09 -0400, Tim Prins wrote:
Hi Ake,
Looking at the svn logs it looks like you reported the problems with
these checks quite a while ago and we fixed them (in r13773
https://svn.open-mpi.org/trac/ompi/changeset/13773), but we never moved
them to
Hi Ake,
Looking at the svn logs it looks like you reported the problems with
these checks quite a while ago and we fixed them (in r13773
https://svn.open-mpi.org/trac/ompi/changeset/13773), but we never moved
them to the 1.2 branch.
I will ask for this to be moved to the 1.2 branch.
However
Francesco,
I guess the first step would be to decide whether or not you want to upgrade.
All of the changes are listed below; if none of them affect you and your
current setup is working fine, I would not bother upgrading.
Also, assuming you installed from a tarball, there is no way that I know
Hi,
This is because Open MPI is finding gcc for the C compiler and ifort for
the Fortran compiler. Please see:
http://www.open-mpi.org/faq/?category=building#build-compilers
for how to specify the Intel compilers.
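In short, you pass the compilers explicitly to configure, along the lines of:
  ./configure CC=icc CXX=icpc F77=ifort FC=ifort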
Hope this helps,
Tim
Bertrand P. S. Russell wrote:
Dear OpenMPI user
Hi Murat,
If the process is being spawned onto a node that you are already running
on there should not be a problem with ssh-sessions, since if there is
already a daemon running on the node we do not ssh into it again.
Can you try running again with --debug-daemons added to the mpirun
comman
Hi,
To give FLAGS to the ROMIO configuration script, the configure option
for Open MPI is:
--with-io-romio-flags=FLAGS
So something like:
--with-io-romio-flags="--with-filesystems=ufs+nfs+pvfs2"
should work, though I have not tested it.
You can see all the ROMIO configure flags by runnin
Hi Josh,
I am not an expert in this area of the code, but I'll give it a shot.
(I assume you are using Linux, given your email address.) When using the memory
manager (which is the default on Linux), we wrap malloc/realloc/etc. with
ptmalloc2 (which is the same allocator used in glibc 2.3.x).
W
ason.
Do you have any firewalls/port filtering enabled on nano_02? Open MPI
generally cannot be run when there are any firewalls on the machines
being used.
Hope this helps,
Tim
Does this message give any hints as to the problem?
Jody
On 8/14/07, *Tim Prins* <mailto:tpr...@open-mp
I meant to say, "exporting the variables is *not* good enough".
Tim
Tim Prins wrote:
In general, exporting the variables is good enough. You really should be
setting the variables in the appropriate shell (non-interactive) login
scripts, such as .bashrc (I again point you to th
child shell that forks the
MPI process will not inherit it.
On 8/14/07, *Rodrigo Faccioli* <mailto:faccioli.postgre...@gmail.com>> wrote:
Thanks, Tim Prins for your email.
However, it didn't resolve my problem.
I set the environment variable on my Kubuntu Linux:
Guillaume THOMAS-COLLIGNON wrote:
Hi,
I wrote an application which works fine on a small number of nodes
(e.g. 4), but it crashes on a large number of CPUs.
In this application, all the slaves send many small messages to the
master. I use the regular MPI_Send, and since the messages are
r
Hi Jody,
jody wrote:
Hi
I installed openmpi 1.2.2 on a quad core intel machine running fedora 6
(hostname plankton)
I set PATH and LD_LIBRARY in the .zshrc file:
Note that .zshrc is only used for interactive logins. You need to set up
your system so that LD_LIBRARY_PATH and PATH are also set for
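For zsh, one option (the paths are just examples) is to put the exports in
~/.zshenv, which is read by non-interactive shells too:
  export PATH=/opt/openmpi/bin:$PATH
  export LD_LIBRARY_PATH=/opt/openmpi/lib:$LD_LIBRARY_PATH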
You need to set your LD_LIBRARY_PATH. See these FAQ entries:
http://www.open-mpi.org/faq/?category=running#run-prereqs
http://www.open-mpi.org/faq/?category=running#adding-ompi-to-path
Tim
Rodrigo Faccioli wrote:
Hi,
I need to know how I can resolve my problem. I'm starting my study on
mpi,
Hi Marcus,
Your expectation sounds very reasonable to me. I have filed a bug in our bug
tracker (https://svn.open-mpi.org/trac/ompi/ticket/1124), and you will
receive emails as it is updated.
Unfortunately, this is in a part of the code which has not been touched for a
long time, and is in som
Have you set up your LD_LIBRARY_PATH variable correctly? See this FAQ entry:
http://www.open-mpi.org/faq/?category=running#adding-ompi-to-path
Hope this helps,
Tim
Michael Komm wrote:
I'm trying to make the PathScale Fortran compiler work with Open MPI on a 64-bit
Linux machine and can't get pas
> Yes, this helps tremendously. I installed rsh, and now it pretty much
> works.
Glad this worked out for you.
>
> The one missing detail is that I can't seem to get the stdout/stderr
> output. For example:
>
> $ orterun -np 1 uptime
> $ uptime
> 18:24:27 up 13 days, 3:03, 0 users, load aver
[0,0,0] ORTE_ERROR_LOG: Error in file
orterun.c at line 399
[tprins@odin ~]$ mpirun -np 1 -mca pls_rsh_agent /bin/false hostname
odin.cs.indiana.edu
[tprins@odin ~]$ touch usr/bin/rsh
[tprins@odin ~]$ chmod +x usr/bin/rsh
[tprins@odin ~]$ mpirun -np 1 hostname
odin.cs.indiana.edu
[tprins@odin ~]$
I hope
This is strange. I assume that you want to use rsh or ssh to launch the
processes?
If you want to use ssh, does "which ssh" find ssh? Similarly, if you
want to use rsh, does "which rsh" find rsh?
Thanks,
Tim
Adam C Powell IV wrote:
On Wed, 2007-07-18 at 09:50 -0400, Ti
Adam C Powell IV wrote:
Greetings,
I'm running the Debian package of OpenMPI in a chroot (with /proc
mounted properly), and orte_init is failing as follows:
$ uptime
12:51:55 up 12 days, 21:30, 0 users, load average: 0.00, 0.00, 0.00
$ orterun -np 1 uptime
[new-host-3:18250] [0,0,0] ORTE_ERR
Or you can simply tell the mx mtl not to run by adding "-mca mtl ^mx" to
the command line.
George: There is an open bug about this problem:
https://svn.open-mpi.org/trac/ompi/ticket/1080
Tim
George Bosilca wrote:
There seems to be a problem with MX, because of a conflict between our
MTL and t
On Tuesday 10 July 2007 03:11:45 pm Scott Atchley wrote:
> On Jul 10, 2007, at 2:58 PM, Scott Atchley wrote:
> > Tim, starting with the recently released 1.2.1, it is the default.
>
> To clarify, MX_RCACHE=1 is the default.
It would be good for the default to be something where there is no warning
Is this something that Open MPI should be setting automatically?
Tim
On Tuesday 10 July 2007 02:44:04 pm George Bosilca wrote:
> I always use MX_RCACHE=2 for both MTL and BTL. So far I haven't had
> any problems with it.
>
>george.
>
> On Jul 10, 2007, at 2:37 PM, Brian Barrett wrote:
> > On J
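For what it's worth, an environment variable like this can be exported to all
ranks with mpirun's -x option (the application name is made up):
  mpirun -x MX_RCACHE=2 -np 4 ./app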
s 'job', it and mpirun. After the daemon boots up, mpirun
will send it a command to actually launch your application.
Tim
Henk
-----Original Message-----
From: users-boun...@open-mpi.org
[mailto:users-boun...@open-mpi.org] On Behalf Of Tim Prins
Sent: 09 July 2007 16:34
To: Open
ix /opt/openmpi
> --host 130.60.49.130 ./a.out
> it works, as it does if i run it on the machine itself the standard way
> jody@aim-nano_02 /home/aim-cari/jody $ mpirun -np 2 --host
> 130.60.49.130 ./a.out
>
> Is there anything else i could try?
>
> Jody
>
> On 7/9/07, Tim Prins w
setup modifying PATHs is the easier
way to go, less typing :):
http://www.open-mpi.org/faq/?category=running#mpirun-prefix
Hope this helps,
Tim
Thank You
Jody
On 7/9/07, Tim Prins wrote:
Hi Jody,
Sorry for the super long delay. I don't know how this one got lost...
I run like this al
on?
Thanks
Henk
-----Original Message-----
From: users-boun...@open-mpi.org
[mailto:users-boun...@open-mpi.org] On Behalf Of Tim Prins
Sent: 06 July 2007 15:59
To: Open MPI Users
Subject: Re: [OMPI users] openmpi fails on mx endpoint busy
Henk,
On Friday 06 July 2007 05:34:35 am SLIM H.A. wrote:
Hi Jody,
Sorry for the super long delay. I don't know how this one got lost...
I run like this all the time. Unfortunately, it is not as simple as I
would like. Here is what I do:
1. Log into the machine using ssh -X
2. Run mpirun with the following parameters:
-mca pls rsh (This makes sure
On Sunday 08 July 2007 08:22:04 pm Neville Clark wrote:
> I have openmpi installed and running, but have a need to run non mpi
> programs (3rd party software for which I don't have the source) together
> with mpi programs.
>
> Have managed to simplify the problem down to the following
>
> JobA
> in
-1.1.1
> and a listing of that directory is
>
> >ls /usr/local/Cluster-Apps/mx/mx-1.1.1
>
> bin etc include lib lib32 lib64 sbin
>
> This should be sufficient, I don't need --with-mx-libdir?
Correct.
Hope this helps,
Tim
>
> Thanks
>
> Henk
>
Hi Henk,
By specifying '--mca btl mx,self' you are telling Open MPI not to use
its shared memory support. If you want to use Open MPI's shared memory
support, you must add 'sm' to the list, i.e. '--mca btl mx,sm,self'. If you
would rather use MX's shared memory support, instead use '--mca btl
mx
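So the full command would be something like (process count and program name
are just examples):
  mpirun --mca btl mx,sm,self -np 4 ./app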
Hi Jeff,
If you submit a batch script, there is no need to do a salloc.
See the Open MPI FAQ for details on how to run on SLURM:
http://www.open-mpi.org/faq/?category=slurm
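A minimal batch script is then just (the resource options are examples):
  #!/bin/sh
  #SBATCH -N 2
  mpirun ./a.out
mpirun reads the allocation from the SLURM environment, so no -np or hostfile
is needed.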
Hope this helps.
Tim
On Wednesday 27 June 2007 14:21, Jeff Pummill wrote:
> Hey Jeff,
>
> Finally got my test nodes back
Note that since you are setting OMPI_MCA_pml to cm, OMPI_MCA_btl will have no
effect. You may try setting OMPI_MCA_pml=ob1, and trying your measurements
again, but we generally get better performance with the cm pml than with the
ob1 pml.
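That is, something like (the program name is an example):
  export OMPI_MCA_pml=ob1
  mpirun -np 2 ./app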
Tim
On Wednesday 06 June 2007 12:54:26 pm George Bosilca wr
Hi Daniel,
I am able to replicate your problem on Mandriva 2007.1; however, I'm not sure
what is going on.
I was able to build the tarball just fine though, so you may try that.
Tim
On Friday 01 June 2007 12:32:54 pm Daniel Pfenniger wrote:
> Hello,
>
> version 1.2.2 refuses to compile on Mand
Open MPI uses TCP, and does not use any fixed ports. We use whatever ports the
operating system gives us. At this time there is no way to specify what ports
to use.
Hope this helps,
Tim
On Friday 18 May 2007 05:19 am, Code Master wrote:
> I run my openmpi-based application in a multi-node clus
On Thursday 10 May 2007 07:19 pm, Code Master wrote:
> I am a newbie to Open MPI. I have just compiled a program with -g -pg (an
> MPI program with a listener thread, within which all MPI calls except
> initialization and MPI_Finalize are placed) and I run it. However
> it crashes and I can't fin
Batiment 506
>BP 167
>F - 91403 ORSAY Cedex
> Site Web: http://www.idris.fr
> **
>
> Tim Prins wrote:
> > Hi Laurent,
> >
> > Unfortunately, as far as I know, none of the current Open MPI developers
> > has a
Hi Laurent,
Unfortunately, as far as I know, none of the current Open MPI developers has
access to a system with POE, so the POE process launcher has fallen into
disrepair. Attached is a patch that should allow you to compile (however, you
may also need to add #include to pls_poe_module.c).
Of Tim Prins
Sent: Friday, March 30, 2007 10:49 PM
To: Open MPI Users
Subject: Re: [OMPI users] mca_btl_mx_init: mx_open_endpoint() failed
withstatus=20
Hi Valmor,
What is happening here is that when Open MPI tries to create an MX endpoint
for communication, MX returns code 20, which is MX_BUSY.
Hi Barry,
The problem is the line:
ncpus=`wc -l $PBS_NODEFILE`
wc will print out the file name after the count, so ncpus gets
"16 /var/spool/torque/aux//350.wc01" and your mpirun command will look like:
mpirun -np 16 /var/spool/torque/aux//350.wc01 /home/test/hpcc-1.0.0/hpcc
So mpirun will t
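The usual fix is to feed the file to wc on stdin, so there is no file name for
it to print:
  ncpus=`wc -l < $PBS_NODEFILE`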
Hi Valmor,
What is happening here is that when Open MPI tries to create an MX endpoint
for communication, MX returns code 20, which is MX_BUSY.
At this point we should gracefully move on, but there is a bug in Open MPI 1.2
which causes a segmentation fault in case of this type of error. This will
Steve,
This list is for supporting Open MPI, not MPICH2 (MPICH2 is an
entirely different software package). You should probably redirect
your question to their support lists.
Thanks,
Tim
On Mar 23, 2007, at 12:46 AM, Jeffrey Stephen wrote:
Hi,
I am trying to run an MPICH2 application
Geoff,
'cpu', 'slots', and 'count' all do exactly the same thing.
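So these three hostfile lines are interchangeable:
  mybox cpu=4
  mybox slots=4
  mybox count=4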
Tim
On Thursday 22 March 2007 03:03 pm, Geoff Galitz wrote:
> Does the hostfile understand the syntax:
>
> mybox cpu=4
>
> I have some legacy code and scripts that I'd like to move without
> modifying if possible. I understand th
Well that's not a good thing. I have filed a bug about this
(https://svn.open-mpi.org/trac/ompi/ticket/954) and will try to look into it
soon, but don't know when it will get fixed.
Thanks for bringing this to our attention!
Tim
On Mar 20, 2007, at 1:39 AM, Bill Saphir wrote:
If you ask
David,
Have you tried something like
mpirun -np 1 --host talisker4 hostname
If that hangs, try adding '--debug-daemons' to the command line and
see if the output from that helps. If not, please send the output to
the list.
Thanks,
Tim
On Mar 19, 2007, at 1:59 AM, David Burns wrote:
I
Bala,
This is a known problem with the 1.1 series. The bad news is that I
know of no fix for this, though many people work around this problem
by running a cleanup script after each unclean run. The good news is
that the 1.2 series is MUCH better, though still not perfect. I would
suggest
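Such a cleanup script is typically just a loop over the nodes that kills any
leftover daemons, along these lines (the hostfile name is an assumption):
  for h in `cat hostfile`; do ssh $h pkill -u $USER orted; done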
Never mind, I was just able to replicate it. I'll look into it.
Tim
On Mar 5, 2007, at 4:26 PM, Tim Prins wrote:
That is possible. Threading support is VERY lightly tested, but I
doubt it is the problem since it always fails after 31 spawns.
Again, I have tried with these configure op
That is possible. Threading support is VERY lightly tested, but I
doubt it is the problem since it always fails after 31 spawns.
Again, I have tried with these configure options and the same version
of Open MPI and still have not been able to replicate this (after
letting it spawn over 500
mpi_info in the file ompi_info.txt.
-----Original Message-----
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org]
On Behalf Of Tim Prins
Sent: Thursday, 1 March 2007 05:45
To: Open MPI Users
Subject: Re: [OMPI users] MPI_Comm_Spawn
I have tried to reproduce this but cannot
I have tried to reproduce this but cannot. I have been able to run your test
program to over 100 spawns. So I can track this further, please send the
output of ompi_info.
Thanks,
Tim
On Tuesday 27 February 2007 10:15 am, rozzen.vinc...@fr.thalesgroup.com wrote:
> Do you know if there is a limi
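For anyone trying to reproduce this, a repeated-spawn test along the lines
being discussed might look like (a sketch; the child binary name is made up):
  #include <mpi.h>
  #include <stdio.h>
  int main(int argc, char **argv)
  {
      int i;
      MPI_Init(&argc, &argv);
      /* spawn and disconnect repeatedly; the reported failure was
         after 31 spawns */
      for (i = 0; i < 100; i++) {
          MPI_Comm child;
          MPI_Comm_spawn("./child", MPI_ARGV_NULL, 1, MPI_INFO_NULL, 0,
                         MPI_COMM_SELF, &child, MPI_ERRCODES_IGNORE);
          MPI_Comm_disconnect(&child);
          printf("spawn %d ok\n", i);
      }
      MPI_Finalize();
      return 0;
  }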
20 PM, Arif Ali wrote:
Tim Prins wrote:
Hi Arif,
This is a problem with libtool and the IBM compilers and shared
libraries. The easiest thing to do is to build static libraries
instead by passing "--disable-shared --enable-static" to configure.
I am currently unaware of any work
Hi Arif,
This is a problem with libtool and the IBM compilers and shared
libraries. The easiest thing to do is to build static libraries
instead by passing "--disable-shared --enable-static" to configure.
I am currently unaware of any workarounds to make compiling shared
libraries work wi
s, as these are used for
our internal administrative messaging and we currently require it to be
there.
Thanks,
Tim Prins
On Tuesday 21 November 2006 07:49 pm, Adam Moody wrote:
> Hello,
> We have some clusters which consist of a large pool of 8-way nodes
> connected via ethernet. On th
Hi Martin,
Yeah, we appear to have some mistakes in the configuration macros. I
will correct them, but they really should not be affecting things in
this instance.
Whether Open MPI expects a 32-bit or 64-bit library depends on the
compiler. If your compiler generates 64-bit executables by
Quoting Toon Knapen:
> Tim Prins wrote:
>
> > I am in the process of developing MorphMPI and have designed my
> > implementation a bit different than what you propose (my apologies
> if I
> > misunderstood what you have said). I am creating one main library,
> wh
Toon,
> We are planning to develop a MorphMPI library. As explained a bit
> higher
> up in this thread, the MorphMPI library will be used while *compiling*
> the app. The library that implements the MorphMPI calls will be linked
> with dynamically. The MorphMPI on its turn links with some specific