Sorry for the incredibly late reply. Hopefully, you have already managed to
find the answer.
I'm not sure what your comm_spawn command looks like, but it appears you
specified the host in it using the "dash_host" info key, yes? The problem is
that this is interpreted the same way as the "-host" option read by Open MPI.
Is this correct?
Thanks,
Kurt
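For the archives, here is a minimal sketch of passing a placement hint to MPI_Comm_spawn via an info key (the hostname "node01" and the "./worker" binary are placeholders; the exact set of accepted keys varies by Open MPI version, so treat this as illustrative, not authoritative):

```c
#include <mpi.h>

int main(int argc, char *argv[])
{
    MPI_Comm child;
    MPI_Info info;
    int errcodes[2];

    MPI_Init(&argc, &argv);

    /* Ask the runtime to place the spawned processes on a specific node.
     * The value is interpreted like mpirun's -host option, which is why
     * "all nodes already filled" errors can appear if the slot accounting
     * differs from what you expect. */
    MPI_Info_create(&info);
    MPI_Info_set(info, "host", "node01");   /* placeholder hostname */

    MPI_Comm_spawn("./worker", MPI_ARGV_NULL, 2, info,
                   0, MPI_COMM_SELF, &child, errcodes);

    MPI_Info_free(&info);
    MPI_Comm_disconnect(&child);
    MPI_Finalize();
    return 0;
}
```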
From: Ralph Castain <r...@open-mpi.org>
Subject: [EXTERNAL] Re: [OMPI users] MPI_Comm_Spawn failure: All nodes already
filled
I'm afraid I cannot replicate this problem on OMPI master, so it could be
something different about OMPI 4.0.1 or your environment. Can you download and
test one of the nightly tarballs from the "master" branch and see if it works
for you?
https://www.open-mpi.org/nightly/master/
Ralph
If you’re reporting a bug and have a reproducer, I recommend creating a
GitHub issue, and only posting on the user list if you don’t get the
attention you want there.
Best,
Jeff
On Sat, Mar 16, 2019 at 1:16 PM Thomas Pak
wrote:
> To: Open MPI Users
> Cc: Open MPI Users
> Subject: Re: [OMPI users] MPI_Comm_spawn leads to pipe leak and other errors
>
> Dear Jeff,
>
> I did find a way to circumvent this issue for my specific application by
> spawning less frequently. However, I wanted to at least bring attention to
> this issue for the OpenMPI community, as it can be reproduced with an
> alarmingly simple program.
FWIW: I just ran a cycle of 10,000 spawns on my Mac without a problem using
OMPI master, so I believe this has been resolved. I don’t know if/when the
required updates might come into the various release branches.
Ralph
Dear Jeff,
I did find a way to circumvent this issue for my specific application by
spawning less frequently. However, I wanted to at least bring attention to this
issue for the OpenMPI community, as it can be reproduced with an alarmingly
simple program.
Perhaps the user's mailing list is not
Is there perhaps a different way to solve your problem that doesn’t spawn
so much as to hit this issue?
I’m not denying there’s an issue here, but in a world of finite human
effort and fallible software, sometimes it’s easiest to just avoid the bugs
altogether.
Jeff
On Sat, Mar 16, 2019 at
Dear all,
Does anyone have any clue on what the problem could be here? This seems to be a
persistent problem present in all currently supported OpenMPI releases and
indicates that there is a fundamental flaw in how OpenMPI handles dynamic
process creation.
Best wishes,
Thomas Pak
From:
Andrew,
the 2 seconds timeout is very likely a bug that was fixed, so I strongly
suggest you try the latest 2.0.2 that was released earlier this
week.
Ralph is referring to another timeout which is hard-coded (FWIW, the MPI
standard says nothing about timeouts, so we hardcoded one to
We know v2.0.1 has problems with comm_spawn, and so you may be encountering one
of those. Regardless, there is indeed a timeout mechanism in there. It was
added because people would execute a comm_spawn, and then would hang and eat up
their entire allocation time for nothing.
In v2.0.2, I see
I am using Open MPI version 2.0.1.
___
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users
What version of OMPI are you using?
> On Jan 31, 2017, at 7:33 AM, elistrato...@info.sgu.ru wrote:
>
> Hi,
>
> I am trying to write trivial master-slave program. Master simply creates
> slaves, sends them a string, they print it out and exit. Everything works
> just fine, however, when I add a
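For anyone following along, a master-slave skeleton of the kind described here might look as follows (the file names, the "./slave" path, and the buffer sizes are illustrative; both files need to be compiled with mpicc and the master launched under mpirun):

```c
/* master.c -- spawns slaves and sends each one a string (sketch) */
#include <mpi.h>
#include <string.h>

int main(int argc, char *argv[])
{
    MPI_Comm children;
    const char msg[] = "hello from the master";
    int i, nslaves = 2, errcodes[2];

    MPI_Init(&argc, &argv);
    MPI_Comm_spawn("./slave", MPI_ARGV_NULL, nslaves, MPI_INFO_NULL,
                   0, MPI_COMM_SELF, &children, errcodes);

    /* Ranks of the remote group are addressed through the intercommunicator. */
    for (i = 0; i < nslaves; i++)
        MPI_Send(msg, (int)strlen(msg) + 1, MPI_CHAR, i, 0, children);

    MPI_Comm_disconnect(&children);
    MPI_Finalize();
    return 0;
}

/* slave.c -- receives the string from the parent and prints it (sketch) */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    MPI_Comm parent;
    char buf[64];

    MPI_Init(&argc, &argv);
    MPI_Comm_get_parent(&parent);   /* intercommunicator back to the master */
    MPI_Recv(buf, sizeof buf, MPI_CHAR, 0, 0, parent, MPI_STATUS_IGNORE);
    printf("slave got: %s\n", buf);
    MPI_Comm_disconnect(&parent);
    MPI_Finalize();
    return 0;
}
```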
PSM_DEVICES -> TrueScale
From: users [mailto:users-boun...@lists.open-mpi.org] On Behalf Of
r...@open-mpi.org
Sent: Thursday, September 29, 2016 7:12 AM
To: Open MPI Users <users@lists.open-mpi.org>
Subject: Re: [OMPI users] MPI_Comm_spawn
Ah, that may be why it wouldn’t show up in the OMPI code base itself. If that
is the case here, then no - OMPI v2.0.1 does not support comm_spawn for PSM. It
is fixed in the upcoming 2.0.2
> On Sep 29, 2016, at 6:58 AM, Gilles Gouaillardet
> wrote:
Ralph,
My guess is that ptl.c comes from PSM lib ...
Cheers,
Gilles
On Thursday, September 29, 2016, r...@open-mpi.org wrote:
Spawn definitely does not work with srun. I don’t recognize the name of the
file that segfaulted - what is “ptl.c”? Is that in your manager program?
Hi,
I do not expect spawn to work with direct launch (e.g. srun).
Do you have PSM (e.g. InfiniPath) hardware? That could be linked to the
failure.
Can you please try
mpirun --mca pml ob1 --mca btl tcp,sm,self -np 1 --hostfile my_hosts
./manager 1
and see if it helps?
Note if you have the
Hi Gilles,
Thanks for your answer.
BR,
Radek
This is a known limitation of the sm btl.
FWIW, the vader btl (available in Open MPI 1.8) has the same limitation,
though I heard there is some work in progress to get rid of this
limitation.
Cheers,
Gilles
On 5/14/2015 3:52 PM, Radoslaw Martyniszyn wrote:
Dear developers of Open MPI,
> "George" == George Bosilca writes:
George> Why are you using system() the second time? As you want
George> to spawn an MPI application, calling MPI_Comm_spawn would
George> make everything simpler.
Yes, this works! Very good trick... The system routine
Why are you using system() the second time? As you want to spawn an MPI
application, calling MPI_Comm_spawn would make everything simpler.
George
On Jul 3, 2014 4:34 PM, "Milan Hodoscek" wrote:
>
> Hi,
>
> I am trying to run the following setup in fortran without much
>
Unfortunately, that has never been supported. The problem is that the embedded
mpirun picks up all those MCA params that were provided to the original
application process, and gets hopelessly confused. We have tried in the past to
figure out a solution, but it has proved difficult to separate
Funny, but I couldn't find the code path that supported that in the latest 1.6
series release (didn't check earlier ones) - but no matter, it seems logical
enough. Fixed in the trunk and cmr'd to 1.7.4
Thanks!
Ralph
On Dec 19, 2013, at 8:08 PM, Tim Miller wrote:
> Hi
Hi Ralph,
That's correct. All of the original processes see the -x values, but
spawned ones do not.
Regards,
Tim
On Thu, Dec 19, 2013 at 6:09 PM, Ralph Castain wrote:
>
> On Dec 19, 2013, at 2:57 PM, Tim Miller wrote:
>
> > Hi All,
> >
> > I have a
On Dec 19, 2013, at 2:57 PM, Tim Miller wrote:
> Hi All,
>
> I have a question similar (but not identical to) the one asked by Tom Fogel a
> week or so back...
>
> I have a code that uses MPI_Comm_spawn to launch different processes. The
> executables for these use
One further point that I missed in my earlier note: if you are starting the
parent as a singleton, then you are fooling yourself about the "without mpirun"
comment. A singleton immediately starts a local daemon to act as mpirun so that
comm_spawn will work. Otherwise, there is no way to launch
On 6/16/2012 8:03 AM, Roland Schulz wrote:
Hi,
I would like to start a single process without mpirun and then use
MPI_Comm_spawn to start up as many processes as required. I don't want
the parent process to take up any resources, so I tried to disconnect
the inter communicator and then
I'm afraid there is no option to keep the job alive if the parent exits. I
could give you several reasons for that behavior, but the bottom line is that
it can't be changed.
Why don't you have the parent loop across "sleep", waking up periodically to
check for a "we are done" message from a
Try using MPI_COMM_REMOTE_SIZE to get the size of the remote group in an
intercommunicator. MPI_COMM_SIZE returns the size of the local group.
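In code, on the intercommunicator returned by MPI_Comm_spawn, the two calls look like this (sketch; "./worker" is a placeholder binary):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    MPI_Comm child;
    int local_size, remote_size, errcodes[4];

    MPI_Init(&argc, &argv);
    MPI_Comm_spawn("./worker", MPI_ARGV_NULL, 4, MPI_INFO_NULL,
                   0, MPI_COMM_SELF, &child, errcodes);

    MPI_Comm_size(child, &local_size);          /* size of the LOCAL group: 1 here */
    MPI_Comm_remote_size(child, &remote_size);  /* size of the REMOTE group: 4 here */
    printf("local %d, remote %d\n", local_size, remote_size);

    MPI_Comm_disconnect(&child);
    MPI_Finalize();
    return 0;
}
```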
On Jan 7, 2011, at 6:22 PM, Pierre Chanial wrote:
> Hello,
>
> When I run this code:
>
> program testcase
>
> use mpi
> implicit none
>
>
> "Ralph" == Ralph Castain writes:
Ralph> On Oct 4, 2010, at 10:36 AM, Milan Hodoscek wrote:
>>> "Ralph" == Ralph Castain writes:
>>
Ralph> I'm not sure why the group communicator would make a
Ralph> difference - the code area
On Oct 4, 2010, at 10:36 AM, Milan Hodoscek wrote:
>> "Ralph" == Ralph Castain writes:
>
>Ralph> I'm not sure why the group communicator would make a
>Ralph> difference - the code area in question knows nothing about
>Ralph> the mpi aspects of the job. It
> "Ralph" == Ralph Castain writes:
Ralph> I'm not sure why the group communicator would make a
Ralph> difference - the code area in question knows nothing about
Ralph> the mpi aspects of the job. It looks like you are hitting a
Ralph> race condition that
I'm not sure why the group communicator would make a difference - the code area
in question knows nothing about the mpi aspects of the job. It looks like you
are hitting a race condition that causes a particular internal recv to not
exist when we subsequently try to cancel it, which generates
Hi Ralph,
I have confirmed that openmpi-1.4a1r22335 works with my master, slave
example. The temporary directories are cleaned up properly.
Thanks for the help!
nick
On Thu, Dec 17, 2009 at 13:38, Nicolas Bock wrote:
> Ok, I'll give it a try.
>
> Thanks, nick
>
>
>
>
In case you missed it, this patch should be in the 1.4 nightly tarballs - feel
free to test and let me know what you find.
Thanks
Ralph
On Dec 2, 2009, at 10:06 PM, Nicolas Bock wrote:
> That was quick. I will try the patch as soon as you release it.
>
> nick
>
>
> On Wed, Dec 2, 2009 at
Nicolas Bock wrote:
On Fri, Dec 4, 2009 at 10:29, Eugene Loh wrote:
I think you might observe a world of difference if the master issued some
non-blocking call and then intermixed MPI_Test calls with sleep calls. You
should see *much* more subservient behavior.
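A sketch of that pattern: post a nonblocking receive, then alternate MPI_Test with short sleeps so the master actually releases the CPU instead of busy-waiting (the tag, message type, and 10 ms interval are arbitrary choices here):

```c
#include <mpi.h>
#include <unistd.h>   /* usleep */

/* Wait for an int from the slaves without spinning at 100% CPU:
 * MPI_Recv/MPI_Barrier poll aggressively, whereas this loop sleeps
 * between MPI_Test calls and only burns a few cycles per wakeup. */
void wait_for_result(MPI_Comm intercomm, int *result)
{
    MPI_Request req;
    int done = 0;

    MPI_Irecv(result, 1, MPI_INT, MPI_ANY_SOURCE, 0, intercomm, &req);
    while (!done) {
        MPI_Test(&req, &done, MPI_STATUS_IGNORE);
        if (!done)
            usleep(10000);  /* 10 ms; tune to your latency tolerance */
    }
}
```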
Nicolas Bock wrote:
On Fri, Dec 4, 2009 at 10:10, Eugene Loh wrote:
Yield helped, but not as effectively as one might have imagined.
Yes, that's the impression I get as well, the master process might be
yielding, but it doesn't appear to be a lot.
You used it correctly. Remember, all that cpu number is telling you is the
percentage of use by that process. So bottom line is: we are releasing it as
much as we possibly can, but no other process wants to use the cpu, so we go
ahead and use it.
If any other process wanted it, then the
On Fri, Dec 4, 2009 at 08:03, Ralph Castain wrote:
>
>
> It is polling at the barrier. This is done aggressively by default for
> performance. You can tell it to be less aggressive if you want via the
> yield_when_idle mca param.
>
>
How do I use this parameter correctly? I
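For the archives: the parameter is passed like any other MCA parameter (the name below, mpi_yield_when_idle, is the one used in the 1.3 series; double-check with `ompi_info --param mpi all` on your own version):

```shell
# Ask OMPI to yield the CPU while a process is idle at a barrier/recv
mpirun --mca mpi_yield_when_idle 1 -np 2 ./master

# Equivalent environment-variable form
OMPI_MCA_mpi_yield_when_idle=1 mpirun -np 2 ./master
```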
On Dec 4, 2009, at 7:46 AM, Nicolas Bock wrote:
> Hello list,
>
> when I run the attached example, which spawns a "slave" process with
> MPI_Comm_spawn(), I see the following:
>
> nbock19911 0.0 0.0 53980 2288 pts/0S+ 07:42 0:00
> /usr/local/openmpi-1.3.4-gcc-4.4.2/bin/mpirun
That was quick. I will try the patch as soon as you release it.
nick
On Wed, Dec 2, 2009 at 21:06, Ralph Castain wrote:
> Patch is built and under review...
>
> Thanks again
> Ralph
>
Patch is built and under review...
Thanks again
Ralph
On Dec 2, 2009, at 5:37 PM, Nicolas Bock wrote:
Thanks
On Wed, Dec 2, 2009 at 17:04, Ralph Castain wrote:
> Yeah, that's the one all right! Definitely missing from 1.3.x.
>
> Thanks - I'll build a patch for the next bug-fix release
>
>
> On Dec 2, 2009, at 4:37 PM, Abhishek Kulkarni wrote:
>
> > On Wed, Dec 2, 2009 at 5:00
On Wed, Dec 2, 2009 at 14:23, Ralph Castain wrote:
> Hmm... if you are willing to keep trying, could you perhaps let it run for
> a brief time, ctrl-z it, and then do an ls on a directory from a process
> that has already terminated? The pids will be in order, so just look for
You may want to check your limits as defined by the shell/system. I can also
run this for as long as I'm willing to let it run, so something else appears to
be going on.
On Dec 1, 2009, at 4:38 PM, Nicolas Bock wrote:
>
>
> On Tue, Dec 1, 2009 at 16:28, Abhishek Kulkarni
On Tue, Dec 1, 2009 at 16:28, Abhishek Kulkarni wrote:
> On Tue, Dec 1, 2009 at 6:15 PM, Nicolas Bock
> wrote:
> > After reading Anthony's question again, I am not sure now that we are
> having
> > the same problem, but we might. In any case, the
On Tue, Dec 1, 2009 at 6:15 PM, Nicolas Bock wrote:
> After reading Anthony's question again, I am not sure now that we are having
> the same problem, but we might. In any case, the attached example programs
> trigger the issue of running out of pipes. I don't see how orted
Linux mujo 2.6.30-gentoo-r5 #1 SMP PREEMPT Thu Sep 17 07:47:12 MDT 2009
x86_64 Intel(R) Core(TM)2 Quad CPU Q8200 @ 2.33GHz GenuineIntel GNU/Linux
On Tue, Dec 1, 2009 at 16:24, Ralph Castain wrote:
Sorry,
openmpi-1.3.3 compiled with gcc-4.4.2
nick
On Tue, Dec 1, 2009 at 16:24, Ralph Castain wrote:
> It really does help if we have some idea what OMPI version you are talking
> about, and on what kind of platform.
>
> This issue was fixed to the best of my knowledge
It really does help if we have some idea what OMPI version you are talking
about, and on what kind of platform.
This issue was fixed to the best of my knowledge (not all the pipes were
getting closed), but I would have to look and see what release might contain
the fix...would be nice to know
After reading Anthony's question again, I am not sure now that we are having
the same problem, but we might. In any case, the attached example programs
trigger the issue of running out of pipes. I don't see how orted could, even
if it was reused. There is only a very limited number of processes
Thanks for the info.
meanwhile I have set:
mpi_param_check = 0
in my system-wide configuration file on workers
and
mpi_param_check = 1
on the master.
Jerome
Thanks! That does indeed help clarify.
You should also then configure OMPI with --disable-per-user-config-
files. MPI procs will automatically look at the default MCA parameter
file, which is probably on your master node (wherever mpirun was
executed). However, they also look at the user's
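Concretely, that setup might look like the following (the install prefix is a placeholder; the params file path shown is the usual default under the prefix):

```shell
# Build OMPI so per-user ~/.openmpi/mca-params.conf files are ignored
./configure --prefix=/opt/openmpi --disable-per-user-config-files
make install

# System-wide MCA defaults then come only from the installation's file,
# e.g. on the node where mpirun executes:
#   /opt/openmpi/etc/openmpi-mca-params.conf
# with lines such as:
#   mpi_param_check = 1
```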
Hi,
thanks for the reply.
The orteds don't pass anything from MPI_Info to srun during a
comm_spawn. What the orteds do is to chdir to the specified wdir
before spawning the child process to ensure that the child has the
correct working directory, then the orted changes back to its default
working directory.
The
Hi !
finally I got it:
passing the mca key/value `"plm_slurm_args"/"--chdir /local/folder"' does the
trick.
As a matter of fact, my code passes the MPI_Info key/value
`"wdir"/"/local/folder"'
to MPI_Comm_spawn as well: the working directories on the nodes of the spawned
programs
are
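For reference, the MPI_Info part of that looks like the sketch below ("/local/folder" and "./prog" stand in for the real paths). Note this sets the working directory of the spawned program itself; the orted's own cwd is a separate matter, hence the plm_slurm_args workaround above:

```c
#include <mpi.h>

int main(int argc, char *argv[])
{
    MPI_Comm child;
    MPI_Info info;
    int errcode;

    MPI_Init(&argc, &argv);
    MPI_Info_create(&info);
    /* Standard MPI info key: working directory for the spawned process */
    MPI_Info_set(info, "wdir", "/local/folder");

    MPI_Comm_spawn("./prog", MPI_ARGV_NULL, 1, info,
                   0, MPI_COMM_SELF, &child, &errcode);

    MPI_Info_free(&info);
    MPI_Comm_disconnect(&child);
    MPI_Finalize();
    return 0;
}
```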
Hello Again,
Jerome BENOIT wrote:
Hello List,
I have just noticed that, when MPI_Comm_spawn is used to launch programs
around,
the orted working directory on the nodes is the working directory of the
spawning program:
can we ask orted to use another directory?
Changing the working
Hi Joao,
Unfortunately, spawn is broken on the development trunk right now. We
are working on a major revamp of the runtime system which should fix
these problems, but it is not ready yet.
Sorry about that :(
Tim
Joao Vicente Lima wrote:
Hi all,
I'm getting errors with spawn in the
ng it spawn over 500 times). Have you been able to try a more
>> recent version of Open MPI? What kind of system is it? How many nodes
>> are you running on?
>>
>> Tim
>>
>> On Mar 5, 2007, at 1:21 PM, rozzen.vinc...@fr.thalesgroup.com wrote:
>>
/local/Mpi/openmpi-1.1.4-noBproc-noThread/lib/openmpi/mca_rmgr_urm.so
>> #9 0x4004f277 in orte_rmgr_base_cmd_dispatch () from
>> /usr/local/Mpi/openmpi-1.1.4-noBproc-noThread/lib/liborte.so.0
>> #10 0x402b10ae in orte_rmgr_urm_recv () from
>> /usr/local/Mpi/op
) at main.c:13
(gdb)
-----Original Message-----
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On behalf
of Tim Prins
Sent: Monday, March 5, 2007 22:34
To: Open MPI Users
Subject: Re: [OMPI users] MPI_Comm_Spawn
Never mind, I was just able to replicate it. I'll lo
-----Original Message-----
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On behalf
of Ralph H Castain
Sent: Tuesday, February 27, 2007 16:26
To: Open MPI Users <us...@open-mpi.org>
Subject: Re: [OMPI users] MPI_Comm_Spawn
Here is attached the output of ompi_info in the file ompi_info.txt.
-----Original Message-----
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On behalf
of Tim Prins
Sent: Thursday, March 1, 2007 05:45
To: Open MPI Users
Subject: Re: [OMPI users] MPI_Comm_Spawn
I have tr
Now that's interesting! There shouldn't be a limit, but to be honest, I've
never tested that mode of operation - let me look into it and see. It sounds
like there is some counter that is overflowing, but I'll look.
Thanks
Ralph
On 2/27/07 8:15 AM, "rozzen.vinc...@fr.thalesgroup.com"
tion phase or if there is really an
> incompatibility problem in Open MPI.
>
> Thank you so much for all your support, even if it is not successful yet.
>
> Regards.
>
> Herve
>
> Date: Fri, 03 Nov 2006 14:10:20 -0700
> From: Ralph H Castain <r...@lanl.gov>
> Subj
e on my side now.
>
> You proposed to concoct something over the next few days. I look forward to
> hearing from you.
>
> Regards.
>
> Herve
>
>
>
> Date: Tue, 31 Oct 2006 06:53:53 -0700
> From: Ralph H Castain <r...@lanl.gov>
> Subject:
nor do I have one for
comm_spawn_multiple that uses the "host" field. I can try to concoct
something over the next few days, though, and verify that our code is
working correctly.
>
> Regards.
>
> Herve
>
> Date: Mon, 30 Oct 2006 09:00:47 -0700
> From: Ralph H C
Thank you for the diagnosis.
Saadat.
On 7/6/06, Ralph Castain wrote:
Hi Saadat
That's the problem, then – you need to run comm_spawn applications using
mpirun, I'm afraid. We plan to fix this in the near future, but for now we
can only offer that workaround.
Ralph
On
Ralph:
I am running the application without mpirun, i.e. ./foobar. So, according to
you definition of singleton above, I am calling comm_spawn from a singleton.
Thanks.
Saadat.
On 7/6/06, Ralph Castain wrote:
Thanks Saadat
Could you clarify how you are running this
Ralph:
I am using Fedora Core 4 (Linux turkana 2.6.12-1.1390_FC4smp #1 SMP Tue Jul
5 20:21:11 EDT 2005 i686 athlon i386 GNU/Linux). The machine is a dual
processor Athlon based machine. No, cluster resource manager, just an
rsh/ssh based setup.
Thanks.
Saadat.
On 7/6/06, Ralph H Castain
Hi Saadat
Could you tell us something more about the system you are using? What type
of processors, operating system, any resource manager (e.g., SLURM, PBS),
etc?
Thanks
Ralph
On 7/6/06 10:49 AM, "s anwar" wrote:
> Good Day:
>
> I am getting the following error messages
I see responses to noncritical parts of my discussion but not the
following. Is it a known issue, a fixed issue, or a we-don't-want-to-
discuss-it issue?
Michael
On Mar 7, 2006, at 4:39 PM, Michael Kluskens wrote:
The following errors/warnings also exist when running my spawn test
on a clean
> -Original Message-
> > [-:13327] mca: base: component_find: unable to open: dlopen(/usr/
> > local/lib/openmpi/mca_pml_teg.so, 9): Symbol not found:
> > _mca_ptl_base_recv_request_t_class
> >Referenced from: /usr/local/lib/openmpi/mca_pml_teg.so
> >Expected in: flat namespace
>
On Mar 7, 2006, at 3:23 PM, Michael Kluskens wrote:
Per the mpi_comm_spawn issues with the 1.0.x releases I started using
1.1r9212, with my sample code I'm getting a messages of
[-:13327] mca: base: component_find: unable to open: dlopen(/usr/
local/lib/openmpi/mca_pml_teg.so, 9): Symbol not
These errors indicate a mismatch between the components and the
MPI library. Please remove the installation directory and do a make
install again in the Open MPI directory. Somewhere around
revision 9100 we removed some of the components (teg included). This
error says that there is
On Mar 7, 2006, at 8:45 AM, Michael Kluskens wrote:
I will begin using the 1.1 snapshot as soon as I see r9198 or
higher. The dynamic process creation is critical to my project and I
need the full range of features that have been discussed on the list
recently regarding the MPI_SPAWN.
I would
Per the mpi_comm_spawn issues with the 1.0.x releases I started using
1.1r9212, with my sample code I'm getting a messages of
[-:13327] mca: base: component_find: unable to open: dlopen(/usr/
local/lib/openmpi/mca_pml_teg.so, 9): Symbol not found:
_mca_ptl_base_recv_request_t_class
Michael --
Sorry for the delay in replying.
Many thanks for your report! You are exactly right -- our types are
wrong and will not match in the F90 bindings. I have committed a fix
to the trunk for this (it involved changing some types in mpif.h and
adding another interface function for
On Mar 1, 2006, at 9:56 AM, George Bosilca wrote:
Now I look into this problem more and you're right, it's a missing
interface. Somehow, it didn't get compiled.
From "openmpi-1.0.1/ompi/mpi/f90/mpi-f90-interfaces.h" the interface
says:
subroutine MPI_Comm_spawn(command, argv, maxprocs,
On Mar 1, 2006, at 8:59 AM, Michael Kluskens wrote:
I'm sorry I don't understand what you are saying. Are you saying
that when using "free source form" Fortran 90 code that the default
line length of 132 characters is ignored when compiling MPI function
calls? I know for a fact this is not