The C/R Debugging feature (the ability to do reversible debugging or
backward stepping with gdb and/or DDT) was added on 8/10/2010 in the commit
below:
https://svn.open-mpi.org/trac/ompi/changeset/23587
This feature never made it into a release so it was only ever available on
the trunk.
When we started adding Checkpoint/Restart functionality to Open MPI,
we were hoping to provide a LAM/MPI-like interface to the C/R
functionality. So we added a configure option as a placeholder. The
'LAM' option was intended to help those transitioning from LAM/MPI to
Open MPI. However we never
I wonder if this is related to memory pinning. Can you try turning off
the leave pinned, and see if the problem persists (this may affect
performance, but should avoid the crash):
mpirun ... --mca mpi_leave_pinned 0 ...
Also it looks like Smoky has a slightly newer version of the 1.4
branch
There should not be any issue in checkpointing a C++ vs. a C program
using the 'self' checkpointer. The self checkpointer just looks for a
particular function name to be present in the compiled program binary.
Something to try is to run 'nm' on the compiled C++ program and make
sure that the 'self'
There are some great comments in this thread. Process migration (like
many topics in systems) can get complex fast.
The Open MPI process migration implementation is checkpoint/restart
based (currently using BLCR), and uses an 'eager' style of migration.
This style of migration stops a process
That seems like a bug to me.
What version of Open MPI are you using? How have you setup the C/R
functionality (what MCA options do you have set, what command line
options are you using)? Can you send a small reproducing application
that we can test against?
That should help us focus in on the
Though I do not share George's pessimism about acceptance by the Open
MPI community, it has been somewhat difficult to add such a
non-standard feature to the code base for various reasons.
At ORNL, I have been developing a prototype for the MPI Forum Fault
Tolerance Working Group [1] of the
It sounds like there is a race happening in the shutdown of the
processes. I wonder if the app is shutting down in a way that mpirun
does not quite like.
I have not tested the C/R functionality in the 1.4 series in a long
time. Can you give it a try with the 1.5 series, and see if there is
any
I'll preface my response with the note that I have not tried any of
those options with the C/R functionality. It should just work, but I
am not 100% certain. If it doesn't, let me know and I'll file a bug to
fix it.
You can pass any mpirun option through ompi-restart by using the
--mpirun_opts
That command line option may be only available on the trunk. What
version of Open MPI are you using?
-- Josh
On Tue, Oct 18, 2011 at 11:14 AM, Faisal Shahzad wrote:
> Hi,
> Thank you for your reply.
> I actually do not see option flag '--mpirun_opts' with 'ompi-restart
>
That option is only available on the trunk at the moment. I filed a
ticket to move the functionality to the 1.5 branch:
https://svn.open-mpi.org/trac/ompi/ticket/2890
The workaround would be to take the appfile generated from
"ompi-restart --apponly ompi_snapshot...", and then run mpirun with
Open MPI (trunk/1.7 - not 1.4 or 1.5) provides an application level
interface to request a checkpoint of an application. This API is
defined on the following website:
http://osl.iu.edu/research/ft/ompi-cr/api.php#api-cr_checkpoint
This will behave the same as if you requested the checkpoint of
On Wed, Oct 26, 2011 at 3:25 AM, Josh Hursey <jjhur...@open-mpi.org> wrote:
I wonder if the try_compile step is failing. Can you send a compressed
copy of your config.log from this build?
-- Josh
On Mon, Oct 31, 2011 at 10:04 AM, wrote:
> Hi !
>
> I am trying to compile openmpi 1.4.4 with Torque, Infiniband and blcr
> checkpoint support on
The MPI standard does not provide explicit support for process
migration. However, some MPI implementations (including Open MPI) have
integrated such support based on checkpoint/restart functionality. For
more information about the checkpoint/restart process migration
functionality in Open MPI see
Note that the "migrate me from my current node to node " scenario
is covered by the migration API exported by the C/R infrastructure, as
I noted earlier.
http://osl.iu.edu/research/ft/ompi-cr/api.php#api-cr_migrate
The "move rank N to node " scenario could probably be added as an
extension of
For MPI_Comm_split, all processes in the input communicator (oldcomm
or MPI_COMM_WORLD in your case) must call the operation since it is
collective over the input communicator. In your program rank 0 is not
calling the operation, so MPI_Comm_split is waiting for it to
participate.
If you want
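A minimal sketch of the corrected pattern (plain standard MPI; compile with mpicc and launch with mpirun): every rank in the input communicator makes the call, and a rank that should not belong to any new communicator passes MPI_UNDEFINED as the color and receives MPI_COMM_NULL:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank;
    MPI_Comm newcomm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Collective over MPI_COMM_WORLD: every rank must make this call,
       or the others will block waiting for the missing participant. */
    int color = (rank == 0) ? MPI_UNDEFINED : rank % 2;
    MPI_Comm_split(MPI_COMM_WORLD, color, rank, &newcomm);

    if (newcomm == MPI_COMM_NULL)
        printf("rank %d opted out of the split\n", rank);
    else
        MPI_Comm_free(&newcomm);

    MPI_Finalize();
    return 0;
}
```

Run with, e.g., mpirun -np 4 ./split_example.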
Often this type of problem is due to the 'prelink' option in Linux.
BLCR has a FAQ item that discusses this issue and how to resolve it:
https://upc-bugs.lbl.gov/blcr/doc/html/FAQ.html#prelink
I would give that a try. If that does not help then you might want to
try checkpointing a single
I have not tried to support an MTL with the checkpointing functionality, so
I do not have first hand experience with those - just the OB1/BML/BTL stack.
The difficulty in porting to a new transport is really a function of how
the transport interacts with the checkpointer (e.g., BLCR). The draining
Currently Open MPI only supports the checkpointing of the whole
application. There has been some work on uncoordinated checkpointing with
message logging, though I do not know the state of that work with regards
to availability. That work has been undertaken by the University of
Tennessee
library to annotate what's important to
>> store, and how to do so, etc.). But if you're writing the application,
>> you're better off to handle it internally, than externally.
>>
>> Lloyd Brown
>> Systems Administrator
>> Fulton Supercomputing Lab
>> Brigham Y
That behavior is permitted by the MPI 2.2 standard. It seems that our
documentation is incorrect in this regard. I'll file a bug to fix it.
Just to clarify, in the MPI 2.2 standard in Section 6.4.2 (Communicator
Constructors) under MPI_Comm_create it states:
"Each process must call with a group
tensen <je...@fysik.dtu.dk>
> On 20-01-2012 15:26, Josh Hursey wrote:
Well that is awfully insistent. I have been able to reproduce the problem.
Upon initial inspection I don't see the bug, but I'll dig into it today and
hopefully have a patch in a bit. Below is a ticket for this bug:
https://svn.open-mpi.org/trac/ompi/ticket/2980
I'll let you know what I find
It looks like Jeff beat me to it. The problem was a missing 'test' in
the configure script. I'm not sure how it crept in there, but the fix is
in the pipeline for the next 1.5 release. The progress of this patch is
tracked in the following ticket:
When you receive that callback the MPI library has been put in a quiescent
state. As such it does not allow MPI communication until the checkpoint is
completely finished. So you cannot call barrier in the checkpoint callback.
Since Open MPI is doing a coordinated checkpoint, you can assume that all
This is a bit of a non-answer, but can you try the 1.5 series (1.5.5
is the current release)? 1.4 is being phased out, and 1.5 will replace
it in the near future. 1.5 has a number of C/R related fixes that
might help.
-- Josh
On Thu, Mar 29, 2012 at 1:12 PM, Linton, Tom
The 1.5 series does not support process migration, so there is no
ompi-migrate option there. This was only contributed to the trunk (1.7
series). However, changes to the runtime environment over the past few
months have broken this functionality. It is currently unclear when
this will be repaired.
I wonder if the LD_LIBRARY_PATH is not being set properly upon
restart. In your mpirun you pass the '-x LD_LIBRARY_PATH'.
ompi-restart will not pass that variable along for you, so if you are
using that to set the BLCR path this might be your problem.
A couple solutions:
- have the PATH and
Checkpoint/Restart Program contains DLL?
I do not understand what you are trying to ask here. Please rephrase.
-- Josh
>
>
>
> 寄件者: Josh Hursey <jjhur...@open-mpi.org>
> 收件者: Open MPI Users <us...@open-mpi.org>
(4) I install openmpi in root ,should I move to
> General-user-account ?
>
>
> 寄件者: Josh Hursey <jjhur...@open-mpi.org>
> 收件者: Open MPI Users <us...@open-mpi.org>
> 寄件日期: 2012/4/24 (週二) 10:58 PM
>
> 主旨: Re: [OMPI users]
You are correct that the Open MPI project combined the efforts of a
few preexisting MPI implementations towards building a single,
extensible MPI implementation with the best features of the prior MPI
implementations. From the beginning of the project the Open MPI
developer community has desired
Ifeanyi,
I am usually the one that responds to checkpoint/restart questions,
but unfortunately I do not have time to look into this issue at the
moment (and probably won't for at least a few more months). There are
a few other developers that work on the checkpoint/restart
functionality that
The official support page for the C/R features is hosted by Indiana
University (linked from the Open MPI FAQs):
http://osl.iu.edu/research/ft/ompi-cr/
The instructions probably need to be cleaned up (some of the release
references are not quite correct any longer). But the following should
give
Currently you have to do as Reuti mentioned (use the queuing system,
or create a script). We do have a feature request ticket open for this
feature if you are interested in following the progress:
https://svn.open-mpi.org/trac/ompi/ticket/1961
It has been open for a while, but the feature
In your desired ordering you have rank 0 on (socket,core) (0,0) and
rank 1 on (0,2). Is there an architectural reason for that? Meaning
are cores 0 and 1 hardware threads in the same core, or is there a
cache level (say L2 or L3) connecting cores 0 and 1 separate from
cores 2 and 3?
hwloc's
Pramoda,
That paper was exploring an application of a proposed extension to the MPI
standard for fault tolerance purposes. By default this proposed interface
is not provided by Open MPI. We have created a prototype version of Open
MPI that includes this extension, and it can be found at the
Can you send the config.log and some of the other information described on:
http://www.open-mpi.org/community/help/
-- Josh
On Wed, Nov 14, 2012 at 6:01 PM, Ifeanyi wrote:
> Hi all,
>
> I got this message when I issued this command:
>
> root@node1:/home/abolap#
The openib BTL and BLCR support in Open MPI were working about a year ago
(when I last checked). The psm BTL is not supported at the moment though.
From the error, I suspect that we are not fully closing the openib btl
driver before the checkpoint thus when we try to restart it is looking for
a
Process migration was implemented in Open MPI and working in the trunk a
couple of years ago. It has not been well maintained for a few years though
(hopefully that will change one day). So you can try it, but your results
may vary.
Some details are at the link below:
With that configure string, Open MPI should fail in configure if it does
not find the BLCR libraries. Note that this does not check to make sure the
BLCR is loaded as a module in the kernel (you will need to check that
manually).
The ompi_info command will also show you if C/R is enabled and will
> bash: ompi-migrate: command not found
>
> Please assist.
>
> Regards - Ifeanyi
> On Wed, Dec 12, 2012 at 3:19 AM, Josh Hursey <jjhur...@open-mpi.org>wrote:
This is a bit late in the thread, but I wanted to add one more note.
The functionality that made it to v1.6 is fairly basic in terms of C/R
support in Open MPI. It supported a global checkpoint write, and (for a
time) a simple staged option (I think that is now broken).
In the trunk (about 3
Currently, there is no mechanism to checkpoint every X minutes in Open
MPI.
As mentioned below you can use a script to initiate the checkpoint
every X minutes. Alternatively it should not be too difficult to add
such a feature to Open MPI. If enough people would be interested I can
file
Checkpoint/restart in Open MPI supports TCP, Shared Memory,
Infiniband, and Myrinet interconnects (possibly others, but they have
not been tested) [1]. Is this what you are looking for?
-- Josh
[1] Hursey, J., Mattox, T. I., and Lumsdaine, A. 2009. "Interconnect
agnostic
running application.
I would imagine an automatic restart from the last checkpoint in
case of failure would also be interesting.
Many thanks.
Regards,
Kritiraj
--- On Tue, 6/30/09, Josh Hursey <jjhur...@open-mpi.org> wrote:
From: Josh Hursey <jjhur...@open-mpi.org>
Subject: Re:
The MPI standard does not define any functions for taking checkpoints
from the application.
The checkpoint/restart work in Open MPI is a command line driven,
transparent solution. So the application does not have change in any
way, and the user (or scheduler) must initiate the checkpoint
This mailing list supports the Open MPI implementation of the MPI
standard. If you have concerns about Intel MPI you should contact
their support group.
The ompi_checkpoint/ompi_restart routines are designed to work with
Open MPI, and will certainly fail when used with other MPI
Task-farm or manager/worker recovery models typically depend on
intercommunicators (i.e., from MPI_Comm_spawn) and a resilient MPI
implementation. William Gropp and Ewing Lusk have a paper entitled
"Fault Tolerance in MPI Programs" that outlines how an application
might take advantage of
On Aug 12, 2009, at 3:35 PM, Kritiraj Sajadah wrote:
Hi,
I want to configure OPENMPI to checkpoint MPI applications using
DMTCP. Does anyone know how to specify the path to the DMTCP
application when installing OPENMPI.
I have not experimented with Open MPI using DMTCP. If I understand
On Aug 18, 2009, at 11:36 AM, Jean Potsam wrote:
Dear ALL,
I am trying to checkpoint MPI application using the
self component. I had a look at the Open MPI FT user's guide Draft
1.4 but am still unsure.
I have installed openmpi as follows:
jean$ ./configure
Did you configure Open MPI with the appropriate checkpoint/restart
options? Did you remember to add the '-am ft-enable-cr' parameter to
mpirun? Is BLCR loaded properly on your machines? These are the common
problems that people usually hit when getting started.
There is a C/R Fault
Though I would not recommend your technique for initiating a
checkpoint from an application, it may work. Since ompi-checkpoint
will need to contact and interact with every MPI process, this could
cause problems if the application is blocking in system() while
ompi-checkpoint is trying to
The config.log looked fine, so I think you have fixed the configure
problem that you previously posted about.
Though the config.log indicates that the BLCR component is scheduled
for compile, ompi_info does not indicate that it is available. I
suspect that the error below is because the
Is your application running on the same machine as mpirun?
How did you configure Open MPI? Note that this program will not work
without the FT thread enabled, which would be one reason why it would
seem to hang (since it is waiting for the application to enter the MPI
library):
The configuration looks fine, but from the stack it seems that the
segv is coming from an invalid free in BLCR (which seems odd to me).
Are you able to get a gdb backtrace from a core file generated from
this run? That would provide a bit more detail on where things are
going wrong.
What
--- On Mon, 14/9/09, Josh Hursey <jjhur...@open-mpi.org> wrote:
From: Josh Hursey <jjhur...@open-mpi.org>
Subject: Re: [OMPI users] Application hangs when checkpointing
application (update)
To: "Open MPI Users" <us...@open-mpi.org>
Date: Monday, 14 September, 2009, 1:27 PM
This is described in the C/R User's Guide attached to the webpage below:
https://svn.open-mpi.org/trac/ompi/wiki/ProcessFT_CR
Additionally this has been addressed on the users mailing list in the
past, so searching around will likely turn up some examples.
-- Josh
On Sep 18, 2009, at
a resilient
impementation. Here by resiliency I mean abnormal termination or
intentionally killing a process should not cause any(parent or
sibling) process to be terminated, given that processes are connected.
thanks.
Regards,
On Mon, Aug 3, 2009 at 8:37 PM, Josh Hursey <jjhur...@open-mpi.
How did you configure Open MPI? Is your application using SIGUSR1?
This error message indicates that Open MPI's daemons could not
communicate with the application processes. The daemons send SIGUSR1
to the process to initiate the handshake (you can change this signal
with -mca
As an alternative technique for distributing the binary, you could ask
Open MPI's runtime to do it for you (made available in the v1.3
series). You still need to make sure that the same version of Open MPI is
installed on all nodes, but if you pass the --preload-binary option to
mpirun the
feature to make sure that it is still
working on my test machines.
-- Josh
Does anyone have an idea about what is wrong?
Best regards,
--
Constantinos
(Sorry for the excessive delay in replying)
I do not have any experience with the DMTCP project, so I can only
speculate on what might be going on here. If you are using DMTCP to
transparently checkpoint Open MPI you will need to make sure that you
are not using any other interconnect
On Oct 30, 2009, at 1:35 PM, Hui Jin wrote:
Hi All,
I got a problem when trying to checkpoint an MPI job.
I will really appreciate if you can help me fix the problem.
the blcr package was installed successfully on the cluster.
I configured openmpi with the flags,
./configure --with-ft=cr
On Oct 28, 2009, at 7:41 AM, Sergio Díaz wrote:
Hello,
I have achieved the checkpoint of an easy program without SGE. Now,
I'm trying to do the integration openmpi+sge but I have some
problems... When I try to do checkpoint of the mpirun PID, I got an
error similar to the error gotten
On Nov 5, 2009, at 4:46 AM, Mohamed Adel wrote:
Dear Sergio,
Thank you for your reply. I've inserted the modules into the kernel
and it all worked fine. But there is still a weird issue. I use the
command "mpirun -n 2 -am ft-enable-cr -H comp001
checkpoint-restart-test" to start the an
Though the --preload-binary option was created while building the
checkpoint/restart functionality, it does not depend on checkpoint/restart
in any way (just a side effect of the initial development).
The problem you are seeing is a result of the computing environment setup of
On Nov 6, 2009, at 7:59 AM, Kritiraj Sajadah wrote:
> Hi Everyone,
> I have install openmpi 1.3 and blcr 0.81 on my laptop (single
> processor).
>
> I am trying to checkpoint a small test application:
>
> ###
>
> #include
> #include
> #include
> #include
> #include
>
change this protocol and use ssh. So, I'm going to
> test it this afternoon and I will comment to you the results.
Try 'ssh' and see if that helps. I suspect the problem is with the session
directory location though.
>
> Regards,
> Sergio
>
>
> Josh Hursey escribió
Though I do not test this scenario (using hostfiles) very often, it
used to work. The ompi-restart command takes a --hostfile (or
--machinefile) argument that is passed directly to the mpirun command. I
wonder if something broke recently with this handoff. I can certainly
checkpoint with
,
same result.
thanks,
Jonathan
I verified that the preload functionality works on the trunk. It seems
to be broken on the v1.3/v1.4 branches. The version of this code has
changed significantly between the v1.3/v1.4 and the trunk/v1.5
versions. I filed a bug about this so it does not get lost:
the problem?
-- Josh
P.S. If you are interested, we have a slightly better version of the
documentation, hosted at the link below:
http://osl.iu.edu/research/ft/ompi-cr/
On Nov 18, 2009, at 1:27 PM, Constantinos Makassikis wrote:
On Dec 12, 2009, at 10:03 AM, Kritiraj Sajadah wrote:
Dear All,
I am trying to checkpoint an MPI application which has two
processes, each running on two separate hosts.
I run the application as follows:
raj@sun32:~$ mpirun -am ft-enable-cr -np 2 --hostfile sunhost -mca
btl
On Dec 13, 2009, at 3:57 PM, Kritiraj Sajadah wrote:
Dear All,
I am running a simple mpi application which looks as
follows:
##
#include
#include
#include
#include
#include
int main(int argc, char **argv)
{
int rank,size;
MPI_Init(&argc, &argv);
52772  1188 ?  D  12:54  0:00 \_ /bin/bash /opt/cesga/openmpi-1.3.3/bin/orted
....
Josh Hursey escribió:
On Nov 12, 2009, at 10:54 AM, Sergio Díaz wrote:
Hi Josh,
You were right. The main problem was the /tmp. SGE uses a
scratch director
On Dec 19, 2009, at 7:42 AM, Jean Potsam wrote:
Hi Everyone,
I am trying to checkpoint an mpi application
running on multiple nodes. However, I get some error messages when i
trigger the checkpointing process.
Error: expected_component: PID information unavailable!
I tested the 1.4.1 release, and everything worked fine for me (tested
a few different configurations of nodes/environments).
The ompi-checkpoint error you cited is usually caused by one of two
things:
- The PID specified is wrong (which I don't think that is the case
here)
- The session
to resolve
this problem.
Thank you
Jean
--- On Mon, 11/1/10, Josh Hursey <jjhur...@open-mpi.org> wrote:
From: Josh Hursey <jjhur...@open-mpi.org>
Subject: Re: [OMPI users] checkpointing multi node and multi process
applications
To: "Open MPI Users" <us...@open-
reproduce it. Can you try the
trunk (either SVN checkout or nightly tarball from tonight) and check
if this solves your problem?
Cheers,
Josh
On Jan 25, 2010, at 12:14 PM, Josh Hursey wrote:
I am not able to reproduce this problem with the 1.4 branch using a
hostfile, and node configuration
to the v1.5
series if possible.
-- Josh
On Jan 25, 2010, at 3:33 PM, Josh Hursey wrote:
So while working on the error message, I noticed that the global
coordinator was using the wrong path to investigate the checkpoint
metadata. This particular section of code is not often used (which
Thanks for the bug report. There are a couple of places in the code
that, in a sense, hard code '/tmp' as the temporary directory. It
shouldn't be too hard to fix since there is a common function used in
the code to discover the 'true' temporary directory (which defaults
to /tmp). Of
On Feb 10, 2010, at 9:45 AM, Addepalli, Srirangam V wrote:
> I am trying to test orte-checkpoint with an MPI job. It however hangs for all
> jobs. This is how the job is started:
> mpirun -np 8 -mca ft-enable cr /apps/nwchem-5.1.1/bin/LINUX64/nwchem
> siosi6.nw
This might be the
This type of failure is usually due to prelink'ing being left enabled
on one or more of the systems. This has come up multiple times on the
Open MPI list, but is actually a problem between BLCR and the Linux
kernel. BLCR has a FAQ entry on this that you will want to check out:
I have not been working with the integration of Open MPI and Torque
directly, so I cannot state how well this is supported. However, the
BLCR folks have been working on a Torque/Open MPI/BLCR project for a
while now, and have had some success. You might want to raise the
question on the
On Mar 21, 2010, at 12:58 PM, Addepalli, Srirangam V wrote:
Yes We have seen this behavior too.
Another behavior I have seen is that one MPI process starts to
show a different elapsed time than its peers. Is it because a
checkpoint happened on behalf of this process?
R
On Mar 22, 2010, at 4:41 PM, wrote:
Hi
If I run my compute-intensive openmpi based program using a regular
invocation of mpirun (i.e., mpirun --host <hosts> -np <number of
cores>), it completes in a few seconds, but if I run the same
program with “-am
On Mar 20, 2010, at 11:14 PM, wrote:
I am observing a very strange performance issue with my openmpi
program.
I have a compute-intensive openmpi based application that keeps the
data in memory, processes the data, and then dumps it to GPFS
So the MCA parameter that you mention is explained at the link below:
http://osl.iu.edu/research/ft/ompi-cr/api.php#mca-opal_cr_use_thread
This enables/disables the C/R thread a runtime if Open MPI was
configured with C/R thread support:
On Mar 23, 2010, at 1:00 PM, Fernando Lemos wrote:
On Tue, Mar 23, 2010 at 12:55 PM, fengguang tian
wrote:
I use mpirun -np 50 -am ft-enable-cr --mca
snapc_base_global_snapshot_dir
--hostfile .mpihostfile
to store the global checkpoint snapshot into the shared
Does this happen when you run without '-am ft-enable-cr' (so a no-C/R
run)?
This will help us determine if your problem is with the C/R work or
with the ORTE runtime. I suspect that there is something odd with your
system that is confusing the runtime (so not a C/R problem).
Have you
On Mon, Mar 29, 2010 at 11:42 AM, Josh Hursey <jjhursey@open-mpi.org> wrote:
On Mar 23, 2010, at 1:00 PM, Fernando Lemos wrote:
On Tue, Mar 23, 2010 at 12:55 PM, fengguang tian
<ferny...@gmail.com> wrote:
I use mpirun -np 50 -am ft-enable-cr --mca
snapc_base_global_snapshot_dir
I wonder if this is a bug with BLCR (since the segv stack is in the
BLCR thread). Can you try a non-MPI version of this application that
uses popen(), and see if BLCR properly checkpoints/restarts it?
If so, we can start to see what Open MPI might be doing to confuse
things, but I suspect
So what you are looking for is checkpoint/restart support, which you
can find some details about at the link below:
http://osl.iu.edu/research/ft/ompi-cr/
Additionally, we relatively recently added the ability to checkpoint
and 'stop' the application. This generates a usable checkpoint of
So I recently hit this same problem while doing some scalability
testing. I experimented with adding the --no-restore-pid option, but
found the same problem as you mention. Unfortunately, the problem is
with BLCR, not Open MPI.
BLCR will restart the process with a new PID, but the value
When you defined them in your environment did you prefix them with
'OMPI_MCA_'? Open MPI looks for this prefix to identify which
parameters are intended for it specifically.
-- Josh
On May 12, 2010, at 11:09 PM, wrote:
Ralph
Defining
The checkpoint operation is not tied to CPU
utilization. Are you running with the C/R thread enabled? If not, then
the checkpoint might be waiting until the process enters the MPI
library.
Does the system emit an error message describing the error that it
encountered?
(Sorry for the delay in replying, more below)
On Apr 12, 2010, at 6:36 AM, Hideyuki Jitsumoto wrote:
Hi Members,
I tried to use checkpoint/restart with openmpi,
but I cannot get correct checkpoint data.
I prepared execution environment as follows, the strings in () mean
name of output file
(Sorry for the delay in replying, more below)
On Apr 8, 2010, at 1:34 PM, Fernando Lemos wrote:
Hello,
I've noticed that ompi-restart doesn't support the --rankfile option.
It only supports --hostfile/--machinefile. Is there any reason
--rankfile isn't supported?
Suppose you have a cluster