Thanks for your comments.
>> If so, my question is whether I need to install the JDK on all nodes. In
>> other words, is it possible to have a machine used as a worker without the
>> JDK installed?
>>
>
> You do not need the JDK installed on worker nodes - it only has to be on the
> node where you
Or perhaps cloned, renamed to orte-kill, and modified to kill a single (or
multiple) specific job(s). That would be POSIX-like ("kill" vs. "clean").
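The per-job kill idea floated here can be sketched outside of Open MPI's tools as well. Below is a minimal Python sketch, not anything from the Open MPI codebase: it launches a job in its own process group and terminates only that group, leaving other jobs alone. The `sleep 30` command is a stand-in for a real mpirun invocation (an assumption for illustration).

```python
import os
import signal
import subprocess

# Sketch: instead of ompi-clean, which sweeps the user's whole session,
# launch each job in its own process group and kill only that group.
# "sleep 30" is a stand-in; in a real setup this would be the actual
# mpirun command line.
proc = subprocess.Popen(["sleep", "30"], start_new_session=True)

# Kill just this job's process group, leaving other jobs untouched.
os.killpg(proc.pid, signal.SIGTERM)
proc.wait()
```

A hypothetical orte-kill could behave roughly like this, taking a job identifier instead of a raw PID.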
On Oct 24, 2012, at 1:32 PM, Rolf vandeVaart wrote:
And just to give a little context, ompi-clean was created initially to "clean"
up a node, not for cleaning up a specific job. It was for the case where MPI
jobs would leave some files behind or leave some processes running. (I do not
believe this happens much at all anymore.) But, as was
...but patches would be greatly appreciated. :-)
On Oct 24, 2012, at 12:24 PM, Ralph Castain wrote:
All things are possible, including what you describe. Not sure when we would
get to it, though.
On Oct 24, 2012, at 4:01 AM, Nicolas Deladerriere
wrote:
On Oct 23, 2012, at 9:51 PM, Yoshiki SATO wrote:
Reuti,
The problem I am facing is a small part of our production system, and I
cannot modify our mpirun submission system. This is why I am looking at a
solution using only the ompi-clean or mpirun command specification.
Thanks,
Nicolas
2012/10/24, Reuti :
On 24.10.2012, at 11:33, Nicolas Deladerriere wrote:
Reuti,
Thanks for your comments,
In our case, we are currently running different mpirun commands on
clusters sharing the same frontend. Basically, we use a wrapper to run
the mpirun command and to run an ompi-clean command to clean up the
MPI job if required.
Using ompi-clean like this just kills
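The wrapper described above might look roughly like the following. This is a minimal Python sketch of the pattern only, with stand-in commands (`sh -c 'exit 3'` plays a failing mpirun, `echo` plays ompi-clean; both are assumptions for illustration).

```python
import subprocess

def run_with_cleanup(job_cmd, cleanup_cmd):
    """Run job_cmd; if it exits non-zero, run cleanup_cmd.

    In the setup described on the list, job_cmd would be the real
    mpirun invocation and cleanup_cmd would be ompi-clean. Note that
    ompi-clean kills ALL of the user's Open MPI jobs on the node,
    not just this one -- which is exactly the problem discussed here.
    """
    status = subprocess.call(job_cmd)
    if status != 0:
        subprocess.call(cleanup_cmd)
    return status

# Stand-in commands for illustration only:
status = run_with_cleanup(["sh", "-c", "exit 3"], ["echo", "cleanup ran"])
```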
Hi,
On 24.10.2012, at 09:36, Nicolas Deladerriere wrote:
Hi all,
I am having an issue running ompi-clean, which cleans up the session
associated with a user (this is normal), which means it kills all running
jobs associated with this session (this is also normal). But I would like
to be able to clean up the session associated with a job (and not a user).
Here is my point:
I
Hi,
Recently, I tried to use the Java bindings in Open MPI released this
February. As mentioned in the Java FAQ ( http://www.open-mpi.org/faq/?category=java
), the Java binding implementation doesn't cause any performance
degradation, since it just wraps native Open MPI with