Greetings, MPI mavens,

Perhaps this belongs on users@, but since it's about development status I thought I'd start here. I've fairly recently gotten involved in setting up an MPI environment for our institute. We have an existing LSF cluster because most of our work is more High-Throughput than High-Performance, so if I can use LSF to underlie our MPI environment, that'd be administratively easiest.

I tried to compile the LSF support in the public SVN repo and noticed it was, er, broken. I'll include the trivial changes we made to get it building below. Even with those, though, the behavior is still fairly unpredictable; most often, mpirun never spins up daemons on the other nodes.

I saw mention that work on LSF support was being suspended pending technical improvements on the LSF side (specifically, that Platform had provided a patch to try).

Can I assume, based on the inactivity in the repo, that Platform hasn't resolved the issue?

Thanks,
Eric

------------------------
Here are the diffs to get the LSF support to compile. We also made a change so that it reports the LSF failure code (lsberrno) instead of an uninitialized variable when lsb_launch() fails:

Index: pls_lsf_module.c
===================================================================
--- pls_lsf_module.c    (revision 17234)
+++ pls_lsf_module.c    (working copy)
@@ -304,7 +304,7 @@
      */
     if (lsb_launch(nodelist_argv, argv, LSF_DJOB_NOWAIT, env) < 0) {
         ORTE_ERROR_LOG(ORTE_ERR_FAILED_TO_START);
-        opal_output(0, "lsb_launch failed: %d", rc);
+        opal_output(0, "lsb_launch failed: %d", lsberrno);
         rc = ORTE_ERR_FAILED_TO_START;
         goto cleanup;
     }
@@ -356,7 +356,7 @@

     /* check for failed launch - if so, force terminate */
     if (failed_launch) {
-        if (ORTE_SUCCESS !=
+/*        if (ORTE_SUCCESS != */
             orte_pls_base_daemon_failed(jobid, false, -1, 0, ORTE_JOB_STATE_FAILED_TO_START);
     }
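
In case it's useful, here's a minimal standalone sketch of the same lsb_launch()/lsberrno error-reporting pattern, outside of ORTE. To be clear about what's assumed: the host names and command are placeholders, it needs to run inside an LSF allocation (e.g. under bsub), and you'd link against LSF's batch libraries (typically -lbat -llsf):

/* A minimal standalone sketch of the lsb_launch()/lsberrno pattern the
 * first hunk moves toward.  This is NOT Open MPI code: the host names
 * and command are placeholders, and the launch itself will only succeed
 * when run from within an LSF job. */
#include <stdio.h>
#include <stdlib.h>
#include <lsf/lsbatch.h>

int main(void)
{
    char *hosts[] = { "nodeA", "nodeB", NULL };   /* placeholder hosts   */
    char *cmd[]   = { "/bin/hostname", NULL };    /* placeholder command */

    /* Initialize the LSF batch library before any other lsb_* call. */
    if (lsb_init("lsb_launch-demo") < 0) {
        lsb_perror("lsb_init");     /* prints the message for lsberrno */
        return EXIT_FAILURE;
    }

    /* LSF_DJOB_NOWAIT returns without waiting for the remote tasks to
     * finish, the same option the pls module passes. */
    if (lsb_launch(hosts, cmd, LSF_DJOB_NOWAIT, NULL) < 0) {
        /* On failure the diagnostic lives in the global lsberrno, not
         * in any local return-code variable -- hence the s/rc/lsberrno/
         * change in the first hunk above. */
        fprintf(stderr, "lsb_launch failed: %d (%s)\n",
                lsberrno, lsb_sysmsg());
        return EXIT_FAILURE;
    }

    return EXIT_SUCCESS;
}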
