Re: [OMPI devel] Fwd: lsf support / farm use models
Hi Matt

On 7/15/07 1:49 PM, "Matthew Moskewicz" wrote:

>> Welcome! Yes, Jeff and I have been working on the LSF support based on 7.0
>> features in collab with the folks at Platform.
>
> sounds good. i'm happy to be involved with such a nice active project!
>
>>> 1) it appears that you (jeff, i guess ;) are using new LSF 7.0 API features.
>>> i'm working to support customers in the EDA space, and it's not clear if/when
>>> they will migrate to 7.0 -- not to mention that our company (cadence) doesn't
>>> appear to have LSF 7.0 yet. i'm still looking into the details, but it
>>> appears that (from the Platform docs) lsb_getalloc is probably just a thin
>>> wrapper around the LSB_MCPU_HOSTS environment variable. so that could be
>>> worked around fairly easily. i dunno about lsb_launch -- it seems
>>> equivalent to a set of ls_rtask() calls (one per process). however, i have
>>> heard that there can be significant subtleties with the semantics of these
>>> functions, in terms of compatibility across differently configured
>>> LSF-controlled farms, specifically with regard to administrators' ability to
>>> track and control job execution. personally, i don't see how it's really
>>> possible for LSF to prevent 'bad' users from spamming out jobs or
>>> short-cutting queues, but perhaps some of the methods they attempt to use can
>>> complicate things for a library like open-rte.
>>
>> After lengthy discussions with Platform, it was deemed the best path forward
>> is to use the lsb_getalloc interface. While it currently reads the environment
>> variable, they indicated a potential change to read a file instead for
>> scalability. Rather than chasing any changes, we all agreed that using
>> lsb_getalloc would remain the "stable" interface - so that is what we used.
>
> understood.
>
>> Similar reasons for using lsb_launch. I would really advise against making
>> any changes away from that support. Instead, we could take a lesson from our
>> bproc support and simply (a) detect if we are on a pre-7.0 release, and then
>> (b) build our own internal wrapper that provides back-support. See the bproc
>> pls component for examples.
>
> that sounds fine -- should just be a matter of a little configure magic,
> right? i already had to change the current configure stuff to be able to build
> at all under 6.2 (since the current configure check requires 7.0 to pass), so
> i guess it shouldn't be too much harder to mimic the bproc method of detecting
> multiple versions, assuming it's really the same sort of thing. basically, i'd
> keep the main LSF configure check downgraded as i have currently done in my
> working copy, but add a new 7.0 check that is really the current trunk check.
>
> then, i'll make signature-compatible replacements (with the same names? or add
> internal functions to abstract things? or just add #ifdefs inline where they
> are used?) for each missing LSF 7.0 function (implemented using the 6.1 or 6.2
> API), and have configure only build them if the system LSF doesn't have them.
> uhm, once i figure out how to do that, anyway ... i guess i'll ask for more
> help if the bproc code doesn't enlighten me. if successful, i should be able
> to track trunk easily with respect to the LSF version issue at least.

This sounds fine - you'll find that the bproc pls does the exact same thing.
In that case, we use #ifdefs since the APIs are actually different between the
versions - we just create a wrapper inside the bproc pls code for the older
version so that we can always call the same API. I'm not sure what the case
will be in LSF - I believe the function calls are indeed different, so you
might be able to use the same approach.

> i'll probably just continue experimenting on my own for the moment (tracking
> any updates to the main trunk LSF support) to see if i can figure it out. any
> advice on the best way to get such back-support into trunk, if and when it
> exists / is working?

The *best* way would be for you to sign a third-party agreement - see the web
site for details and a copy. Barring that, the only option would be to submit
the code through either Jeff or me. We greatly prefer the agreement method as
it is (a) less burdensome on us and (b) gives you greater flexibility.

>>> 2) this brings us to point 2 -- upon talking to the author(s) of cadence's
>>> internal open-rte-like library, several key issues were raised. mainly,
>>> customers want their applications to be 'farm-friendly' in several key ways.
>>> firstly, they do not want any persistent daemons running outside of a given
>>> job -- this requirement seems met by the current open-mpi default behavior,
>>> at least as far as i can tell. secondly, they prefer (strongly) that
>>> applications acquire resources incrementally, and perform work with whatever
>>> nodes are currently available, rather than forcing a large up-front node
>>> allocation. fault tolerance is
[OMPI devel] Removal of cellid
Yo folks

I have completed the removal of the cellid from the orte_process_name_t
struct on a tmp branch. Tim Prins has successfully tested it on odin, thor,
and bigred at IU - I have checked it on coyote and yellowrail at LANL, as
well as on a standalone Mac. It seems to work just fine in all those
environments. Accordingly, I plan to check it into the trunk late Monday
afternoon (probably around 4pm Eastern).

As part of the change, I'll be getting rid of some obsolete test code, plus
the setup_hnp and orte-console code (the latter depends upon the former,
which is so obsolete it won't run anyway - and which our revised launch
procedure won't support).

Just wanted to give you a "heads-up" in case you have any tmp branches out
there that might be impacted. I rolled the Voltaire tmp branch into the
trunk in preparation for this change so I could deal with the conflicts
(rather than asking them to do so).

Ralph
[OMPI devel] Fwd: lsf support / farm use models
Welcome! Yes, Jeff and I have been working on the LSF support based on 7.0
features in collab with the folks at Platform.

> sounds good. i'm happy to be involved with such a nice active project!
>
> 1) it appears that you (jeff, i guess ;) are using new LSF 7.0 API features.
> i'm working to support customers in the EDA space, and it's not clear if/when
> they will migrate to 7.0 -- not to mention that our company (cadence) doesn't
> appear to have LSF 7.0 yet. i'm still looking into the details, but it
> appears that (from the Platform docs) lsb_getalloc is probably just a thin
> wrapper around the LSB_MCPU_HOSTS environment variable. so that could be
> worked around fairly easily. i dunno about lsb_launch -- it seems
> equivalent to a set of ls_rtask() calls (one per process). however, i have
> heard that there can be significant subtleties with the semantics of these
> functions, in terms of compatibility across differently configured
> LSF-controlled farms, specifically with regard to administrators' ability to
> track and control job execution. personally, i don't see how it's really
> possible for LSF to prevent 'bad' users from spamming out jobs or
> short-cutting queues, but perhaps some of the methods they attempt to use can
> complicate things for a library like open-rte.

After lengthy discussions with Platform, it was deemed the best path forward
is to use the lsb_getalloc interface. While it currently reads the environment
variable, they indicated a potential change to read a file instead for
scalability. Rather than chasing any changes, we all agreed that using
lsb_getalloc would remain the "stable" interface - so that is what we used.

> understood.

Similar reasons for using lsb_launch. I would really advise against making
any changes away from that support. Instead, we could take a lesson from our
bproc support and simply (a) detect if we are on a pre-7.0 release, and then
(b) build our own internal wrapper that provides back-support. See the bproc
pls component for examples.

> that sounds fine -- should just be a matter of a little configure magic,
> right? i already had to change the current configure stuff to be able to build
> at all under 6.2 (since the current configure check requires 7.0 to pass), so
> i guess it shouldn't be too much harder to mimic the bproc method of detecting
> multiple versions, assuming it's really the same sort of thing. basically, i'd
> keep the main LSF configure check downgraded as i have currently done in my
> working copy, but add a new 7.0 check that is really the current trunk check.
>
> then, i'll make signature-compatible replacements (with the same names? or add
> internal functions to abstract things? or just add #ifdefs inline where they
> are used?) for each missing LSF 7.0 function (implemented using the 6.1 or 6.2
> API), and have configure only build them if the system LSF doesn't have them.
> uhm, once i figure out how to do that, anyway ... i guess i'll ask for more
> help if the bproc code doesn't enlighten me. if successful, i should be able
> to track trunk easily with respect to the LSF version issue at least.
>
> i'll probably just continue experimenting on my own for the moment (tracking
> any updates to the main trunk LSF support) to see if i can figure it out. any
> advice on the best way to get such back-support into trunk, if and when it
> exists / is working?
>
> 2) this brings us to point 2 -- upon talking to the author(s) of cadence's
> internal open-rte-like library, several key issues were raised. mainly,
> customers want their applications to be 'farm-friendly' in several key ways.
> firstly, they do not want any persistent daemons running outside of a given
> job -- this requirement seems met by the current open-mpi default behavior, at
> least as far as i can tell. secondly, they prefer (strongly) that applications
> acquire resources incrementally, and perform work with whatever nodes are
> currently available, rather than forcing a large up-front node allocation.
> fault tolerance is nice too, although it's unclear to me if it's really
> practically needed. in any case, many of our applications can structure their
> computation to use resources in just such a way, generally by dividing the
> work into independent, restartable pieces (i.e. they are embarrassingly ||).
> also, MPI communication + MPI-2 process creation seems to be a reasonable
> interface for handling communication and dynamic process creation on the
> application side. however, it's not clear that open-rte supports the needed
> dynamic resource acquisition model in any of the ras/pls components i looked
> at. in fact, other than just folding everything into the pls component, it's
> not clear that the entire flow via the rmgr really supports it very well.
> specifically for LSF, the use model is that the initial job either is created
> with bsub/lsb_submit(), (or automatically submits itself as step zero
> perhaps) to run initially on N machines. N should be 'small' (1-16) --