I have no objections to bringing this into the trunk, but I agree that an .ompi_ignore is probably a good idea at first.

One question that I'd like to have answered is how OMPI decides whether to use the SCTP BTL or not. Even if SCTP stacks are available by default in Linux and OS X, their performance may be sub-optimal and/or buggy, so we may want to have the SCTP BTL activated only if the user explicitly asks for it. Open MPI is very concerned with "out of the box" behavior -- we need to ensure that "mpirun a.out" will "just work" on all of our supported platforms.
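
For example (rough sketch only; the parameter name and the gating are made up here, not existing SCTP BTL code), the component could register an MCA parameter that defaults to "off", so that "mpirun a.out" never selects SCTP unless someone sets the parameter or explicitly lists the BTL (e.g., "--mca btl sctp,self,sm"):

    /* Rough sketch; "enable" is a made-up parameter name. */
    #include "ompi/constants.h"
    #include "opal/mca/base/mca_base_param.h"

    static int mca_btl_sctp_enabled = 0;

    int mca_btl_sctp_component_open(void)
    {
        /* Registers "btl_sctp_enable", defaulting to 0 (off). */
        int index = mca_base_param_register_int("btl", "sctp", "enable",
                                                NULL, 0);
        mca_base_param_lookup_int(index, &mca_btl_sctp_enabled);
        return OMPI_SUCCESS;
    }

    /* mca_btl_sctp_component_init() would then return NULL (no modules)
     * whenever mca_btl_sctp_enabled is 0. */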

Will UBC set up regular MTT runs to test the SCTP stuff?  :-)

More below.


On Nov 10, 2007, at 9:25 PM, Brad Penoff wrote:

Currently, both the one-to-one and the one-to-many make use of the
event library offered by Open MPI.  The callback functions for the
one-to-many style, however, are quite unique, as multiple endpoints may
be interested in the events that poll returns. Currently we use these unique callback functions, but in the future the hope is to play with
the potential benefits of a btl_progress function, particularly for
the one-to-many style.

In my experience the event callbacks have a high overhead compared to a
progress function, so I'd say that's definitely worth checking out.

We noticed that poll is only called after a timer goes off while
btl_progress would be called with each iteration of opal_progress, so
noticing that, along with your encouragement, makes us want to check it
out even more.
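
To make that comparison concrete, the shape of a btl_progress function would be roughly the following (very rough sketch: the names and the single-socket assumption are made up, and the per-endpoint dispatch is elided):

    #include <poll.h>
    #include "opal/runtime/opal_progress.h"

    extern int mca_btl_sctp_listen_sd;  /* the one-to-many socket (assumed) */

    static int mca_btl_sctp_progress(void)
    {
        struct pollfd pfd;
        int count = 0;

        pfd.fd = mca_btl_sctp_listen_sd;
        pfd.events = POLLIN;

        /* Zero timeout: never block inside opal_progress(). */
        if (poll(&pfd, 1, 0) > 0 && (pfd.revents & POLLIN)) {
            /* ...sctp_recvmsg() and hand the data to the right endpoint... */
            ++count;
        }
        return count;   /* number of things completed this pass */
    }

    /* Registered once at init time instead of (or alongside) the libevent
     * callbacks:  opal_progress_register(mca_btl_sctp_progress); */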


Be aware that, based on discussions from the Paris meeting, some changes to libevent are coming (I really need to get this on a wiki page or something). Here's a quick summary:

- We're waiting for a new release of libevent (or libev -- we'll see how it shakes out) that has lots of bug fixes and performance improvements as compared to the version we currently have in the OMPI tree. Based on some libevent mailing list traffic, this release may be in Dec 2007. We'll see what happens.

- After we update libevent, we'll be making a policy change w.r.t. OMPI progress functions and timer callbacks: only software layers with actual devices will be allowed to register progress functions (in particular, the io and osc framework progress functions will be eliminated; see below). All other progress-requiring layers will have to use timers. This means that every time we call progress, we *only* call the stuff that needs to be polled as frequently as possible. We'll call the less-important progress stuff less frequently (e.g., ORTE OOB/RML); I've put a rough sketch of the timer approach below the list.

- We'll be changing our use of libevent to utilize the more scalable polling capabilities (such as epoll and friends). We don't use them right now because on all OS's that we currently care about (Linux, OS X, Solaris), mixing the scalable fd polling mechanism with pty's results in Very Very Bad Things. We'll special case where pty's are used and only use select/poll there, and then use epoll (etc.) elsewhere (sketch below).

- We'll also be changing our use of libevent to utilize timers properly.

- ompi_request_t will be augmented to have a callback that, if non-NULL, will be invoked when the request is completed. This will allow removing the io and osc framework progress functions (see the sketch below).

- We may also add a high-performance clock framework in Open MPI -- a way of accessing high-resolution timers and clocks on the host (e.g., on Intel chips, additional algorithms are necessary to normalize the per-chip clocks between sockets, especially if a process bounces between sockets -- unnecessary on AMD, PPC, and SPARC platforms). This could improve the performance and precision of the libevent timers; a sketch of the kind of interface is below.

- Finally, registering progress functions will take a new parameter: a file descriptor. If a file descriptor is provided and opal_progress() decides that it wants to block (specific mechanism TBD, but probably something similar to what other hybrid polling/blocking systems do: poll for a while, and if nothing "interesting" happens, block) *and* if all registered progress functions have valid fd's, then we'll block until either a timer expires or something "interesting" happens. A sketch of this is below as well.
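
To make some of the above concrete, here are a few rough sketches; all of the names in them are made up, so don't read them as final designs. First, the timer-based progress for the less-important layers (OOB/RML and friends), shown with the stock libevent 1.x API (our in-tree copy uses opal_-prefixed equivalents; the 10 ms period is just an example number):

    #include <sys/time.h>
    #include <event.h>

    static struct event oob_timer_event;
    static struct timeval oob_timer_period = { 0, 10000 };   /* 10 ms */

    static void oob_timer_cb(int fd, short flags, void *arg)
    {
        /* ...progress the OOB/RML sockets here... */

        /* libevent timers are one-shot, so re-arm for the next period. */
        evtimer_add(&oob_timer_event, &oob_timer_period);
    }

    static void oob_timer_start(void)
    {
        evtimer_set(&oob_timer_event, oob_timer_cb, NULL);
        evtimer_add(&oob_timer_event, &oob_timer_period);
    }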
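
Next, the epoll/pty special-casing, shown in raw epoll(7) + poll(2) terms just to show the shape (kqueue and event ports would be the analogous cases on OS X and Solaris; how this actually gets expressed through libevent is TBD):

    #include <poll.h>
    #include <sys/epoll.h>
    #include <unistd.h>

    static int epfd = -1;
    static struct pollfd pty_fds[16];   /* small fixed table, sketch only */
    static int num_pty_fds = 0;

    static void watch_fd(int fd)
    {
        if (isatty(fd)) {
            /* ptys stay on the plain poll() path... */
            pty_fds[num_pty_fds].fd = fd;
            pty_fds[num_pty_fds].events = POLLIN;
            ++num_pty_fds;
        } else {
            /* ...everything else goes into the scalable epoll set. */
            struct epoll_event ev;
            ev.events = EPOLLIN;
            ev.data.fd = fd;
            if (epfd < 0) {
                epfd = epoll_create(64);
            }
            epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev);
        }
    }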
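
The ompi_request_t completion callback would look something like this (field and typedef names invented; the real design is still open):

    typedef struct ompi_request_t ompi_request_t;
    typedef void (*ompi_request_complete_cb_fn_t)(ompi_request_t *request,
                                                  void *cbdata);

    struct ompi_request_t {
        /* ...all the existing request fields... */
        ompi_request_complete_cb_fn_t req_complete_cb;  /* NULL == no callback */
        void *req_complete_cb_data;
    };

    /* Called from the one place that marks requests complete: */
    static void ompi_request_fire_callback(ompi_request_t *req)
    {
        if (NULL != req->req_complete_cb) {
            /* io/osc finish their bookkeeping here instead of polling
             * from a registered progress function. */
            req->req_complete_cb(req, req->req_complete_cb_data);
        }
    }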
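
The clock framework interface might be as simple as the following (name invented); the portable backend just wraps the OS monotonic clock, and the interesting part would be an x86/TSC backend that does the per-socket normalization mentioned above:

    #include <stdint.h>
    #include <time.h>

    /* Portable fallback backend; a TSC-based backend would slot in behind
     * the same function and handle the socket normalization itself. */
    static inline uint64_t opal_clock_nsec(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000000ULL + (uint64_t)ts.tv_nsec;
    }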
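
And for the fd-aware progress registration, entirely hypothetical signatures just to show the intent: each progress function hands opal_progress() an fd (or -1), and if every registered function supplied a real fd, we can drop into a genuine blocking poll() until the next timer deadline:

    #include <poll.h>

    typedef int (*opal_progress_callback_t)(void);

    struct progress_entry {
        opal_progress_callback_t cb;
        int fd;                     /* -1 == "I have no fd; never block" */
    };

    /* Proposed registration (today it is just opal_progress_register(cb)): */
    int opal_progress_register_fd(opal_progress_callback_t cb, int fd);

    /* Inside opal_progress(), once nothing "interesting" has happened for a
     * while: block on the registered fds until the next timer is due.
     * (Assumes n <= 32 for the sketch.) */
    static int progress_block(struct progress_entry *entries, int n,
                              int timeout_ms)
    {
        struct pollfd fds[32];
        int i;

        for (i = 0; i < n; ++i) {
            if (entries[i].fd < 0) {
                return 0;           /* someone can't block; stay polling */
            }
            fds[i].fd = entries[i].fd;
            fds[i].events = POLLIN;
        }
        return poll(fds, n, timeout_ms);
    }

If any layer registers without an fd, blocking is simply disabled and we behave exactly as we do today.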

--
Jeff Squyres
Cisco Systems
