On Feb 6, 2008, at 10:25 AM, Jeff Squyres wrote:

On Jan 31, 2008, at 5:07 PM, Josh Hursey wrote:

For the visualization it would be really nice to see how well tested a
particular interconnect, resource manager, and/or 'feature' is when
ramping up to a release. However, these pieces of information are hard
to obtain, and in some cases hard to quantify (e.g., what do we mean by
testing a 'feature'?).

Thinking about this, it occurred to me that what we really need is for
OMPI to tell MTT what it is doing in some of these cases.
Two examples of things MTT cannot tell:
- which set of compile-time options is enabled/disabled
automatically
  e.g., ["./configure --with-foo" vs. "./configure"]

Yes, this could be done.


- which BTL(s) or MTL are used to run a test
  e.g. [ "mpirun -mca btl tcp,self foo" vs. "mpirun foo"]

Don't we offer this in a limited way right now with the "network"
field in the MPI details section?  I think we hesitated to put
OMPI-specific semantics on that field -- e.g., whether you're using the
MX BTL or MTL is an OMPI issue; you're still using the MX protocol/network.

I suppose we could augment those strings in the OMPI case: mx:mtl and
mx:btl, for example.

So to be clear: does the network field not give you what you need?

The network field gives us exactly what we want. The problem is that it is not filled in when we run "mpirun foo", since we do not specify the BTLs on the command line (unless the INI explicitly specifies them). The problem becomes further complicated when you run something like "mpirun -mca btl openib,tcp,self", where the 'tcp' BTL is not going to be used due to exclusivity (at least that is what I'm told), so we misreport the BTLs used in this case.
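
To make it concrete, here is roughly what I have in mind for driving
both the BTL selection and the network field from the INI. Treat this
as a sketch only -- I am writing the parameter names from memory, so
they may not match the real MTT INI syntax:

   [MPI Details: Open MPI]
   # Sketch only -- parameter names are from memory and may not match
   # the real MTT INI syntax.  The point: the network field is only
   # accurate when the BTL list is forced on the command line.
   exec = mpirun --mca btl tcp,self -np &test_np() &test_executable() &test_argv()
   network = tcp

In the plain "mpirun foo" case there is nothing sensible to put in the
network field, which is exactly the hole I am worried about.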



For the configure options we *could* parse the config.log to extract
this data. The question is, if we did this, what do we want to look
for? And is this something we want to do? Is there another way?

I think having a network-like field for the MPI install section might
be good, possibly with an OMPI:: funclet to do the parsing
automatically.  But we need to be mindful of MPIs that won't have a
configure script, so what information goes there might be dubious (or
just empty?).

Yeah, I think an Open MPI-specific function should do the parsing, since the configure options we want to grab will be specific to Open MPI. In the case of no configure script, the field would just be empty.
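
Just to show the kind of thing I mean, here is a rough sketch of the
parsing (in Python purely to illustrate the idea; the real thing would
presumably live as a funclet in the OMPI:: Perl module). It leans on
the fact that Autoconf records the invocation near the top of
config.log on a line of the form "  $ ./configure ...":

   import re
   import sys

   def configure_argv(config_log="config.log"):
       # Pull the "./configure ..." invocation out of an Autoconf
       # config.log.  Returns the argument list, or [] if there is
       # no config.log / no configure script (non-Autoconf MPIs).
       try:
           with open(config_log) as f:
               for line in f:
                   # Autoconf writes:  '  $ ./configure --with-foo ...'
                   m = re.match(r'\s*\$ \S*configure\b\s*(.*)', line)
                   if m:
                       return m.group(1).split()
       except OSError:
           pass
       return []

   if __name__ == "__main__":
       path = sys.argv[1] if len(sys.argv) > 1 else "config.log"
       print(configure_argv(path))

From there we would filter the list down to just the options worth
reporting (--with-*/--enable-* and friends).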




For the BTL(s)/MTL this is a much more subtle question, since it
depends on the connectivity of the interfaces on a machine and the
runtime selection logic. If we added a parameter to mpirun (e.g.,
"--showme connectivity") that displayed connectivity information to
stdout (or a file), would this be useful? What should it look like?

Ya, this is on my to-do list.  IB CM stuff in the openib BTL has been
consuming my time recently (much more complicated than I originally
thought); I swear I'll be getting to the connectivity map issue before
v1.3...

Is there a bug about this somewhere? There is a slim chance that I (or maybe Tim P) could help with this effort in the near term (next month). For the simple case we could just dump the connectivity information from Rank 0; the more complex case would be a full mapping.
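
As a straw man for the Rank 0 dump (completely made up -- neither the
option nor this output exists today), I was picturing something like:

   shell$ mpirun -np 4 --showme connectivity ./foo
   [node01:rank0] self   -> rank 0
   [node01:rank0] sm     -> rank 1 (node01)
   [node01:rank0] openib -> ranks 2-3 (node02)

i.e., one line per BTL/MTL that was actually selected and the peers it
reaches. MTT could then scrape that output to fill in the network
field automatically.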

-- Josh




We have talked about some of this in the past, but I could not find a
bug about it in MTT.

What do you think about this?

Cheers,
Josh


--
Jeff Squyres
Cisco Systems
