Re: [OMPI devel] patch for building gm btl

2008-01-02 Thread George Bosilca
Same here at UTK, no more GM clusters around. I guess I can reinstall  
the GM libraries, just to test the ompi compilation step.


  george.

On Jan 2, 2008, at 9:51 AM, Tim Prins wrote:

> On Wednesday 02 January 2008 08:52:08 am Jeff Squyres wrote:
>> On Dec 31, 2007, at 11:42 PM, Paul H. Hargrove wrote:
>>> I tried today to build the OMPI trunk on a system w/ GM libs installed
>>> (I tried both GM-2.0.16 and GM-1.6.4) and found that the GM BTL won't
>>> even compile, due to unbalanced parens.  The patch below reintroduces
>>> the parens that were apparently lost in r16633:
>>
>> Fixed (https://svn.open-mpi.org/trac/ompi/changeset/17029); thanks for
>> the patch.
>>
>>> The fact that this has gone unfixed for 2 months suggests to me that
>>> nobody is building the GM BTL.  So, how would I go about checking ...
>>> a) ...if there exists any periodic build of the GM BTL via MTT?
>>
>> I thought that Indiana was doing GM builds, but perhaps they've
>> upgraded to MX these days...?
>
> This is correct. Our GM system was upgraded, and is now running MX (although
> we have yet to set up MTT on the upgraded system...).
>
> Tim
___
devel mailing list
de...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/devel






Re: [OMPI devel] patch for building gm btl

2008-01-02 Thread Tim Prins
On Wednesday 02 January 2008 08:52:08 am Jeff Squyres wrote:
> On Dec 31, 2007, at 11:42 PM, Paul H. Hargrove wrote:
> > I tried today to build the OMPI trunk on a system w/ GM libs installed
> > (I tried both GM-2.0.16 and GM-1.6.4) and found that the GM BTL won't
> > even compile, due to unbalanced parens.  The patch below reintroduces
> > the parens that were apparently lost in r16633:
>
> Fixed (https://svn.open-mpi.org/trac/ompi/changeset/17029); thanks for
> the patch.
>
> > The fact that this has gone unfixed for 2 months suggests to me that
> > nobody is building the GM BTL.  So, how would I go about checking ...
> > a) ...if there exists any periodic build of the GM BTL via MTT?
>
> I thought that Indiana was doing GM builds, but perhaps they've
> upgraded to MX these days...?

This is correct. Our GM system was upgraded, and is now running MX (although 
we have yet to set up MTT on the upgraded system...).

Tim


Re: [OMPI devel] patch for building gm btl

2008-01-02 Thread Jeff Squyres

On Dec 31, 2007, at 11:42 PM, Paul H. Hargrove wrote:


I tried today to build the OMPI trunk on a system w/ GM libs installed
(I tried both GM-2.0.16 and GM-1.6.4) and found that the GM BTL won't
even compile, due to unbalanced parens.  The patch below reintroduces
the parens that were apparently lost in r16633:


Fixed (https://svn.open-mpi.org/trac/ompi/changeset/17029); thanks for  
the patch.



The fact that this has gone unfixed for 2 months suggests to me that
nobody is building the GM BTL.  So, how would I go about checking ...
a) ...if there exists any periodic build of the GM BTL via MTT?


I thought that Indiana was doing GM builds, but perhaps they've  
upgraded to MX these days...?


UTK -- do you still have any GM clusters around?


b) ...if such builds, if any, experience the same error(s) as I
c) ...which GM library versions such builds, if any, compile against


Given the typos you found, I don't see how they could.


d) ...if anybody wants to help set up an MTT for GM on my system (NOTE:
Jeff Squyres, Brian Barrett and George Bosilca all have existing
accounts on my cluster, though possibly expired/disabled).



I always like to see more OMPI testing.  :-)

I'd be happy to help set up MTT for your cluster.  Is it easy to
re-activate my accounts?  What kind of testing would you be willing to do
on your cluster, and how often?  What queueing system do you
use?  ...etc. (this might be worth a phone call)


I have a somewhat-complex setup for MTT on my Cisco development  
cluster; I submit a whole pile of compilation MTT jobs via SLURM and  
wait for them to complete (individually).  Each compile that completes  
successfully will end up generating another pile of SLURM jobs for  
testing.  I have a [somewhat ugly] top-level script that submits all  
these jobs according to a schedule set by day-of-week.
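
[Editorial aside: a minimal sketch of the two-stage submission flow described above. All names, paths, and MTT options here are invented for illustration, not Cisco's actual scripts; by default it only prints the sbatch commands it would run.]

```shell
#!/bin/sh
# Two-stage flow: one SLURM job per build configuration; each build
# that completes successfully submits its own test job.
# Dry-run by default (set SBATCH=sbatch to actually submit).
SBATCH="${SBATCH:-echo sbatch}"
CONFIGS="gcc-static gcc-shared"   # invented build-configuration names

submit_all() {
    for cfg in $CONFIGS; do
        # Stage 1: compile job; on success it chains the stage-2 test job.
        $SBATCH --job-name "mtt-build-$cfg" --wrap \
          "client/mtt --section build-$cfg && \
           $SBATCH --job-name mtt-test-$cfg --wrap 'client/mtt --section test-$cfg'"
    done
}

submit_all
```

A cron entry keyed on the day of the week could then pick which configuration list to submit, matching the schedule-driven top-level script the message describes.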


Sidenote: one of the interesting things about MTT that we've found is  
that everyone tends to use it differently -- IU, Sun, IBM, and Cisco  
all use MTT quite differently in our nightly regression testing.  So  
our top-level scripting to invoke MTT is not uniform at all.  We've  
long since talked about adding a uniform upper layer for large-scale  
MTT automation that can handle full parallelism, generic batch queue  
system support, etc., but haven't found the cycles to get together and  
try to map out what it would look like.  Plus, all of our individual  
setups are working / ain't broken, so there's not a lot of incentive  
to "fix" them...  It might be an interesting software engineering  
research project, though, if anyone's got the cycles.  This has [much]  
larger implications than just MPI testing.


--
Jeff Squyres
Cisco Systems