That's cool!

I'll test it on my cluster. =)
It uses InfiniBand and MVAPICH2, but some people want to use ordinary
Ethernet as the interconnect, so this will be very useful. =)

2008/2/26, Justin Bronder <[EMAIL PROTECTED]>:
> I've been spending the majority of my Gentoo-related time working on a
>  solution to bug 44132 [1], basically, trying to find a way to gracefully
>  handle multiple installs of various MPI implementations at the same time in
>  Gentoo.  There's more information about the solution in my devspace [2], but
>  a quick summary is that there is a new package (empi) that is much like
>  crossdev, a new eselect module for empi, and a new eclass that handles both
>  mpi implementations and packages depending on mpi.
>
>  So, I think I have pushed this work far enough along for it to actually be
>  somewhat officially offered.  My question then, is where should this be
>  located?  There are several mpi packages in the science overlay already, so
>  should I push this work to there, or would it be more appropriate to make a
>  new overlay specifically for hp-cluster?
>
>  Future work related to this project will be getting all mpi implementations
>  and dependent packages converted in the same overlay before bringing it up on
>  -dev for discussion about inclusion into the main tree.
>
>  I have no real preference either way, but the science team does already have
>  an overlay :)  Let me know what you think.
>
>  [1] https://bugs.gentoo.org/show_bug.cgi?id=44132
>  [2] http://dev.gentoo.org/~jsbronder/README.empi.txt
>
>  --
>
> Justin Bronder
>
>


-- 
Gentoo GNU/Linux 2.6.23 Dual Xeon

Mail to
      [EMAIL PROTECTED]
      [EMAIL PROTECTED]
-- 
[email protected] mailing list
