New tarball issued in the usual place:
http://www.open-mpi.org/software/ompi/v1.6/
Changes since rc1:
- Fixed the compile errors in rc1
- Removed the rmcast framework
--
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/
The NFS server for www.open-mpi.org and svn.open-mpi.org needs to be rebooted
tomorrow morning. These services will be offline for about 10 minutes during
the reboot.
Date: Tuesday, July 31, 2012
Time:
- 6:00am-6:10am Pacific US time
- 7:00am-7:10am Mountain US time
1.6.1rc1 is a bust because of a compile error. :(
The error wasn't caught on the build machine because the bug is in the openib BTL,
and the build machine doesn't have OpenFabrics support.
1.6.1rc2 will be posted later today.
On Jul 27, 2012, at 10:20 PM, Jeff Squyres wrote:
> Finally! It's in th
On Jul 30, 2012, at 2:37 AM, George Bosilca wrote:
> I think that as long as there is a single home area per cluster, the
> difference between the different approaches might seem irrelevant to most
> people.
Yeah, I agree - after thinking about it, it probably didn't accomplish much.
Hello,
Three months ago I opened a ticket about an extra local data copy being made in
the pairwise alltoallv implementation in the "tuned" module, which can hurt
performance in some cases:
https://svn.open-mpi.org/trac/ompi/ticket/3079
As far as I can see, the milestone was set to Open MPI 1.6.1.
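For context, the pairwise algorithm does p-1 MPI_Sendrecv steps, and the step
involving the rank's own block is where an avoidable local copy can sneak in.
Below is a minimal sketch of the exchange pattern, simplified to MPI_BYTE
buffers; the function name and structure are illustrative only, not Open MPI's
actual coll/tuned code:

#include <mpi.h>
#include <string.h>

/* Illustrative pairwise alltoallv: at step s, rank r sends its block
 * for (r + s) % p and receives the block from (r - s + p) % p.
 * Step 0 is the rank's own block; copying it with memcpy avoids the
 * extra local copy that routing it through MPI_Sendrecv-to-self can
 * introduce. */
static int pairwise_alltoallv_bytes(const char *sbuf, const int *scounts,
                                    const int *sdispls, char *rbuf,
                                    const int *rcounts, const int *rdispls,
                                    MPI_Comm comm)
{
    int rank, p;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &p);

    /* Local block: direct copy, no messaging layer involved. */
    memcpy(rbuf + rdispls[rank], sbuf + sdispls[rank],
           (size_t)scounts[rank]);

    for (int s = 1; s < p; ++s) {
        int sendto   = (rank + s) % p;
        int recvfrom = (rank - s + p) % p;
        MPI_Sendrecv(sbuf + sdispls[sendto], scounts[sendto], MPI_BYTE,
                     sendto, 0,
                     rbuf + rdispls[recvfrom], rcounts[recvfrom], MPI_BYTE,
                     recvfrom, 0, comm, MPI_STATUS_IGNORE);
    }
    return MPI_SUCCESS;
}

(The real implementation lives under ompi/mca/coll/tuned and handles
arbitrary datatypes rather than raw bytes.)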
I think that as long as there is a single home area per cluster, the difference
between the different approaches might seem irrelevant to most people.
My problem is twofold. First, I have a common home area across several
different development clusters. Thus I have direct access through ssh