Making all in mca/common/ofacm
make[2]: Entering directory
`/hpc/local/benchmarks/hpc-stack-gcc/src/install/ompi-master/opal/mca/common/ofacm'
CC libmca_common_ofacm_la-common_ofacm_base.lo
CC libmca_common_ofacm_la-common_ofacm_oob.lo
CC
That's because you folks didn't completely clean up the OpenFabrics stuff prior
to the move - something that we warned about, but folks said they would resolve
later :-)
On Jul 25, 2014, at 11:19 PM, Mike Dubman wrote:
> Making all in mca/common/ofacm
> make[2]:
We are talking MB, not KB, aren't we?
George.
On Thu, Jul 24, 2014 at 2:57 PM, Rolf vandeVaart wrote:
> WHAT: Bump up the minimum sm pool size to 128K from 64K.
> WHY: When running the OSU benchmark on 2 nodes and utilizing a larger
> btl_smcuda_max_send_size, we can run
Yes (my mistake)
Sent from my iPhone
On Jul 26, 2014, at 3:19 PM, "George Bosilca" wrote:
> We are talking MB, not KB, aren't we?
> George.
> On Thu, Jul 24, 2014 at 2:57 PM, Rolf vandeVaart
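[For readers following along: the values being discussed are runtime-tunable MCA parameters, so they can be experimented with without rebuilding. A minimal sketch - btl_smcuda_max_send_size is named in the thread, but the exact sm mpool size parameter name is an assumption here; check `ompi_info` on your build:]

```shell
# List the parameters the smcuda BTL component actually exposes in this build
ompi_info --param btl smcuda

# Run the benchmark with an enlarged max send size (bytes; 128 KB shown)
# and a larger sm pool (128 MB, per the MB-not-KB correction above).
# NOTE: "mpool_sm_min_size" is an assumed parameter name - verify with ompi_info.
mpirun --mca btl_smcuda_max_send_size 131072 \
       --mca mpool_sm_min_size 134217728 \
       -np 2 ./osu_bw
```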
All,
I'll take advantage of this thread to clarify what is missing to have a perfectly
MPI-agnostic BTL interface. Some of these issues are pretty straightforward
(getting rid of RTE and OMPI vestiges), while others will require some thinking
from their developers in order to cope with a not