On Tue, Jan 10, 2012 at 10:02 AM, Roberto Rey wrote:
> I'm running some tests on EC2 cluster instances with 10 Gigabit Ethernet
> hardware and I'm getting strange latency results with Netpipe and OpenMPI.
- There are 3 types of instances that can use 10 GbE. Are you using
Andrew Helwer, on Fri 13 Jan 2012 18:16:16 +0100, wrote:
> libhwloc.lib(traversal.o) : error LNK2019: unresolved external symbol
> __ms_vsnprintf referenced in function snprintf
Do you also link in msvcrt? MinGW needs it for almost everything.
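For a MinGW link this usually means adding -lmsvcrt explicitly after the libraries that reference it; the file names below are hypothetical placeholders, only the library order is the point:

```shell
# Hypothetical MinGW link line. Library order matters: msvcrt (which
# provides __ms_vsnprintf on MinGW) must come after libhwloc, which
# references it.
gcc -o myapp.exe myapp.o -lhwloc -lmsvcrt
```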
Samuel
On Thu, Jan 12, 2012 at 16:10, Jeff Squyres wrote:
> It's very strange to me that Open MPI is getting *better* than raw TCP
> performance. I don't have an immediate explanation for that -- if you're
> using the TCP BTL, then OMPI should be using TCP sockets, just like
On 01/12/2012 08:40 AM, Dave Love wrote:
> Surely this should be on the gridengine list -- and it's in recent
> archives -- but there's some ob-openmpi below. Can Notre Dame not get
> the support they've paid Univa for?
This is, in fact, in the recent gridengine archives. I brought up this
Do you have a stack trace showing where exactly things are seg faulting in
blacs_pinfo?
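One generic way to capture such a trace (a sketch, not specific to this cluster; binary names are placeholders) is to enable core dumps, re-run the failing job, and inspect the core with gdb:

```shell
# Allow core files to be written, then reproduce the crash.
ulimit -c unlimited
mpirun -np 4 ./my_blacs_app

# Load the resulting core file and print the call stack.
gdb ./my_blacs_app core
# at the (gdb) prompt, type: bt
```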
--td
On 1/13/2012 8:12 AM, Conn ORourke wrote:
Dear Openmpi Users,
I am reserving several processors with SGE upon which I want to run a number of
openmpi jobs, all of which individually (and combined) use
less than the reserved number of processors. The code I am using uses
BLACS, and when blacs_pinfo is called I get a seg fault. If the
Dear Open MPI users,
using MPI_Allgather with the MPI_CHAR type, I have a question about
null-terminated strings. Imagine I want to gather the node names my
program is running on:
char hostname[MAX_LEN];
char*