Re: [OMPI users] stdin issue with openmpi/2.0.0

2016-08-30 Thread Jingchao Zhang
From: users <users-boun...@lists.open-mpi.org> on behalf of r...@open-mpi.org <r...@open-mpi.org> Sent: Tuesday, August 30, 2016 6:37:51 PM To: Open MPI Users Subject: Re: [OMPI users] stdin issue with openmpi/2.0.0 Sorry - previous version had a typo in it: diff --git a/orte/mca/state/orted/state_orted.c b/orte/mca/state/orted/state_orted.c

Re: [OMPI users] stdin issue with openmpi/2.0.0

2016-08-30 Thread Jingchao Zhang
From: users <users-boun...@lists.open-mpi.org> on behalf of r...@open-mpi.org <r...@open-mpi.org> Sent: Tuesday, August 30, 2016 1:45:45 PM To: Open MPI Users Subject: Re: [OMPI users] stdin issue with openmpi/2.0.0 Well, that helped a bit. For some reason, your system is skipping

Re: [OMPI users] stdin issue with openmpi/2.0.0

2016-08-30 Thread r...@open-mpi.org
> From: users <users-boun...@lists.open-mpi.org> on behalf of r...@open-mpi.org <r...@open-mpi.org> > Sent: Tuesday, August 30, 2016 1:45:45 PM > To: Open MPI Users > Subject: Re: [OMPI users] stdin issue with openmpi/2.0.0 > > Well, that helped a bit. For some reason, your system is skipping

Re: [OMPI users] stdin issue with openmpi/2.0.0

2016-08-30 Thread r...@open-mpi.org
> From: users <users-boun...@lists.open-mpi.org> on behalf of r...@open-mpi.org <r...@open-mpi.org> > Sent: Tuesday, August 30, 2016 12:56:33 PM > To: Open MPI Users

Re: [OMPI users] stdin issue with openmpi/2.0.0

2016-08-30 Thread Jingchao Zhang
ORTE_NAME_PRINT(ORTE_PROC_MY_NAME), fd, ORTE_NAME_PRINT(dst_name))); */ From: users <users-boun...@lists.open-mpi.org> on behalf of r...@open-mpi.org <r...@open-mpi.org>

Re: [OMPI users] stdin issue with openmpi/2.0.0

2016-08-30 Thread r...@open-mpi.org
OPAL_OUTPUT_VERBOSE((1, orte_iof_base_framework.framework_output, > "%s iof:hnp pushing fd %d for process %s", > ORTE_NAME_PRINT(ORTE_PROC_MY_NAME), > fd, ORTE_NAME_PRINT(dst_name))); > */ > > From

Re: [OMPI users] stdin issue with openmpi/2.0.0

2016-08-30 Thread Jingchao Zhang
From: users <users-boun...@lists.open-mpi.org> on behalf of r...@open-mpi.org <r...@open-mpi.org> Sent: Monday, August 29, 2016 11:42:00 AM To: Open MPI Users Subject: Re: [OMPI users] stdin issue with openmpi/2.0.0 I’m sorry, but something is simply very wrong here. Are you sure you are pointed

Re: [OMPI users] stdin issue with openmpi/2.0.0

2016-08-29 Thread r...@open-mpi.org
Rank 18 has cleared MPI_Init > Rank 10 has cleared MPI_Init > Rank 11 has cleared MPI_Init > Rank 12 has cleared MPI_Init > Rank 13 has cleared MPI_Init > Rank 17 has cleared MPI_Init > Rank 19 has cleared MPI_Init > > Thanks, > > Dr. Jingchao Zhang > Holland Computing Center
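The test program that produced the quoted output does not survive in this snippet. A minimal sketch consistent with those lines (our reconstruction, not the code actually used in the thread) is a program where every rank announces itself as soon as MPI_Init returns:

    /* Sketch only: each rank prints one line right after MPI_Init,
     * matching the "Rank N has cleared MPI_Init" output quoted above. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        printf("Rank %d has cleared MPI_Init\n", rank);
        fflush(stdout);
        MPI_Finalize();
        return 0;
    }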

Re: [OMPI users] stdin issue with openmpi/2.0.0

2016-08-29 Thread Jingchao Zhang
PM To: Open MPI Users Subject: Re: [OMPI users] stdin issue with openmpi/2.0.0 I am finding this impossible to replicate, so something odd must be going on. Can you please (a) pull down the latest v2.0.1 nightly tarball, and (b) add this patch to it? diff --git a/orte/mca/iof/hnp/iof_hnp.c b/orte/mca/iof/hnp/iof_hnp.c

Re: [OMPI users] stdin issue with openmpi/2.0.0

2016-08-27 Thread r...@open-mpi.org
>>> 402-472-6400 >>> From: users <users-boun...@lists.open-mpi.org> on behalf of >>> r...@open-mpi.org <r...@open-mpi.org> >>> Sent: Wednesday, August 24, 2016 1:27:28 PM >>> To: Open MPI Users >>> Subject: Re: [OMPI users] stdin issue with openmpi/2.0.0

Re: [OMPI users] stdin issue with openmpi/2.0.0

2016-08-25 Thread Jeff Squyres (jsquyres)
>> University of Nebraska-Lincoln >> 402-472-6400 >> From: users <users-boun...@lists.open-mpi.org> on behalf of >> r...@open-mpi.org <r...@open-mpi.org> >> Sent: Wednesday, August 24, 2016 1:27:28 PM >> To: Open MPI Users >> Subject: Re: [OMPI users] stdin issue with openmpi/2.0.0

Re: [OMPI users] stdin issue with openmpi/2.0.0

2016-08-25 Thread Jingchao Zhang
Sent: Thursday, August 25, 2016 8:59:23 AM To: Open MPI Users Subject: Re: [OMPI users] stdin issue with openmpi/2.0.0 ??? Weird - can you send me an updated output of that last test we ran? On Aug 25, 2016, at 7:51 AM, Jingchao Zhang <zh...@unl.edu> wrote: Hi Ralph,

Re: [OMPI users] stdin issue with openmpi/2.0.0

2016-08-25 Thread r...@open-mpi.org
<r...@open-mpi.org> > Sent: Wednesday, August 24, 2016 1:27:28 PM > To: Open MPI Users > Subject: Re: [OMPI users] stdin issue with openmpi/2.0.0 > > Bingo - found it, fix submitted and hope to get it into 2.0.1 > > Thanks for the assist!

Re: [OMPI users] stdin issue with openmpi/2.0.0

2016-08-25 Thread Jingchao Zhang
From: users <users-boun...@lists.open-mpi.org> on behalf of r...@open-mpi.org <r...@open-mpi.org> Sent: Wednesday, August 24, 2016 1:27:28 PM To: Open MPI Users Subject: Re: [OMPI users] stdin issue with openmpi/2.0.0 Bingo - found it, fix submitted and hope to get it into 2.0.1 Thanks for the assist! Ralph On Aug 24, 2016, at 12:15 PM, Jingchao Zhang wrote:

Re: [OMPI users] stdin issue with openmpi/2.0.0

2016-08-24 Thread r...@open-mpi.org
<r...@open-mpi.org> > Sent: Wednesday, August 24, 2016 12:14:26 PM > To: Open MPI Users > Subject: Re: [OMPI users] stdin issue with openmpi/2.0.0 > > Afraid I can’t replicate a problem at all, whether rank=0 is local or not.

Re: [OMPI users] stdin issue with openmpi/2.0.0

2016-08-24 Thread Jingchao Zhang
From: users <users-boun...@lists.open-mpi.org> on behalf of r...@open-mpi.org <r...@open-mpi.org> Sent: Wednesday, August 24, 2016 12:14:26 PM To: Open MPI Users Subject: Re: [OMPI users] stdin issue with openmpi/2.0.0 Afraid I can’t replicate a problem at all, whether rank=0 is local or not. I’m also using bash, but on CentOS-7, so I suspect the OS is the

Re: [OMPI users] stdin issue with openmpi/2.0.0

2016-08-24 Thread r...@open-mpi.org
cleared MPI_Init >>>> Rank 11 has cleared MPI_Init >>>> Rank 12 has cleared MPI_Init >>>> Rank 13 has cleared MPI_Init >>>> Rank 14 has cleared MPI_Init >>>> Rank 15 has cleared MPI_Init >>>> Rank 17 has cleared MPI_Init >>>>

Re: [OMPI users] stdin issue with openmpi/2.0.0

2016-08-24 Thread Jingchao Zhang
To: Open MPI Users Subject: Re: [OMPI users] stdin issue with openmpi/2.0.0 Hmmm...that’s a good point. Rank 0 and mpirun are always on the same node on my cluster. I’ll give it a try. Jingchao: is rank 0 on the node with mpirun, or on a remote node? On Aug 23, 2016, at 5:58 PM, Gilles Gouaillardet wrote:

Re: [OMPI users] stdin issue with openmpi/2.0.0

2016-08-23 Thread r...@open-mpi.org
<r...@open-mpi.org> > Sent: Tuesday, August 23, 2016 4:03:07 PM > To: Open MPI Users > Subject: Re: [OMPI users] stdin issue with openmpi/2.0.0 > > The IO forwarding messages all flow over the Ethernet, so the type of fabric is irrelevant.

Re: [OMPI users] stdin issue with openmpi/2.0.0

2016-08-23 Thread Jingchao Zhang
From: users <users-boun...@lists.open-mpi.org> on behalf of r...@open-mpi.org <r...@open-mpi.org> Sent: Tuesday, August 23, 2016 4:03:07 PM To: Open MPI Users Subject: Re: [OMPI users] stdin issue with openmpi/2.0.0 The IO forwarding messages all flow over the Ethernet, so the type of fabric is irrelevant. The number of

Re: [OMPI users] stdin issue with openmpi/2.0.0

2016-08-23 Thread r...@open-mpi.org
University of Nebraska-Lincoln > 402-472-6400 > From: users <users-boun...@lists.open-mpi.org> on behalf of r...@open-mpi.org <r...@open-mpi.org> > Sent: Monday, August 22, 2016

Re: [OMPI users] stdin issue with openmpi/2.0.0

2016-08-23 Thread Jingchao Zhang
From: users <users-boun...@lists.open-mpi.org> on behalf of r...@open-mpi.org <r...@open-mpi.org> Sent: Monday, August 22, 2016 10:23:42 PM To: Open MPI Users Subject: Re: [OMPI users] stdin issue with openmpi/2.0.0 FWIW: I just tested forwarding up to 100MBytes via stdin using the simple test shown below with OMPI v2.0.1rc1, and it worked fine. So I’d suggest upgrading when
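The "simple test shown below" was cut off by the archive snippet. A plausible shape for that kind of stdin-forwarding test (our sketch under that assumption, not Ralph's actual code) has rank 0 drain stdin and report the byte count:

    /* Sketch: rank 0 reads everything forwarded on stdin and reports
     * the total. Run as, e.g.: mpirun -n 16 ./stdin_test < bigfile
     * (the program and file names here are hypothetical). */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) {
            char buf[65536];
            size_t n, total = 0;
            while ((n = fread(buf, 1, sizeof(buf), stdin)) > 0) {
                total += n;
            }
            printf("rank 0 read %zu bytes from stdin\n", total);
        }
        MPI_Finalize();
        return 0;
    }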

Re: [OMPI users] stdin issue with openmpi/2.0.0

2016-08-22 Thread r...@open-mpi.org
/mca_coll_tuned.so >> #7 0x2b16351cb4fb in PMPI_Bcast () from >> /util/opt/openmpi/2.0.0/gcc/6.1.0/lib/libmpi.so.20 >> #8 0x005c5b5d in LAMMPS_NS::Input::file() () at ../input.cpp:203 >> #9 0x0000005d4236 in main () at ../main.cpp:31 >> >>

Re: [OMPI users] stdin issue with openmpi/2.0.0

2016-08-22 Thread Jingchao Zhang
Sent: Monday, August 22, 2016 2:17:10 PM To: Open MPI Users Subject: Re: [OMPI users] stdin issue with openmpi/2.0.0 Hmmm...perhaps we can break this out a bit? The stdin will be going to your rank=0 proc. It sounds like you have some subsequent step that calls MPI_Bcast? Can you first verify that the input is being correctly delivered to rank=0?

Re: [OMPI users] stdin issue with openmpi/2.0.0

2016-08-22 Thread r...@open-mpi.org
> From: users <users-boun...@lists.open-mpi.org> on behalf of r...@open-mpi.org <r...@open-mpi.org> > Sent: Monday, August 22, 2016 2:17:10 PM > To: Open MPI Users > Subject: Re: [OMPI users] stdin issue with openmpi/2.0.0

Re: [OMPI users] stdin issue with openmpi/2.0.0

2016-08-22 Thread Jingchao Zhang
From: users <users-boun...@lists.open-mpi.org> on behalf of r...@open-mpi.org <r...@open-mpi.org> Sent: Monday, August 22, 2016 2:17:10 PM To: Open MPI Users Subject: Re: [OMPI users] stdin issue with openmpi/2.0.0 Hmmm...perhaps we can break this out a bit? The stdin will be going to your rank=0 proc. It sounds like you have some subsequent step that calls MPI_Bcast?

Re: [OMPI users] stdin issue with openmpi/2.0.0

2016-08-22 Thread Jeff Hammond
On Monday, August 22, 2016, Jingchao Zhang wrote: > Hi all, > > > We compiled openmpi/2.0.0 with gcc/6.1.0 and intel/13.1.3. Both of them > have odd behaviors when trying to read from standard input. > > > For example, if we start the application lammps across 4 nodes, each node > 16 cores, connected by Intel QDR Infiniband

Re: [OMPI users] stdin issue with openmpi/2.0.0

2016-08-22 Thread r...@open-mpi.org
Hmmm...perhaps we can break this out a bit? The stdin will be going to your rank=0 proc. It sounds like you have some subsequent step that calls MPI_Bcast? Can you first verify that the input is being correctly delivered to rank=0? This will help us isolate if the problem is in the IO forwarding
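The isolation step Ralph describes can be scripted directly. A minimal sketch (ours, not code from the thread) that separates the two failure modes: if the fgets never returns, stdin forwarding is at fault; if rank 0 echoes the line but the program then hangs, the problem is in the broadcast:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        char line[1024] = {0};
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0 && fgets(line, sizeof(line), stdin) != NULL) {
            /* Reaching this print means stdin forwarding delivered data. */
            printf("rank 0 got: %s", line);
            fflush(stdout);
        }

        /* If the echo appeared but this hangs, suspect the Bcast path. */
        MPI_Bcast(line, sizeof(line), MPI_CHAR, 0, MPI_COMM_WORLD);

        MPI_Finalize();
        return 0;
    }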

[OMPI users] stdin issue with openmpi/2.0.0

2016-08-22 Thread Jingchao Zhang
Hi all, We compiled openmpi/2.0.0 with gcc/6.1.0 and intel/13.1.3. Both of them have odd behaviors when trying to read from standard input. For example, if we start the application lammps across 4 nodes, each node 16 cores, connected by Intel QDR Infiniband, mpirun works fine for the 1st