From: users <users-boun...@lists.open-mpi.org> on behalf of r...@open-mpi.org
<r...@open-mpi.org>
Sent: Tuesday, August 30, 2016 6:37:51 PM
To: Open MPI Users
Subject: Re: [OMPI users] stdin issue with openmpi/2.0.0
Sorry - previous version had a typo in it:
diff --git a/orte/mca/state/orted/state_orted.c
b/orte/mca/state/orted/state_orted.c
From: users <users-boun...@lists.open-mpi.org> on behalf of r...@open-mpi.org
<r...@open-mpi.org>
Sent: Tuesday, August 30, 2016 1:45:45 PM
To: Open MPI Users
Subject: Re: [OMPI users] stdin issue with openmpi/2.0.0
Well, that helped a bit. For some reason, your system is skipping
> From: users <users-boun...@lists.open-mpi.org> on behalf of r...@open-mpi.org <r...@open-mpi.org>
> Sent: Tuesday, August 30, 2016 1:45:45 PM
> To: Open MPI Users
> Subject: Re: [OMPI users] stdin issue with openmpi/2.0.0
>
> Well, that helped a bit. For some reason, your system is skipping
> From: users <users-boun...@lists.open-mpi.org> on behalf of r...@open-mpi.org
> <r...@open-mpi.org>
> Sent: Tuesday, August 30, 2016 12:56:33 PM
> To: Open MP
ORTE_NAME_PRINT(ORTE_PROC_MY_NAME),
fd, ORTE_NAME_PRINT(dst_name)));
*/
From: users <users-boun...@lists.open-mpi.org> on behalf of r...@open-mpi.org
<r...@open-mpi.org>
> OPAL_OUTPUT_VERBOSE((1, orte_iof_base_framework.framework_output,
> "%s iof:hnp pushing fd %d for process %s",
> ORTE_NAME_PRINT(ORTE_PROC_MY_NAME),
> fd, ORTE_NAME_PRINT(dst_name)));
> */
>
> From
From: users <users-boun...@lists.open-mpi.org> on behalf of r...@open-mpi.org
<r...@open-mpi.org>
Sent: Monday, August 29, 2016 11:42:00 AM
To: Open MPI Users
Subject: Re: [OMPI users] stdin issue with openmpi/2.0.0
I’m sorry, but something is simply very wrong here. Are you sure you are
pointed
> Rank 18 has cleared MPI_Init
> Rank 10 has cleared MPI_Init
> Rank 11 has cleared MPI_Init
> Rank 12 has cleared MPI_Init
> Rank 13 has cleared MPI_Init
> Rank 17 has cleared MPI_Init
> Rank 19 has cleared MPI_Init
>
> Thanks,
>
> Dr. Jingchao Zhang
> Holland Computing Center
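The "Rank N has cleared MPI_Init" lines above are per-rank startup confirmations printed by the test program being run here. The program itself is not shown in this excerpt; a minimal sketch of code producing that output, offered purely as an illustration, would be:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    /* each rank announces that it made it through MPI_Init */
    fprintf(stderr, "Rank %d has cleared MPI_Init\n", rank);
    MPI_Finalize();
    return 0;
}

The real test presumably goes on to exercise stdin after this point; only the startup print is reconstructed above.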
PM
To: Open MPI Users
Subject: Re: [OMPI users] stdin issue with openmpi/2.0.0
I am finding this impossible to replicate, so something odd must be going on.
Can you please (a) pull down the latest v2.0.1 nightly tarball, and (b) add
this patch to it?
diff --git a/orte/mca/iof/hnp/iof_hnp.c b/orte/mca/iof/hnp/iof_hnp.c
>>> 402-472-6400
>>> From: users <users-boun...@lists.open-mpi.org> on behalf of
>>> r...@open-mpi.org <r...@open-mpi.org>
>>> Sent: Wednesday, August 24, 2016 1:27:28 PM
>>> To: Open MPI Users
>>> Subject: Re: [OMPI users] stdin issue with openmpi/2.0.0
>> University of Nebraska-Lincoln
>> 402-472-6400
>> From: users <users-boun...@lists.open-mpi.org> on behalf of
>> r...@open-mpi.org <r...@open-mpi.org>
>> Sent: Wednesday, August 24, 2016 1:27:28 PM
>> To: Open MPI Users
>> Subject: Re: [OMPI users] stdin issue with openmpi/2.0.0
Sent: Thursday, August 25, 2016 8:59:23 AM
To: Open MPI Users
Subject: Re: [OMPI users] stdin issue with openmpi/2.0.0
??? Weird - can you send me an updated output of that last test we ran?
On Aug 25, 2016, at 7:51 AM, Jingchao Zhang
<zh...@unl.edu> wrote:
Hi Ralph,
> From: users <users-boun...@lists.open-mpi.org> on behalf of r...@open-mpi.org <r...@open-mpi.org>
> Sent: Wednesday, August 24, 2016 1:27:28 PM
> To: Open MPI Users
> Subject: Re: [OMPI users] stdin issue with openmpi/2.0.0
>
> Bingo - found it, fix submitted and hope to get it into 2.0.1
>
> Thanks for the assist!
From: users <users-boun...@lists.open-mpi.org> on behalf of r...@open-mpi.org
<r...@open-mpi.org>
Sent: Wednesday, August 24, 2016 1:27:28 PM
To: Open MPI Users
Subject: Re: [OMPI users] stdin issue with openmpi/2.0.0
Bingo - found it, fix submitted and hope to get it into 2.0.1
Thanks for the assist!
Ralph
On Aug 24, 2016, at 12:15 PM, Jingchao Zhang <zh...@unl.edu> wrote:
> From: users <users-boun...@lists.open-mpi.org> on behalf of r...@open-mpi.org <r...@open-mpi.org>
> Sent: Wednesday, August 24, 2016 12:14:26 PM
> To: Open MPI Users
> Subject: Re: [OMPI users] stdin issue with openmpi/2.0.0
>
> Afraid I can’t replicate a problem at all, whether rank=0 is local or not.
From: users <users-boun...@lists.open-mpi.org> on behalf of r...@open-mpi.org
<r...@open-mpi.org>
Sent: Wednesday, August 24, 2016 12:14:26 PM
To: Open MPI Users
Subject: Re: [OMPI users] stdin issue with openmpi/2.0.0
Afraid I can’t replicate a problem at all, whether rank=0 is local or not. I’m
also using bash, but on CentOS-7, so I suspect the OS is th
eared MPI_Init
>>>> Rank 11 has cleared MPI_Init
>>>> Rank 12 has cleared MPI_Init
>>>> Rank 13 has cleared MPI_Init
>>>> Rank 14 has cleared MPI_Init
>>>> Rank 15 has cleared MPI_Init
>>>> Rank 17 has cleared MPI_Init
>>>>
To: Open MPI Users
Subject: Re: [OMPI users] stdin issue with openmpi/2.0.0
Hmmm...that’s a good point. Rank 0 and mpirun are always on the same node on my
cluster. I’ll give it a try.
Jingchao: is rank 0 on the node with mpirun, or on a remote node?
On Aug 23, 2016, at 5:58 PM, Gilles Gouaillardet wrote:
> From: users <users-boun...@lists.open-mpi.org> on behalf of r...@open-mpi.org <r...@open-mpi.org>
> Sent: Tuesday, August 23, 2016 4:03:07 PM
> To: Open MPI Users
> Subject: Re: [OMPI users] stdin issue with openmpi/2.0.0
>
> The IO forwarding messages all flow over the Ethernet, so the type of fabric is irrelevant.
From: users <users-boun...@lists.open-mpi.org> on behalf of r...@open-mpi.org
<r...@open-mpi.org>
Sent: Tuesday, August 23, 2016 4:03:07 PM
To: Open MPI Users
Subject: Re: [OMPI users] stdin issue with openmpi/2.0.0
The IO forwarding messages all flow over the Ethernet, so the type of fabric is
irrelevant. The number of
> University of Nebraska-Lincoln
> 402-472-6400
> From: users <users-boun...@lists.open-mpi.org> on behalf of r...@open-mpi.org
> <r...@open-mpi.org>
> Sent: Monday, August 22,
From: users <users-boun...@lists.open-mpi.org> on behalf of r...@open-mpi.org <r...@open-mpi.org>
Sent: Monday, August 22, 2016 10:23:42 PM
To: Open MPI Users
Subject: Re: [OMPI users] stdin issue with openmpi/2.0.0
FWIW: I just tested forwarding up to 100MBytes via stdin using the simple test
shown below with OMPI v2.0.1rc1, and it worked fine. So I’d suggest upgrading
when
/mca_coll_tuned.so
>> #7 0x2b16351cb4fb in PMPI_Bcast () from
>> /util/opt/openmpi/2.0.0/gcc/6.1.0/lib/libmpi.so.20
>> #8 0x005c5b5d in LAMMPS_NS::Input::file() () at ../input.cpp:203
>> #9 0x0000005d4236 in main () at ../main.cpp:31
>>
>>
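The backtrace above pins the hang in PMPI_Bcast called from LAMMPS_NS::Input::file() (../input.cpp:203). LAMMPS reads its input on rank 0 and broadcasts each command to the other ranks, so if stdin stops arriving at rank 0, every other rank is left waiting inside MPI_Bcast. A minimal sketch of that read-and-broadcast pattern, given as an illustration rather than the actual LAMMPS code:

#include <stdio.h>
#include <string.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    char line[1024];
    int rank, n;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (;;) {
        if (rank == 0) {
            /* only rank 0 touches the forwarded stdin */
            n = fgets(line, sizeof(line), stdin) ? (int)strlen(line) + 1 : 0;
        }
        /* all other ranks block here until rank 0 has something to send;
         * a stalled stdin on rank 0 therefore shows up as a hang in Bcast */
        MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);
        if (n == 0)
            break;
        MPI_Bcast(line, n, MPI_CHAR, 0, MPI_COMM_WORLD);
        /* ... every rank would act on the command here ... */
    }

    MPI_Finalize();
    return 0;
}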
Sent: Monday, August 22, 2016 2:17:10 PM
To: Open MPI Users
Subject: Re: [OMPI users] stdin issue with openmpi/2.0.0
Hmmm...perhaps we can break this out a bit? The stdin will be going to your
rank=0 proc. It sounds like you have some subsequent step that calls MPI_Bcast?
> Can you first verify that the input is being correctly delivered to rank=0?
> From: users <users-boun...@lists.open-mpi.org> on behalf of r...@open-mpi.org
> <r...@open-mpi.org>
> Sent: Monday, August 22, 2016 2:17:10 PM
> To: Open MPI Users
> Subject: Re: [OMPI users] stdin issue with openmpi/2.0.0
From: users <users-boun...@lists.open-mpi.org> on behalf of r...@open-mpi.org
<r...@open-mpi.org>
Sent: Monday, August 22, 2016 2:17:10 PM
To: Open MPI Users
Subject: Re: [OMPI users] stdin issue with openmpi/2.0.0
Hmmm...perhaps we can break this out a bit? The stdin will be going to your
rank=0 proc. It sounds like you have some subsequent step that calls MPI_Bcast?
On Monday, August 22, 2016, Jingchao Zhang wrote:
> Hi all,
>
>
> We compiled openmpi/2.0.0 with gcc/6.1.0 and intel/13.1.3. Both of them
> have odd behaviors when trying to read from standard input.
>
>
> For example, if we start the application lammps across 4 nodes, each node 16
> cores, connected by Intel QDR Infiniband, mpirun works fine for the 1st
>
Hmmm...perhaps we can break this out a bit? The stdin will be going to your
rank=0 proc. It sounds like you have some subsequent step that calls MPI_Bcast?
Can you first verify that the input is being correctly delivered to rank=0?
This will help us isolate if the problem is in the IO
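One way to do the verification asked for above (checking that the input really reaches rank 0, independent of any later MPI_Bcast) is sketched below; the program and its name are illustrative and not part of the original thread:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    char buf[65536];
    size_t n, total = 0;
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* drain whatever mpirun forwards to rank 0's stdin */
        while ((n = fread(buf, 1, sizeof(buf), stdin)) > 0)
            total += n;
        fprintf(stderr, "rank 0 received %zu bytes on stdin\n", total);
    }

    MPI_Finalize();
    return 0;
}

Run it the same way the application is run, for example mpirun ./stdin_check < in.file (the program name is arbitrary). If the full input size shows up every time, the forwarding path to rank 0 is fine and the problem is downstream of rank 0; a short or empty count on the failing runs points at the stdin forwarding itself.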
Hi all,
We compiled openmpi/2.0.0 with gcc/6.1.0 and intel/13.1.3. Both of them have
odd behaviors when trying to read from standard input.
For example, if we start the application lammps across 4 nodes, each node 16
cores, connected by Intel QDR Infiniband, mpirun works fine for the 1st