Now fixed in SVN.  Thanks!

On Oct 11, 2005, at 6:06 PM, Galen M. Shipman wrote:

When running the NPB FT benchmark on 128 nodes with problem size C, I get the
following error with both btl_tcp and btl_mvapi:

-bash-3.00$ mpirun -np 128 -machinefile ~/dqlist -mca btl self,tcp -mca mpi_leave_pinned 0 ./bin/ft.C.128


NAS Parallel Benchmarks 2.3 -- FT Benchmark

No input file inputft.data. Using compiled defaults
Size                : 512x512x512
Iterations          :          20
Number of processes :         128
Processor array     :       1x128
Layout type         :          1D
[dq049:27360] *** An error occurred in MPI_Reduce
[dq049:27360] *** on communicator MPI_COMM_WORLD
[dq049:27360] *** MPI_ERR_OP: invalid reduce operation
[dq049:27360] *** MPI_ERRORS_ARE_FATAL (goodbye)
[dq048:27568] *** An error occurred in MPI_Reduce
[dq048:27568] *** on communicator MPI_COMM_WORLD
[dq048:27568] *** MPI_ERR_OP: invalid reduce operation
[dq048:27568] *** MPI_ERRORS_ARE_FATAL (goodbye)
[dq088:24879] *** An error occurred in MPI_Reduce
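
For what it's worth, I believe the reduction FT trips over is its checksum
step, which sums double-complex values; the log above doesn't show the
datatype/op pairing, so that part is an assumption on my part. A stripped-down
C test along those lines (file and variable names are just examples) should hit
the same MPI_ERR_OP on a build that has no reduction function registered for
that pairing:

/* reduce_dc.c -- minimal sketch: MPI_SUM over double-complex data,
 * which is my guess at the pairing FT's checksum reduction uses. */
#include <mpi.h>
#include <complex.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    double complex local, total = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Each rank contributes one complex value; rank 0 collects the sum.
     * MPI_DOUBLE_COMPLEX is the Fortran double-complex handle; MPI_ERR_OP
     * is what the library reports when it cannot apply the given op to
     * the given datatype. */
    local = (double)rank + 1.0 * I;
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE_COMPLEX, MPI_SUM,
               0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum = %f + %fi\n", creal(total), cimag(total));

    MPI_Finalize();
    return 0;
}

Built and run with something like:
mpicc reduce_dc.c -o reduce_dc && mpirun -np 4 ./reduce_dc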



--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
