Source: dune-common
Version: 2.7.0-5
Severity: serious
Justification: FTBFS on amd64
Tags: bullseye sid ftbfs
Usertags: ftbfs-20201226 ftbfs-bullseye
Hi,

During a rebuild of all packages in sid, your package failed to build
on amd64.

Relevant part (hopefully):
> make[5]: Entering directory '/<<PKGBUILDDIR>>/build'
> make[5]: Nothing to be done for 'CMakeFiles/build_tests.dir/build'.
> make[5]: Leaving directory '/<<PKGBUILDDIR>>/build'
> [100%] Built target build_tests
> make[4]: Leaving directory '/<<PKGBUILDDIR>>/build'
> /usr/bin/cmake -E cmake_progress_start /<<PKGBUILDDIR>>/build/CMakeFiles 0
> make[3]: Leaving directory '/<<PKGBUILDDIR>>/build'
> make[2]: Leaving directory '/<<PKGBUILDDIR>>/build'
> cd build; PATH=/<<PKGBUILDDIR>>/debian/tmp-test:$PATH /<<PKGBUILDDIR>>/bin/dune-ctest
> Site: ip-172-31-8-9
> Build name: Linux-c++
> Create new tag: 20201226-1836 - Experimental
> Test project /<<PKGBUILDDIR>>/build
> Start 1: indexsettest
> 1/112 Test #1: indexsettest ........................... Passed 0.00 sec
> Start 2: remoteindicestest
> 2/112 Test #2: remoteindicestest ......................***Failed 0.04 sec
> [ip-172-31-8-9:13404] [[33605,0],0] ORTE_ERROR_LOG: Not found in file ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort. There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.
> This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
>
> opal_pmix_base_select failed
> --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> [ip-172-31-8-9:13403] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a daemon on the local node in file ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716
> [ip-172-31-8-9:13403] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a daemon on the local node in file ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort. There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.
> This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
>
> ompi_mpi_init: ompi_rte_init failed
> --> Returned "Unable to start a daemon on the local node" (-127) instead of "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> *** and potentially your MPI job)
> [ip-172-31-8-9:13403] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
>
> Start 3: remoteindicestest-mpi-2
> 3/112 Test #3: remoteindicestest-mpi-2 ................***Failed 0.01 sec
> [ip-172-31-8-9:13405] [[33604,0],0] ORTE_ERROR_LOG: Not found in file ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort. There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems. This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
>
> opal_pmix_base_select failed
> --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
>
> Start 4: selectiontest
> 4/112 Test #4: selectiontest .......................... Passed 0.17 sec
> Start 5: syncertest
> 5/112 Test #5: syncertest .............................***Failed 0.02 sec
> [ip-172-31-8-9:13408] [[33657,0],0] ORTE_ERROR_LOG: Not found in file ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort. There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems. This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
>
> opal_pmix_base_select failed
> --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> [ip-172-31-8-9:13407] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a daemon on the local node in file ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716
> [ip-172-31-8-9:13407] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a daemon on the local node in file ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort. There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.
> This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
>
> orte_ess_init failed
> --> Returned value Unable to start a daemon on the local node (-127) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort. There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems. This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
>
> ompi_mpi_init: ompi_rte_init failed
> --> Returned "Unable to start a daemon on the local node" (-127) instead of "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> *** and potentially your MPI job)
> [ip-172-31-8-9:13407] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
>
> Start 6: syncertest-mpi-2
> 6/112 Test #6: syncertest-mpi-2 .......................***Failed 0.01 sec
> [ip-172-31-8-9:13409] [[33656,0],0] ORTE_ERROR_LOG: Not found in file ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort. There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.
> This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
>
> opal_pmix_base_select failed
> --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
>
> Start 7: variablesizecommunicatortest
> 7/112 Test #7: variablesizecommunicatortest ...........***Failed 0.02 sec
> [ip-172-31-8-9:13411] [[33658,0],0] ORTE_ERROR_LOG: Not found in file ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort. There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems. This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
>
> opal_pmix_base_select failed
> --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> [ip-172-31-8-9:13410] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a daemon on the local node in file ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716
> [ip-172-31-8-9:13410] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a daemon on the local node in file ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort. There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.
> This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
>
> orte_ess_init failed
> --> Returned value Unable to start a daemon on the local node (-127) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort. There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems. This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
>
> ompi_mpi_init: ompi_rte_init failed
> --> Returned "Unable to start a daemon on the local node" (-127) instead of "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> *** and potentially your MPI job)
> [ip-172-31-8-9:13410] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
>
> Start 8: variablesizecommunicatortest-mpi-2
> 8/112 Test #8: variablesizecommunicatortest-mpi-2 .....***Failed 0.01 sec
> [ip-172-31-8-9:13412] [[33661,0],0] ORTE_ERROR_LOG: Not found in file ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort. There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.
> This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
>
> opal_pmix_base_select failed
> --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
>
> Start 9: mpidatatest-mpi-2
> 9/112 Test #9: mpidatatest-mpi-2 ......................***Failed 0.01 sec
> [ip-172-31-8-9:13413] [[33660,0],0] ORTE_ERROR_LOG: Not found in file ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort. There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems. This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
>
> opal_pmix_base_select failed
> --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
>
> Start 10: mpifuturetest
> 10/112 Test #10: mpifuturetest ..........................***Failed 0.02 sec
> [ip-172-31-8-9:13415] [[33662,0],0] ORTE_ERROR_LOG: Not found in file ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort. There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.
> This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
>
> opal_pmix_base_select failed
> --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> [ip-172-31-8-9:13414] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a daemon on the local node in file ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716
> [ip-172-31-8-9:13414] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a daemon on the local node in file ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort. There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems. This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
>
> orte_ess_init failed
> --> Returned value Unable to start a daemon on the local node (-127) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort. There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems.
> This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
>
> ompi_mpi_init: ompi_rte_init failed
> --> Returned "Unable to start a daemon on the local node" (-127) instead of "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> *** and potentially your MPI job)
> [ip-172-31-8-9:13414] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
>
> Start 11: mpifuturetest-mpi-2
> 11/112 Test #11: mpifuturetest-mpi-2 ....................***Failed 0.01 sec
> [ip-172-31-8-9:13416] [[33649,0],0] ORTE_ERROR_LOG: Not found in file ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort. There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.
> This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
>
> opal_pmix_base_select failed
> --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
>
> Start 12: mpipacktest-mpi-2
> 12/112 Test #12: mpipacktest-mpi-2 ......................***Failed 0.01 sec
> [ip-172-31-8-9:13417] [[33648,0],0] ORTE_ERROR_LOG: Not found in file ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort. There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems. This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
>
> opal_pmix_base_select failed
> --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
>
> Start 13: looptest
> 13/112 Test #13: looptest ............................... Passed 0.01 sec
> Start 14: standardtest
> 14/112 Test #14: standardtest ........................... Passed 0.01 sec
> Start 15: vcarraytest
> 15/112 Test #15: vcarraytest ............................***Skipped 0.00 sec
> Start 16: vcvectortest
> 16/112 Test #16: vcvectortest ...........................***Skipped 0.00 sec
> Start 17: arithmetictestsuitetest
> 17/112 Test #17: arithmetictestsuitetest ................***Failed 0.02 sec
> [ip-172-31-8-9:13423] [[33654,0],0] ORTE_ERROR_LOG: Not found in file ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort. There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems. This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
>
> opal_pmix_base_select failed
> --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> [ip-172-31-8-9:13422] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a daemon on the local node in file ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716
> [ip-172-31-8-9:13422] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a daemon on the local node in file ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort. There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.
> This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
>
> orte_ess_init failed
> --> Returned value Unable to start a daemon on the local node (-127) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort. There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems. This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
>
> ompi_mpi_init: ompi_rte_init failed
> --> Returned "Unable to start a daemon on the local node" (-127) instead of "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> *** and potentially your MPI job)
> [ip-172-31-8-9:13422] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
>
> Start 18: arraylisttest
> 18/112 Test #18: arraylisttest .......................... Passed 0.00 sec
> Start 19: arraytest
> 19/112 Test #19: arraytest .............................. Passed 0.00 sec
> Start 20: assertandreturntest
> 20/112 Test #20: assertandreturntest ....................***Failed 0.02 sec
> [ip-172-31-8-9:13427] [[33642,0],0] ORTE_ERROR_LOG: Not found in file ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort. There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems. This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
>
> opal_pmix_base_select failed
> --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> [ip-172-31-8-9:13426] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a daemon on the local node in file ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716
> [ip-172-31-8-9:13426] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a daemon on the local node in file ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort. There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.
> This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
>
> orte_ess_init failed
> --> Returned value Unable to start a daemon on the local node (-127) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort. There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems. This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
>
> ompi_mpi_init: ompi_rte_init failed
> --> Returned "Unable to start a daemon on the local node" (-127) instead of "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> *** and potentially your MPI job)
> [ip-172-31-8-9:13426] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
>
> Start 21: assertandreturntest_compiletime_fail
> 21/112 Test #21: assertandreturntest_compiletime_fail ... Passed 0.72 sec
> Start 22: assertandreturntest_ndebug
> 22/112 Test #22: assertandreturntest_ndebug .............***Failed 0.02 sec
> [ip-172-31-8-9:13452] [[33685,0],0] ORTE_ERROR_LOG: Not found in file ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.
> There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems. This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
>
> opal_pmix_base_select failed
> --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> [ip-172-31-8-9:13451] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a daemon on the local node in file ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716
> [ip-172-31-8-9:13451] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a daemon on the local node in file ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort. There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems. This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
>
> orte_ess_init failed
> --> Returned value Unable to start a daemon on the local node (-127) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort. There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems.
> This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
>
> ompi_mpi_init: ompi_rte_init failed
> --> Returned "Unable to start a daemon on the local node" (-127) instead of "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> *** and potentially your MPI job)
> [ip-172-31-8-9:13451] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
>
> Start 23: autocopytest
> 23/112 Test #23: autocopytest ........................... Passed 0.00 sec
> Start 24: bigunsignedinttest
> 24/112 Test #24: bigunsignedinttest ..................... Passed 0.00 sec
> Start 25: bitsetvectortest
> 25/112 Test #25: bitsetvectortest ....................... Passed 0.00 sec
> Start 26: boundscheckingtest
> 26/112 Test #26: boundscheckingtest ..................... Passed 0.00 sec
> Start 27: boundscheckingmvtest
> 27/112 Test #27: boundscheckingmvtest ................... Passed 0.00 sec
> Start 28: boundscheckingoptest
> 28/112 Test #28: boundscheckingoptest ................... Passed 0.00 sec
> Start 29: calloncetest
> 29/112 Test #29: calloncetest ........................... Passed 0.00 sec
> Start 30: check_fvector_size
> 30/112 Test #30: check_fvector_size ..................... Passed 0.00 sec
> Start 31: check_fvector_size_fail1
> 31/112 Test #31: check_fvector_size_fail1 ............... Passed 0.61 sec
> Start 32: check_fvector_size_fail2
> 32/112 Test #32: check_fvector_size_fail2 ............... Passed 0.61 sec
> Start 33: classnametest-demangled
> 33/112 Test #33: classnametest-demangled ................ Passed 0.01 sec
> Start 34: classnametest-fallback
> 34/112 Test #34: classnametest-fallback ................. Passed 0.01 sec
> Start 35: concept
> 35/112 Test #35: concept ................................***Failed 0.03 sec
> [ip-172-31-8-9:13499] [[33698,0],0] ORTE_ERROR_LOG: Not found in file ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort. There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems. This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
>
> opal_pmix_base_select failed
> --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> [ip-172-31-8-9:13498] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a daemon on the local node in file ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716
> [ip-172-31-8-9:13498] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a daemon on the local node in file ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort. There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.
> This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
>
> orte_ess_init failed
> --> Returned value Unable to start a daemon on the local node (-127) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort. There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems. This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
>
> ompi_mpi_init: ompi_rte_init failed
> --> Returned "Unable to start a daemon on the local node" (-127) instead of "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> *** and potentially your MPI job)
> [ip-172-31-8-9:13498] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
>
> Start 36: constexprifelsetest
> 36/112 Test #36: constexprifelsetest .................... Passed 0.00 sec
> Start 37: debugaligntest
> 37/112 Test #37: debugaligntest .........................***Failed 0.02 sec
> [ip-172-31-8-9:13502] [[33703,0],0] ORTE_ERROR_LOG: Not found in file ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.
There are many reasons that a parallel process can > fail during orte_init; some of which are due to configuration or > environment problems. This failure appears to be an internal failure; > here's some additional information (which may only be relevant to an > Open MPI developer): > > opal_pmix_base_select failed > --> Returned value Not found (-13) instead of ORTE_SUCCESS > -------------------------------------------------------------------------- > [ip-172-31-8-9:13501] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a > daemon on the local node in file > ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716 > [ip-172-31-8-9:13501] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a > daemon on the local node in file > ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172 > -------------------------------------------------------------------------- > It looks like orte_init failed for some reason; your parallel process is > likely to abort. There are many reasons that a parallel process can > fail during orte_init; some of which are due to configuration or > environment problems. This failure appears to be an internal failure; > here's some additional information (which may only be relevant to an > Open MPI developer): > > orte_ess_init failed > --> Returned value Unable to start a daemon on the local node (-127) > instead of ORTE_SUCCESS > -------------------------------------------------------------------------- > -------------------------------------------------------------------------- > It looks like MPI_INIT failed for some reason; your parallel process is > likely to abort. There are many reasons that a parallel process can > fail during MPI_INIT; some of which are due to configuration or environment > problems. 
This failure appears to be an internal failure; here's some > additional information (which may only be relevant to an Open MPI > developer): > > ompi_mpi_init: ompi_rte_init failed > --> Returned "Unable to start a daemon on the local node" (-127) instead of > "Success" (0) > -------------------------------------------------------------------------- > *** An error occurred in MPI_Init > *** on a NULL communicator > *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, > *** and potentially your MPI job) > [ip-172-31-8-9:13501] Local abort before MPI_INIT completed completed > successfully, but am not able to aggregate error messages, and not able to > guarantee that all other processes were killed! > > Start 38: debugalignsimdtest > 38/112 Test #38: debugalignsimdtest .....................***Failed 0.02 > sec > [ip-172-31-8-9:13504] [[33753,0],0] ORTE_ERROR_LOG: Not found in file > ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320 > -------------------------------------------------------------------------- > It looks like orte_init failed for some reason; your parallel process is > likely to abort. There are many reasons that a parallel process can > fail during orte_init; some of which are due to configuration or > environment problems. 
This failure appears to be an internal failure; > here's some additional information (which may only be relevant to an > Open MPI developer): > > opal_pmix_base_select failed > --> Returned value Not found (-13) instead of ORTE_SUCCESS > -------------------------------------------------------------------------- > [ip-172-31-8-9:13503] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a > daemon on the local node in file > ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716 > [ip-172-31-8-9:13503] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a > daemon on the local node in file > ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172 > -------------------------------------------------------------------------- > It looks like orte_init failed for some reason; your parallel process is > likely to abort. There are many reasons that a parallel process can > fail during orte_init; some of which are due to configuration or > environment problems. This failure appears to be an internal failure; > here's some additional information (which may only be relevant to an > Open MPI developer): > > orte_ess_init failed > --> Returned value Unable to start a daemon on the local node (-127) > instead of ORTE_SUCCESS > -------------------------------------------------------------------------- > -------------------------------------------------------------------------- > It looks like MPI_INIT failed for some reason; your parallel process is > likely to abort. There are many reasons that a parallel process can > fail during MPI_INIT; some of which are due to configuration or environment > problems. 
This failure appears to be an internal failure; here's some > additional information (which may only be relevant to an Open MPI > developer): > > ompi_mpi_init: ompi_rte_init failed > --> Returned "Unable to start a daemon on the local node" (-127) instead of > "Success" (0) > -------------------------------------------------------------------------- > *** An error occurred in MPI_Init > *** on a NULL communicator > *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, > *** and potentially your MPI job) > [ip-172-31-8-9:13503] Local abort before MPI_INIT completed completed > successfully, but am not able to aggregate error messages, and not able to > guarantee that all other processes were killed! > > Start 39: densematrixassignmenttest > 39/112 Test #39: densematrixassignmenttest .............. Passed 0.00 > sec > Start 40: densematrixassignmenttest_fail0 > 40/112 Test #40: densematrixassignmenttest_fail0 ........ Passed 0.97 > sec > Start 41: densematrixassignmenttest_fail1 > 41/112 Test #41: densematrixassignmenttest_fail1 ........ Passed 0.97 > sec > Start 42: densematrixassignmenttest_fail2 > 42/112 Test #42: densematrixassignmenttest_fail2 ........ Passed 1.01 > sec > Start 43: densematrixassignmenttest_fail3 > 43/112 Test #43: densematrixassignmenttest_fail3 ........ Passed 1.02 > sec > Start 44: densematrixassignmenttest_fail4 > 44/112 Test #44: densematrixassignmenttest_fail4 ........ Passed 1.00 > sec > Start 45: densematrixassignmenttest_fail5 > 45/112 Test #45: densematrixassignmenttest_fail5 ........ Passed 1.03 > sec > Start 46: densematrixassignmenttest_fail6 > 46/112 Test #46: densematrixassignmenttest_fail6 ........ Passed 0.98 > sec > Start 47: densevectorassignmenttest > 47/112 Test #47: densevectorassignmenttest .............. Passed 0.00 > sec > Start 48: diagonalmatrixtest > 48/112 Test #48: diagonalmatrixtest ..................... 
> Passed 0.00 sec
> Start 49: dynmatrixtest
> 49/112 Test #49: dynmatrixtest .......................... Passed 0.00 sec
> Start 50: dynvectortest
> 50/112 Test #50: dynvectortest .......................... Passed 0.00 sec
> Start 51: densevectortest
> 51/112 Test #51: densevectortest ........................ Passed 0.00 sec
> Start 52: enumsettest
> 52/112 Test #52: enumsettest ............................ Passed 0.00 sec
> Start 53: filledarraytest
> 53/112 Test #53: filledarraytest ........................ Passed 0.00 sec
> Start 54: fmatrixtest
> 54/112 Test #54: fmatrixtest ............................ Passed 0.00 sec
> Start 55: functiontest
> 55/112 Test #55: functiontest ........................... Passed 0.00 sec
> Start 56: fvectortest
> 56/112 Test #56: fvectortest ............................ Passed 0.00 sec
> Start 57: fvectorconversion1d
> 57/112 Test #57: fvectorconversion1d .................... Passed 0.00 sec
> Start 58: genericiterator_compile_fail
> 58/112 Test #58: genericiterator_compile_fail ........... Passed 0.64 sec
> Start 59: gcdlcmtest
> 59/112 Test #59: gcdlcmtest ............................. Passed 0.00 sec
> Start 60: hybridutilitiestest
> 60/112 Test #60: hybridutilitiestest .................... Passed 0.00 sec
> Start 61: indicestest
> 61/112 Test #61: indicestest ............................ Passed 0.00 sec
> Start 62: integersequence
> 62/112 Test #62: integersequence ........................ Passed 0.00 sec
> Start 63: iteratorfacadetest2
> 63/112 Test #63: iteratorfacadetest2 .................... Passed 0.00 sec
> Start 64: iteratorfacadetest
> 64/112 Test #64: iteratorfacadetest ..................... Passed 0.00 sec
> Start 65: lrutest
> 65/112 Test #65: lrutest ................................ Passed 0.00 sec
> Start 66: mathclassifierstest
> 66/112 Test #66: mathclassifierstest ....................
> Passed 0.00 sec
> Start 67: mpicommunicationtest
> 67/112 Test #67: mpicommunicationtest ...................***Failed 0.04 sec
> [... Open MPI orte_init/MPI_Init failure output trimmed; identical to test 35 above ...]
> Start 68: mpicommunicationtest-mpi-2
> 68/112 Test #68: mpicommunicationtest-mpi-2 .............***Failed 0.01 sec
> [... Open MPI failure output trimmed ...]
> Start 69: mpiguardtest
> 69/112 Test #69: mpiguardtest ...........................***Failed 0.02 sec
> [... Open MPI failure output trimmed ...]
> Start 70: mpiguardtest-mpi-2
> 70/112 Test #70: mpiguardtest-mpi-2 .....................***Failed 0.01 sec
> [... Open MPI failure output trimmed ...]
> Start 71: mpihelpertest
> 71/112 Test #71: mpihelpertest ..........................***Failed 0.02 sec
> [... Open MPI failure output trimmed ...]
> Start 72: mpihelpertest-mpi-2
> 72/112 Test #72: mpihelpertest-mpi-2 ....................***Failed 0.01 sec
> [... Open MPI failure output trimmed ...]
> Start 73: mpihelpertest2
> 73/112 Test #73: mpihelpertest2 .........................***Failed 0.02 sec
> [... Open MPI failure output trimmed ...]
> Start 74: mpihelpertest2-mpi-2
> 74/112 Test #74: mpihelpertest2-mpi-2 ...................***Failed 0.01 sec
> [... Open MPI failure output trimmed ...]
> Start 75: overloadsettest
> 75/112 Test #75: overloadsettest ........................ Passed 0.00 sec
> Start 76: parameterizedobjecttest
> 76/112 Test #76: parameterizedobjecttest ................ Passed 0.00 sec
> Start 77: parametertreelocaletest
> 77/112 Test #77: parametertreelocaletest ................***Skipped 0.00 sec
> Start 78: parametertreetest
> 78/112 Test #78: parametertreetest ...................... Passed 0.00 sec
> Start 79: pathtest
> 79/112 Test #79: pathtest ............................... Passed 0.00 sec
> Start 80: poolallocatortest
> 80/112 Test #80: poolallocatortest ...................... Passed 0.00 sec
> Start 81: powertest
> 81/112 Test #81: powertest .............................. Passed 0.00 sec
> Start 82: quadmathtest
> 82/112 Test #82: quadmathtest ........................... Passed 0.00 sec
> Start 83: rangeutilitiestest
> 83/112 Test #83: rangeutilitiestest ..................... Passed 0.00 sec
> Start 84: reservedvectortest
> 84/112 Test #84: reservedvectortest ..................... Passed 0.00 sec
> Start 85: shared_ptrtest
> 85/112 Test #85: shared_ptrtest ......................... Passed 0.00 sec
> Start 86: singletontest
> 86/112 Test #86: singletontest .......................... Passed 0.00 sec
> Start 87: sllisttest
> 87/112 Test #87: sllisttest ............................. Passed 0.00 sec
> Start 88: stdidentity
> 88/112 Test #88: stdidentity ............................ Passed 0.00 sec
> Start 89: stdapplytest
> 89/112 Test #89: stdapplytest ........................... Passed 0.00 sec
> Start 90: stdtypetraitstest
> 90/112 Test #90: stdtypetraitstest ......................
> Passed 0.00 sec
> Start 91: streamoperatorstest
> 91/112 Test #91: streamoperatorstest .................... Passed 0.00 sec
> Start 92: streamtest
> 92/112 Test #92: streamtest ............................. Passed 0.00 sec
> Start 93: stringutilitytest
> 93/112 Test #93: stringutilitytest ...................... Passed 0.00 sec
> Start 94: testdebugallocator
> 94/112 Test #94: testdebugallocator ..................... Passed 0.00 sec
> Start 95: testdebugallocator_fail1
> 95/112 Test #95: testdebugallocator_fail1 ............... Passed 0.00 sec
> Start 96: testdebugallocator_fail2
> 96/112 Test #96: testdebugallocator_fail2 ............... Passed 0.00 sec
> Start 97: testdebugallocator_fail3
> 97/112 Test #97: testdebugallocator_fail3 ............... Passed 0.00 sec
> Start 98: testdebugallocator_fail4
> 98/112 Test #98: testdebugallocator_fail4 ............... Passed 0.00 sec
> Start 99: testdebugallocator_fail5
> 99/112 Test #99: testdebugallocator_fail5 ............... Passed 0.00 sec
> Start 100: testfloatcmp
> 100/112 Test #100: testfloatcmp ........................... Passed 0.00 sec
> Start 101: to_unique_ptrtest
> 101/112 Test #101: to_unique_ptrtest ...................... Passed 0.00 sec
> Start 102: tupleutilitytest
> 102/112 Test #102: tupleutilitytest ....................... Passed 0.00 sec
> Start 103: typeutilitytest
> 103/112 Test #103: typeutilitytest ........................ Passed 0.00 sec
> Start 104: typelisttest
> 104/112 Test #104: typelisttest ........................... Passed 0.00 sec
> Start 105: utilitytest
> 105/112 Test #105: utilitytest ............................ Passed 0.00 sec
> Start 106: eigenvaluestest
> 106/112 Test #106: eigenvaluestest ........................ Passed 0.00 sec
> Start 107: optionaltest
> 107/112 Test #107: optionaltest ........................... Passed 0.00 sec
> Start 108: versiontest
> 108/112 Test #108: versiontest ............................
> Passed 0.00 sec
> Start 109: mathtest
> 109/112 Test #109: mathtest ............................... Passed 0.00 sec
> Start 110: varianttest
> 110/112 Test #110: varianttest ............................***Failed 0.02 sec
> [... Open MPI orte_init/MPI_Init failure output trimmed; identical to test 35 above ...]
> Start 111: vcexpectedimpltest
> 111/112 Test #111: vcexpectedimpltest .....................***Skipped 0.00 sec
> Start 112: alignedallocatortest
> 112/112 Test #112: alignedallocatortest ...................
> Passed 0.00 sec
>
> 78% tests passed, 25 tests failed out of 112
>
> Label Time Summary:
> quick = 10.46 sec*proc (105 tests)
>
> Total Test time (real) = 10.39 sec
>
> The following tests did not run:
> 15 - vcarraytest (Skipped)
> 16 - vcvectortest (Skipped)
> 77 - parametertreelocaletest (Skipped)
> 111 - vcexpectedimpltest (Skipped)
>
> The following tests FAILED:
> 2 - remoteindicestest (Failed)
> 3 - remoteindicestest-mpi-2 (Failed)
> 5 - syncertest (Failed)
> 6 - syncertest-mpi-2 (Failed)
> 7 - variablesizecommunicatortest (Failed)
> 8 - variablesizecommunicatortest-mpi-2 (Failed)
> 9 - mpidatatest-mpi-2 (Failed)
> 10 - mpifuturetest (Failed)
> 11 - mpifuturetest-mpi-2 (Failed)
> 12 - mpipacktest-mpi-2 (Failed)
> 17 - arithmetictestsuitetest (Failed)
> 20 - assertandreturntest (Failed)
> 22 - assertandreturntest_ndebug (Failed)
> 35 - concept (Failed)
> 37 - debugaligntest (Failed)
> 38 - debugalignsimdtest (Failed)
> 67 - mpicommunicationtest (Failed)
> 68 - mpicommunicationtest-mpi-2 (Failed)
> 69 - mpiguardtest (Failed)
> 70 - mpiguardtest-mpi-2 (Failed)
> 71 - mpihelpertest (Failed)
> 72 - mpihelpertest-mpi-2 (Failed)
> 73 - mpihelpertest2 (Failed)
> 74 - mpihelpertest2-mpi-2 (Failed)
> 110 - varianttest (Failed)
> Errors while running CTest
> ======================================================================
> Name: remoteindicestest
> FullName: ./dune/common/parallel/test/remoteindicestest
> Status: FAILED
>
> ======================================================================
> Name: remoteindicestest-mpi-2
> FullName: ./dune/common/parallel/test/remoteindicestest-mpi-2
> Status: FAILED
>
> ======================================================================
> Name: syncertest
> FullName: ./dune/common/parallel/test/syncertest
> Status: FAILED
>
> ======================================================================
> Name: syncertest-mpi-2
> FullName: ./dune/common/parallel/test/syncertest-mpi-2
> Status: FAILED
>
> ======================================================================
> Name: variablesizecommunicatortest
> FullName: ./dune/common/parallel/test/variablesizecommunicatortest
> Status: FAILED
>
> ======================================================================
> Name: variablesizecommunicatortest-mpi-2
> FullName: ./dune/common/parallel/test/variablesizecommunicatortest-mpi-2
> Status: FAILED
>
> ======================================================================
> Name: mpidatatest-mpi-2
> FullName: ./dune/common/parallel/test/mpidatatest-mpi-2
> Status: FAILED
>
> ======================================================================
> Name: mpifuturetest
> FullName: ./dune/common/parallel/test/mpifuturetest
> Status: FAILED
>
> ======================================================================
> Name: mpifuturetest-mpi-2
> FullName: ./dune/common/parallel/test/mpifuturetest-mpi-2
> Status: FAILED
>
> ======================================================================
> Name: mpipacktest-mpi-2
> FullName: ./dune/common/parallel/test/mpipacktest-mpi-2
> Status: FAILED
>
> ======================================================================
> Name: arithmetictestsuitetest
> FullName: ./dune/common/test/arithmetictestsuitetest
> Status: FAILED
>
> ======================================================================
> Name: assertandreturntest
> FullName: ./dune/common/test/assertandreturntest
> Status: FAILED
>
> ======================================================================
> Name: assertandreturntest_ndebug
> FullName: ./dune/common/test/assertandreturntest_ndebug
> Status: FAILED
>
> ======================================================================
> Name: concept
> FullName: ./dune/common/test/concept
> Status: FAILED
>
> ======================================================================
> Name: debugaligntest
> FullName: ./dune/common/test/debugaligntest
> Status: FAILED
>
> ======================================================================
> Name: debugalignsimdtest
> FullName: ./dune/common/test/debugalignsimdtest
> Status: FAILED
>
> ======================================================================
> Name: mpicommunicationtest
> FullName: ./dune/common/test/mpicommunicationtest
> Status: FAILED
>
> ======================================================================
> Name: mpicommunicationtest-mpi-2
> FullName: ./dune/common/test/mpicommunicationtest-mpi-2
> Status: FAILED
>
> ======================================================================
> Name: mpiguardtest
> FullName: ./dune/common/test/mpiguardtest
> Status: FAILED
>
> ======================================================================
> Name: mpiguardtest-mpi-2
> FullName: ./dune/common/test/mpiguardtest-mpi-2
> Status: FAILED
>
> ======================================================================
> Name: mpihelpertest
> FullName: ./dune/common/test/mpihelpertest
> Status: FAILED
>
> ======================================================================
> Name: mpihelpertest-mpi-2
> FullName: ./dune/common/test/mpihelpertest-mpi-2
> Status: FAILED
>
> ======================================================================
> Name: mpihelpertest2
> FullName: ./dune/common/test/mpihelpertest2
> Status: FAILED
>
> ======================================================================
> Name: mpihelpertest2-mpi-2
> FullName: ./dune/common/test/mpihelpertest2-mpi-2
> Status: FAILED
>
> ======================================================================
> Name: varianttest
> FullName: ./dune/common/test/varianttest
> Status: FAILED
>
> JUnit report for CTest results written to
> /<<PKGBUILDDIR>>/build/junit/cmake.xml
> make[1]: *** [debian/dune-debian.mk:39: override_dh_auto_test] Error 1

The full build log is available from:
http://qa-logs.debian.net/2020/12/26/dune-common_2.7.0-5_unstable.log

A list of current common problems and possible solutions is
available at http://wiki.debian.org/qa.debian.org/FTBFS . You're welcome to contribute!

If you reassign this bug to another package, please mark it as 'affects'-ing this package. See https://www.debian.org/Bugs/server-control#affects

If you fail to reproduce this, please provide a build log and diff it with mine so that we can identify if something relevant changed in the meantime.

About the archive rebuild: The rebuild was done on EC2 VM instances from Amazon Web Services, using a clean, minimal and up-to-date chroot. Every failed build was retried once to eliminate random failures.
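For anyone trying to reproduce this: the "opal_pmix_base_select failed" message in the log means Open MPI could not select any PMIx component for singleton startup, which is why every MPI-using test aborts before MPI_Init completes. A minimal probe sketch (an assumption about a useful first diagnostic, not a fix; `pmix_components` is just an illustrative variable name) is:

```shell
# Probe whether the Open MPI installation in this chroot exposes any
# PMIx components at all; "opal_pmix_base_select failed" suggests none
# could be selected at runtime.  This only inspects the installation.
if command -v ompi_info >/dev/null 2>&1; then
    # ompi_info --parsable lists compiled-in MCA components one per
    # line; count the pmix-related entries (grep prints 0 on no match).
    pmix_components=$(ompi_info --parsable | grep -ci pmix)
    echo "pmix-related entries in ompi_info: $pmix_components"
else
    pmix_components="unavailable"
    echo "ompi_info not found; Open MPI is not installed here"
fi
```

If this reports no PMIx entries inside the chroot but does on a working host, the failure is in the Open MPI/PMIx packaging or the chroot environment rather than in dune-common itself.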

