Your message dated Sun, 27 Dec 2020 21:48:47 +0100
with message-id <20201227204847.ga24...@xanadu.blop.info>
and subject line bug fixed in openmpi 4.1.0-2
has caused the Debian Bug report #978180,
regarding dune-functions: FTBFS: tests failed
to be marked as done.

This means that you claim that the problem has been dealt with.
If this is not the case, it is now your responsibility to reopen the
bug report if necessary, and/or fix the problem forthwith.

(NB: If you are a system administrator and have no idea what this
message is talking about, this may indicate a serious mail system
misconfiguration somewhere. Please contact ow...@bugs.debian.org
immediately.)


-- 
978180: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=978180
Debian Bug Tracking System
Contact ow...@bugs.debian.org with problems
--- Begin Message ---
Source: dune-functions
Version: 2.7.0-2
Severity: serious
Justification: FTBFS on amd64
Tags: bullseye sid ftbfs
Usertags: ftbfs-20201226 ftbfs-bullseye

Hi,

During a rebuild of all packages in sid, your package failed to build
on amd64.

Relevant part (hopefully):
> make[5]: Entering directory '/<<PKGBUILDDIR>>/build'
> make[5]: Nothing to be done for 'CMakeFiles/build_tests.dir/build'.
> make[5]: Leaving directory '/<<PKGBUILDDIR>>/build'
> [100%] Built target build_tests
> make[4]: Leaving directory '/<<PKGBUILDDIR>>/build'
> /usr/bin/cmake -E cmake_progress_start /<<PKGBUILDDIR>>/build/CMakeFiles 0
> make[3]: Leaving directory '/<<PKGBUILDDIR>>/build'
> make[2]: Leaving directory '/<<PKGBUILDDIR>>/build'
> cd build; PATH=/<<PKGBUILDDIR>>/debian/tmp-test:$PATH /usr/bin/dune-ctest 
>    Site: ip-172-31-9-132
>    Build name: Linux-c++
> Create new tag: 20201226-1834 - Experimental
> Test project /<<PKGBUILDDIR>>/build
>       Start  1: istlvectorbackendtest
>  1/17 Test  #1: istlvectorbackendtest .............***Failed    0.04 sec
> [ip-172-31-9-132:11006] [[23070,0],0] ORTE_ERROR_LOG: Not found in file 
> ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   opal_pmix_base_select failed
>   --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> [ip-172-31-9-132:11005] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716
> [ip-172-31-9-132:11005] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   orte_ess_init failed
>   --> Returned value Unable to start a daemon on the local node (-127) 
> instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems.  This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
> 
>   ompi_mpi_init: ompi_rte_init failed
>   --> Returned "Unable to start a daemon on the local node" (-127) instead of 
> "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> ***    and potentially your MPI job)
> [ip-172-31-9-132:11005] Local abort before MPI_INIT completed completed 
> successfully, but am not able to aggregate error messages, and not able to 
> guarantee that all other processes were killed!
> 
>       Start  2: differentiablefunctiontest
>  2/17 Test  #2: differentiablefunctiontest ........***Failed    0.02 sec
> [ip-172-31-9-132:11008] [[23520,0],0] ORTE_ERROR_LOG: Not found in file 
> ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   opal_pmix_base_select failed
>   --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> [ip-172-31-9-132:11007] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716
> [ip-172-31-9-132:11007] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   orte_ess_init failed
>   --> Returned value Unable to start a daemon on the local node (-127) 
> instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems.  This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
> 
>   ompi_mpi_init: ompi_rte_init failed
>   --> Returned "Unable to start a daemon on the local node" (-127) instead of 
> "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> ***    and potentially your MPI job)
> [ip-172-31-9-132:11007] Local abort before MPI_INIT completed completed 
> successfully, but am not able to aggregate error messages, and not able to 
> guarantee that all other processes were killed!
> 
>       Start  3: brezzidouglasmarinibasistest
>  3/17 Test  #3: brezzidouglasmarinibasistest ......***Failed    0.02 sec
> [ip-172-31-9-132:11010] [[23522,0],0] ORTE_ERROR_LOG: Not found in file 
> ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   opal_pmix_base_select failed
>   --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> [ip-172-31-9-132:11009] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716
> [ip-172-31-9-132:11009] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   orte_ess_init failed
>   --> Returned value Unable to start a daemon on the local node (-127) 
> instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems.  This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
> 
>   ompi_mpi_init: ompi_rte_init failed
>   --> Returned "Unable to start a daemon on the local node" (-127) instead of 
> "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> ***    and potentially your MPI job)
> [ip-172-31-9-132:11009] Local abort before MPI_INIT completed completed 
> successfully, but am not able to aggregate error messages, and not able to 
> guarantee that all other processes were killed!
> 
>       Start  4: bsplinebasistest
>  4/17 Test  #4: bsplinebasistest ..................***Failed    0.02 sec
> [ip-172-31-9-132:11012] [[23524,0],0] ORTE_ERROR_LOG: Not found in file 
> ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   opal_pmix_base_select failed
>   --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> [ip-172-31-9-132:11011] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716
> [ip-172-31-9-132:11011] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   orte_ess_init failed
>   --> Returned value Unable to start a daemon on the local node (-127) 
> instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems.  This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
> 
>   ompi_mpi_init: ompi_rte_init failed
>   --> Returned "Unable to start a daemon on the local node" (-127) instead of 
> "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> ***    and potentially your MPI job)
> [ip-172-31-9-132:11011] Local abort before MPI_INIT completed completed 
> successfully, but am not able to aggregate error messages, and not able to 
> guarantee that all other processes were killed!
> 
>       Start  5: gridviewfunctionspacebasistest
>  5/17 Test  #5: gridviewfunctionspacebasistest ....***Failed    0.02 sec
> [ip-172-31-9-132:11014] [[23526,0],0] ORTE_ERROR_LOG: Not found in file 
> ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   opal_pmix_base_select failed
>   --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> [ip-172-31-9-132:11013] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716
> [ip-172-31-9-132:11013] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   orte_ess_init failed
>   --> Returned value Unable to start a daemon on the local node (-127) 
> instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems.  This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
> 
>   ompi_mpi_init: ompi_rte_init failed
>   --> Returned "Unable to start a daemon on the local node" (-127) instead of 
> "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> ***    and potentially your MPI job)
> [ip-172-31-9-132:11013] Local abort before MPI_INIT completed completed 
> successfully, but am not able to aggregate error messages, and not able to 
> guarantee that all other processes were killed!
> 
>       Start  6: lagrangebasistest
>  6/17 Test  #6: lagrangebasistest .................***Failed    0.02 sec
> [ip-172-31-9-132:11016] [[23528,0],0] ORTE_ERROR_LOG: Not found in file 
> ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   opal_pmix_base_select failed
>   --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> [ip-172-31-9-132:11015] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716
> [ip-172-31-9-132:11015] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   orte_ess_init failed
>   --> Returned value Unable to start a daemon on the local node (-127) 
> instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems.  This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
> 
>   ompi_mpi_init: ompi_rte_init failed
>   --> Returned "Unable to start a daemon on the local node" (-127) instead of 
> "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> ***    and potentially your MPI job)
> [ip-172-31-9-132:11015] Local abort before MPI_INIT completed completed 
> successfully, but am not able to aggregate error messages, and not able to 
> guarantee that all other processes were killed!
> 
>       Start  7: lagrangedgbasistest
>  7/17 Test  #7: lagrangedgbasistest ...............***Failed    0.02 sec
> [ip-172-31-9-132:11018] [[23530,0],0] ORTE_ERROR_LOG: Not found in file 
> ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   opal_pmix_base_select failed
>   --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> [ip-172-31-9-132:11017] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716
> [ip-172-31-9-132:11017] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   orte_ess_init failed
>   --> Returned value Unable to start a daemon on the local node (-127) 
> instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems.  This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
> 
>   ompi_mpi_init: ompi_rte_init failed
>   --> Returned "Unable to start a daemon on the local node" (-127) instead of 
> "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> ***    and potentially your MPI job)
> [ip-172-31-9-132:11017] Local abort before MPI_INIT completed completed 
> successfully, but am not able to aggregate error messages, and not able to 
> guarantee that all other processes were killed!
> 
>       Start  8: taylorhoodbasistest
>  8/17 Test  #8: taylorhoodbasistest ...............***Failed    0.02 sec
> [ip-172-31-9-132:11020] [[23532,0],0] ORTE_ERROR_LOG: Not found in file 
> ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   opal_pmix_base_select failed
>   --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> [ip-172-31-9-132:11019] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716
> [ip-172-31-9-132:11019] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   orte_ess_init failed
>   --> Returned value Unable to start a daemon on the local node (-127) 
> instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems.  This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
> 
>   ompi_mpi_init: ompi_rte_init failed
>   --> Returned "Unable to start a daemon on the local node" (-127) instead of 
> "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> ***    and potentially your MPI job)
> [ip-172-31-9-132:11019] Local abort before MPI_INIT completed completed 
> successfully, but am not able to aggregate error messages, and not able to 
> guarantee that all other processes were killed!
> 
>       Start  9: rannacherturekbasistest
>  9/17 Test  #9: rannacherturekbasistest ...........***Failed    0.02 sec
> [ip-172-31-9-132:11022] [[23534,0],0] ORTE_ERROR_LOG: Not found in file 
> ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   opal_pmix_base_select failed
>   --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> [ip-172-31-9-132:11021] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716
> [ip-172-31-9-132:11021] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   orte_ess_init failed
>   --> Returned value Unable to start a daemon on the local node (-127) 
> instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems.  This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
> 
>   ompi_mpi_init: ompi_rte_init failed
>   --> Returned "Unable to start a daemon on the local node" (-127) instead of 
> "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> ***    and potentially your MPI job)
> [ip-172-31-9-132:11021] Local abort before MPI_INIT completed completed 
> successfully, but am not able to aggregate error messages, and not able to 
> guarantee that all other processes were killed!
> 
>       Start 10: raviartthomasbasistest
> 10/17 Test #10: raviartthomasbasistest ............***Failed    0.02 sec
> [ip-172-31-9-132:11024] [[23536,0],0] ORTE_ERROR_LOG: Not found in file 
> ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   opal_pmix_base_select failed
>   --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> [ip-172-31-9-132:11023] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716
> [ip-172-31-9-132:11023] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   orte_ess_init failed
>   --> Returned value Unable to start a daemon on the local node (-127) 
> instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems.  This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
> 
>   ompi_mpi_init: ompi_rte_init failed
>   --> Returned "Unable to start a daemon on the local node" (-127) instead of 
> "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> ***    and potentially your MPI job)
> [ip-172-31-9-132:11023] Local abort before MPI_INIT completed completed 
> successfully, but am not able to aggregate error messages, and not able to 
> guarantee that all other processes were killed!
> 
>       Start 11: hierarchicvectorwrappertest
> 11/17 Test #11: hierarchicvectorwrappertest .......***Failed    0.02 sec
> [ip-172-31-9-132:11026] [[23538,0],0] ORTE_ERROR_LOG: Not found in file 
> ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   opal_pmix_base_select failed
>   --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> [ip-172-31-9-132:11025] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716
> [ip-172-31-9-132:11025] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   orte_ess_init failed
>   --> Returned value Unable to start a daemon on the local node (-127) 
> instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems.  This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
> 
>   ompi_mpi_init: ompi_rte_init failed
>   --> Returned "Unable to start a daemon on the local node" (-127) instead of 
> "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> ***    and potentially your MPI job)
> [ip-172-31-9-132:11025] Local abort before MPI_INIT completed completed 
> successfully, but am not able to aggregate error messages, and not able to 
> guarantee that all other processes were killed!
> 
>       Start 12: compositebasistest
> 12/17 Test #12: compositebasistest ................***Failed    0.02 sec
> [ip-172-31-9-132:11028] [[23540,0],0] ORTE_ERROR_LOG: Not found in file 
> ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   opal_pmix_base_select failed
>   --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> [ip-172-31-9-132:11027] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716
> [ip-172-31-9-132:11027] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   orte_ess_init failed
>   --> Returned value Unable to start a daemon on the local node (-127) 
> instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems.  This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
> 
>   ompi_mpi_init: ompi_rte_init failed
>   --> Returned "Unable to start a daemon on the local node" (-127) instead of 
> "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> ***    and potentially your MPI job)
> [ip-172-31-9-132:11027] Local abort before MPI_INIT completed completed 
> successfully, but am not able to aggregate error messages, and not able to 
> guarantee that all other processes were killed!
> 
>       Start 13: makebasistest
> 13/17 Test #13: makebasistest .....................***Failed    0.02 sec
> [ip-172-31-9-132:11030] [[23542,0],0] ORTE_ERROR_LOG: Not found in file 
> ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   opal_pmix_base_select failed
>   --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> [ip-172-31-9-132:11029] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716
> [ip-172-31-9-132:11029] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   orte_ess_init failed
>   --> Returned value Unable to start a daemon on the local node (-127) 
> instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems.  This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
> 
>   ompi_mpi_init: ompi_rte_init failed
>   --> Returned "Unable to start a daemon on the local node" (-127) instead of 
> "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> ***    and potentially your MPI job)
> [ip-172-31-9-132:11029] Local abort before MPI_INIT completed completed 
> successfully, but am not able to aggregate error messages, and not able to 
> guarantee that all other processes were killed!
> 
>       Start 14: analyticgridviewfunctiontest
> 14/17 Test #14: analyticgridviewfunctiontest ......***Failed    0.02 sec
> [ip-172-31-9-132:11032] [[23544,0],0] ORTE_ERROR_LOG: Not found in file 
> ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   opal_pmix_base_select failed
>   --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> [ip-172-31-9-132:11031] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716
> [ip-172-31-9-132:11031] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   orte_ess_init failed
>   --> Returned value Unable to start a daemon on the local node (-127) 
> instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems.  This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
> 
>   ompi_mpi_init: ompi_rte_init failed
>   --> Returned "Unable to start a daemon on the local node" (-127) instead of 
> "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> ***    and potentially your MPI job)
> [ip-172-31-9-132:11031] Local abort before MPI_INIT completed completed 
> successfully, but am not able to aggregate error messages, and not able to 
> guarantee that all other processes were killed!
> 
>       Start 15: discreteglobalbasisfunctiontest
> 15/17 Test #15: discreteglobalbasisfunctiontest ...***Failed    0.02 sec
> [ip-172-31-9-132:11034] [[23546,0],0] ORTE_ERROR_LOG: Not found in file 
> ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   opal_pmix_base_select failed
>   --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> [ip-172-31-9-132:11033] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716
> [ip-172-31-9-132:11033] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   orte_ess_init failed
>   --> Returned value Unable to start a daemon on the local node (-127) 
> instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems.  This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
> 
>   ompi_mpi_init: ompi_rte_init failed
>   --> Returned "Unable to start a daemon on the local node" (-127) instead of 
> "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> ***    and potentially your MPI job)
> [ip-172-31-9-132:11033] Local abort before MPI_INIT completed completed 
> successfully, but am not able to aggregate error messages, and not able to 
> guarantee that all other processes were killed!
> 
>       Start 16: gridfunctiontest
> 16/17 Test #16: gridfunctiontest ..................***Failed    0.02 sec
> [ip-172-31-9-132:11036] [[23548,0],0] ORTE_ERROR_LOG: Not found in file 
> ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   opal_pmix_base_select failed
>   --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> [ip-172-31-9-132:11035] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716
> [ip-172-31-9-132:11035] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   orte_ess_init failed
>   --> Returned value Unable to start a daemon on the local node (-127) 
> instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems.  This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
> 
>   ompi_mpi_init: ompi_rte_init failed
>   --> Returned "Unable to start a daemon on the local node" (-127) instead of 
> "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> ***    and potentially your MPI job)
> [ip-172-31-9-132:11035] Local abort before MPI_INIT completed completed 
> successfully, but am not able to aggregate error messages, and not able to 
> guarantee that all other processes were killed!
> 
>       Start 17: localfunctioncopytest
> 17/17 Test #17: localfunctioncopytest .............***Failed    0.02 sec
> [ip-172-31-9-132:11038] [[23550,0],0] ORTE_ERROR_LOG: Not found in file 
> ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   opal_pmix_base_select failed
>   --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> [ip-172-31-9-132:11037] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716
> [ip-172-31-9-132:11037] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   orte_ess_init failed
>   --> Returned value Unable to start a daemon on the local node (-127) 
> instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems.  This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
> 
>   ompi_mpi_init: ompi_rte_init failed
>   --> Returned "Unable to start a daemon on the local node" (-127) instead of 
> "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> ***    and potentially your MPI job)
> [ip-172-31-9-132:11037] Local abort before MPI_INIT completed completed 
> successfully, but am not able to aggregate error messages, and not able to 
> guarantee that all other processes were killed!
> 
> 
> 0% tests passed, 17 tests failed out of 17
> 
> Total Test time (real) =   0.32 sec
> 
> The following tests FAILED:
>         1 - istlvectorbackendtest (Failed)
>         2 - differentiablefunctiontest (Failed)
>         3 - brezzidouglasmarinibasistest (Failed)
>         4 - bsplinebasistest (Failed)
>         5 - gridviewfunctionspacebasistest (Failed)
>         6 - lagrangebasistest (Failed)
>         7 - lagrangedgbasistest (Failed)
>         8 - taylorhoodbasistest (Failed)
>         9 - rannacherturekbasistest (Failed)
>        10 - raviartthomasbasistest (Failed)
>        11 - hierarchicvectorwrappertest (Failed)
>        12 - compositebasistest (Failed)
>        13 - makebasistest (Failed)
>        14 - analyticgridviewfunctiontest (Failed)
>        15 - discreteglobalbasisfunctiontest (Failed)
>        16 - gridfunctiontest (Failed)
>        17 - localfunctioncopytest (Failed)
> Errors while running CTest
> ======================================================================
> Name:      istlvectorbackendtest
> FullName:  ./dune/functions/backends/test/istlvectorbackendtest
> Status:    FAILED
> 
> ======================================================================
> Name:      differentiablefunctiontest
> FullName:  ./dune/functions/common/test/differentiablefunctiontest
> Status:    FAILED
> 
> ======================================================================
> Name:      brezzidouglasmarinibasistest
> FullName:  
> ./dune/functions/functionspacebases/test/brezzidouglasmarinibasistest
> Status:    FAILED
> 
> ======================================================================
> Name:      bsplinebasistest
> FullName:  ./dune/functions/functionspacebases/test/bsplinebasistest
> Status:    FAILED
> 
> ======================================================================
> Name:      gridviewfunctionspacebasistest
> FullName:  
> ./dune/functions/functionspacebases/test/gridviewfunctionspacebasistest
> Status:    FAILED
> 
> ======================================================================
> Name:      lagrangebasistest
> FullName:  ./dune/functions/functionspacebases/test/lagrangebasistest
> Status:    FAILED
> 
> ======================================================================
> Name:      lagrangedgbasistest
> FullName:  ./dune/functions/functionspacebases/test/lagrangedgbasistest
> Status:    FAILED
> 
> ======================================================================
> Name:      taylorhoodbasistest
> FullName:  ./dune/functions/functionspacebases/test/taylorhoodbasistest
> Status:    FAILED
> 
> ======================================================================
> Name:      rannacherturekbasistest
> FullName:  ./dune/functions/functionspacebases/test/rannacherturekbasistest
> Status:    FAILED
> 
> ======================================================================
> Name:      raviartthomasbasistest
> FullName:  ./dune/functions/functionspacebases/test/raviartthomasbasistest
> Status:    FAILED
> 
> ======================================================================
> Name:      hierarchicvectorwrappertest
> FullName:  
> ./dune/functions/functionspacebases/test/hierarchicvectorwrappertest
> Status:    FAILED
> 
> ======================================================================
> Name:      compositebasistest
> FullName:  ./dune/functions/functionspacebases/test/compositebasistest
> Status:    FAILED
> 
> ======================================================================
> Name:      makebasistest
> FullName:  ./dune/functions/functionspacebases/test/makebasistest
> Status:    FAILED
> 
> ======================================================================
> Name:      analyticgridviewfunctiontest
> FullName:  ./dune/functions/gridfunctions/test/analyticgridviewfunctiontest
> Status:    FAILED
> 
> ======================================================================
> Name:      discreteglobalbasisfunctiontest
> FullName:  ./dune/functions/gridfunctions/test/discreteglobalbasisfunctiontest
> Status:    FAILED
> 
> ======================================================================
> Name:      gridfunctiontest
> FullName:  ./dune/functions/gridfunctions/test/gridfunctiontest
> Status:    FAILED
> 
> ======================================================================
> Name:      localfunctioncopytest
> FullName:  ./dune/functions/gridfunctions/test/localfunctioncopytest
> Status:    FAILED
> 
> JUnit report for CTest results written to 
> /<<PKGBUILDDIR>>/build/junit/cmake.xml
> make[1]: *** [/usr/share/dune/dune-debian.mk:39: override_dh_auto_test] Error 
> 1

The full build log is available from:
   http://qa-logs.debian.net/2020/12/26/dune-functions_2.7.0-2_unstable.log

A list of current common problems and possible solutions is available at
http://wiki.debian.org/qa.debian.org/FTBFS . You're welcome to contribute!

If you reassign this bug to another package, please mark it as affecting
this package. See https://www.debian.org/Bugs/server-control#affects

If you fail to reproduce this, please provide a build log and diff it with mine
so that we can identify whether something relevant changed in the meantime.

About the archive rebuild: The rebuild was done on EC2 VM instances from
Amazon Web Services, using a clean, minimal and up-to-date chroot. Every
failed build was retried once to eliminate random failures.

--- End Message ---
--- Begin Message ---
Hi,

This failure was caused by a bug in openmpi, fixed in openmpi 4.1.0-2,
so I'm closing this bug.
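
(For anyone wanting to double-check locally: roughly speaking, with openmpi
4.1.0-2 or later installed from sid, re-running the test suite in the build
tree should now succeed, e.g.

   dpkg -s libopenmpi3 | grep '^Version:'   # should report 4.1.0-2 or newer
   cd build && PATH=$PWD/../debian/tmp-test:$PATH dune-ctest

The paths above are just the ones from the quoted log; adjust as needed.)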

Lucas

--- End Message ---
