Your message dated Sun, 27 Dec 2020 21:48:47 +0100
with message-id <20201227204847.ga24...@xanadu.blop.info>
and subject line bug fixed in openmpi 4.1.0-2
has caused the Debian Bug report #978332,
regarding gpaw: FTBFS: dh_auto_test: error: pybuild --test --test-pytest -i 
python{version} -p 3.9 --test-pytest "--test-args=-v -m ci" returned exit code 
13
to be marked as done.

This means that you claim that the problem has been dealt with.
If this is not the case it is now your responsibility to reopen the
Bug report if necessary, and/or fix the problem forthwith.

(NB: If you are a system administrator and have no idea what this
message is talking about, this may indicate a serious mail system
misconfiguration somewhere. Please contact ow...@bugs.debian.org
immediately.)


-- 
978332: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=978332
Debian Bug Tracking System
Contact ow...@bugs.debian.org with problems
--- Begin Message ---
Source: gpaw
Version: 20.10.0-2
Severity: serious
Justification: FTBFS on amd64
Tags: bullseye sid ftbfs
Usertags: ftbfs-20201226 ftbfs-bullseye

Hi,

During a rebuild of all packages in sid, your package failed to build
on amd64.

Relevant part (hopefully):
> make[1]: Entering directory '/<<PKGBUILDDIR>>'
> dh_auto_test -- --test-pytest --test-args="-v -m ci"
> I: pybuild base:232: cd /<<PKGBUILDDIR>>/.pybuild/cpython3_3.9/build; 
> python3.9 -m pytest -v -m ci
> ============================= test session starts 
> ==============================
> platform linux -- Python 3.9.1, pytest-4.6.11, py-1.9.0, pluggy-0.13.0 -- 
> /usr/bin/python3.9
> cachedir: .pytest_cache
> rootdir: /<<PKGBUILDDIR>>, inifile: pytest.ini
> collecting ... [ip-172-31-8-9:18994] [[64811,0],0] ORTE_ERROR_LOG: Not found 
> in file ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   opal_pmix_base_select failed
>   --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> [ip-172-31-8-9:18993] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716
> [ip-172-31-8-9:18993] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   orte_ess_init failed
>   --> Returned value Unable to start a daemon on the local node (-127) 
> instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems.  This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
> 
>   ompi_mpi_init: ompi_rte_init failed
>   --> Returned "Unable to start a daemon on the local node" (-127) instead of 
> "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> ***    and potentially your MPI job)
> [ip-172-31-8-9:18993] Local abort before MPI_INIT completed completed 
> successfully, but am not able to aggregate error messages, and not able to 
> guarantee that all other processes were killed!
> E: pybuild pybuild:353: test: plugin distutils failed with: exit code=1: cd 
> /<<PKGBUILDDIR>>/.pybuild/cpython3_3.9/build; python3.9 -m pytest -v -m ci
> dh_auto_test: error: pybuild --test --test-pytest -i python{version} -p 3.9 
> --test-pytest "--test-args=-v -m ci" returned exit code 13

The full build log is available from:
   http://qa-logs.debian.net/2020/12/26/gpaw_20.10.0-2_unstable.log

A list of current common problems and possible solutions is available at
http://wiki.debian.org/qa.debian.org/FTBFS . You're welcome to contribute!

If you reassign this bug to another package, please mark it as 'affects'-ing
this package. See https://www.debian.org/Bugs/server-control#affects

If you fail to reproduce this, please provide a build log and diff it with
mine so that we can identify whether something relevant changed in the
meantime; a rough example of how to do that is sketched below.
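
For example, something like the following should work in a clean sid
environment with the build dependencies installed (e.g. via
apt-get build-dep gpaw); the file names here are only illustrative:

  # fetch and build the source package, capturing the full build log
  apt-get source gpaw
  cd gpaw-20.10.0
  dpkg-buildpackage -us -uc -b 2>&1 | tee ../gpaw_local-build.log

  # fetch the archive rebuild log referenced below and compare
  wget http://qa-logs.debian.net/2020/12/26/gpaw_20.10.0-2_unstable.log
  diff -u gpaw_20.10.0-2_unstable.log ../gpaw_local-build.log | less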

About the archive rebuild: The rebuild was done on EC2 VM instances from
Amazon Web Services, using a clean, minimal and up-to-date chroot. Every
failed build was retried once to eliminate random failures.

--- End Message ---
--- Begin Message ---
Hi,

This failure was caused by a bug in openmpi, which was fixed in openmpi
4.1.0-2, so I'm closing this bug.

Lucas

--- End Message ---
