Bug#769801: mpi4py: FTBFS under pbuilder: Failure in test_spawn.TestSpawnSelf

2014-11-23 Thread Wookey
Just a datapoint:
It builds fine under sbuild:
sbuild -A -s -d unstable mpi4py_1.3.1+hg20131106-1.dsc 

The tests run OK there, but maybe sbuild does less environment sanitising?

As it's sbuild that gets used on the buildds, does that mean that this
is not actually a serious bug?
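One way to check the sanitising hypothesis would be to compare what the two builders leave visible inside the chroot (a sketch, not part of either builder's tooling; the key difference for these tests is the state of the loopback interface):

```shell
# Sketch: show what network interfaces the build environment exposes.
# An up-and-usable 'lo' is what Open MPI's spawned processes need to
# reach their parent; run this inside both the sbuild and pbuilder
# chroots and diff the output.
ip -o link show 2>/dev/null || cat /proc/net/dev
```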

Wookey
-- 
Principal hats:  Linaro, Debian, Wookware, ARM
http://wookware.org/


-- 
To UNSUBSCRIBE, email to debian-bugs-dist-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org



Bug#769801: mpi4py: FTBFS under pbuilder: Failure in test_spawn.TestSpawnSelf

2014-11-16 Thread Daniel Schepler
Source: mpi4py
Version: 1.3.1+hg20131106-1
Severity: serious

From my pbuilder build log (on amd64):

...
set -e; for v in 2.7 3.4; do \
 echo I: testing using python$v; \
 PYTHONPATH=`/bin/ls -d /tmp/buildd/mpi4py-1.3.1+hg20131106/build/lib.*-$v` \
  /usr/bin/python$v /usr/bin/nosetests -v --exclude='testPackUnpackExternal'; \
done
I: testing using python2.7
testAttr (test_attributes.TestCommAttrSelf) ... ok
testAttrCopyDelete (test_attributes.TestCommAttrSelf) ... ok
testAttrCopyFalse (test_attributes.TestCommAttrSelf) ... ok
...
testPutGet (test_rma.TestRMAWorld) ... ok
testPutProcNull (test_rma.TestRMAWorld) ... ok
testArgsOnlyAtRoot (test_spawn.TestSpawnSelf) ... 
--------------------------------------------------------------------------
At least one pair of MPI processes are unable to reach each other for
MPI communications.  This means that no Open MPI device has indicated
that it can be used to communicate between these processes.  This is
an error; Open MPI requires that all MPI processes be able to reach
each other.  This error can sometimes be the result of forgetting to
specify the self BTL.

  Process 1 ([[22838,1],0]) is on host: frobozz
  Process 2 ([[22838,2],0]) is on host: frobozz
  BTLs attempted: self sm

Your MPI job is now going to abort; sorry.
--------------------------------------------------------------------------
ERROR
testArgsOnlyAtRootMultiple (test_spawn.TestSpawnSelf) ... [frobozz:11719] *** 
An error occurred in MPI_Barrier
[frobozz:11719] *** on communicator MPI_COMM_PARENT
[frobozz:11719] *** MPI_ERR_INTERN: internal error
[frobozz:11719] *** MPI_ERRORS_ARE_FATAL: your MPI job will now abort
[frobozz:11717] 1 more process has sent help message help-mca-bml-r2.txt / 
unreachable proc
[frobozz:11717] Set MCA parameter orte_base_help_aggregate to 0 to see all 
help / error messages
debian/rules:83: recipe for target 'override_dh_auto_test' failed
make[1]: *** [override_dh_auto_test] Error 1
make[1]: Leaving directory '/tmp/buildd/mpi4py-1.3.1+hg20131106'
debian/rules:16: recipe for target 'build' failed
make: *** [build] Error 2
dpkg-buildpackage: error: debian/rules build gave error exit status 2

This seems like it might be related to the fact that pbuilder now completely
disables network access, leaving only a fresh instance of lo (as opposed to
its previous behaviour of just blanking out /etc/resolv.conf).
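If that diagnosis is right, one thing worth trying before patching pbuilder is
to point Open MPI at the loopback interface explicitly, using its
OMPI_MCA_<param> environment-variable convention (a sketch; whether the spawn
tests then pass depends on whether pbuilder's lo is actually up, and on the
Open MPI version):

```shell
# Sketch: Open MPI reads MCA parameters from OMPI_MCA_<name> variables.
# Allowing the TCP BTL restricted to 'lo' may give the spawned child
# processes a path back to their parent even with all other network
# access disabled.
export OMPI_MCA_btl=self,sm,tcp
export OMPI_MCA_btl_tcp_if_include=lo
echo "btl=$OMPI_MCA_btl if_include=$OMPI_MCA_btl_tcp_if_include"
```

These could be exported before running dpkg-buildpackage, or set in
debian/rules just before the nosetests loop.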
-- 
Daniel Schepler
