Bug#1072939: paraview: armel needs -latomic linking

2024-06-10 Thread Drew Parsons
Package: paraview
Version: 5.12.0+dfsg-4
Severity: important
Control: forwarded -1 https://gitlab.kitware.com/vtk/vtk/-/issues/19352


paraview FTBFS on armel with 

bin/vtkWrapPython-pv5.12 -MF 
/<>/build.python3.12/CMakeFiles/vtkRemotingServerManagerPython/vtkSMDataSourceProxyPython.cxx.d
 
@/<>/build.python3.12/CMakeFiles/vtkRemotingServerManagerPython/vtkRemotingServerManager-python.DEBUG.args
 -o 
/<>/build.python3.12/CMakeFiles/vtkRemotingServerManagerPython/vtkSMDataSourceProxyPython.cxx
 /<>/Remoting/ServerManager/vtkSMDataSourceProxy.h
[ 79%] Building CXX object 
CMakeFiles/vtkRenderingOpenGL2CS.dir/CMakeFiles/vtkRenderingOpenGL2CS/vtkClearRGBPassClientServer.cxx.o
[ 79%] Generating Python wrapper sources for vtkOpenGLRenderer
bin/vtkWrapPython-pv5.12 -MF 
/<>/build.python3.12/CMakeFiles/vtkRenderingOpenGL2Python/vtkOpenGLRendererPython.cxx.d
 
@/<>/build.python3.12/CMakeFiles/vtkRenderingOpenGL2Python/vtkRenderingOpenGL2-python.DEBUG.args
 -o 
/<>/build.python3.12/CMakeFiles/vtkRenderingOpenGL2Python/vtkOpenGLRendererPython.cxx
 /<>/VTK/Rendering/OpenGL2/vtkOpenGLRenderer.h
/usr/bin/c++ -Dkiss_fft_scalar=double 
-DvtkRenderingCore_AUTOINIT_INCLUDE=\"/<>/build.python3.12/CMakeFiles/vtkModuleAutoInit_5e984c86b020da752eb6357425f4f168.h\"
 -I/<>/build.python3.12/VTK/Rendering/OpenGL2 
-I/<>/VTK/Rendering/OpenGL2 
-I/<>/build.python3.12/VTK/Common/Core 
-I/<>/VTK/Common/Core 
-I/<>/build.python3.12/VTK/ThirdParty/token/vtktoken/token 
-I/<>/VTK/ThirdParty/token/vtktoken/token 
-I/<>/VTK/ThirdParty/token/vtktoken 
-I/<>/build.python3.12/VTK/ThirdParty/token/vtktoken 
-I/<>/build.python3.12/VTK/ThirdParty/nlohmannjson/vtknlohmannjson 
-I/<>/VTK/ThirdParty/nlohmannjson/vtknlohmannjson 
-I/<>/VTK/ThirdParty/nlohmannjson/vtknlohmannjson/include 
-I/<>/build.python3.12/VTK/Common/DataModel 
-I/<>/VTK/Common/DataModel 
-I/<>/build.python3.12/VTK/Common/Math 
-I/<>/VTK/Common/Math 
-I/<>/build.python3.12/VTK/ThirdParty/kissfft/vtkkissfft 
-I/<>/VTK/ThirdParty/kissfft/vtkkissfft 
-I/<>/build.python3.12/VTK/Common/Transforms 
-I/<>/VTK/Common/Transforms 
-I/<>/build.python3.12/VTK/Filters/General 
-I/<>/VTK/Filters/General 
-I/<>/build.python3.12/VTK/Common/ExecutionModel 
-I/<>/VTK/Common/ExecutionModel 
-I/<>/build.python3.12/VTK/Common/Misc 
-I/<>/VTK/Common/Misc 
-I/<>/build.python3.12/VTK/Filters/Core 
-I/<>/VTK/Filters/Core 
-I/<>/build.python3.12/VTK/IO/Image 
-I/<>/VTK/IO/Image 
-I/<>/build.python3.12/VTK/Imaging/Core 
-I/<>/VTK/Imaging/Core 
-I/<>/build.python3.12/VTK/Rendering/Core 
-I/<>/VTK/Rendering/Core 
-I/<>/build.python3.12/VTK/Rendering/HyperTreeGrid 
-I/<>/VTK/Rendering/HyperTreeGrid 
-I/<>/build.python3.12/VTK/Rendering/UI 
-I/<>/VTK/Rendering/UI 
-I/<>/build.python3.12/Remoting/ClientServerStream 
-I/<>/Remoting/ClientServerStream -isystem 
/<>/build.python3.12/VTK/Utilities/KWIML -isystem 
/<>/VTK/Utilities/KWIML -isystem 
/<>/build.python3.12/VTK/Utilities/KWSys -isystem 
/<>/VTK/Utilities/KWSys -isystem 
/<>/build.python3.12/VTK/ThirdParty/token -isystem 
/<>/VTK/ThirdParty/token -isystem 
/<>/build.python3.12/VTK/ThirdParty/nlohmannjson -isystem 
/<>/VTK/ThirdParty/nlohmannjson -isystem 
/<>/build.python3.12/VTK/ThirdParty/kissfft -isystem 
/<>/VTK/ThirdParty/kissfft -isystem 
/<>/build.python3.12/VTK/ThirdParty/glew -isystem 
/<>/VTK/ThirdParty/glew -g -O2 
-ffile-prefix-map=/<>=. -fstack-protector-strong 
-fstack-clash-protection -Wformat -Werror=format-security -D_LARGEFILE_SOURCE 
-D_FILE_OFFSET_BITS=64 -D_TIME_BITS=64 -Wdate-time -D_FORTIFY_SOURCE=2 -O0  -g 
-std=c++11 -fPIC -fvisibility=hidden -fvisibility-inlines-hidden -MD -MT 
CMakeFiles/vtkRenderingOpenGL2CS.dir/CMakeFiles/vtkRenderingOpenGL2CS/vtkClearRGBPassClientServer.cxx.o
 -MF 
CMakeFiles/vtkRenderingOpenGL2CS.dir/CMakeFiles/vtkRenderingOpenGL2CS/vtkClearRGBPassClientServer.cxx.o.d
 -o 
CMakeFiles/vtkRenderingOpenGL2CS.dir/CMakeFiles/vtkRenderingOpenGL2CS/vtkClearRGBPassClientServer.cxx.o
 -c 
/<>/build.python3.12/CMakeFiles/vtkRenderingOpenGL2CS/vtkClearRGBPassClientServer.cxx
/usr/bin/ld: 
../../../lib/arm-linux-gnueabi/libvtkCommonDataModel-pv5.12.so.5.12: undefined 
reference to `__atomic_fetch_add_8'
[ 79%] Generating Python wrapper sources for vtkSMDataTypeDomain
collect2: error: ld returned 1 exit status


see
https://buildd.debian.org/status/fetch.php?pkg=paraview&arch=armel&ver=5.12.1%2Bdfsg-2&stamp=1718029171&raw=0

The problem with atomics is discussed upstream (VTK) at
https://gitlab.kitware.com/vtk/vtk/-/issues/19352

see also
https://gitlab.kitware.com/cmake/cmake/-/issues/23021
https://github.com/google/highway/pull/1008/files

This should be fixed in the toolchain (gcc), not in applications.
In the meantime it is probably best for paraview to follow the VTK patch
https://salsa.debian.org/science-team/vtk9/-/merge_requests/1



Bug#1072591: silx: debci fails tests (testClickOnBackToParentTool)

2024-06-04 Thread Drew Parsons

I meant to add, the testClickOnBackToParentTool failure was on ppc64el.



Bug#1072591: silx: debci fails tests (testClickOnBackToParentTool)

2024-06-04 Thread Drew Parsons
Source: silx
Version: 2.0.1+dfsg-3
Severity: serious
Justification: debci
Control: affects -1 src:scipy

silx has started failing debci tests.

This is blocking migration of scipy 1.12 to testing
(though in principle scipy shouldn't be causing the problem, since
tests passed recently against the version in experimental)


pytest itself is passing tests on python3.12.
But in unstable, the no-opencl test ends (before starting the
python3.11 test) with the message:

207s = 1627 passed, 293 skipped, 170 warnings in 103.58s (0:01:43) 
==
208s Error in sys.excepthook:
208s 
208s Original exception was:
208s autopkgtest [17:03:52]: test no-opencl: ---]
208s autopkgtest [17:03:52]: test no-opencl:  - - - - - - - - - - results - - - 
- - - - - - -
208s no-openclFAIL non-zero exit status 245



In testing (using scipy 1.12 from unstable), python3.12 again passes
but python3.11 gives a more explicit error message

372s __ TestImageFileDialogInteraction.testClickOnBackToParentTool 
__
372s 
372s self = 

372s 
372s def testClickOnBackToParentTool(self):
372s dialog = self.createDialog()
372s dialog.show()
372s self.qWaitForWindowExposed(dialog)
372s 
372s url = testutils.findChildren(dialog, qt.QLineEdit, name="url")[0]
372s action = testutils.findChildren(dialog, qt.QAction, 
name="toParentAction")[0]
372s toParentButton = testutils.getQToolButtonFromAction(action)
372s filename = _tmpDirectory + "/data/data.h5"
372s 
372s # init state
372s path = silx.io.url.DataUrl(file_path=filename, 
data_path="/group/image").path()
372s dialog.selectUrl(path)
372s self.qWaitForPendingActions(dialog)
372s path = silx.io.url.DataUrl(
372s scheme="silx", file_path=filename, data_path="/group/image"
372s ).path()
372s self.assertSamePath(url.text(), path)
372s # test
372s self.mouseClick(toParentButton, qt.Qt.LeftButton)
372s self.qWaitForPendingActions(dialog)
372s path = silx.io.url.DataUrl(
372s scheme="silx", file_path=filename, data_path="/"
372s ).path()
372s self.assertSamePath(url.text(), path)
372s 
372s self.mouseClick(toParentButton, qt.Qt.LeftButton)
372s self.qWaitForPendingActions(dialog)
372s self.assertSamePath(url.text(), _tmpDirectory + "/data")
372s 
372s self.mouseClick(toParentButton, qt.Qt.LeftButton)
372s self.qWaitForPendingActions(dialog)
372s >   self.assertSamePath(url.text(), _tmpDirectory)
372s 
372s 
/usr/lib/python3/dist-packages/silx/gui/dialog/test/test_imagefiledialog.py:285:
 
372s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
_ _ _ 
372s 
/usr/lib/python3/dist-packages/silx/gui/dialog/test/test_imagefiledialog.py:136:
 in assertSamePath
372s self.assertEqual(path1, path2)
372s E   AssertionError: 
'/tmp/silx.gui.dialog.test.test_imagefiledialogftofnov4/data' != 
'/tmp/silx.gui.dialog.test.test_imagefiledialogftofnov4'
372s E   - /tmp/silx.gui.dialog.test.test_imagefiledialogftofnov4/data
372s E   ?   -
372s E   + /tmp/silx.gui.dialog.test.test_imagefiledialogftofnov4



Bug#1071738: pysatellites: segfault on arm64 with scipy 1.12

2024-06-04 Thread Drew Parsons
Source: pysatellites
Version: 2.7-2
Followup-For: Bug #1071738
Control: severity 1071738 serious

The error is persistent after uploading scipy 1.12 to unstable,
so bumping severity to serious.

ppc64el and s390x are also affected.



Bug#1071704: uc-echo: tests fail with scipy 1.12

2024-06-04 Thread Drew Parsons
Source: uc-echo
Version: 1.12-18
Followup-For: Bug #1071704
Control: severity 1071704 serious

scipy 1.12 is now uploaded to unstable,
so bumping severity to serious.



Bug#1071703: gammapy: test fail with scipy 1.12

2024-06-04 Thread Drew Parsons
Source: gammapy
Version: 1.1-4
Followup-For: Bug #1071703
Control: severity 1071703 serious

scipy 1.12 is now uploaded to unstable,
so bumping severity to serious.



Bug#1072512: debci: mutter fails tests

2024-06-03 Thread Drew Parsons
Source: mutter
Version: 44.8-3.1
Severity: serious
Justification: debci
Control: affects -1 libxcursor-dev

mutter is failing tests in debci.
This is blocking migration of libxcursor to testing.

The failing tests in the last amd64 unstable run are

125s # Running test: mutter-12/set-override-redirect-parent.test
126s # Executing: mutter-12/set-override-redirect-parent.test
128s # FAIL: mutter-12/set-override-redirect-parent.test (Child process exited 
with code 133)
128s not ok - mutter-12/set-override-redirect-parent.test

139s # Running test: 
mutter-12/closed-transient-no-input-parent-delayed-focus-default-cancelled.test
141s # Executing: 
mutter-12/closed-transient-no-input-parent-delayed-focus-default-cancelled.test
142s # FAIL: 
mutter-12/closed-transient-no-input-parent-delayed-focus-default-cancelled.test 
(Child process exited with code 133)
142s not ok - 
mutter-12/closed-transient-no-input-parent-delayed-focus-default-cancelled.test

162s # SUMMARY: total=30; passed=28; skipped=0; failed=2; user=18.9s; 
system=9.3s; maxrss=173092
162s # FAIL: mutter-12/set-override-redirect-parent.test (Child process exited 
with code 133)
162s # FAIL: 
mutter-12/closed-transient-no-input-parent-delayed-focus-default-cancelled.test 
(Child process exited with code 133)



Bug#1071722: adios4dolfinx: FTBFS: failing tests

2024-05-31 Thread Drew Parsons

On 2024-05-31 14:04, Santiago Vila wrote:

El 31/5/24 a las 13:21, Drew Parsons escribió:

Source: adios4dolfinx
Followup-For: Bug #1071722

Santiago, could you try running with

export OMPI_MCA_rmaps_base_oversubscribe=true

added to the top of debian/rules?


That seems to work.

And also it works again on single-cpu systems
(where it previously also failed).


Great.  We'll resolve it that way then.  It's already used in other mpi 
packages where mpirun is invoked explicitly.
I didn't catch that adios4dolfinx was invoking mpi internally in its 
tests.




But if the user sets DEB_BUILD_OPTIONS=parallel=1
and the package oversubscribes anyway, then
we would not be honoring DEB_BUILD_OPTIONS and it would
still be a bug (admittedly, less severe than a FTBFS bug).


I don't think it's an issue. DEB_BUILD_OPTIONS=parallel is more about 
the build itself (triggering make -j8 for example) than about testing 
mpi execution.


Drew



Bug#1071722: adios4dolfinx: FTBFS: failing tests

2024-05-31 Thread Drew Parsons
Source: adios4dolfinx
Followup-For: Bug #1071722

Santiago, could you try running with 

export OMPI_MCA_rmaps_base_oversubscribe=true

added to the top of debian/rules?
I think that will get it working on your test system.

I think the problem is that openmpi does not use hwthreads or
oversubscribe by default.  So your error is saying you're trying to
run 2 processes on 1 core.



Bug#1071722: adios4dolfinx: FTBFS: failing tests

2024-05-31 Thread Drew Parsons
Source: adios4dolfinx
Followup-For: Bug #1071722
Control: severity 1071722 important

This bug appears to arise from how openmpi needs to be invoked on a
specific system.  It doesn't affect all systems, and doesn't affect
debian buildds or debci, so downgrading severity.



Bug#1071703: gammapy: test fail with scipy 1.12

2024-05-29 Thread Drew Parsons

tags 1071703 + fixed-upstream patch
thanks

Fixed upstream PR#4997
https://github.com/gammapy/gammapy/pull/4997



Bug#1071993: cppimport: test_multiple_processes sometimes fails: No such file: 'hook_test.cpython-312-x86_64-linux-gnu.so.lock'

2024-05-27 Thread Drew Parsons
Source: cppimport
Followup-For: Bug #1071993

test_multiple_processes is now skipped in 22.08.02-3 and -4, but
that's just a workaround to avoid having to remove the package.
It shouldn't be considered a solution to the problem.



Bug#1071993: cppimport: test_multiple_processes sometimes fails: No such file: 'hook_test.cpython-312-x86_64-linux-gnu.so.lock'

2024-05-27 Thread Drew Parsons
Source: cppimport
Version: 22.08.02-2
Severity: important
Control: forwarded -1 https://github.com/tbenthompson/cppimport/issues/90

cppimport has started failing debci tests on most (if not all)
architectures:

112s ___ test_multiple_processes 

112s 
112s def test_multiple_processes():
112s with tmp_dir(["tests/hook_test.cpp"]) as tmp_path:
112s test_code = f"""
112s import os;
112s os.chdir('{tmp_path}');
112s import cppimport.import_hook;
112s import hook_test;
112s """
112s processes = [
112s Process(target=subprocess_check, args=(test_code,)) for i 
in range(100)
112s ]
112s 
112s for p in processes:
112s p.start()
112s 
112s for p in processes:
112s p.join()
112s 
112s >   assert all(p.exitcode == 0 for p in processes)
112s E   assert False
112s E+  where False = all(. at 0x7fdfabf48ba0>)
112s 
112s tests/test_cppimport.py:236: AssertionError
112s - Captured stdout call 
-
112s Traceback (most recent call last):
112s   File "", line 5, in 
112s   File "", line 1360, in _find_and_load
112s   File "", line 1322, in 
_find_and_load_unlocked
112s   File "", line 1262, in _find_spec
112s   File "/usr/lib/python3/dist-packages/cppimport/import_hook.py", line 21, 
in find_spec
112s cppimport.imp(fullname, opt_in=True)
112s   File "/usr/lib/python3/dist-packages/cppimport/__init__.py", line 50, in 
imp
112s return imp_from_filepath(filepath, fullname)
112s^
112s   File "/usr/lib/python3/dist-packages/cppimport/__init__.py", line 87, in 
imp_from_filepath
112s build_safely(filepath, module_data)
112s   File "/usr/lib/python3/dist-packages/cppimport/importer.py", line 33, in 
build_safely
112s with filelock.FileLock(lock_path, timeout=1):
112s   File "/usr/lib/python3/dist-packages/filelock/_api.py", line 339, in 
__enter__
112s self.acquire()
112s   File "/usr/lib/python3/dist-packages/filelock/_api.py", line 295, in 
acquire
112s self._acquire()
112s   File "/usr/lib/python3/dist-packages/filelock/_unix.py", line 42, in 
_acquire
112s fd = os.open(self.lock_file, open_flags, self._context.mode)
112s  ^^^
112s FileNotFoundError: [Errno 2] No such file or directory: 
'/tmp/tmpe23cj6xk/hook_test.cpython-312-x86_64-linux-gnu.so.lock'


As far as I can tell this is upstream issue #90
https://github.com/tbenthompson/cppimport/issues/90

Not (yet) marking the bug with Severity: serious since not all systems are 
affected
(i.e. the test passes on my own local system).



Bug#1071722: adios4dolfinx: FTBFS: failing tests

2024-05-26 Thread Drew Parsons

On 2024-05-26 12:44, Santiago Vila wrote:


To track that, what does `lscpu` report on your failing system?
(the thread,core,socket lines are probably the relevant ones)


It's an AWS machine of type m6a.large. These are the most relevant 
specs.


Thread(s) per core:   2
Core(s) per socket:   1
Socket(s):1

...
So it seems to have "one core, two threads". I can try changing the n=2 
parameter
in the ipp.Cluster invocation to n=1 and tell you if there is any 
change,

if it helps. I don't think that will make things worse in any case.



It's worth trying.  I've got no reason to think that 2 threads on 1 core 
should be a problem, but I can't think of anything else that would in 
principle be different on your system compared to the others that are 
not failing.


For what it's worth, my own system is
Thread(s) per core:   2
Core(s) per socket:   4
Socket(s):1



Bug#1071722: adios4dolfinx: FTBFS: failing tests

2024-05-26 Thread Drew Parsons

On 2024-05-25 18:31, Santiago Vila wrote:

El 25/5/24 a las 16:42, Drew Parsons escribió:


adios4dolfinx is building cleanly in reproducibility builds.
Perhaps the problem was a temporary glitch on your test system?


No, this is unlikely to be a temporary glitch:

...
My system has 2 CPUs, apparently MPI counts them as "one engine" and 
fails

because the code has things like this:

ipp.Cluster(engines="mpi", n=2)

This bypasses whatever DEB_BUILD_OPTIONS=parallel=n setting the
user might set.

BTW: This bug looks similar to #1057556, which the maintainer
misdiagnosed as "fails with a single cpu" (not true!).

In that bug I proposed doing a trial mpirun of true and skipping
the tests if it fails.

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1057556#34

Maybe a similar idea would work here as well.



We can try that. As you suggested in the other bug, it might be a quirk of 
openmpi we're facing.
openmpi distinguishes sockets, cores, slots and hwthreads.  On one system two 
cores might be found in one socket, on another each core might have its 
own socket.


To track that, what does `lscpu` report on your failing system?
(the thread,core,socket lines are probably the relevant ones)



Bug#1061243: FTBFS: needs update for xsimd 12

2024-05-25 Thread Drew Parsons

Control: retitle 1061243 needs update for xsimd 13

Now xsimd 13 is available.



Bug#1071722: adios4dolfinx: FTBFS: failing tests

2024-05-25 Thread Drew Parsons
Source: adios4dolfinx
Followup-For: Bug #1071722
Control: tags -1 ftbfs

adios4dolfinx is building cleanly in reproducibility builds.
Perhaps the problem was a temporary glitch on your test system?



Bug#1071752: satpy: arm64 fails netCDF4 tests with scipy 1.12

2024-05-24 Thread Drew Parsons
Source: satpy
Version: 0.48.0-2
Severity: normal

scipy 1.12 is triggering test failure on arm64

https://ci.debian.net/packages/s/satpy/unstable/arm64/46960923/
https://ci.debian.net/packages/s/satpy/unstable/arm64/

amd64 is passing without error.

The errors seem to come from _netCDF4.pyx
Does it just need a rebuild?

e.g.
1148s _ ERROR at setup of test_bad_area_name 
_
1148s 
1148s tmp_path_factory = TempPathFactory(_given_basetemp=None, 
_trace=, 
_basetemp=PosixPath('/tmp/pytest-of-debci/pytest-1'), _retention_count=3, 
_retention_policy='all')
1148s 
1148s @pytest.fixture(scope="session")
1148s def himl2_filename_bad(tmp_path_factory):
1148s """Create a fake himawari l2 file."""
1148s fname = 
f'{tmp_path_factory.mktemp("data")}/AHI-CMSK_v1r1_h09_s202308240540213_e202308240549407_c202308240557548.nc'
1148s ds = xr.Dataset({"CloudMask": (["Rows", "Columns"], clmk_data)},
1148s coords={"Latitude": (["Rows", "Columns"], 
lat_data),
1148s "Longitude": (["Rows", "Columns"], 
lon_data)},
1148s attrs=badarea_attrs)
1148s >   ds.to_netcdf(fname)
1148s 
1148s 
/usr/lib/python3/dist-packages/satpy/tests/reader_tests/test_ahi_l2_nc.py:64: 
1148s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
_ _ _ 
1148s /usr/lib/python3/dist-packages/xarray/core/dataset.py:2298: in to_netcdf
1148s return to_netcdf(  # type: ignore  # mypy cannot resolve the 
overloads:(
1148s /usr/lib/python3/dist-packages/xarray/backends/api.py:1339: in to_netcdf
1148s dump_to_store(
1148s /usr/lib/python3/dist-packages/xarray/backends/api.py:1386: in 
dump_to_store
1148s store.store(variables, attrs, check_encoding, writer, 
unlimited_dims=unlimited_dims)
1148s /usr/lib/python3/dist-packages/xarray/backends/common.py:397: in store
1148s self.set_variables(
1148s /usr/lib/python3/dist-packages/xarray/backends/common.py:439: in 
set_variables
1148s writer.add(source, target)
1148s /usr/lib/python3/dist-packages/xarray/backends/common.py:284: in add
1148s target[...] = source
1148s /usr/lib/python3/dist-packages/xarray/backends/netCDF4_.py:80: in 
__setitem__
1148s data[key] = value
1148s src/netCDF4/_netCDF4.pyx:5519: in netCDF4._netCDF4.Variable.__setitem__
1148s ???
1148s src/netCDF4/_netCDF4.pyx:5802: in netCDF4._netCDF4.Variable._put
1148s ???
1148s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
_ _ _ 
1148s 
1148s >   ???
1148s E   RuntimeError: NetCDF: HDF error
1148s 
1148s src/netCDF4/_netCDF4.pyx:2034: RuntimeError


Later there are other errors, "OSError: [Errno 28] No space left on device"


scipy 1.12 should be uploaded to unstable in the near future,
which would make this bug serious.



Bug#1071738: pysatellites: segfault on arm64 with scipy 1.12

2024-05-24 Thread Drew Parsons

On 2024-05-24 15:41, Georges Khaznadar wrote:

Hello Drew,

I could not reproduce with my machine the issue which you are 
describing:


- I installed python3-scipy 1.12.0-1exp3 over 1.11.4-10

- I launched the test suite :

  $ cd ~//pysatellites
  ~//pysatellites$  make
  pyuic5 pysat.ui -o UI_pysat.py
  pyuic5 graphe.ui -o UI_graphe.py
  ~//pysatellites$ debian/tests/runtests.sh
  = test session starts 
==

  platform linux -- Python 3.11.9, pytest-7.4.4, pluggy-1.5.0
  PyQt5 5.15.10 -- Qt runtime 5.15.10 -- Qt compiled 5.15.10
  rootdir: ~//pysatellites
  plugins: filter-subpackage-0.2.0, mock-3.12.0, arraydiff-0.6.1, 
doctestplus-1.2.1, anyio-4.3.0, astropy-header-0.2.2, 
hypothesis-6.102.1, typeguard-4.1.5, openfiles-0.5.0, remotedata-0.4.1, 
qt-4.3.1

  collected 2 items

  debian/tests/test_gui.py .
   [ 50%]
  debian/tests/test_trajectoire.py .
   [100%]


  == 2 passed in 12.17s 
==


  ... with no error.

Is there another trick to reproduce the bug?



I'm not sure.  I'm just reporting debci results,
https://ci.debian.net/packages/p/pysatellites/unstable/arm64/
https://ci.debian.net/packages/p/pysatellites/unstable/arm64/46974597/

Just to be sure, the problem is on arm64, not amd64.

Perhaps it's a transient failure that we won't see after the upload to 
unstable.

Thanks for checking it anyway.



Bug#1071738: pysatellites: segfault on arm64 with scipy 1.12

2024-05-24 Thread Drew Parsons
Source: pysatellites
Version: 2.7-2
Severity: important

pysatellites is failing tests with segfault against scipy 1.12 (from
experimental) and python3.11.

157s = test session starts 
==
157s platform linux -- Python 3.11.9, pytest-8.1.2, pluggy-1.5.0
157s PyQt5 5.15.10 -- Qt runtime 5.15.13 -- Qt compiled 5.15.13
157s rootdir: /tmp/autopkgtest-lxc.todkr5yw/downtmp/build.FIz/src
157s plugins: qt-4.3.1
157s collected 2 items
157s 
161s debian/tests/test_gui.py .   [ 
50%]
161s debian/tests/test_trajectoire.py .   
[100%]
161s 
161s == 2 passed in 5.18s 
===
161s Segmentation fault (core dumped)


scipy 1.12 will be uploaded to unstable in the near future,
which will make this bug serious.



Bug#1071702: scoary: tests fail with scipy 1.12

2024-05-23 Thread Drew Parsons

Error is

 38s Traceback (most recent call last):
 38s   File "/usr/lib/python3.11/multiprocessing/pool.py", line 125, in 
worker

 38s result = (True, func(*args, **kwds))
 38s ^^^
 38s   File "/usr/lib/python3/dist-packages/scoary/methods.py", line 
1267, in PairWiseComparisons

 38s resultscontainer[currentgene][Pbest] = ss.binom_test(
 38s^
 38s AttributeError: module 'scipy.stats' has no attribute 'binom_test'



Bug#1071704: uc-echo: tests fail with scipy 1.12

2024-05-23 Thread Drew Parsons
Source: uc-echo
Version: 1.12-18
Severity: normal

uc-echo uses a deprecated scipy API that fails with scipy 1.12

 24s autopkgtest [20:33:32]: test runtest: [---
 24s Test for sample data
 25s Traceback (most recent call last):
 25s   File "/usr/lib/uc-echo/ErrorCorrection.py", line 19, in 
 25s from scipy import floor, ceil
 25s ImportError: cannot import name 'floor' from 'scipy' 
(/usr/lib/python3/dist-packages/scipy/__init__.py)


This bug will become serious when scipy 1.12 is uploaded to unstable
in the near future.



Bug#1071703: gammapy: test fail with scipy 1.12

2024-05-23 Thread Drew Parsons
Source: gammapy
Version: 1.1-4
Severity: normal

gammapy appears to be using a deprecated RootResults API that fails
with scipy 1.12

 92s  ERROR collecting test session 
_
 92s /usr/lib/python3.11/importlib/__init__.py:126: in import_module
 92s return _bootstrap._gcd_import(name[level:], package, level)
 92s :1204: in _gcd_import
 92s ???
 92s :1176: in _find_and_load
 92s ???
 92s :1147: in _find_and_load_unlocked
 92s ???
 92s :690: in _load_unlocked
 92s ???
 92s /usr/lib/python3/dist-packages/_pytest/assertion/rewrite.py:178: in 
exec_module
 92s exec(co, module.__dict__)
 92s /usr/lib/python3/dist-packages/gammapy/conftest.py:13: in 
 92s from gammapy.datasets import SpectrumDataset
 92s /usr/lib/python3/dist-packages/gammapy/datasets/__init__.py:2: in 
 92s from .core import Dataset, Datasets
 92s /usr/lib/python3/dist-packages/gammapy/datasets/core.py:10: in 
 92s from gammapy.modeling.models import DatasetModels, Models
 92s /usr/lib/python3/dist-packages/gammapy/modeling/__init__.py:4: in 
 92s from .fit import Fit
 92s /usr/lib/python3/dist-packages/gammapy/modeling/fit.py:15: in 
 92s from .scipy import confidence_scipy, optimize_scipy
 92s /usr/lib/python3/dist-packages/gammapy/modeling/scipy.py:5: in 
 92s from gammapy.utils.roots import find_roots
 92s /usr/lib/python3/dist-packages/gammapy/utils/roots.py:9: in 
 92s BAD_RES = RootResults(root=np.nan, iterations=0, function_calls=0, 
flag=0)
 92s E   TypeError: RootResults.__init__() missing 1 required positional 
argument: 'method'



This bug will become serious when scipy 1.12 is uploaded to unstable
in the near future.



Bug#1071702: scoary: tests fail with scipy 1.12

2024-05-23 Thread Drew Parsons
Source: scoary
Version: 1.6.16-6
Severity: normal

scoary uses a deprecated scipy.stats API (binom_test) which causes it
to fail tests with scipy 1.12.

This bug will become serious when scipy 1.12 is uploaded to unstable
in the near future.



Bug#1071700: paraview: FTBFS 32-bit arches: xdmf3 invalid conversion ‘long unsigned int*’ to ‘const int*’

2024-05-23 Thread Drew Parsons
Package: paraview
Version: 5.12.0+dfsg-4
Severity: important
Tags: ftbfs

paraview 5.12 has started failing to build on i386.
paraview 5.11 previously built on i386.

The problem appears to be an implicit conversion from
‘long unsigned int*’ to ‘const int*’ in getArrayType in XdmfArray.cpp

paraview was already failing on armel and armhf with other errors,
so those architectures are already considered unsupported.
We might have to drop i386 support as well.

Build logs at
https://buildd.debian.org/status/fetch.php?pkg=paraview&arch=i386&ver=5.12.0%2Bdfsg-4&stamp=1716421169&raw=0
https://buildd.debian.org/status/fetch.php?pkg=paraview&arch=armel&ver=5.12.0%2Bdfsg-4&stamp=1716421874&raw=0
https://buildd.debian.org/status/fetch.php?pkg=paraview&arch=armhf&ver=5.12.0%2Bdfsg-4&stamp=1716421071&raw=0


The error message is

[  3%] Linking CXX shared library 
../../../../lib/i386-linux-gnu/libvtkprotobuf-pv5.12.so
cd /<>/build.python3.12/ThirdParty/protobuf/vtkprotobuf/src && 
/usr/bin/cmake -E cmake_link_script CMakeFiles/protobuf.dir/link.txt --verbose=1
/usr/bin/c++ -fPIC -g -O2 -ffile-prefix-map=/<>=. 
-fstack-protector-strong -Wformat -Werror=format-security -Wdate-time 
-D_FORTIFY_SOURCE=2 -O0  -g  -Wl,-lc -Wl,-z,relro -Wl,--as-needed  -shared 
-Wl,-soname,libvtkprotobuf-pv5.12.so.1 -o 
../../../../lib/i386-linux-gnu/libvtkprotobuf-pv5.12.so.5.12 
CMakeFiles/protobuf.dir/google/protobuf/compiler/importer.cc.o 
CMakeFiles/protobuf.dir/google/protobuf/compiler/parser.cc.o 
CMakeFiles/protobuf.dir/google/protobuf/io/coded_stream.cc.o 
CMakeFiles/protobuf.dir/google/protobuf/io/gzip_stream.cc.o 
CMakeFiles/protobuf.dir/google/protobuf/io/io_win32.cc.o 
CMakeFiles/protobuf.dir/google/protobuf/io/printer.cc.o 
CMakeFiles/protobuf.dir/google/protobuf/io/strtod.cc.o 
CMakeFiles/protobuf.dir/google/protobuf/io/tokenizer.cc.o 
CMakeFiles/protobuf.dir/google/protobuf/io/zero_copy_stream.cc.o 
CMakeFiles/protobuf.dir/google/protobuf/io/zero_copy_stream_impl.cc.o 
CMakeFiles/protobuf.dir/google/protobuf/io/zero_copy_stream_impl_lite.cc.o 
CMakeFiles/protobuf.dir/google/protobuf/stubs/bytestream.cc.o 
CMakeFiles/protobuf.dir/google/protobuf/stubs/common.cc.o 
CMakeFiles/protobuf.dir/google/protobuf/stubs/int128.cc.o 
CMakeFiles/protobuf.dir/google/protobuf/stubs/status.cc.o 
CMakeFiles/protobuf.dir/google/protobuf/stubs/statusor.cc.o 
CMakeFiles/protobuf.dir/google/protobuf/stubs/stringpiece.cc.o 
CMakeFiles/protobuf.dir/google/protobuf/stubs/stringprintf.cc.o 
CMakeFiles/protobuf.dir/google/protobuf/stubs/structurally_valid.cc.o 
CMakeFiles/protobuf.dir/google/protobuf/stubs/strutil.cc.o 
CMakeFiles/protobuf.dir/google/protobuf/stubs/substitute.cc.o 
CMakeFiles/protobuf.dir/google/protobuf/stubs/time.cc.o 
CMakeFiles/protobuf.dir/google/protobuf/util/delimited_message_util.cc.o 
CMakeFiles/protobuf.dir/google/protobuf/util/field_comparator.cc.o 
CMakeFiles/protobuf.dir/google/protobuf/util/field_mask_util.cc.o 
CMakeFiles/protobuf.dir/google/protobuf/util/json_util.cc.o 
CMakeFiles/protobuf.dir/google/protobuf/util/message_differencer.cc.o 
CMakeFiles/protobuf.dir/google/protobuf/util/time_util.cc.o 
CMakeFiles/protobuf.dir/google/protobuf/util/type_resolver_util.cc.o 
CMakeFiles/protobuf.dir/google/protobuf/util/internal/datapiece.cc.o 
CMakeFiles/protobuf.dir/google/protobuf/util/internal/default_value_objectwriter.cc.o
 CMakeFiles/protobuf.dir/google/protobuf/util/internal/error_listener.cc.o 
CMakeFiles/protobuf.dir/google/protobuf/util/internal/field_mask_utility.cc.o 
CMakeFiles/protobuf.dir/google/protobuf/util/internal/json_escaping.cc.o 
CMakeFiles/protobuf.dir/google/protobuf/util/internal/json_objectwriter.cc.o 
CMakeFiles/protobuf.dir/google/protobuf/util/internal/json_stream_parser.cc.o 
CMakeFiles/protobuf.dir/google/protobuf/util/internal/object_writer.cc.o 
CMakeFiles/protobuf.dir/google/protobuf/util/internal/proto_writer.cc.o 
CMakeFiles/protobuf.dir/google/protobuf/util/internal/protostream_objectsource.cc.o
 
CMakeFiles/protobuf.dir/google/protobuf/util/internal/protostream_objectwriter.cc.o
 CMakeFiles/protobuf.dir/google/protobuf/util/internal/type_info.cc.o 
CMakeFiles/protobuf.dir/google/protobuf/util/internal/utility.cc.o 
CMakeFiles/protobuf.dir/google/protobuf/any.cc.o 
CMakeFiles/protobuf.dir/google/protobuf/any_lite.cc.o 
CMakeFiles/protobuf.dir/google/protobuf/any.pb.cc.o 
CMakeFiles/protobuf.dir/google/protobuf/api.pb.cc.o 
CMakeFiles/protobuf.dir/google/protobuf/arena.cc.o 
CMakeFiles/protobuf.dir/google/protobuf/descriptor.cc.o 
CMakeFiles/protobuf.dir/google/protobuf/descriptor.pb.cc.o 
CMakeFiles/protobuf.dir/google/protobuf/descriptor_database.cc.o 
CMakeFiles/protobuf.dir/google/protobuf/duration.pb.cc.o 
CMakeFiles/protobuf.dir/google/protobuf/dynamic_message.cc.o 
CMakeFiles/protobuf.dir/google/protobuf/empty.pb.cc.o 
CMakeFiles/protobuf.dir/google/protobuf/extension_set.cc.o 
CMakeFiles/protobuf.dir/google/protobuf/extension_set_heavy.cc.o 
CMakeFiles/protobuf.dir/google/protobuf/field_mask.pb.cc.o 

Bug#1071518: paraview: s390x test failure: vtkClientServerStream::Invoke stream_value invalid

2024-05-20 Thread Drew Parsons
Package: paraview
Version: 5.11.2+dfsg-7
Severity: important
Control: forwarded -1 
https://gitlab.kitware.com/paraview/paraview/-/issues/22620

The Debian build of paraview (5.11.2) fails CI tests on the s390x architecture.
Test logs can be found at 
https://ci.debian.net/packages/p/paraview/unstable/s390x/ or 
https://ci.debian.net/packages/p/paraview/testing/s390x/
e.g. https://ci.debian.net/packages/p/paraview/unstable/s390x/46541216/

An invalid stream appears to get generated, such that 
vtkClientServerStream::Invoke reports an invalid first argument

Argument 0 = stream_value invalid

The full error message (showing three repeats) is

 93s autopkgtest [15:31:11]: test paraviewtest.py: [---
 95s  ABORT #
 95s (   1.850s) [paraview]   vtkPVSessionCore.cxx:355ERR| 
vtkPVSessionCore (0x2a7a9a0): Invalid arguments to 
vtkClientServerStream::Invoke.  There must be at least two arguments.  The 
first must be an object and the second a string.
 95s while processing
 95s Message 0 = Invoke
 95s   Argument 0 = stream_value invalid
 95s   Argument 1 = string_value {RegisterProgressEvent}
 95s   Argument 2 = vtk_object_pointer {vtkSMTimeKeeper (0x2e42dc0)}
 95s   Argument 3 = int32_value {256}
 95s 
 95s (   1.850s) [paraview]   vtkPVSessionCore.cxx:356ERR| 
vtkPVSessionCore (0x2a7a9a0): Aborting execution for debugging purposes.
 95s (   1.852s) [paraview]   vtkPVSessionCore.cxx:355ERR| 
vtkPVSessionCore (0x2a7a9a0): Invalid arguments to 
vtkClientServerStream::Invoke.  There must be at least two arguments.  The 
first must be an object and the second a string.
 95s while processing
 95s Message 0 = Invoke
 95s   Argument 0 = stream_value invalid
 95s   Argument 1 = string_value {RegisterProgressEvent}
 95s   Argument 2 = vtk_object_pointer {vtkSMAnimationScene (0x28293a0)}
 95s   Argument 3 = int32_value {261}
 95s 
 95s (   1.852s) [paraview]   vtkPVSessionCore.cxx:356ERR| 
vtkPVSessionCore (0x2a7a9a0): Aborting execution for debugging purposes.
 95s  ABORT #
 95s (   1.853s) [paraview]   vtkPVSessionCore.cxx:355ERR| 
vtkPVSessionCore (0x2a7a9a0): Invalid arguments to 
vtkClientServerStream::Invoke.  There must be at least two arguments.  The 
first must be an object and the second a string.
 95s while processing
 95s Message 0 = Invoke
 95s   Argument 0 = stream_value invalid
 95s   Argument 1 = string_value {RegisterProgressEvent}
 95s   Argument 2 = vtk_object_pointer {vtkPVKeyFrameAnimationCueForProxies 
(0x2f165c0)}
 95s   Argument 3 = int32_value {263}
 95s 
 95s  ABORT #
 95s (   1.853s) [paraview]   vtkPVSessionCore.cxx:356ERR| 
vtkPVSessionCore (0x2a7a9a0): Aborting execution for debugging purposes.
 95s (   1.854s) [paraview]   vtkPVSessionCore.cxx:355ERR| 
vtkPVSessionCore (0x2a7a9a0): Invalid arguments to 
vtkClientServerStream::Invoke.  There must be at least two arguments.  The 
first must be an object and the second a string.
...
 96s while processing
 96s Message 0 = Invoke
 96s   Argument 0 = stream_value invalid
 96s   Argument 1 = string_value {CleanupPendingProgress}
 96s 
 96s (   1.904s) [paraview]   vtkPVSessionCore.cxx:356ERR| 
vtkPVSessionCore (0x2a7a9a0): Aborting execution for debugging purposes.
 96s autopkgtest [15:31:14]:  summary
 96s paraviewtest.py  FAIL stderr: (   1.850s) [paraview]   
vtkPVSessionCore.cxx:355ERR| vtkPVSessionCore (0x2a7a9a0): Invalid 
arguments to vtkClientServerStream::Invoke.  There must be at least two 
arguments.  The first must be an object and the second a string.
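The "stream_value invalid" tag suggests the stream's serialized type codes are being misread on s390x. This is not confirmed by the log, but s390x is a big-endian architecture, and one plausible mechanism is a byte-order mismatch when decoding a type tag. A minimal sketch (the tag value is hypothetical, not vtkClientServerStream's actual encoding):

```python
import struct

# Hypothetical 32-bit type tag; the real vtkClientServerStream codes differ.
TAG_VTK_OBJECT = 7

# As a little-endian writer would serialize it:
wire = struct.pack("<I", TAG_VTK_OBJECT)

# A reader assuming the opposite byte order recovers a garbage tag,
# which a dispatcher would then report as an "invalid" value type.
misread = struct.unpack(">I", wire)[0]
print(misread)  # 117440512, not 7
```

Any such mismatch would make the first argument undecodable while later, correctly aligned arguments still print, matching the pattern in the log above.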
 



Bug#1067064: transition: petsc hypre

2024-05-20 Thread Drew Parsons

On 2024-05-15 09:20, Sebastian Ramacher wrote:


I think the best way to deal with it is to pretend it never happened
and move on with petsc 3.20, upgrading from petsc 3.19.  We'd want to
be doing this upgrade anyway.


Please go ahead.


All uploads have been made now (including dolfin).  Just binNMUs remain 
(including mshr).


Something weird happened with the slepc upload (3.20.2+dfsg1-1). The 
upload record or changelog was never processed (no news entry on the 
tracker page), the RC bug never got closed, and there were no build logs 
for the main architectures, as if they had been built separately and 
uploaded manually.  I dealt with it by just uploading a second release
(3.20.2+dfsg1-2) to force a rebuild.


Drew



Bug#900364: paraview: Paraview viewer mishandles fractional Qt screen scaling

2024-05-19 Thread Drew Parsons
Package: paraview
Followup-For: Bug #900364
Control: tags 900364 moreinfo

Fractional scaling seems to be working now, at least for scale factors above 1 on eDP-1.

Can you confirm if the problem is fixed in paraview 5.11.2 or later?



Bug#842405: paraview: can't load hdf5 file: vtkSISourceProxy: Failed to create vtkXdmfReader

2024-05-19 Thread Drew Parsons
Package: paraview
Followup-For: Bug #842405
Control: tags 842405 moreinfo

HDF5 is a general data format; it has no natural viewing format.

When you load a file into paraview you need to specify which reader
you want to open it with. Which reader are you expecting to use with
your .h5 file?

Your test file does not crash with H5PartReader or HDF5 Rage Reader.



Bug#820453: paraview: "Help -> ParaView Guide" gives error

2024-05-19 Thread Drew Parsons
Package: paraview
Followup-For: Bug #820453

"Help -> ParaView Guide" now (5.11.2) provides a (functioning) url
link to https://docs.paraview.org/en/v5.11.2/
which opens normally in the web browser.

Not so helpful if you don't have internet access, but at least it's
not giving an error now.  Perhaps we can close this bug.



Bug#982601: python3-paraview Conflicts: python3-vtk9

2024-05-19 Thread Drew Parsons
Package: paraview
Version: 5.11.2+dfsg-6+b9
Followup-For: Bug #982601
X-Debbugs-Cc: Petter Reinholdtsen 

Petter, paraview might never be aligned with vtk.

See https://gitlab.kitware.com/paraview/paraview/-/issues/18751

If facet-analyser needs to be able to work with either (both)
python3-paraview or python3-vtk9, then it will need to check the
specific VTK version and adjust the API it's using to accommodate the
discrepancy.



Bug#1070894: slepc: FTBFS: Fatal Error: Cannot open module file ‘slepcmfn.mod’ for reading at (1): No such file or directory

2024-05-17 Thread Drew Parsons
Source: slepc
Followup-For: Bug #1070894
Control: tags -1 ftbfs fixed-upstream
Control: forwarded -1 https://gitlab.com/slepc/slepc/-/issues/83

Fixed upstream in 3.20.2,
https://gitlab.com/slepc/slepc/-/merge_requests/639



Bug#1067064: transition: petsc hypre

2024-05-15 Thread Drew Parsons

On 2024-05-15 10:00, Drew Parsons wrote:

On 2024-05-15 09:20, Sebastian Ramacher wrote:


While looking at the tracker, I noticed that petsc is still building
manual -dbg packages. Is there a reason that those have not been
converted to automatic -dbgsym packages?


No, it's not about symbols.  -dbg uses a debug configuration, i.e. 
provides more information about the state of the calculation.  In 
principle it could be used to help better design applications using 
petsc.  It's possible that no one uses these -dbg packages.


I'll remove the dh_strip override so the main (production) build gets 
its own symbols package, rather than treating the debug build as its 
symbols package.




Bug#1067064: transition: petsc hypre

2024-05-15 Thread Drew Parsons

On 2024-05-15 09:20, Sebastian Ramacher wrote:

Control: tags -1 confirmed


The petsc patch for the 64-bit time_t transition was deeply invasive.
It makes petsc (and slepc) essentially unmaintainable.

I think the best way to deal with it is to pretend it never happened
and move on with petsc 3.20, upgrading from petsc 3.19.  We'd want to
be doing this upgrade anyway.


Please go ahead.


Thanks.


While looking at the tracker, I noticed that petsc is still building
manual -dbg packages. Is there a reason that those have not been
converted to automatic -dbgsym packages?


No, it's not about symbols.  -dbg uses a debug configuration, i.e. 
provides more information about the state of the calculation.  In 
principle it could be used to help better design applications using 
petsc.  It's possible that no one uses these -dbg packages.


Drew



Bug#1069321: FTBFS: [Makefile:163: check] Error 1

2024-05-14 Thread Drew Parsons
Source: hypre
Followup-For: Bug #1069321

It's passing in reproducibility rebuilds.
Perhaps it was a transitory glitch.



Bug#1069321: FTBFS: [Makefile:163: check] Error 1

2024-05-13 Thread Drew Parsons
Source: hypre
Followup-For: Bug #1069321
Control: tags -1 ftbfs

hypre is passing debci and reproducibility tests on armhf.

Looks like the error reported here was a transitory issue,
possibly related to openmpi upgrades.



Bug#1066449: mpi4py: FTBFS: Segmentation fault in tests

2024-05-06 Thread Drew Parsons
Source: mpi4py
Followup-For: Bug #1066449
Control: tags -1 ftbfs

The bug report stopped scanning at the first "error", which is not an
error (it's a check).  The last error is the relevant one

testIReadIWrite (test_io.TestIOSelf.testIReadIWrite) ... ok
testIReadIWriteAll (test_io.TestIOSelf.testIReadIWriteAll) ... 
[ip-10-84-234-105:152676] *** Process received signal ***
[ip-10-84-234-105:152676] Signal: Segmentation fault (11)
[ip-10-84-234-105:152676] Signal code: Address not mapped (1)
[ip-10-84-234-105:152676] Failing at address: (nil)
[ip-10-84-234-105:152676] [ 0] 
/lib/x86_64-linux-gnu/libc.so.6(+0x3c510)[0x7f7a3a86f510]
[ip-10-84-234-105:152676] *** End of error message ***
Segmentation fault


Possibly a transitory inconsistency caught in the middle of an openmpi update.
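Log-scanning tooling that stops at the first "error"-like line misattributes failures like this one. A generic sketch (my own helper, not part of any Debian tooling) that takes the last matching line instead:

```python
def last_failure(log_lines):
    """Return the last line that looks like a real failure, or None."""
    markers = ("Segmentation fault", "*** Process received signal ***")
    hits = [line for line in log_lines if any(m in line for m in markers)]
    return hits[-1] if hits else None

log = [
    "testIReadIWrite (test_io.TestIOSelf.testIReadIWrite) ... ok",
    "[host:152676] *** Process received signal ***",
    "[host:152676] Signal: Segmentation fault (11)",
    "Segmentation fault",
]
print(last_failure(log))  # Segmentation fault
```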



Bug#1069432: mpi4py: FTBFS on armhf: ld: cannot find -llmpe: No such file or directory

2024-05-05 Thread Drew Parsons
Source: mpi4py
Followup-For: Bug #1069432

There have been ongoing issues with OpenMPI on 32-bit architectures,
partly related to drop of 32-bit support by pmix.

This bug is likely related to that, i.e. not a bug in mpi4py itself.



Bug#1069377: scipy: FTBFS on arm64: make[1]: *** [debian/rules:161: execute_after_dh_auto_install] Error 1

2024-05-04 Thread Drew Parsons
Source: scipy
Followup-For: Bug #1069377
Control: tags -1 ftbfs

This is an odd error.  It looks as if the behaviour changed with respect to
which exception gets emitted.

There's a new upstream release waiting to be packaged.  It likely
resolves the issue.



Bug#1070338: FTBFS: libadios2_serial_evpath.so.2.10.0: undefined reference to `cmfabric_add_static_transport'

2024-05-03 Thread Drew Parsons
Source: adios2
Version: 2.10.0+dfsg1-1
Severity: normal
Tags: ftbfs
Control: forwarded -1 https://github.com/ornladios/ADIOS2/issues/4156

Building the debian package for adios 2.10.0 (in experimental) fails with 
"libadios2_serial_evpath.so.2.10.0: undefined reference to 
`cmfabric_add_static_transport'":

[228/329] : && /usr/bin/c++ -g -O2 -ffile-prefix-map=/projects/adios2=. 
-fstack-protector-strong -fstack-clash-protection -Wformat 
-Werror=format-security -fcf-protection -Wdate-time -D_
FORTIFY_SOURCE=2 -Wl,-z,relro 
source/adios2/toolkit/remote/CMakeFiles/adios2_remote_server.dir/remote_server.cpp.o
 
source/adios2/toolkit/remote/CMakeFiles/adios2_remote_server.dir/remote_common.cpp.o
 -o bin/adio
s2_remote_server.serial  lib/x86_64-linux-gnu/libadios2_serial_evpath.so.2.10.0 
 lib/x86_64-linux-gnu/libadios2_serial_core.so.2.10.0  
lib/x86_64-linux-gnu/libadios2_serial_ffs.so.2.10.0  lib/x86_64-linux-gnu/li
badios2_serial_atl.so.2.10.0  -ldl  
-Wl,-rpath-link,/projects/adios2/build-serial/lib/x86_64-linux-gnu && :
FAILED: bin/adios2_remote_server.serial 
: && /usr/bin/c++ -g -O2 -ffile-prefix-map=/projects/adios2=. 
-fstack-protector-strong -fstack-clash-protection -Wformat 
-Werror=format-security -fcf-protection -Wdate-time -D_FORTIFY_SO
URCE=2 -Wl,-z,relro 
source/adios2/toolkit/remote/CMakeFiles/adios2_remote_server.dir/remote_server.cpp.o
 
source/adios2/toolkit/remote/CMakeFiles/adios2_remote_server.dir/remote_common.cpp.o
 -o bin/adios2_remote_
server.serial  lib/x86_64-linux-gnu/libadios2_serial_evpath.so.2.10.0  
lib/x86_64-linux-gnu/libadios2_serial_core.so.2.10.0  
lib/x86_64-linux-gnu/libadios2_serial_ffs.so.2.10.0  
lib/x86_64-linux-gnu/libadios2_se
rial_atl.so.2.10.0  -ldl  
-Wl,-rpath-link,/projects/adios2/build-serial/lib/x86_64-linux-gnu && :
/usr/bin/ld: lib/x86_64-linux-gnu/libadios2_serial_evpath.so.2.10.0: undefined 
reference to `cmfabric_add_static_transport'
collect2: error: ld returned 1 exit status



Bug#1069106: openmpi: 32 bit pmix_init:startup:internal-failure: help-pmix-runtime.txt: No such file

2024-04-21 Thread Drew Parsons
Source: openmpi
Version: 4.1.6-12
Followup-For: Bug #1069106
Control: reopen 1069106

4.1.6-12 was intended to fix this bug, but it's still occurring

e.g.
https://ci.debian.net/packages/o/openmpi/unstable/i386/45630865/
https://ci.debian.net/packages/o/openmpi/unstable/armhf/45630866/



Bug#1062405: dolfin: NMU diff for 64-bit time_t transition

2024-04-20 Thread Drew Parsons
Source: dolfin
Followup-For: Bug #1062405
Control: severity 1062405 testing
Control: tags 1062405 moreinfo

This time_t patch was never applied.  Is it not needed after all? 
Can we close this bug now?



Bug#1069106: openmpi: 32 bit pmix_init:startup:internal-failure: help-pmix-runtime.txt: No such file

2024-04-16 Thread Drew Parsons
Source: openmpi
Version: 4.1.6-9
Severity: serious
Justification: ftbfs
Control: affects -1 src:fenics-dolfinx src:petsc

openmpi 4.1.6-9 is failing its own tests on 32-bit systems,
presumably after it was configured to use a local copy of pmix
instead of libpmix-dev.

For instance on i386
https://ci.debian.net/data/autopkgtest/unstable/i386/o/openmpi/45463906/log.gz
the error message is

 69s autopkgtest [12:19:13]: test compile_run_mpicc: [---
 69s --
 69s Sorry!  You were supposed to get help about:
 69s pmix_init:startup:internal-failure
 69s But I couldn't open the help file:
 69s /usr/share/pmix/help-pmix-runtime.txt: No such file or directory.  
Sorry!
 69s --
 69s [ci-107-5e0ffbcf:02362] PMIX ERROR: NOT-FOUND in file 
../../../../../../../../opal/mca/pmix/pmix3x/pmix/src/server/pmix_server.c at 
line 237
 

The same error affects builds and debci testing of client packages on
the 32 bit architectures, e.g. petsc, dolfinx
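The failing check reduces to a file-existence test on the path named in the log. A sketch (the helper name is mine, not part of PMIx), simulating the broken and fixed installs with a temporary directory:

```python
import tempfile
from pathlib import Path

def pmix_help_available(root="/usr/share/pmix"):
    """True if the PMIx runtime help text exists under `root`."""
    return (Path(root) / "help-pmix-runtime.txt").is_file()

with tempfile.TemporaryDirectory() as d:
    print(pmix_help_available(d))   # False: matches the CI failure above
    (Path(d) / "help-pmix-runtime.txt").write_text("# PMIx runtime help\n")
    print(pmix_help_available(d))   # True: the file is shipped correctly
```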



Bug#1062383: dolfinx-mpc: NMU diff for 64-bit time_t transition

2024-04-12 Thread Drew Parsons
Source: dolfinx-mpc
Followup-For: Bug #1062383
Control: severity 1062383 important
Control: tags 1062383 moreinfo

This patch was never applied.  Do we not need it after all?  Can we
close this bug now?



Bug#1062587: fenics-dolfinx: NMU diff for 64-bit time_t transition

2024-04-12 Thread Drew Parsons
Source: fenics-dolfinx
Followup-For: Bug #1062587
Control: severity 1062587 important
Control: tags 1062587 moreinfo

This time_t patch was never applied.  I presume it's not needed after
all and we can close this bug now?



Bug#1068661: python3-scikit-build-core: needs Depends: python3-pyproject-metadata

2024-04-09 Thread Drew Parsons
Source: scikit-build-core
Followup-For: Bug #1068661

I think dh_python3 is not adding the dependency automatically since
it's listed in pyproject.toml only as an optional dependency (under an extra).
Hence it'd need to be added manually.

pathspec is listed there too so I figure it would probably want to be
added as well (python3-pathspec)

If you want to respect the "optional" nature of the dependency then use
Recommends: rather than Depends:.  But I'd say it's simpler (more
robust) to just use Depends: for the packaging.



Bug#1068661: python3-scikit-build-core: needs Depends: python3-pyproject-metadata

2024-04-08 Thread Drew Parsons
Package: python3-scikit-build-core
Version: 0.8.2-1
Severity: normal

scikit-build-core imports pyproject_metadata in
/usr/lib/python3/dist-packages/scikit_build_core/settings/metadata.py
but the python3-scikit-build-core does not declare the corresponding
Depends: python3-pyproject-metadata

So the Depends should be added, or possibly
python3-scikit-build-core's pyproject.toml (or setup.py) needs to list
pyproject_metadata as required (then the source package needs
Build-Depends: python3-pyproject-metadata).



-- System Information:
Debian Release: trixie/sid
  APT prefers unstable-debug
  APT policy: (500, 'unstable-debug'), (500, 'unstable'), (1, 'experimental')
Architecture: amd64 (x86_64)
Foreign Architectures: i386

Kernel: Linux 6.7.9-amd64 (SMP w/8 CPU threads; PREEMPT)
Locale: LANG=en_AU.UTF-8, LC_CTYPE=en_AU.UTF-8 (charmap=UTF-8), 
LANGUAGE=en_AU:en
Shell: /bin/sh linked to /usr/bin/dash
Init: systemd (via /run/systemd/system)
LSM: AppArmor: enabled

Versions of packages python3-scikit-build-core depends on:
ii  python3 [python3-supported-min]  3.11.8-1
ii  python3-exceptiongroup   1.2.0-1
ii  python3-importlib-metadata   4.12.0-1
ii  python3-packaging24.0-1
ii  python3-tomli2.0.1-2
ii  python3-typing-extensions4.10.0-1

python3-scikit-build-core recommends no packages.

python3-scikit-build-core suggests no packages.

-- no debconf information



Bug#1068319: armci-mpi: debian-...@lists.debian.org

2024-04-06 Thread Drew Parsons
Source: armci-mpi
Followup-For: Bug #1068319

armci-mpi is already building for mpich so the transition should be
manageable.  Will just need to make sure it switches off the openmpi
build cleanly.



Bug#1068464: deal.ii: FTBFS: libgmp not linked, libdeal.ii.g.so.9.5.1: error: undefined reference to '__gmpn_neg'

2024-04-05 Thread Drew Parsons
Source: deal.ii
Version: 9.5.1-2
Severity: normal
Tags: ftbfs

I'm getting an error running deal.ii tests building against petsc 3.20
(from experimental)

[100%] Built target dealii_release
make  -f tests/CMakeFiles/test.dir/build.make tests/CMakeFiles/test.dir/depend
make[5]: Entering directory 
'/home/drew/projects/misc/build/deal.ii-9.5.1/obj-x86_64-linux-gnu'
cd /home/drew/projects/misc/build/deal.ii-9.5.1/obj-x86_64-linux-gnu && 
/usr/bin/cmake -E cmake_depends "Unix Makefiles" 
/home/drew/projects/misc/build/deal.ii-9.5.1 
/home/drew/projects/misc/build/deal.ii-9.5.1/tests 
/home/drew/projects/misc/build/deal.ii-9.5.1/obj-x86_64-linux-gnu 
/home/drew/projects/misc/build/deal.ii-9.5.1/obj-x86_64-linux-gnu/tests 
/home/drew/projects/misc/build/deal.ii-9.5.1/obj-x86_64-linux-gnu/tests/CMakeFiles/test.dir/DependInfo.cmake
 "--color="
make[5]: Leaving directory 
'/home/drew/projects/misc/build/deal.ii-9.5.1/obj-x86_64-linux-gnu'
make  -f tests/CMakeFiles/test.dir/build.make tests/CMakeFiles/test.dir/build
make[5]: Entering directory 
'/home/drew/projects/misc/build/deal.ii-9.5.1/obj-x86_64-linux-gnu'
[100%] Running quicktests...
/usr/bin/cmake -DCMAKE_BUILD_TYPE=DEBUG -P 
/home/drew/projects/misc/build/deal.ii-9.5.1/tests/run_quick_tests.cmake
-- Running quick_tests in DEBUG mode with -j8:
Test project /home/drew/projects/misc/build/deal.ii-9.5.1/obj-x86_64-linux-gnu
  Start 19: test_dependency/quick_tests.mpi.debug.executable
  Start 23: test_dependency/quick_tests.p4est.debug.executable
  Start 29: test_dependency/quick_tests.step-metis.debug.executable
  Start  1: quick_tests/adolc.debug
  Start  3: quick_tests/affinity.debug
  Start  4: quick_tests/affinity.release
  Start  5: quick_tests/assimp.debug
  Start  7: quick_tests/boost_zlib.debug
 1/25 Test  #1: quick_tests/adolc.debug 
...***Failed9.33 sec
/home/drew/projects/misc/build/deal.ii-9.5.1/obj-x86_64-linux-gnu/lib/x86_64-linux-gnu/libdeal.ii.g.so.9.5.1:
 error: undefined reference to '__gmpn_neg'
collect2: error: ld returned 1 exit status
gmake[9]: *** [CMakeFiles/quick_tests.adolc.debug.dir/build.make:299: 
adolc.debug/adolc.debug] Error 1
gmake[8]: *** [CMakeFiles/Makefile2:261: 
CMakeFiles/quick_tests.adolc.debug.dir/all] Error 2
gmake[7]: *** [CMakeFiles/Makefile2:294: 
CMakeFiles/quick_tests.adolc.debug.test.dir/rule] Error 2
gmake[6]: *** [Makefile:173: quick_tests.adolc.debug.test] Error 2

Likewise undefined __gmpn_com with quick_tests/affinity.release,
quick_tests/step.release.  All 25 tests fail with either undefined
__gmpn_neg or __gmpn_com.


__gmp* symbols are provided by libgmp, which suggests the GMP build
configuration is not getting through, i.e. -lgmp is not being passed for linking.
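When triaging a log like this, it helps to collect the distinct undefined symbols first and confirm they all belong to one library. A small sketch over an abbreviated copy of the errors above:

```python
import re

# Abbreviated linker output from the failing quick_tests.
log = """\
libdeal.ii.g.so.9.5.1: error: undefined reference to '__gmpn_neg'
libdeal.ii.g.so.9.5.1: error: undefined reference to '__gmpn_com'
libdeal.ii.g.so.9.5.1: error: undefined reference to '__gmpn_neg'
"""

missing = sorted(set(re.findall(r"undefined reference to '([^']+)'", log)))
print(missing)  # ['__gmpn_com', '__gmpn_neg']
```

Both symbols share the `__gmpn_` prefix, pointing at a single missing `-lgmp` rather than several unrelated libraries.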

deal.ii recently built successfully in unstable (for time_t), so it's
not time to mark this bug severity: serious.  I get it when trying to
rebuild against petsc 3.20 from experimental. But it's not obvious to
me that petsc 3.20 itself would be triggering the problem.  Perhaps
it's a local issue on my system with the experimental petsc builds
that might resolve itself after we upload petsc 3.20 to unstable,
which I've requested in transition Bug#1067064.

Filing this bug to keep track, or see if anyone else is experiencing
the same problem.


-- System Information:
Debian Release: trixie/sid
  APT prefers unstable-debug
  APT policy: (500, 'unstable-debug'), (500, 'unstable'), (1, 'experimental')
Architecture: amd64 (x86_64)
Foreign Architectures: i386

Kernel: Linux 6.7.9-amd64 (SMP w/8 CPU threads; PREEMPT)
Locale: LANG=en_AU.UTF-8, LC_CTYPE=en_AU.UTF-8 (charmap=UTF-8), 
LANGUAGE=en_AU:en
Shell: /bin/sh linked to /usr/bin/dash
Init: systemd (via /run/systemd/system)
LSM: AppArmor: enabled



Bug#1067784: Doesn't contain libpmix.so.2

2024-04-05 Thread Drew Parsons
Package: libpmix2t64
Version: 5.0.2-2
Followup-For: Bug #1067784
Control: affects 1067784 nwchem nwchem-openmpi
Control: reopen 1067784

Looks like 5.0.2-2 annihilated the symlink fix made in 5.0.2-1.1

See nwchem tests,

 https://ci.debian.net/packages/n/nwchem/unstable/amd64/44696719/

 90s  Running tests/cosmo_h3co/cosmo_h3co
 90s
 90s  cleaning scratch
 90s  copying input and verified output files
 90s  running nwchem (/usr/bin/nwchem)  with 1 processors
 90s
 90s  NWChem execution failed
 90s [ci-096-a6a59e1f:01975] mca_base_component_repository_open: unable to open 
mca_pmix_ext3x: libpmix.so.2: cannot open shared object file: No such file or 
directory (ignored)
 90s [ci-096-a6a59e1f:01975] [[33167,0],0] ORTE_ERROR_LOG: Not found in file 
../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320
 90s --
 90s It looks like orte_init failed for some reason; your parallel process is
 90s likely to abort.  There are many reasons that a parallel process can
 90s fail during orte_init; some of which are due to configuration or
 90s environment problems.  This failure appears to be an internal failure;
 90s here's some additional information (which may only be relevant to an
 90s Open MPI developer):
 90s
 90s   opal_pmix_base_select failed
 90s   --> Returned value Not found (-13) instead of ORTE_SUCCESS





-- System Information:
Debian Release: trixie/sid
  APT prefers unstable-debug
  APT policy: (500, 'unstable-debug'), (500, 'unstable'), (1, 'experimental')
Architecture: amd64 (x86_64)
Foreign Architectures: i386

Kernel: Linux 6.7.9-amd64 (SMP w/8 CPU threads; PREEMPT)
Locale: LANG=en_AU.UTF-8, LC_CTYPE=en_AU.UTF-8 (charmap=UTF-8), 
LANGUAGE=en_AU:en
Shell: /bin/sh linked to /usr/bin/dash
Init: systemd (via /run/systemd/system)
LSM: AppArmor: enabled

Versions of packages libpmix2t64 depends on:
ii  libc6   2.37-15.1
ii  libevent-core-2.1-7t64  2.1.12-stable-8.1+b1
ii  libevent-pthreads-2.1-7t64  2.1.12-stable-8.1+b1
ii  libhwloc-plugins2.10.0-1+b1
ii  libhwloc15  2.10.0-1+b1
ii  libmunge2   0.5.15-4
ii  zlib1g  1:1.3.dfsg-3.1

libpmix2t64 recommends no packages.

libpmix2t64 suggests no packages.

-- no debconf information



Bug#1067085: nanobind-dev not in python path and without distribution info

2024-04-03 Thread Drew Parsons
Package: nanobind-dev
Version: 1.9.2-1
Followup-For: Bug #1067085

The path discrepancy comes from the cmake build, with CMakeLists.txt
defining installation path NB_INSTALL_DATADIR via
CMAKE_INSTALL_DATADIR (which is /usr/share).

The upstream installation instructions don't mention cmake (they
explain how to use cmake for using nanobind, not for installing it).
Instead the installation instructions recommend pip or conda install.

Looking at the pypi wheel that pip would use, it doesn't specify the
top level path at all, it just starts the files from "/nanobind".  So
it is pip itself that knows to install them under /usr/lib/python3.x.
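This can be seen by listing a wheel's member names: they are package-relative, so the installer supplies the site-packages prefix. A sketch with a synthetic in-memory wheel (file names loosely modelled on the layout described above, not the real nanobind wheel):

```python
import io
import zipfile

# Build a minimal synthetic "wheel" (a zip archive) in memory.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("nanobind/__init__.py", "")
    zf.writestr("nanobind/cmake/nanobind-config.cmake", "")

names = zipfile.ZipFile(io.BytesIO(buf.getvalue())).namelist()
print(names[0])  # nanobind/__init__.py

# No absolute prefix like /usr/lib/python3.x appears in the archive;
# the installer maps these relative paths under its chosen prefix.
assert not any(n.startswith("/") for n in names)
```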

It looks like upstream isn't expecting CMakeLists.txt to be used
directly for installation.

On the other hand, pyproject.toml does specify cmake as a build
requirement, although setup.py does not use it.  I'm not aware that
pip build would invoke cmake just because it's listed in
[build-system] requires.

So it's a bit of a mystery what upstream's intentions for nanobind
installation are.  It's as if they have some other unpublished script
that they use to generate pypi wheels.

The debian packaging declares a cmake build via PYBUILD_SYSTEM = cmake.
What do we get if we don't do that, and just let pybuild process
pyproject.toml and setup.py as a normal python module without calling
cmake directly?  (that probably needs
Build-Depends: pybuild-plugin-pyproject)

(pyproject.toml also lists ninja.  Perhaps pybuild/pip would invoke
ninja, which invokes cmake.  scikit-build is also a concerned party.
But setup.py doesn't explicitly invoke any of them.
I haven't tried this direct python build yet.)

On the other hand, upstream's github CI testing does invoke cmake
explicitly, in .github/workflows/ci.yml.  But is that just for tests?
(i.e. "using" nanobind, not "installing" it)

If we do move the nanobind files to /usr/lib/python3, there's
still the question of how cmake will find it there (cmake find_package
would search for an arch-independent package in /usr/share/,
or /usr/share/cmake/).  But that's where the upstream usage
instructions come in, they recommend invoking ${python} -m nanobind --cmake_dir
and adding it to CMAKE_PREFIX_PATH.  So in that sense a "python"
installation should be ok, rather than a PYBUILD_SYSTEM = cmake
installation.

That said, looking at the usage guides it seems that nanobind
should work just fine with cmake from /usr/share/nanobind, so long as
you ignore the recommendation to use "python3 -m nanobind --cmake_dir".
Except that NB_DIR would need to be set (used in
nanobind-config.cmake, though nanobind-config.cmake could be easily
patched to give it a default value if not already set)

Is there any use-case for using nanobind (using, not installing)
without doing it through cmake? (i.e. does the nanobind python module
have any purpose other than providing a --cmake_dir flag for defining
NB_DIR in a user's cmake scripts?)



Bug#1067308: python-meshplex: FTBFS: dh_auto_test: error: pybuild --test --test-pytest -i python{version} -p "3.12 3.11" --test-args=--ignore-glob=\*test_io\* returned exit code 13

2024-04-02 Thread Drew Parsons
Source: python-meshplex
Followup-For: Bug #1067308
Control: tags -1 ftbfs

I can't reproduce this error now.  It must have been resolved by a
separate library transition, or possibly a numpy update.

If 0.17.1-3 passes tests successfully then we can close this bug.



Bug#1067064: transition: petsc hypre

2024-03-17 Thread Drew Parsons
Package: release.debian.org
Severity: normal
X-Debbugs-Cc: pe...@packages.debian.org, francesco.balla...@unicatt.it
Control: affects -1 + src:petsc
User: release.debian@packages.debian.org
Usertags: transition

The petsc patch for the 64-bit time_t transition was deeply invasive.
It makes petsc (and slepc) essentially unmaintainable.

I think the best way to deal with it is to pretend it never happened
and move on with petsc 3.20, upgrading from petsc 3.19.  We'd want to
be doing this upgrade anyway.

As part of this transition I'll also upgrade hypre from 2.28.0 to
2.29.0.

I've checked that sundials and getdp build without problem against the
new petsc. dolfinx, dolfin too.

deal.ii builds but fails tests with a reference to undefined
__gmpn_com symbols. This indicates instructions to link to libgmp
didn't get through.  I think this is unrelated to the petsc upgrade,
and I suspect it might be an artifact of my local installation. i.e. I
suspect the build on buildds will be fine. If necessary we can update
deal.ii to take more care linking GMP.


Ben file:

title = "petsc";
is_affected = .depends ~ "libpetsc*3.19" | .depends ~ "libpetsc*3.20";
is_good = .depends ~ "libpetsc*3.20";
is_bad = .depends ~ "libpetsc*3.19";



Bug#1064749: pymatgen: FTBFS: make[1]: *** [debian/rules:104: override_dh_auto_test] Error 1

2024-03-16 Thread Drew Parsons
Source: pymatgen
Followup-For: Bug #1064749
Control: tags -1 ftbfs

I can't reproduce this error.

See also

https://buildd.debian.org/status/fetch.php?pkg=pymatgen=amd64=2024.1.27%2Bdfsg1-7=1708967242=0
https://tests.reproducible-builds.org/debian/rbuild/unstable/amd64/pymatgen_2024.1.27+dfsg1-7.rbuild.log.gz

https://ci.debian.net/data/autopkgtest/unstable/amd64/p/pymatgen/43357166/log.gz
https://ci.debian.net/data/autopkgtest/testing/amd64/p/pymatgen/43355416/log.gz


test_quasiharmonic_debye_approx is passing on all systems.

I think we can close this bug.



Bug#1066973: RM: pymatgen [mips64el] -- ROM; FTBFS on mips64el

2024-03-16 Thread Drew Parsons
Package: ftp.debian.org
Severity: normal
Tags: ftbfs
X-Debbugs-Cc: pymat...@packages.debian.org
Control: affects -1 + src:pymatgen
User: ftp.debian@packages.debian.org
Usertags: remove

A FTBFS problem with rust-python-pkginfo on mips64el (Bug#1066972) is
resulting in a long chain of packages failing to build on mips64el and
blocking migration to testing.

The problem needs to be fixed at the level of rust-python-pkginfo
(hence I filed Bug#1066972).

But in the meantime it would be helpful to simply drop pymatgen
from unstable (and testing) on mips64el so that updates, including the
fix for security bug CVE-2024-23346, can migrate to testing.



Bug#1066972: rust-python-pkginfo: FTBFS on mips64el: missing librust-rfc2047-decoder-0.2+default-dev

2024-03-16 Thread Drew Parsons
Source: rust-python-pkginfo
Version: 0.5.5-1
Severity: serious
Tags: ftbfs
Justification: FTBFS

rust-python-pkginfo is failing to build on mips64el due to a missing
librust-rfc2047-decoder-0.2+default-dev

Indeed, there is no librust-rfc2047-decoder-0.2+default-dev package.
Should the Build dependency be removed?

The bug prevents python-maturin from building on mips64el, which in
turn prevents a long chain of packages from building and migrating to
testing.


-- System Information:
Debian Release: trixie/sid
  APT prefers unstable-debug
  APT policy: (500, 'unstable-debug'), (500, 'unstable'), (1, 'experimental')
Architecture: amd64 (x86_64)
Foreign Architectures: i386

Kernel: Linux 6.7.9-amd64 (SMP w/8 CPU threads; PREEMPT)
Locale: LANG=en_AU.UTF-8, LC_CTYPE=en_AU.UTF-8 (charmap=UTF-8), 
LANGUAGE=en_AU:en
Shell: /bin/sh linked to /usr/bin/dash
Init: systemd (via /run/systemd/system)
LSM: AppArmor: enabled



Bug#1066890: rdma-core: don't build docs on minor arches (-DNO_MAN_PAGES=1)

2024-03-14 Thread Drew Parsons
Source: rdma-core
Version: 50.0-2
Severity: normal

rdma-core does not build on minor architectures, since pandoc is not
available.

pandoc is only needed for documentation (man pages).  It might be
possible to build docs only for arch-independent builds, using
Build-Depends-Indep: pandoc.

In any case the rdma-core docs (man pages actually) are controlled by
NO_MAN_PAGES in CMakeLists.txt. What would be needed is to configure
cmake with -DNO_MAN_PAGES=1 on the minor architectures (if not all
binary-arch builds) to avoid or reduce the footprint of the pandoc
Build-Depends.


-- System Information:
Debian Release: trixie/sid
  APT prefers unstable-debug
  APT policy: (500, 'unstable-debug'), (500, 'unstable'), (1, 'experimental')
Architecture: amd64 (x86_64)
Foreign Architectures: i386

Kernel: Linux 6.7.9-amd64 (SMP w/8 CPU threads; PREEMPT)
Locale: LANG=en_AU.UTF-8, LC_CTYPE=en_AU.UTF-8 (charmap=UTF-8), 
LANGUAGE=en_AU:en
Shell: /bin/sh linked to /usr/bin/dash
Init: systemd (via /run/systemd/system)
LSM: AppArmor: enabled



Bug#1065323: petsc: bad Provides in libpetsc64-real3.19t64, libpetsc64-complex3.19t64 and libpetsc-real3.19t64

2024-03-04 Thread Drew Parsons
Source: petsc
Followup-For: Bug #1065323

petsc has a complex set of symlink farms since it needs to enable
multiple alternative build profiles.

I'll implement the patch in a way that doesn't let t64 get in the way
of updating subsequently (to 3.20 in the near future).

Drew



Bug#1064810: transition: mpi-defaults

2024-02-26 Thread Drew Parsons

On 2024-02-26 07:40, Alastair McKinstry wrote:

Package: release.debian.org
Severity: normal
User: release.debian@packages.debian.org
Usertags: transition
X-Debbugs-Cc: mpi-defau...@packages.debian.org, 
debian-scie...@lists.debian.org

Control: affects -1 + src:mpi-defaults

OpenMPI 5.0 drops 32-bit support, so we need to move those archs to 
MPICH.


notes = "https://lists.debian.org/debian-release/2023/11/msg00379.html";



Be mindful that Ubuntu is about to freeze for their noble LTS release.
We're (or I'm) still updating some Debian packages with the hope to get 
the new versions (and new packages) into noble. Since it's an LTS 
release, our packages will still be supporting their users in, say, 3 or 
4 years time.


Would it be reasonable to pause the 32-bit mpich transition until after 
they've frozen noble?
Or alternatively, can this mpich transition be completed in time to make 
it into their freeze (only days left).


Drew



Bug#894462: paraview: edges are blotted [regression]

2024-02-24 Thread Drew Parsons

On 2024-02-24 12:40, Francesco Poli wrote:

On Thu, 04 Jan 2024 12:18:21 +0100 Drew Parsons 


Can you confirm paraview 5.11 is meeting your image quality
expectations?


Only if I disable FXAA (which, by the way, is enabled by default, but
can luckily be disabled in the settings!).

...

Could you please forward it upstream, instead?
Thanks for your time and dedication!



I don't understand what bug you want forwarded upstream.

FXAA anti-aliasing is enabled by default, but if you don't like it you can 
switch it off.
Upstream have provided an option in Edit/Settings/RenderView so that you 
can do that.

What else do you want them to do?

Do you mean you want them to remove antialiasing entirely?  That's not 
acceptable.  There are plenty of bug reports demanding antialiasing.  Do 
you want them to deactivate antialiasing by default?  I can't see that 
working well, for the same reason. Do you want them to offer alternative 
antialiasing algorithms?  Which ones?




Bug#1064367: gnome-core: demote gnome-software to Recommends

2024-02-20 Thread Drew Parsons
Package: gnome-core
Version: 1:44+1
Severity: normal

gnome-core is generally useful for maintaining a Gnome desktop
environment. gnome-software is not.

Some people find gnome-software useful, but it is certainly not core
for a Gnome environment when apt is used for package installation.

On the contrary, gnome-software introduces other problems, including
an unwelcome packagekitd dependency. The two together are currently
spamming syslog (Bug#1064364).

All the problems associated with gnome-software could be alleviated
simply by making gnome-core Recommend: gnome-software, instead of
Depends:.



Bug#1064364: gnome-software: causes packagekit to spam syslog

2024-02-20 Thread Drew Parsons
Package: gnome-software
Version: 46~beta-1
Severity: important

gnome-software is causing packagekit to spam the syslog (and resources
generally).  Even if I stop packagekit with
  sudo systemctl stop packagekit
gnome-software causes it to immediately restart.

The only workaround is to mask it with
  sudo systemctl mask packagekit
which sends packagekit startup to /dev/null.  That seems like an
excessive solution. gnome-software should not be triggering it every
second in the first place.

The problem (when not masked) causes packagekit to run every 1-5
seconds. So /var/log/syslog looks like:

2024-02-20T21:32:32.925448+01:00 sandy PackageKit: get-updates transaction 
/218502_dbddaeaa from uid 1000 finished with success after 1307ms
2024-02-20T21:32:37.603603+01:00 sandy PackageKit: get-updates transaction 
/218503_dcaeccca from uid 1000 finished with success after 1358ms
2024-02-20T21:32:39.000560+01:00 sandy PackageKit: get-updates transaction 
/218504_aebabccd from uid 1000 finished with success after 1390ms
2024-02-20T21:32:39.685662+01:00 sandy PackageKit: get-details transaction 
/218505_caabdbae from uid 1000 finished with success after 637ms
2024-02-20T21:32:41.050140+01:00 sandy PackageKit: get-updates transaction 
/218506_dbabbdcb from uid 1000 finished with success after 1357ms
2024-02-20T21:32:45.600123+01:00 sandy PackageKit: get-updates transaction 
/218507_dadbdbbd from uid 1000 finished with success after 1350ms
2024-02-20T21:32:46.988202+01:00 sandy PackageKit: get-updates transaction 
/218508_babbbeab from uid 1000 finished with success after 1381ms
2024-02-20T21:32:47.689595+01:00 sandy PackageKit: get-details transaction 
/218509_dacbeaeb from uid 1000 finished with success after 657ms
2024-02-20T21:32:49.071197+01:00 sandy PackageKit: get-updates transaction 
/218510_eeebadbc from uid 1000 finished with success after 1375ms
2024-02-20T21:32:53.605185+01:00 sandy PackageKit: get-updates transaction 
/218511_accacdcd from uid 1000 finished with success after 1365ms
2024-02-20T21:32:54.978413+01:00 sandy PackageKit: get-updates transaction 
/218512_cbeeeaea from uid 1000 finished with success after 1366ms
2024-02-20T21:32:55.712877+01:00 sandy PackageKit: get-details transaction 
/218513_cdeabbad from uid 1000 finished with success after 668ms
2024-02-20T21:32:57.082777+01:00 sandy PackageKit: get-updates transaction 
/218514_cccabecc from uid 1000 finished with success after 1363ms
2024-02-20T21:33:01.634916+01:00 sandy PackageKit: get-updates transaction 
/218515_bded from uid 1000 finished with success after 1384ms
2024-02-20T21:33:03.024803+01:00 sandy PackageKit: get-updates transaction 
/218516_aecbbddb from uid 1000 finished with success after 1383ms
2024-02-20T21:33:03.731606+01:00 sandy PackageKit: get-details transaction 
/218517_cbcbdcbc from uid 1000 finished with success after 646ms
2024-02-20T21:33:05.140069+01:00 sandy PackageKit: get-updates transaction 
/218518_ecaeedaa from uid 1000 finished with success after 1401ms
2024-02-20T21:33:09.583931+01:00 sandy PackageKit: get-updates transaction 
/218519_bddabbce from uid 1000 finished with success after 1342ms
2024-02-20T21:33:10.956327+01:00 sandy PackageKit: get-updates transaction 
/218520_deeebcca from uid 1000 finished with success after 1365ms
2024-02-20T21:33:11.628647+01:00 sandy PackageKit: get-details transaction 
/218521_baeeaacc from uid 1000 finished with success after 623ms
2024-02-20T21:33:12.990549+01:00 sandy PackageKit: get-updates transaction 
/218522_ebaebdbc from uid 1000 finished with success after 1354ms
2024-02-20T21:33:17.639282+01:00 sandy PackageKit: get-updates transaction 
/218523_ccbcecda from uid 1000 finished with success after 1395ms
2024-02-20T21:33:19.004690+01:00 sandy PackageKit: get-updates transaction 
/218524_aeddacad from uid 1000 finished with success after 1357ms
2024-02-20T21:33:19.740742+01:00 sandy PackageKit: get-details transaction 
/218525_dcdcbbeb from uid 1000 finished with success after 684ms
2024-02-20T21:33:21.09+01:00 sandy PackageKit: get-updates transaction 
/218526_eabcdaad from uid 1000 finished with success after 1296ms
2024-02-20T21:33:25.591749+01:00 sandy PackageKit: get-updates transaction 
/218527_decabccd from uid 1000 finished with success after 1377ms
2024-02-20T21:33:26.766187+01:00 sandy PackageKit: get-updates transaction 
/218528_cacbbeed from uid 1000 finished with success after 1168ms
2024-02-20T21:33:27.405423+01:00 sandy PackageKit: get-details transaction 
/218529_aaaeadbd from uid 1000 finished with success after 601ms
2024-02-20T21:33:28.655457+01:00 sandy PackageKit: get-updates transaction 
/218530_abdceccd from uid 1000 finished with success after 1245ms
2024-02-20T21:33:33.663481+01:00 sandy PackageKit: get-updates transaction 
/218531_dbabddab from uid 1000 finished with success after 1396ms
2024-02-20T21:33:35.060284+01:00 sandy PackageKit: get-updates transaction 
/218532_dadbccaa from uid 1000 finished with success after 1390ms

Bug#1064280: scikit-learn: armhf tests failing: not giving expected divide-by-zero warning

2024-02-19 Thread Drew Parsons
Source: scikit-learn
Version: 1.4.1.post1+dfsg-1
Severity: normal

sklearn 1.4 is passing most tests but two remain "failing" on armhf.

test_tfidf_no_smoothing and test_qda_regularization are "expected to
fail" by emitting a divide-by-zero warning, but they emit no such
exception.

I guess it's a particularity of the way armhf handles floating point
calculations.  I'd suggest just skipping these two tests on armhf,
unless upstream wants to inspect more deeply to fix it.
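
The suggested workaround might look something like the sketch below
(hypothetical: scikit-learn's suite actually uses pytest, where
pytest.mark.skipif plays the same role as the stdlib decorator shown
here; the test names are the ones from this report, and their bodies
are placeholders):

```python
import platform
import unittest

# Skip only on armhf: machine strings like "armv7l" identify 32-bit ARM.
ON_ARMHF = platform.machine().startswith("armv7")

class TestArmhfSkips(unittest.TestCase):
    @unittest.skipIf(ON_ARMHF, "divide-by-zero warning not raised on armhf")
    def test_tfidf_no_smoothing(self):
        pass  # placeholder for the real test body

    @unittest.skipIf(ON_ARMHF, "divide-by-zero warning not raised on armhf")
    def test_qda_regularization(self):
        pass  # placeholder for the real test body
```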

armhf was already failing tests so this will not prevent migration to
testing (i.e. no need for severity: serious).



Bug#1064224: python-hmmlearn: fails variational gaussian tests with sklearn 1.4

2024-02-18 Thread Drew Parsons
Source: python-hmmlearn
Version: 0.3.0-3
Severity: serious
Justification: debci

python-hmmlearn is failing variational_gaussian tests
(test_fit_mcgrory_titterington1d) with sklearn 1.4.

This comment upstream is relevant:
https://github.com/hmmlearn/hmmlearn/issues/539#issuecomment-1871436258

It's likely fixed in upstream PR#531
https://github.com/hmmlearn/hmmlearn/pull/531

If not, then I'd suggest skipping test_fit_mcgrory_titterington1d
until there's a better fix upstream.

PR#545 might also be generally helpful.



Bug#1064223: imbalanced-learn: fails tests with sklearn 1.4: needs new versions

2024-02-18 Thread Drew Parsons
Source: imbalanced-learn
Version: 0.10.0-2
Severity: serious
Justification: debci

imbalanced-learn 0.10 fails tests with sklearn 1.4.

The problem is fixed upstream with v0.12.



Bug#896017: "/usr/bin/ld: cannot find -lstdc++" when building with clang

2024-02-18 Thread Drew Parsons
Package: clang
Version: 1:16.0-57
Followup-For: Bug #896017

This bug is live again.
Tests of xtensor-blas report:

-- Check for working CXX compiler: /usr/bin/clang++
-- Check for working CXX compiler: /usr/bin/clang++ - broken
CMake Error at /usr/share/cmake-3.28/Modules/CMakeTestCXXCompiler.cmake:60 
(message):
  The C++ compiler

"/usr/bin/clang++"

  is not able to compile a simple test program.

  It fails with the following output:

Change Dir: 
'/tmp/autopkgtest.BxR3Rk/autopkgtest_tmp/build/CMakeFiles/CMakeScratch/TryCompile-lvLS21'

Run Build Command(s): /usr/bin/cmake -E env VERBOSE=1 /usr/bin/gmake -f 
Makefile cmTC_b5980/fast
/usr/bin/gmake  -f CMakeFiles/cmTC_b5980.dir/build.make 
CMakeFiles/cmTC_b5980.dir/build
gmake[1]: Entering directory 
'/tmp/autopkgtest.BxR3Rk/autopkgtest_tmp/build/CMakeFiles/CMakeScratch/TryCompile-lvLS21'
Building CXX object CMakeFiles/cmTC_b5980.dir/testCXXCompiler.cxx.o
/usr/bin/clang++ -MD -MT CMakeFiles/cmTC_b5980.dir/testCXXCompiler.cxx.o 
-MF CMakeFiles/cmTC_b5980.dir/testCXXCompiler.cxx.o.d -o 
CMakeFiles/cmTC_b5980.dir/testCXXCompiler.cxx.o -c 
/tmp/autopkgtest.BxR3Rk/autopkgtest_tmp/build/CMakeFiles/CMakeScratch/TryCompile-lvLS21/testCXXCompiler.cxx
Linking CXX executable cmTC_b5980
/usr/bin/cmake -E cmake_link_script CMakeFiles/cmTC_b5980.dir/link.txt 
--verbose=1
/usr/bin/clang++ CMakeFiles/cmTC_b5980.dir/testCXXCompiler.cxx.o -o 
cmTC_b5980 
/usr/bin/ld: cannot find -lstdc++: No such file or directory
clang: error: linker command failed with exit code 1 (use -v to see 
invocation)
gmake[1]: *** [CMakeFiles/cmTC_b5980.dir/build.make:100: cmTC_b5980] Error 1
gmake[1]: Leaving directory 
'/tmp/autopkgtest.BxR3Rk/autopkgtest_tmp/build/CMakeFiles/CMakeScratch/TryCompile-lvLS21'
gmake: *** [Makefile:127: cmTC_b5980/fast] Error 2



The test passes if libstdc++-14-dev is installed. Does it mean clang
has been misbuilt against stdc++-14?  Or should the stdc++
dependencies of the clang package be updated?





-- System Information:
Debian Release: trixie/sid
  APT prefers unstable-debug
  APT policy: (500, 'unstable-debug'), (500, 'unstable'), (1, 'experimental')
Architecture: amd64 (x86_64)
Foreign Architectures: i386

Kernel: Linux 6.6.15-amd64 (SMP w/8 CPU threads; PREEMPT)
Kernel taint flags: TAINT_PROPRIETARY_MODULE, TAINT_OOT_MODULE
Locale: LANG=en_AU.UTF-8, LC_CTYPE=en_AU.UTF-8 (charmap=UTF-8), 
LANGUAGE=en_AU:en
Shell: /bin/sh linked to /usr/bin/dash
Init: systemd (via /run/systemd/system)
LSM: AppArmor: enabled

Versions of packages clang depends on:
ii  clang-16  1:16.0.6-19

clang recommends no packages.

clang suggests no packages.

-- no debconf information



Bug#1064038: masakari: fails TestObjectVersions.test_versions

2024-02-16 Thread Drew Parsons
Source: masakari
Version: 16.0.0-2
Severity: serious
Justification: debci
Control: affects -1 src:sphinx src:python-skbio src:scipy

masakari has started failing tests:

229s FAIL: 
masakari.tests.unit.objects.test_objects.TestObjectVersions.test_versions
229s masakari.tests.unit.objects.test_objects.TestObjectVersions.test_versions
229s --
229s testtools.testresult.real._StringException: Traceback (most recent call 
last):
229s   File 
"/tmp/autopkgtest-lxc.qzstlq9s/downtmp/build.m9a/src/masakari/tests/unit/objects/test_objects.py",
 line 721, in test_versions
229s self.assertEqual(expected, actual,
229s   File "/usr/lib/python3/dist-packages/testtools/testcase.py", line 394, 
in assertEqual
229s self.assertThat(observed, matcher, message)
229s   File "/usr/lib/python3/dist-packages/testtools/testcase.py", line 481, 
in assertThat
229s raise mismatch_error
229s testtools.matchers._impl.MismatchError: !=:
229s reference = {'FailoverSegmentList': '1.0-dfc5c6f5704d24dcaa37b0bbb03cbe60',
229s  'HostList': '1.0-25ebe1b17fbd9f114fae8b6a10d198c0',
229s  'NotificationList': '1.0-25ebe1b17fbd9f114fae8b6a10d198c0',
229s  'VMoveList': '1.0-63fff36dee683c7a1555798cb233ad3f'}
229s actual= {'FailoverSegmentList': '1.0-d4308727e4695fb16ecb451c81ab46e8',
229s  'HostList': '1.0-347911fa6ac5ae880a64e7bb4d89a71f',
229s  'NotificationList': '1.0-347911fa6ac5ae880a64e7bb4d89a71f',
229s  'VMoveList': '1.0-25a6ab4249e4a10cb33929542ff3c745'}
229s : Some objects have changed; please make sure the versions have been 
bumped, and then update their hashes here.


The test failure is preventing sphinx migration to testing,
which in turn is blocking other packages from migrating
(python-skbio, which blocks scipy)



Bug#1029701: scikit-learn: tests fail with scipy 1.10

2024-02-15 Thread Drew Parsons
Source: scikit-learn
Version: 1.2.1+dfsg-1
Followup-For: Bug #1029701
Control: severity 1029701 serious

scipy 1.11 is now uploaded to unstable,
so bumping this bug severity to serious.



Bug#1063527: einsteinpy: test_plotting fails to converge with scipy 1.11

2024-02-15 Thread Drew Parsons

Control: severity 1063527 serious

scipy 1.11 is now uploaded to unstable, so bumping this bug severity to 
serious




Bug#1063881: nvidia-graphics-drivers: provide dependency package to catch all packages of given version

2024-02-15 Thread Drew Parsons

On 2024-02-15 14:20, Andreas Beckmann wrote:

On 14/02/2024 00.37, Drew Parsons wrote:

It would be much easier to switch between versions in unstable and
experimental, or upgrade from experimental, if there were a dummy
dependency package that depends on all the manifold nvidia component
packages for the given version.  It could be called nvidia-driver-all,
for instance.  Then only the one package needs to be marked for
upgrade (or downgrade) and will bring in all the others.

Can it be done?


Yes. I'd probably call it nvidia-driver-full (as in texlive-full)
since -all could be mistaken as 'installs all (supported) driver
series'.


That sounds sensible.


And you would want hard Depends and no Recommends ?


I think it would need to be a hard Depends.  Otherwise a Recommends 
would only activate once the first time the dependency package is 
installed. Since it's not mandatory, it wouldn't succeed in maintaining 
consistent versions when upgrading or downgrading.  A Recommends (=) 
together with a Conflicts would not work since the versioned 
dependencies don't have a != operator to use with Conflicts.



Is there anything that should be excluded?


Only question I can think of for exclusion is whether cuda should be 
included. For sure not everyone who would want the driver upgrade would 
necessarily want cuda as well, in the sense that they simply aren't 
using cuda.   So one option is make two dependency packages, 
nvidia-driver-full for the drivers without cuda, and nvidia-cuda-full 
(or just cuda-full) for the cuda components.  I guess nvidia-opencl-icd 
(nvidia-opencl-common) might belong in nvidia-driver-full since it's 
kind of a "conflict of interest" to put it with cuda.


Two dependency packages like this would meet requirements fine I think.  
But if it's too much trouble to split them that way and you'd prefer one 
dependency package, then I'd suggest including the cuda packages in it.



Are there any binary packages from different source packages that
should be included as well? Mainly thinking about bits that are
included in the .run file but since source is available, we build it
from source instead. nvidia-settings, nvidia-xconfig,
nvidia-persistenced?


I don't think the dependency package would need to set external 
dependencies.  The actual binary packages would bring these in as needed 
in their own Dependencies.  The dependency package would just need to 
make sure all the nvidia package versions remain in step.


Drew



Bug#1063881: nvidia-graphics-drivers: provide dependency package to catch all packages of given version

2024-02-13 Thread Drew Parsons
Source: nvidia-graphics-drivers
Version: 525.147.05-6
Severity: normal

From time to time the version of nvidia-driver in experimental is far
ahead of the current version in unstable.  It's often desirable to
install it to see if the new version fixes particular problems.

The problem is that there are many different packages generated for
nvidia-graphics-drivers.  Ideally the same version would want to be
installed for all of them, including the cuda or opencl components.
This means to upgrade to the new version in experimental, one has to
individually select every single nvidia component.  There's more than
20 of them, it's a bit of effort.

Conversely, if the experimental version becomes stale, it does not get
automatically updated.  One might need to step back to the nvidia
version in unstable if the experimental version no longer confirms to
package standards, or to get automatic updating going forward.  Or
there might be a new version in experimental, which is not
automatically updated.  Either way, one again has to select every
single component package and mark it explicitly for downgrade or
upgrade.

It would be much easier to switch between versions in unstable and
experimental, or upgrade from experimental, if there were a dummy
dependency package that depends on all the manifold nvidia component
packages for the given version.  It could be called nvidia-driver-all,
for instance.  Then only the one package needs to be marked for
upgrade (or downgrade) and will bring in all the others.

Can it be done?



Bug#1063856: hdf5: new upstream release

2024-02-13 Thread Drew Parsons
Source: hdf5
Followup-For: Bug #1063856

For further context, HDF upstream no longer supports hdf5 1.10, nor
hdf5 1.12 (i.e. no more releases will be made in these series)

see
https://github.com/HDFGroup/hdf5#release-schedule
https://github.com/h5py/h5py/issues/2312
https://forum.hdfgroup.org/t/release-of-hdf5-1-12-3-library-and-tools-newsletter-200/11924

hdf5 1.14 supports the REST VOL API, which may improve cloud computing
performance.
see https://github.com/HDFGroup/vol-rest
https://github.com/h5py/h5py/issues/2316



Bug#1063856: hdf5: new upstream release

2024-02-13 Thread Drew Parsons
Source: hdf5
Version: 1.10.10+repack-3
Severity: normal
X-Debbugs-Cc: debian-scie...@lists.debian.org

What is the situation with our hdf5 package version?

We're currently using hdf5 1.10.10,
but 1.12.2 has been available in experimental for some time,
and upstream has released 1.14.3.

Should we be upgrading now to hdf5 1.14 (or 1.12)?

There's no current urgency, but I'm worried some bitrot might set in
as upstream developers focus on using the more recent HDF5 releases.

Drew



Bug#1063752: custodian: Inappriate maintainer address

2024-02-13 Thread Drew Parsons
Source: custodian
Followup-For: Bug #1063752
X-Debbugs-Cc: Debichem Team 
Control: reassign 1063752 lists.debian.org
Control: affects 1063752 src:custodian

Scott Kitterman reported that lists.alioth.debian.org is bouncing
emails from debian official addresses (ftpmas...@ftp-master.debian.org
in this case, processing packages for the Debichem team with Maintainer
address debichem-de...@lists.alioth.debian.org)

Scott filed the bug against src:custodian, but the bug must be in the
mailing list daemon, so I'm reassigning the bug to lists.debian.org



Bug#1063752: custodian: Inappriate maintainer address

2024-02-12 Thread Drew Parsons
Source: custodian
Followup-For: Bug #1063752

I am confused by this bug report.

The debichem Maintainer address used for custodian is the same as that
used for any other debichem package.  No problems were reported for
the other packages.



Bug#1063636: python-pynndescent: test_distances fails with scipy 1.11: Unknown Distance Metric: kulsinski

2024-02-10 Thread Drew Parsons
Source: python-pynndescent
Version: 0.5.8-2
Severity: normal
Control: block 1061605 by -1

python-pynndescent is failing test_distances with scipy 1.11 from experimental

103s else:
103s >   raise ValueError('Unknown Distance Metric: %s' % mstr)
103s E   ValueError: Unknown Distance Metric: kulsinski
103s 
103s /usr/lib/python3/dist-packages/scipy/spatial/distance.py:2230: ValueError
103s === warnings summary 
===
103s pynndescent/tests/test_distances.py: 13 warnings
103s   /usr/lib/python3/dist-packages/numba/np/ufunc/array_exprs.py:301: 
DeprecationWarning: ast.Num is deprecated and will be removed in Python 3.14; 
use ast.Constant instead
103s return ast.Num(expr.value), {}
103s 
103s -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
103s === short test summary info 

103s FAILED 
../build.PmL/src/pynndescent/tests/test_distances.py::test_binary_check[kulsinski]
103s FAILED 
../build.PmL/src/pynndescent/tests/test_distances.py::test_sparse_binary_check[kulsinski]
103s  2 failed, 33 passed, 14 skipped, 13 warnings in 21.58s 



scipy 1.11 is currently in experimental but I'd like to upload soon to
unstable to resolve Bug#1061605, which would make this bug serious.

Looks like it's been fixed in the latest release of pynndescent.
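
For reference, the failure comes from SciPy removing the metric outright
in 1.11 (it had been deprecated for several releases).  A hedged sketch
of how a test suite can probe a metric's availability instead of
hard-coding it (illustrative data, not pynndescent's actual code):

```python
import numpy as np
from scipy.spatial.distance import pdist

X = np.array([[True, False, True],
              [False, False, True],
              [True,  True, False]])

try:
    # "kulsinski" was removed in SciPy 1.11; older SciPy still accepts it.
    d = pdist(X, metric="kulsinski")
except ValueError:
    d = None  # metric unavailable: skip the parametrized case

print("available" if d is not None else "removed")
```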



Bug#1063584: python-skbio: tests fail with scipy 1.11

2024-02-09 Thread Drew Parsons
Evidently fixed upstream with 
https://github.com/scikit-bio/scikit-bio/pull/1887


see also https://github.com/scikit-bio/scikit-bio/pull/1930
(for python 3.12)



Bug#1029701: scikit-learn: tests fail with scipy 1.10

2024-02-09 Thread Drew Parsons
Source: scikit-learn
Version: 1.2.1+dfsg-1
Followup-For: Bug #1029701

scikit-learn continues to fail with scipy 1.11 from experimental

648s FAILED 
../../../../usr/lib/python3/dist-packages/sklearn/linear_model/tests/test_quantile.py::test_incompatible_solver_for_sparse_input[interior-point]
648s FAILED 
../../../../usr/lib/python3/dist-packages/sklearn/linear_model/tests/test_quantile.py::test_linprog_failure
648s FAILED 
../../../../usr/lib/python3/dist-packages/sklearn/linear_model/tests/test_quantile.py::test_warning_new_default
648s FAILED 
../../../../usr/lib/python3/dist-packages/sklearn/metrics/tests/test_dist_metrics.py::test_cdist_bool_metric[X_bool0-Y_bool0-kulsinski]
648s FAILED 
../../../../usr/lib/python3/dist-packages/sklearn/metrics/tests/test_dist_metrics.py::test_cdist_bool_metric[X_bool1-Y_bool1-kulsinski]
648s FAILED 
../../../../usr/lib/python3/dist-packages/sklearn/metrics/tests/test_dist_metrics.py::test_pdist_bool_metrics[X_bool0-kulsinski]
648s FAILED 
../../../../usr/lib/python3/dist-packages/sklearn/metrics/tests/test_dist_metrics.py::test_pdist_bool_metrics[X_bool1-kulsinski]
648s FAILED 
../../../../usr/lib/python3/dist-packages/sklearn/metrics/tests/test_pairwise.py::test_pairwise_boolean_distance[kulsinski]
648s FAILED 
../../../../usr/lib/python3/dist-packages/sklearn/neighbors/tests/test_neighbors.py::test_kneighbors_brute_backend[float64-kulsinski]
648s FAILED 
../../../../usr/lib/python3/dist-packages/sklearn/neighbors/tests/test_neighbors.py::test_radius_neighbors_brute_backend[kulsinski]
648s FAILED 
../../../../usr/lib/python3/dist-packages/sklearn/preprocessing/tests/test_data.py::test_power_transformer_yeojohnson_any_input[X3]
648s FAILED 
../../../../usr/lib/python3/dist-packages/sklearn/tests/test_base.py::test_clone_sparse_matrices
648s FAILED 
../../../../usr/lib/python3/dist-packages/sklearn/tests/test_common.py::test_estimators[PowerTransformer()-check_fit2d_1sample]
648s = 13 failed, 24496 passed, 2459 skipped, 2 deselected, 116 xfailed, 43 
xpassed, 2600 warnings in 577.48s (0:09:37) =

scipy 1.11 is currently in experimental but I'd like to upload soon to
unstable to resolve Bug#1061605, which would make this bug serious.

It's likely fixed in the newer upstream releases.



Bug#1063584: python-skbio: tests fail with scipy 1.11

2024-02-09 Thread Drew Parsons
Source: python-skbio
Version: 0.5.9-3
Severity: normal
Control: block 1061605 by -1

python-skbio is failing various tests with scipy 1.11 from experimental

150s FAILED 
skbio/diversity/alpha/tests/test_base.py::BaseTests::test_fisher_alpha
150s FAILED 
skbio/diversity/tests/test_driver.py::BetaDiversityTests::test_available_metrics
150s FAILED 
skbio/diversity/tests/test_driver.py::BetaDiversityTests::test_qualitative_bug_issue_1549
150s FAILED 
skbio/stats/tests/test_composition.py::AncomTests::test_ancom_fail_multiple_groups

scipy 1.11 is currently in experimental but I'd like to upload soon to
unstable to resolve Bug#1061605, which would make this bug serious.



Bug#1063567: dh-python: documentation is unclear for setting env variables to control python version

2024-02-09 Thread Drew Parsons
Package: dh-python
Version: 6.20231223
Severity: normal

pybuild operations can be controlled to some extent with environment
variables. which is often more tidy than using override_dh_auto_... in
debian/rules.

The control I want to apply is to run the build only for the default
python version (adios2, for instance, is built via cmake, which only
detects the default python version).

It's not clear how to use pybuild's environment variables to do this.
The pybuild man page discusses PYBUILD_DISABLE=python3.9 for excluding
a particular python version.  But this is the opposite of what I need.
I want something like

PYTHON3_DEFAULT = $(shell py3versions -d)
export PYBUILD_ENABLE=$(PYTHON3_DEFAULT)

but there's no such option.

The man page also mentions
PYBUILD_OPTION_VERSIONED_INTERPRETER (f.e. PYBUILD_CLEAN_ARGS_python3.2)
but that's only for setting arguments for a specific python version,
not for only using a specific python version. 

How should debian/rules set up the pybuild environment to only build
for the default python version? It's not clear from the man page how to do
this using pybuild's environment variables.



-- System Information:
Debian Release: trixie/sid
  APT prefers unstable-debug
  APT policy: (500, 'unstable-debug'), (500, 'unstable'), (1, 'experimental')
Architecture: amd64 (x86_64)
Foreign Architectures: i386

Kernel: Linux 6.6.13-amd64 (SMP w/8 CPU threads; PREEMPT)
Kernel taint flags: TAINT_PROPRIETARY_MODULE, TAINT_OOT_MODULE
Locale: LANG=en_AU.UTF-8, LC_CTYPE=en_AU.UTF-8 (charmap=UTF-8), 
LANGUAGE=en_AU:en
Shell: /bin/sh linked to /usr/bin/dash
Init: systemd (via /run/systemd/system)
LSM: AppArmor: enabled

Versions of packages dh-python depends on:
ii  python3 3.11.6-1
ii  python3-distutils   3.11.5-1
ii  python3-setuptools  68.1.2-2

dh-python recommends no packages.

Versions of packages dh-python suggests:
ii  dpkg-dev   1.22.4
ii  flit   3.9.0-2
ii  libdpkg-perl   1.22.4
ii  python3-build  1.0.3-2
ii  python3-installer  0.7.0+dfsg1-2
ii  python3-wheel  0.42.0-1

-- no debconf information



Bug#1063527: einsteinpy: test_plotting fails to converge with scipy 1.11

2024-02-09 Thread Drew Parsons
Source: einsteinpy
Version: 0.4.0-3
Severity: important
Control: block 1061605 by -1

einsteinpy is failing test_plotting with scipy 1.11 from experimental

143s if disp:
143s msg = ("Failed to converge after %d iterations, value is %s."
143s% (itr + 1, p))
143s >   raise RuntimeError(msg)
143s E   RuntimeError: Failed to converge after 50 iterations, value is 
nan.
143s 
143s /usr/lib/python3/dist-packages/scipy/optimize/_zeros_py.py:381: 
RuntimeError
143s __ ERROR at setup of test_plot_calls_plt_plot 
__

likewise test_plotter_has_correct_attributes
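
For context, this is the generic shape of the failure (a minimal sketch,
not einsteinpy's actual call): scipy.optimize.newton raises RuntimeError
when the iteration cannot converge within maxiter, which is what the log
above shows.

```python
from scipy.optimize import newton

# x**2 + 1 has no real root, so the (secant) iteration cannot converge
# and newton raises RuntimeError once maxiter is exhausted.
raised = False
try:
    newton(lambda x: x**2 + 1.0, x0=0.0, maxiter=50)
except RuntimeError:
    raised = True
```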
 
scipy 1.11 is currently in experimental but I'd like to upload soon to
unstable to resolve Bug#1061605, which would make this bug serious.



Bug#1063526: astroml: test_iterative_PCA fails with scipy 1.11: unexpected keyword argument 'sym_pos'

2024-02-09 Thread Drew Parsons
Source: astroml
Version: 1.0.2-2
Severity: important
Control: block 1061605 by -1

astroml is failing test_iterative_PCA with scipy 1.11 from experimental

 82s for i in range(n_samples):
 82s VWV = np.dot(VT[:n_ev], (notM[i] * VT[:n_ev]).T)
 82s >   coeffs[i] = solve(VWV, VWx[:, i], sym_pos=True, 
overwrite_a=True)
 82s E   TypeError: solve() got an unexpected keyword argument 
'sym_pos'
 82s 
 82s 
/usr/lib/python3/dist-packages/astroML/dimensionality/iterative_pca.py:127: 
TypeError
 
From the error description it's probably easy to fix; we just need to
work out what is going on with the sym_pos argument.
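
As far as I can tell, SciPy removed solve()'s sym_pos flag in 1.11 after
a deprecation period, with assume_a="pos" as the documented replacement,
so the fix is likely a one-liner.  A sketch with illustrative data (not
astroML's actual matrices):

```python
import numpy as np
from scipy.linalg import solve

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])   # symmetric positive definite
b = np.array([1.0, 2.0])

# old (removed in SciPy 1.11):  solve(A, b, sym_pos=True, overwrite_a=True)
x = solve(A, b, assume_a="pos", overwrite_a=False)
```

If astroML needs to keep supporting older SciPy, the call could be
guarded by a version check.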
 
scipy 1.11 is currently in experimental but I'd like to upload soon to
unstable to resolve Bug#1061605, which would make this bug serious.



Bug#1060971: mdtraj: FTBFS: dpkg-buildpackage: error: dpkg-source -b . subprocess returned exit status 2

2024-02-08 Thread Drew Parsons
Source: mdtraj
Followup-For: Bug #1060971
X-Debbugs-Cc: 1060971-d...@bugs.debian.org
Control: fixed 1060971 1.9.9-1

Fixed with cython3-legacy at the same time the bug was filed.



Bug#1024276: ITP: golang-github-googleapis-enterprise-certificate-proxy -- Google Proxies for Enterprise Certificates

2024-02-08 Thread Drew Parsons
Hi Maytham,  golang-github-googleapis-enterprise-certificate-proxy is 
now built on the main architectures.

https://buildd.debian.org/status/package.php?p=golang-github-googleapis-enterprise-certificate-proxy

It's still not building on the auxiliary architectures.
Can you see a way of extending or altering the patch for them as well?

Drew



Bug#1063352: ITP: ngspetsc -- a PETSc interface for NGSolve

2024-02-06 Thread Drew Parsons
Package: wnpp
Severity: wishlist
Owner: Drew Parsons 
X-Debbugs-Cc: debian-de...@lists.debian.org, debian-scie...@lists.debian.org, 
francesco.balla...@unicatt.it

* Package name: ngspetsc
  Version : git HEAD
  Upstream Contact: Umberto Zerbinati 
* URL : https://github.com/NGSolve/ngsPETSc
* License : MIT
  Programming Lang: Python
  Description : a PETSc interface for NGSolve

ngsPETSc is a PETSc interface for NGSolve. It extends the utility of
meshes generated by netgen and interfaces with finite element solvers
such as dolfinx (fenicsx) as well as NGSolve, Firedrake.

To be maintained by the Debian Science team alongside netgen.
Co-maintained with Francesco Ballarin.



Bug#1062356: adios2: flaky autopkgtest (host dependent): times out on big host

2024-02-06 Thread Drew Parsons
Source: adios2
Followup-For: Bug #1062356

The flaky test is adios2-mpi-examples. debian/tests builds and runs
them manually, on only 3 processors (mpirun -n 3), so the problem
can't be overload of the machine.

I'll just skip insituGlobalArraysReaderNxN_mpi.

For reference, upstream is making some changes to make it more
reliable to run tests against the installed library,
https://github.com/ornladios/ADIOS2/pull/3906
also
https://github.com/ornladios/ADIOS2/pull/3820
Not certain that that directly makes insituGlobalArraysReaderNxN_mpi
more reliable though.



Bug#1062356: adios2: flaky autopkgtest (host dependent): times out on big host

2024-02-06 Thread Drew Parsons
Source: adios2
Followup-For: Bug #1062356

Can't be quite as simple as just the host machine.

https://ci.debian.net/data/autopkgtest/unstable/amd64/a/adios2/41403641/log.gz
completed in 9 minutes,
while
https://ci.debian.net/data/autopkgtest/unstable/amd64/a/adios2/41496866/log.gz
failed with timeout.

But that was ci-worker13 in both cases.
Maybe it's a race condition.

Might be simplest to just skip insituGlobalArraysReaderNxN_mpi
though I can also review how many CPUs are invoked by the test.
Usually safer not to run tests on all 64 available cpus, for instance,
if there are that many on the machine.
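The "don't run on all 64 cpus" point could be handled with a small helper along these lines (a hypothetical sketch, not code from the adios2 packaging — the function name and cap are my own):

```python
import os

# Hypothetical helper: cap the number of MPI ranks a test uses so it
# does not saturate a large CI worker, while still scaling down on
# small machines.
def test_nproc(cap: int = 8) -> int:
    return min(os.cpu_count() or 1, cap)

print(test_nproc())  # never more than 8, even on a 64-core host
```

The test harness would then invoke `mpirun -n $(test_nproc)` instead of using every available core.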



Bug#1061605: scipy: tests skipped during build and autopkgtest not in sync

2024-02-05 Thread Drew Parsons
Source: scipy
Followup-For: Bug #1061605

Note that debci tests are passing on all arches (where built) for
scipy 1.11.

I'm inclined to accept this as a solution,
i.e. update the list of build tests to skip for scipy 1.11
rather than reorganise debian/tests skips for scipy 1.10.



Bug#1061386: libxtensor-dev: Fails to install for arm64 arch on amd64

2024-02-05 Thread Drew Parsons
Package: libxtensor-dev
Followup-For: Bug #1061386
Control: fixed 1061386 0.24.7-1

I'm a bit confused why you're installing libxtensor-dev:arm64 on
amd64. Wouldn't it make more sense to install libxtensor-dev:amd64?

In any case this was fixed in libxtensor-dev 0.24.7 (actually in
0.24.4-1exp1), making libxtensor-dev arch:all.

Since the package is header-only, you can probably install 0.24.7-5
from testing.  Let me know if that works successfully.

Drew



Bug#1062827: RFP: pydevtool -- CLI dev tools powered by pydoit

2024-02-05 Thread Drew Parsons

On 2024-02-05 18:44, Drew Parsons wrote:

building and testing.  The scipy team have created pybuild, which uses


That should read "The scipy team have created dev.py" of course.

(debian created pybuild)



Bug#1062827: RFP: pydevtool -- CLI dev tools powered by pydoit

2024-02-05 Thread Drew Parsons

On 2024-02-03 21:07, c.bu...@posteo.jp wrote:

I checked upstream README.md.
I have no clue what this is.

Can someone explain please?

Am 03.02.2024 18:05 schrieb dpars...@debian.org:

Package: wnpp

* Package name: pydevtool
* URL : https://github.com/pydoit/pydevtool
  Description : CLI dev tools powered by pydoit

Python dev tool. doit powered, integration with:
- click and rich for custom CLI
- linters: pycodestyle, pyflakes



I can only explain phenomenologically, based on what scipy is trying to 
do with it.


scipy uses it in a dev.py script,
https://github.com/scipy/scipy/blob/eb6d8e085087ce9854e92d6b0cdc6d70f0ff0074/dev.py#L125
dev.py is a developers' tool used to run scipy's tests,
https://scipy.github.io/devdocs/dev/contributor/devpy_test.html
See also
https://scipy.github.io/devdocs/building/index.html

pydevtool provides a cli submodule, from which scipy uses
UnifiedContext, CliGroup and Task. These are used to create and
organise a Test class, keeping track of parameters used when running
tests, such as verbosity level or install prefix.

pydevtool itself is built on top of doit, which is a framework for
managing general tasks.


The dev tasks in question here are building and testing the package (we 
want to use it for the test tasks).


dev.py also has other tasks

Probably the best way to think about it is that dev.py is scipy's
counterpart to Debian's pybuild, providing a solution to the challenge
of building a complex python package.  We've created pybuild, which
uses dhpython as a framework for managing the tasks involved in
building and testing.  The scipy team have created dev.py, which uses
pydevtool (together with click and doit) as a framework for managing
the tasks.


`grep Task dev.py` gives a list of other tasks that dev.py can handle:
Test
RefguideCheck
Build
Test
Bench
Mypy
Doc
RefguideCheck

So pydevtool is used to help manage the execution of these various 
tasks.
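To give a feel for the underlying doit layer: doit discovers functions named `task_*` in a dodo.py file and runs their declared actions. The following is a hypothetical sketch of that convention (not scipy's actual dev.py; the command string is made up):

```python
# Minimal doit-style task file (dodo.py). doit collects functions
# named task_* and executes the commands listed under 'actions'.
def task_test():
    """Run the test suite."""
    return {
        "actions": ["python -m pytest --quiet"],  # shell command
        "verbosity": 2,
    }

# doit itself performs this discovery; here we just inspect the
# task metadata directly.
meta = task_test()
print(sorted(meta))  # ['actions', 'verbosity']
```

pydevtool layers its CliGroup/Task classes on top of this mechanism to give scipy a richer command-line interface.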




Bug#1053939: pymatgen: test failure with pandas 2.1

2024-02-01 Thread Drew Parsons
Source: pymatgen
Followup-For: Bug #1053939

[apologies for the spam. testing mail server configuration now]



Bug#1053939: pymatgen: test failure with pandas 2.1

2024-02-01 Thread Drew Parsons
Source: pymatgen
Followup-For: Bug #1053939

Looks like the latest release should be fine with pandas 2.
Currently building in experimental.



Bug#1052028: pydantic

2024-01-30 Thread Drew Parsons

block 1061609 by 1052028 1052619
affects 1061609 python3-emmet-core
thanks

The latest version of python-emmet-core (used by pymatgen) requires
pydantic 2.




Bug#1061605: scipy: tests skipped during build and autopkgtest not in sync

2024-01-27 Thread Drew Parsons
Source: scipy
Followup-For: Bug #1061605

Easy enough to relax tolerances or skip tests if we needed to.
test_maxiter_worsening was giving problems on other arches.

But it's strange that the test only started failing when pythran was
deactivated. I've reactivated it in 1.10.1-8; we'll see if that
restores success.



Bug#1020561: python3-scipy: Scipy upgrade requires c++ compiler

2024-01-27 Thread Drew Parsons
Package: python3-scipy
Followup-For: Bug #1020561

Confirmed that tests still pass (amd64) if python3-pythran is forcibly
not installed.

Making an audit of where pythran is actually used (in v1.10),
at runtime that is:

scipy/interpolate
_rbfinterp_pythran.py
see also setup.py, meson.build

scipy/optimize
_group_columns.py
used by ._numdiff.group_columns

scipy/linalg
_matfuncs_sqrtm_triu.py
(not clear that this is used. meson.build refers to the cython variant
_matfuncs_sqrtm_triu.pyx)

scipy/stats
_stats_pythran.py
_hypotests.py
_stats_mstats_common.py

scipy/signal
_spectral.py


The pythran definitions are supplied as
# pythran export ...

So they are enclosed in comments.
If pythran is not present then the definition is handled as a normal
comment, i.e. ignored.
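That comment mechanism can be illustrated with a toy function (hypothetical, not one of the scipy files listed above; the export signature follows pythran's comment syntax):

```python
# Sketch of a pythran-annotated function. The export line below is a
# plain Python comment, so this file runs unchanged under CPython
# when pythran is absent; pythran only reads it at build time.

# pythran export pairwise_sum(float64[], float64[])
def pairwise_sum(a, b):
    return [x + y for x, y in zip(a, b)]

print(pairwise_sum([1.0, 2.0], [3.0, 4.0]))  # [4.0, 6.0]
```

When pythran is available at build time, it compiles the annotated function into a native extension module that shadows the pure-Python version.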

At build time python extensions are built from these definitions via
meson.build,
e.g. interpolate/_rbfinterp_pythran.cpython-39-x86_64-linux-gnu.so.
But once these are built, pythran is not needed to use them.

This does confuse me; I thought the advantage of pythran was a JIT
optimisation at runtime. In this case pythran just provides an
automated means of generating compiled extensions, much like cython,
rather than an optimisation for the runtime cpu.  Not clear then what
the advantage of optimize/_group_columns.py is over
optimize/_group_columns.pyx. Perhaps the pythran variant is better
tuned.

So what pythran is doing is essentially replacing the .py file
with a .so library.  It's an ahead-of-time compiler, not a
just-in-time compiler.

Conclusion: we want to use pythran at build time, but there's no
further reason to depend on it at runtime (not even as a Recommends).



Bug#1001105: can I help with pyvista packaging?

2024-01-27 Thread Drew Parsons

On 2024-01-27 18:28, Francesco Ballarin wrote:

OK Andreas, I'll push to master. Let me take the lead on that, and
I'll come back to you and Drew with progress and questions.

I think I have some ideas on how to get started on the basic package.


Thanks Francesco and Andreas.  Will be interesting to see the dolfinx 
demos running in their full pyvista livery.




The full package (i.e., all optional components that one can install
with "pip install pyvista[all]") will be much more complex, because it
depends on trame, which comes split in five interdependent packages

...

and who knows how many dependencies each one of those have.

...

but let's start with a less ambitious goal ;)


I agree, get the basic functionality in place first :)




I think that the error you see is because python3-vtk9 is only built
for python 3.11, but unit tests are getting run with python 3.12.


This is an annoying problem, constrained by cmake limitations.  Other
packages are also affected, like spglib, which then constrains pymatgen.
The problem is that cmake does not allow for building python modules
for multiple python versions.  The FEniCS project has been smarter
about it, keeping the C++ library and the python module builds separate
(the former using cmake, the latter using setup/pyproject).


Not much we can do about it with vtk9 in the short term. Complaints
should be pushed to Kitware, though they seem to think it should be
done in your source, which is pretty weird.

https://gitlab.kitware.com/cmake/cmake/-/issues/21797

Different source dirs makes little sense, but possibly cmake could be 
run multiple times, once for each python, in separate build dirs.




Bug#1061609: ITP: pydantic-settings -- settings management using pydantic

2024-01-27 Thread Drew Parsons
Package: wnpp
Severity: wishlist
Owner: Drew Parsons 
X-Debbugs-Cc: debian-de...@lists.debian.org, debian-pyt...@lists.debian.org

* Package name: pydantic-settings
  Version : 2.1.0
  Upstream Contact: Samuel Colvin 
* URL : https://github.com/pydantic/pydantic-settings
* License : MIT
  Programming Lang: Python
  Description : settings management using pydantic

Settings management using Pydantic, this is the new official home of
Pydantic's BaseSettings.

Pydantic Settings provides optional Pydantic features for loading a
settings or config class from environment variables or secrets files.

pydantic-settings is used by the latest versions of python-mp-api.

To be maintained by the Debian Python Team alongside pydantic.



Bug#1020561: python3-scipy: Scipy upgrade requires c++ compiler

2024-01-27 Thread Drew Parsons

On 2024-01-27 09:30, Graham Inggs wrote:

Hi

It seems (at least in scipy/1.10.1-6), that python3-pythran was a
build-dependency for all architectures [1], yet, on armhf,
python3-scipy did not have a runtime dependency on python3-pythran
[2].

The build log of scipy/1.10.1-6 on armhf [3], confirms:

Building scipy with SCIPY_USE_PYTHRAN=1

I do not recall seeing any bug reports or autopkgtest failures due to 
this.


Is it possible that scipy can be built with Pythran support, and
python3-pythran can be an optional dependency at runtime?

If this is true, then we can downgrade python3-pythran from a Depends
to a Recommends.



A good question. We can look into this and check.



Bug#1024276: ITP: golang-github-googleapis-enterprise-certificate-proxy -- Google Proxies for Enterprise Certificates

2024-01-23 Thread Drew Parsons

On 2024-01-23 14:39, Maytham Alsudany wrote:

Hi Drew,

On Tue, 2024-01-23 at 11:24 +0100, Drew Parsons wrote:

> > Hi Maytham, I can upload it.  But note how pkcs11 is failing on 32 bit
> > arches.  That needs to be sorted out.  I had been waiting for that
> > before uploading enterprise-certificate-proxy.
>
> 
https://salsa.debian.org/go-team/packages/golang-github-google-go-pkcs11/-/merge_requests/2
>
> go-pkcs11 builds successfully and passes autopkgtest, lintian, and
> piuparts on
> both amd64 and i386.

The problem is on debci. See the failing tests at
  https://ci.debian.net/packages/g/golang-github-google-go-pkcs11/

summarised also at
https://tracker.debian.org/pkg/golang-github-google-go-pkcs11


I'm aware, and the PR I've linked is a fix, please have a look.

You can look at the patch file itself at [1] (have a look at the
description to understand what the PR/patch does).


Thanks Maytham.  The patch handling it via malloc_arg makes sense.

I left a review commenting about supporting other 32 bit architectures,
not just 386 and arm. I can see how to adapt your patch to control it
at build time. Let me know if you're happy with that idea or if you
can see another way to do it.

(An alternative could be checking bits, along the lines of
"const PtrSize = 32 << uintptr(^uintptr(0)>>63)",
but I wouldn't necessarily trust that to always give the right
indication. Your idea of handling two separate definitions should
work fine.)
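For comparison, the same "detect the pointer width of the running build" idea is straightforward in Python via the stdlib (the Go snippet above is the subject of the actual patch; this Python analogue is only an illustration of the technique):

```python
import struct

# struct.calcsize("P") gives the size in bytes of a native pointer
# for this interpreter build; multiply by 8 for the width in bits.
ptr_bits = struct.calcsize("P") * 8
print(ptr_bits)  # 32 on a 32-bit build, 64 on a 64-bit build
```

As with the Go bit-trick, this reports the width of the build actually running, which is why checking the architecture at build time can be the more explicit choice.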

Drew


