Bug#1067064: transition: petsc hypre
Package: release.debian.org
Severity: normal
X-Debbugs-Cc: pe...@packages.debian.org, francesco.balla...@unicatt.it
Control: affects -1 + src:petsc
User: release.debian@packages.debian.org
Usertags: transition

The petsc patch for the 64-bit time_t transition was deeply invasive. It makes petsc (and slepc) essentially unmaintainable. I think the best way to deal with it is to pretend it never happened and move on with petsc 3.20, upgrading from petsc 3.19. We'd want to do this upgrade anyway. As part of this transition I'll also upgrade hypre from 2.28.0 to 2.29.0.

I've checked that sundials and getdp build without problems against the new petsc, as do dolfinx and dolfin. deal.ii builds but fails tests with a reference to undefined __gmpn_com symbols, which indicates that instructions to link against libgmp didn't get through. I think this is unrelated to the petsc upgrade, and I suspect it might be an artifact of my local installation, i.e. I suspect the build on the buildds will be fine. If necessary we can update deal.ii to take more care linking GMP.

Ben

file:
title = "petsc";
is_affected = .depends ~ "libpetsc*3.19" | .depends ~ "libpetsc*3.20";
is_good = .depends ~ "libpetsc*3.20";
is_bad = .depends ~ "libpetsc*3.19";
Bug#1064749: pymatgen: FTBFS: make[1]: *** [debian/rules:104: override_dh_auto_test] Error 1
Source: pymatgen
Followup-For: Bug #1064749
Control: tags -1 ftbfs

I can't reproduce this error. See also
https://buildd.debian.org/status/fetch.php?pkg=pymatgen&arch=amd64&ver=2024.1.27%2Bdfsg1-7&stamp=1708967242&raw=0
https://tests.reproducible-builds.org/debian/rbuild/unstable/amd64/pymatgen_2024.1.27+dfsg1-7.rbuild.log.gz
https://ci.debian.net/data/autopkgtest/unstable/amd64/p/pymatgen/43357166/log.gz
https://ci.debian.net/data/autopkgtest/testing/amd64/p/pymatgen/43355416/log.gz

test_quasiharmonic_debye_approx is passing on all systems. I think we can close this bug.
Bug#1066973: RM: pymatgen [mips64el] -- ROM; FTBFS on mips64el
Package: ftp.debian.org
Severity: normal
Tags: ftbfs
X-Debbugs-Cc: pymat...@packages.debian.org
Control: affects -1 + src:pymatgen
User: ftp.debian@packages.debian.org
Usertags: remove

An FTBFS problem with rust-python-pkginfo on mips64el (Bug#1066972) is resulting in a long chain of packages failing to build on mips64el and blocking migration to testing. The problem needs to be fixed at the level of rust-python-pkginfo (hence I filed Bug#1066972). But in the meantime it would be helpful to simply drop pymatgen from unstable (and testing) on mips64el, so that updates, including the fix for security bug CVE-2024-23346, can migrate to testing.
Bug#1066972: rust-python-pkginfo: FTBFS on mips64el: missing librust-rfc2047-decoder-0.2+default-dev
Source: rust-python-pkginfo
Version: 0.5.5-1
Severity: serious
Tags: ftbfs
Justification: FTBFS

rust-python-pkginfo is failing to build on mips64el due to a missing librust-rfc2047-decoder-0.2+default-dev. Indeed, there is no librust-rfc2047-decoder-0.2+default-dev package. Should the build dependency be removed?

The bug prevents python-maturin from building on mips64el, which in turn prevents a long chain of packages from building and migrating to testing.

-- System Information:
Debian Release: trixie/sid
APT prefers unstable-debug
APT policy: (500, 'unstable-debug'), (500, 'unstable'), (1, 'experimental')
Architecture: amd64 (x86_64)
Foreign Architectures: i386
Kernel: Linux 6.7.9-amd64 (SMP w/8 CPU threads; PREEMPT)
Locale: LANG=en_AU.UTF-8, LC_CTYPE=en_AU.UTF-8 (charmap=UTF-8), LANGUAGE=en_AU:en
Shell: /bin/sh linked to /usr/bin/dash
Init: systemd (via /run/systemd/system)
LSM: AppArmor: enabled
Bug#1066890: rdma-core: don't build docs on minor arches (-DNO_MAN_PAGES=1)
Source: rdma-core
Version: 50.0-2
Severity: normal

rdma-core does not build on minor architectures, since pandoc is not available there. pandoc is only needed for documentation (man pages). It might be possible to build the docs only in arch-independent builds, using Build-Depends-Indep: pandoc.

In any case the rdma-core docs (man pages, actually) are controlled by NO_MAN_PAGES in CMakeLists.txt. What is needed is to configure cmake with -DNO_MAN_PAGES=1 on the minor architectures (if not all binary-arch builds) to avoid or reduce the footprint of the pandoc Build-Depends.

-- System Information:
Debian Release: trixie/sid
APT prefers unstable-debug
APT policy: (500, 'unstable-debug'), (500, 'unstable'), (1, 'experimental')
Architecture: amd64 (x86_64)
Foreign Architectures: i386
Kernel: Linux 6.7.9-amd64 (SMP w/8 CPU threads; PREEMPT)
Locale: LANG=en_AU.UTF-8, LC_CTYPE=en_AU.UTF-8 (charmap=UTF-8), LANGUAGE=en_AU:en
Shell: /bin/sh linked to /usr/bin/dash
Init: systemd (via /run/systemd/system)
LSM: AppArmor: enabled
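A sketch of what this could look like in debian/rules (untested; the architecture whitelist here is hypothetical and would need to match where pandoc is actually available):

```make
# debian/rules fragment (sketch, untested).
# PANDOC_ARCHES is a hypothetical whitelist of architectures with pandoc;
# everywhere else we pass -DNO_MAN_PAGES=1 so cmake skips the man pages.
PANDOC_ARCHES := amd64 arm64 armel armhf i386 ppc64el s390x
DEB_HOST_ARCH ?= $(shell dpkg-architecture -qDEB_HOST_ARCH)

ifeq (,$(filter $(DEB_HOST_ARCH),$(PANDOC_ARCHES)))
  EXTRA_CMAKE_FLAGS += -DNO_MAN_PAGES=1
endif

override_dh_auto_configure:
	dh_auto_configure -- $(EXTRA_CMAKE_FLAGS)
```

The pandoc Build-Depends would then need matching architecture qualifiers (pandoc [amd64 arm64 ...]), or a move to Build-Depends-Indep as suggested above.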
Bug#1065323: petsc: bad Provides in libpetsc64-real3.19t64, libpetsc64-complex3.19t64 and libpetsc-real3.19t64
Source: petsc
Followup-For: Bug #1065323

petsc has a complex set of symlink farms, since it needs to enable multiple alternative build profiles. I'll implement the patch in a way that doesn't let t64 get in the way of subsequent updates (to 3.20 in the near future).

Drew
Bug#1064810: transition: mpi-defaults
On 2024-02-26 07:40, Alastair McKinstry wrote:
> Package: release.debian.org
> Severity: normal
> User: release.debian@packages.debian.org
> Usertags: transition
> X-Debbugs-Cc: mpi-defau...@packages.debian.org, debian-scie...@lists.debian.org
> Control: affects -1 + src:mpi-defaults
>
> OpenMPI 5.0 drops 32-bit support, so we need to move those archs to MPICH.
> notes = "https://lists.debian.org/debian-release/2023/11/msg00379.html";

Be mindful that Ubuntu is about to freeze for their noble LTS release. We're (or I'm) still updating some Debian packages in the hope of getting the new versions (and new packages) into noble. Since it's an LTS release, our packages will still be supporting their users in, say, 3 or 4 years' time. Would it be reasonable to pause the 32-bit mpich transition until after they've frozen noble? Or alternatively, can this mpich transition be completed in time to make their freeze (only days left)?

Drew
Bug#894462: paraview: edges are blotted [regression]
On 2024-02-24 12:40, Francesco Poli wrote:
> On Thu, 04 Jan 2024 12:18:21 +0100 Drew Parsons wrote:
>> Can you confirm paraview 5.11 is meeting your image quality expectations?
> Only if I disable FXAA (which, by the way, is enabled by default, but can luckily be disabled in the settings!).
> ...
> Could you please forward it upstream, instead? Thanks for your time and dedication!

I don't understand what bug you want forwarded upstream. FXAA anti-aliasing is enabled by default, but if you don't like it you can switch it off. Upstream have provided an option in Edit/Settings/RenderView so that you can do that. What else do you want them to do?

Do you mean you want them to remove antialiasing entirely? That's not acceptable: there are plenty of bug reports demanding antialiasing. Do you want them to deactivate antialiasing by default? I can't see that working well, for the same reason. Do you want them to offer alternative antialiasing algorithms? Which ones?
Bug#1064367: gnome-core: demote gnome-software to Recommends
Package: gnome-core
Version: 1:44+1
Severity: normal

gnome-core is generally useful for maintaining a Gnome desktop environment. gnome-software is not. Some people find gnome-software useful, but it is certainly not core to a Gnome environment when apt is used for package installation. On the contrary, gnome-software introduces other problems, including an unwelcome packagekitd dependency; the two together are currently spamming syslog (Bug#1064364). All the problems associated with gnome-software could be alleviated simply by making gnome-core declare Recommends: gnome-software instead of Depends:.
Bug#1064364: gnome-software: causes packagekit to spam syslog
Package: gnome-software
Version: 46~beta-1
Severity: important

gnome-software is causing packagekit to spam the syslog (and waste resources generally). Even if I stop packagekit with
  sudo systemctl stop packagekit
gnome-software causes it to immediately restart. The only workaround is to mask it with
  sudo systemctl mask packagekit
which sends packagekit startup to /dev/null. That seems like an excessive solution: gnome-software should not be triggering it every second in the first place.

The problem (when not masked) causes packagekit to run every 1-5 seconds, so /var/log/syslog looks like:

2024-02-20T21:32:32.925448+01:00 sandy PackageKit: get-updates transaction /218502_dbddaeaa from uid 1000 finished with success after 1307ms
2024-02-20T21:32:37.603603+01:00 sandy PackageKit: get-updates transaction /218503_dcaeccca from uid 1000 finished with success after 1358ms
2024-02-20T21:32:39.000560+01:00 sandy PackageKit: get-updates transaction /218504_aebabccd from uid 1000 finished with success after 1390ms
2024-02-20T21:32:39.685662+01:00 sandy PackageKit: get-details transaction /218505_caabdbae from uid 1000 finished with success after 637ms
2024-02-20T21:32:41.050140+01:00 sandy PackageKit: get-updates transaction /218506_dbabbdcb from uid 1000 finished with success after 1357ms
2024-02-20T21:32:45.600123+01:00 sandy PackageKit: get-updates transaction /218507_dadbdbbd from uid 1000 finished with success after 1350ms
2024-02-20T21:32:46.988202+01:00 sandy PackageKit: get-updates transaction /218508_babbbeab from uid 1000 finished with success after 1381ms
2024-02-20T21:32:47.689595+01:00 sandy PackageKit: get-details transaction /218509_dacbeaeb from uid 1000 finished with success after 657ms
2024-02-20T21:32:49.071197+01:00 sandy PackageKit: get-updates transaction /218510_eeebadbc from uid 1000 finished with success after 1375ms
2024-02-20T21:32:53.605185+01:00 sandy PackageKit: get-updates transaction /218511_accacdcd from uid 1000 finished with success after 1365ms
2024-02-20T21:32:54.978413+01:00 sandy PackageKit: get-updates transaction /218512_cbeeeaea from uid 1000 finished with success after 1366ms
2024-02-20T21:32:55.712877+01:00 sandy PackageKit: get-details transaction /218513_cdeabbad from uid 1000 finished with success after 668ms
2024-02-20T21:32:57.082777+01:00 sandy PackageKit: get-updates transaction /218514_cccabecc from uid 1000 finished with success after 1363ms
2024-02-20T21:33:01.634916+01:00 sandy PackageKit: get-updates transaction /218515_bded from uid 1000 finished with success after 1384ms
2024-02-20T21:33:03.024803+01:00 sandy PackageKit: get-updates transaction /218516_aecbbddb from uid 1000 finished with success after 1383ms
2024-02-20T21:33:03.731606+01:00 sandy PackageKit: get-details transaction /218517_cbcbdcbc from uid 1000 finished with success after 646ms
2024-02-20T21:33:05.140069+01:00 sandy PackageKit: get-updates transaction /218518_ecaeedaa from uid 1000 finished with success after 1401ms
2024-02-20T21:33:09.583931+01:00 sandy PackageKit: get-updates transaction /218519_bddabbce from uid 1000 finished with success after 1342ms
2024-02-20T21:33:10.956327+01:00 sandy PackageKit: get-updates transaction /218520_deeebcca from uid 1000 finished with success after 1365ms
2024-02-20T21:33:11.628647+01:00 sandy PackageKit: get-details transaction /218521_baeeaacc from uid 1000 finished with success after 623ms
2024-02-20T21:33:12.990549+01:00 sandy PackageKit: get-updates transaction /218522_ebaebdbc from uid 1000 finished with success after 1354ms
2024-02-20T21:33:17.639282+01:00 sandy PackageKit: get-updates transaction /218523_ccbcecda from uid 1000 finished with success after 1395ms
2024-02-20T21:33:19.004690+01:00 sandy PackageKit: get-updates transaction /218524_aeddacad from uid 1000 finished with success after 1357ms
2024-02-20T21:33:19.740742+01:00 sandy PackageKit: get-details transaction /218525_dcdcbbeb from uid 1000 finished with success after 684ms
2024-02-20T21:33:21.09+01:00 sandy PackageKit: get-updates transaction /218526_eabcdaad from uid 1000 finished with success after 1296ms
2024-02-20T21:33:25.591749+01:00 sandy PackageKit: get-updates transaction /218527_decabccd from uid 1000 finished with success after 1377ms
2024-02-20T21:33:26.766187+01:00 sandy PackageKit: get-updates transaction /218528_cacbbeed from uid 1000 finished with success after 1168ms
2024-02-20T21:33:27.405423+01:00 sandy PackageKit: get-details transaction /218529_aaaeadbd from uid 1000 finished with success after 601ms
2024-02-20T21:33:28.655457+01:00 sandy PackageKit: get-updates transaction /218530_abdceccd from uid 1000 finished with success after 1245ms
2024-02-20T21:33:33.663481+01:00 sandy PackageKit: get-updates transaction /218531_dbabddab from uid 1000 finished with success after 1396ms
2024-02-20T21:33:35.060284+01:00 sandy PackageKit: get-updates transaction /218532_dadbccaa from uid 1000 finished with success after 1390ms
Bug#1064280: scikit-learn: armhf tests failing: not giving expected divide-by-zero warning
Source: scikit-learn
Version: 1.4.1.post1+dfsg-1
Severity: normal

sklearn 1.4 is passing most tests, but two remain failing on armhf. test_tfidf_no_smoothing and test_qda_regularization are "expected to fail" by emitting a divide-by-zero warning, but they emit no such warning. I guess it's a particularity of the way armhf handles floating point calculations. I'd suggest just skipping these two tests on armhf, unless upstream wants to inspect more deeply to fix it. armhf was already failing tests, so this will not prevent migration to testing (i.e. no need for Severity: serious).
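If upstream prefers a patch over a debian/rules skip, a minimal sketch of the idea (hypothetical placement; the real tests live inside sklearn's test modules, and detecting armhf via platform.machine() is an assumption):

```python
import platform

import pytest

# On armhf, platform.machine() reports an ARM string such as 'armv7l';
# there the expected divide-by-zero warning never appears, so skip.
ON_ARMHF = platform.machine().startswith("arm")


@pytest.mark.skipif(
    ON_ARMHF, reason="armhf floating point emits no divide-by-zero warning"
)
def test_tfidf_no_smoothing():
    # original upstream test body unchanged; elided here
    pass
```

The same marker would go on test_qda_regularization.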
Bug#1064224: python-hmmlearn: fails variational gaussian tests with sklearn 1.4
Source: python-hmmlearn
Version: 0.3.0-3
Severity: serious
Justification: debci

python-hmmlearn is failing variational gaussian tests (test_fit_mcgrory_titterington1d) with sklearn 1.4. This upstream comment is relevant:
https://github.com/hmmlearn/hmmlearn/issues/539#issuecomment-1871436258

It's likely fixed in upstream PR#531:
https://github.com/hmmlearn/hmmlearn/pull/531

If not, then I'd suggest skipping test_fit_mcgrory_titterington1d until there's a better fix upstream. PR#545 might also be generally helpful.
Bug#1064223: imbalanced-learn: fails tests with sklearn 1.4: needs new versions
Source: imbalanced-learn
Version: 0.10.0-2
Severity: serious
Justification: debci

imbalanced-learn 0.10 fails tests with sklearn 1.4. The problem is fixed upstream with v0.12.
Bug#896017: "/usr/bin/ld: cannot find -lstdc++" when building with clang
Package: clang
Version: 1:16.0-57
Followup-For: Bug #896017

This bug is live again. Tests of xtensor-blas report:

-- Check for working CXX compiler: /usr/bin/clang++
-- Check for working CXX compiler: /usr/bin/clang++ - broken
CMake Error at /usr/share/cmake-3.28/Modules/CMakeTestCXXCompiler.cmake:60 (message):
  The C++ compiler "/usr/bin/clang++" is not able to compile a simple test program.
  It fails with the following output:
    Change Dir: '/tmp/autopkgtest.BxR3Rk/autopkgtest_tmp/build/CMakeFiles/CMakeScratch/TryCompile-lvLS21'
    Run Build Command(s): /usr/bin/cmake -E env VERBOSE=1 /usr/bin/gmake -f Makefile cmTC_b5980/fast
    /usr/bin/gmake -f CMakeFiles/cmTC_b5980.dir/build.make CMakeFiles/cmTC_b5980.dir/build
    gmake[1]: Entering directory '/tmp/autopkgtest.BxR3Rk/autopkgtest_tmp/build/CMakeFiles/CMakeScratch/TryCompile-lvLS21'
    Building CXX object CMakeFiles/cmTC_b5980.dir/testCXXCompiler.cxx.o
    /usr/bin/clang++ -MD -MT CMakeFiles/cmTC_b5980.dir/testCXXCompiler.cxx.o -MF CMakeFiles/cmTC_b5980.dir/testCXXCompiler.cxx.o.d -o CMakeFiles/cmTC_b5980.dir/testCXXCompiler.cxx.o -c /tmp/autopkgtest.BxR3Rk/autopkgtest_tmp/build/CMakeFiles/CMakeScratch/TryCompile-lvLS21/testCXXCompiler.cxx
    Linking CXX executable cmTC_b5980
    /usr/bin/cmake -E cmake_link_script CMakeFiles/cmTC_b5980.dir/link.txt --verbose=1
    /usr/bin/clang++ CMakeFiles/cmTC_b5980.dir/testCXXCompiler.cxx.o -o cmTC_b5980
    /usr/bin/ld: cannot find -lstdc++: No such file or directory
    clang: error: linker command failed with exit code 1 (use -v to see invocation)
    gmake[1]: *** [CMakeFiles/cmTC_b5980.dir/build.make:100: cmTC_b5980] Error 1
    gmake[1]: Leaving directory '/tmp/autopkgtest.BxR3Rk/autopkgtest_tmp/build/CMakeFiles/CMakeScratch/TryCompile-lvLS21'
    gmake: *** [Makefile:127: cmTC_b5980/fast] Error 2

The test passes if libstdc++-14-dev is installed. Does this mean clang has been misbuilt against stdc++-14? Or should the stdc++ dependencies of the clang package be updated?
-- System Information:
Debian Release: trixie/sid
APT prefers unstable-debug
APT policy: (500, 'unstable-debug'), (500, 'unstable'), (1, 'experimental')
Architecture: amd64 (x86_64)
Foreign Architectures: i386
Kernel: Linux 6.6.15-amd64 (SMP w/8 CPU threads; PREEMPT)
Kernel taint flags: TAINT_PROPRIETARY_MODULE, TAINT_OOT_MODULE
Locale: LANG=en_AU.UTF-8, LC_CTYPE=en_AU.UTF-8 (charmap=UTF-8), LANGUAGE=en_AU:en
Shell: /bin/sh linked to /usr/bin/dash
Init: systemd (via /run/systemd/system)
LSM: AppArmor: enabled

Versions of packages clang depends on:
ii  clang-16  1:16.0.6-19

clang recommends no packages.
clang suggests no packages.

-- no debconf information
Bug#1064038: masakari: fails TestObjectVersions.test_versions
Source: masakari
Version: 16.0.0-2
Severity: serious
Justification: debci
Control: affects -1 src:sphinx src:python-skbio src:scipy

masakari has started failing tests:

229s FAIL: masakari.tests.unit.objects.test_objects.TestObjectVersions.test_versions
229s masakari.tests.unit.objects.test_objects.TestObjectVersions.test_versions
229s --
229s testtools.testresult.real._StringException: Traceback (most recent call last):
229s   File "/tmp/autopkgtest-lxc.qzstlq9s/downtmp/build.m9a/src/masakari/tests/unit/objects/test_objects.py", line 721, in test_versions
229s     self.assertEqual(expected, actual,
229s   File "/usr/lib/python3/dist-packages/testtools/testcase.py", line 394, in assertEqual
229s     self.assertThat(observed, matcher, message)
229s   File "/usr/lib/python3/dist-packages/testtools/testcase.py", line 481, in assertThat
229s     raise mismatch_error
229s testtools.matchers._impl.MismatchError: !=:
229s reference = {'FailoverSegmentList': '1.0-dfc5c6f5704d24dcaa37b0bbb03cbe60',
229s  'HostList': '1.0-25ebe1b17fbd9f114fae8b6a10d198c0',
229s  'NotificationList': '1.0-25ebe1b17fbd9f114fae8b6a10d198c0',
229s  'VMoveList': '1.0-63fff36dee683c7a1555798cb233ad3f'}
229s actual = {'FailoverSegmentList': '1.0-d4308727e4695fb16ecb451c81ab46e8',
229s  'HostList': '1.0-347911fa6ac5ae880a64e7bb4d89a71f',
229s  'NotificationList': '1.0-347911fa6ac5ae880a64e7bb4d89a71f',
229s  'VMoveList': '1.0-25a6ab4249e4a10cb33929542ff3c745'}
229s : Some objects have changed; please make sure the versions have been bumped, and then update their hashes here.

The test failure is preventing sphinx migration to testing, which in turn is blocking other packages from migrating (python-skbio, which blocks scipy).
Bug#1029701: scikit-learn: tests fail with scipy 1.10
Source: scikit-learn
Version: 1.2.1+dfsg-1
Followup-For: Bug #1029701
Control: severity 1029701 serious

scipy 1.11 is now uploaded to unstable, so I'm bumping this bug's severity to serious.
Bug#1063527: einsteinpy: test_plotting fails to converge with scipy 1.11
Control: severity 1063527 serious

scipy 1.11 is now uploaded to unstable, so I'm bumping this bug's severity to serious.
Bug#1063881: nvidia-graphics-drivers: provide dependency package to catch all packages of given version
On 2024-02-15 14:20, Andreas Beckmann wrote:
> On 14/02/2024 00.37, Drew Parsons wrote:
>> It would be much easier to switch between versions in unstable and experimental, or upgrade from experimental, if there were a dummy dependency package that depends on all the manifold nvidia component packages for the given version. It could be called nvidia-driver-all, for instance. Then only the one package needs to be marked for upgrade (or downgrade) and will bring in all the others. Can it be done?
> Yes. I'd probably call it nvidia-driver-full (as in texlive-full) since -all could be mistaken as 'installs all (supported) driver series'.

That sounds sensible.

> And you would want hard Depends and no Recommends?

I think it would need to be a hard Depends. Otherwise a Recommends would only activate the first time the dependency package is installed. Since it's not mandatory, it wouldn't succeed in maintaining consistent versions when upgrading or downgrading. A Recommends (=) together with a Conflicts would not work, since the versioned dependencies don't have a != operator to use with Conflicts.

> Is there anything that should be excluded?

The only question I can think of for exclusion is whether cuda should be included. For sure, not everyone who wants the driver upgrade would necessarily want cuda as well, in the sense that they simply aren't using cuda. So one option is to make two dependency packages: nvidia-driver-full for the drivers without cuda, and nvidia-cuda-full (or just cuda-full) for the cuda components. I guess nvidia-opencl-icd (nvidia-opencl-common) might belong in nvidia-driver-full, since it's kind of a "conflict of interest" to put it with cuda. Two dependency packages like this would meet requirements fine, I think. But if it's too much trouble to split them that way and you'd prefer one dependency package, then I'd suggest including the cuda packages in it.

> Are there any binary packages from different source packages that should be included as well? Mainly thinking about bits that are included in the .run file but since source is available, we build it from source instead. nvidia-settings, nvidia-xconfig, nvidia-persistenced?

I don't think the dependency package would need to set external dependencies. The actual binary packages would bring these in as needed via their own Depends. The dependency package would just need to make sure all the nvidia package versions remain in step.

Drew
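To make the "versions in step" idea concrete, a hypothetical debian/control stanza (the package selection and names here are illustrative only, not the final component list discussed above):

```
Package: nvidia-driver-full
Architecture: amd64 arm64 ppc64el
Depends: nvidia-driver (= ${binary:Version}),
         nvidia-kernel-dkms (= ${binary:Version}),
         nvidia-smi (= ${binary:Version}),
         nvidia-opencl-icd (= ${binary:Version})
Description: metapackage pinning all NVIDIA driver components to one version
 Dummy dependency package: marking this single package for upgrade or
 downgrade brings every listed component to the same driver version.
```

The strict (= ${binary:Version}) relations are what enforce the lockstep versioning when switching between unstable and experimental.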
Bug#1063881: nvidia-graphics-drivers: provide dependency package to catch all packages of given version
Source: nvidia-graphics-drivers
Version: 525.147.05-6
Severity: normal

From time to time the version of nvidia-driver in experimental is far ahead of the current version in unstable. It's often desirable to install it to see if the new version fixes particular problems. The problem is that nvidia-graphics-drivers generates many different binary packages, and ideally the same version would be installed for all of them, including the cuda or opencl components. This means that to upgrade to the new version in experimental, one has to individually select every single nvidia component. There are more than 20 of them, so it's a bit of effort.

Conversely, if the experimental version becomes stale, it does not get automatically updated. One might need to step back to the nvidia version in unstable if the experimental version no longer conforms to package standards, or to get automatic updates going forward. Or there might be a new version in experimental, which is not automatically installed either. Either way, one again has to select every single component package and mark it explicitly for downgrade or upgrade.

It would be much easier to switch between versions in unstable and experimental, or upgrade from experimental, if there were a dummy dependency package that depends on all the manifold nvidia component packages for the given version. It could be called nvidia-driver-all, for instance. Then only the one package needs to be marked for upgrade (or downgrade) and will bring in all the others. Can it be done?
Bug#1063856: hdf5: new upstream release
Source: hdf5
Followup-For: Bug #1063856

For further context, HDF upstream no longer supports hdf5 1.10 or hdf5 1.12 (i.e. no more releases will be made in these series); see
https://github.com/HDFGroup/hdf5#release-schedule
https://github.com/h5py/h5py/issues/2312
https://forum.hdfgroup.org/t/release-of-hdf5-1-12-3-library-and-tools-newsletter-200/11924

hdf5 1.14 supports the REST VOL API, which may improve cloud computing performance; see
https://github.com/HDFGroup/vol-rest
https://github.com/h5py/h5py/issues/2316
Bug#1063856: hdf5: new upstream release
Source: hdf5
Version: 1.10.10+repack-3
Severity: normal
X-Debbugs-Cc: debian-scie...@lists.debian.org

What is our situation with our hdf5 package version? We're currently using hdf5 1.10.10, but 1.12.2 has been available in experimental for some time, and upstream has released 1.14.3. Should we be upgrading now to hdf5 1.14 (or 1.12)? There's no current urgency, but I'm worried some bitrot might set in as upstream developers focus on the more recent HDF5 releases.

Drew
Bug#1063752: custodian: Inappriate maintainer address
Source: custodian
Followup-For: Bug #1063752
X-Debbugs-Cc: Debichem Team
Control: reassign 1063752 lists.debian.org
Control: affects 1063752 src:custodian

Scott Kitterman reported that lists.alioth.debian.org is bouncing emails from official Debian addresses (ftpmas...@ftp-master.debian.org in this case, processing packages for the Debichem team with Maintainer address debichem-de...@lists.alioth.debian.org). Scott filed the bug against src:custodian, but the bug must be in the mailing list daemon, so I'm reassigning it to lists.debian.org.
Bug#1063752: custodian: Inappriate maintainer address
Source: custodian
Followup-For: Bug #1063752

I am confused by this bug report. The debichem Maintainer address used for custodian is the same as that used for every other debichem package. No problems were reported for the other packages.
Bug#1063636: python-pynndescent: test_distances fails with scipy 1.11: Unknown Distance Metric: kulsinski
Source: python-pynndescent
Version: 0.5.8-2
Severity: normal
Control: block 1061605 by -1

python-pynndescent is failing test_distances with scipy 1.11 from experimental:

103s     else:
103s >       raise ValueError('Unknown Distance Metric: %s' % mstr)
103s E       ValueError: Unknown Distance Metric: kulsinski
103s
103s /usr/lib/python3/dist-packages/scipy/spatial/distance.py:2230: ValueError
103s === warnings summary ===
103s pynndescent/tests/test_distances.py: 13 warnings
103s   /usr/lib/python3/dist-packages/numba/np/ufunc/array_exprs.py:301: DeprecationWarning: ast.Num is deprecated and will be removed in Python 3.14; use ast.Constant instead
103s     return ast.Num(expr.value), {}
103s
103s -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
103s === short test summary info
103s FAILED ../build.PmL/src/pynndescent/tests/test_distances.py::test_binary_check[kulsinski]
103s FAILED ../build.PmL/src/pynndescent/tests/test_distances.py::test_sparse_binary_check[kulsinski]
103s 2 failed, 33 passed, 14 skipped, 13 warnings in 21.58s

scipy 1.11 is currently in experimental, but I'd like to upload soon to unstable to resolve Bug#1061605, which would make this bug serious. It looks like this has been fixed in the latest release of pynndescent.
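For background (my understanding of the scipy history, worth checking against the scipy release notes): 'kulsinski' was deprecated in scipy 1.8 in favour of the corrected 'kulczynski1', and removed in 1.11. A version-tolerant caller would look something like:

```python
import numpy as np
from scipy.spatial.distance import pdist

X = np.array([[True, False, True],
              [False, False, True]])

# 'kulsinski' still works (with a deprecation warning) on scipy < 1.11;
# on scipy >= 1.11 it raises ValueError: Unknown Distance Metric.
try:
    d = pdist(X, "kulsinski")
except ValueError:
    d = pdist(X, "kulczynski1")
```
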
Bug#1063584: python-skbio: tests fail with scipy 1.11
Evidently fixed upstream with https://github.com/scikit-bio/scikit-bio/pull/1887 see also https://github.com/scikit-bio/scikit-bio/pull/1930 (for python 3.12)
Bug#1029701: scikit-learn: tests fail with scipy 1.10
Source: scikit-learn
Version: 1.2.1+dfsg-1
Followup-For: Bug #1029701

scikit-learn continues to fail with scipy 1.11 from experimental:

648s FAILED ../../../../usr/lib/python3/dist-packages/sklearn/linear_model/tests/test_quantile.py::test_incompatible_solver_for_sparse_input[interior-point]
648s FAILED ../../../../usr/lib/python3/dist-packages/sklearn/linear_model/tests/test_quantile.py::test_linprog_failure
648s FAILED ../../../../usr/lib/python3/dist-packages/sklearn/linear_model/tests/test_quantile.py::test_warning_new_default
648s FAILED ../../../../usr/lib/python3/dist-packages/sklearn/metrics/tests/test_dist_metrics.py::test_cdist_bool_metric[X_bool0-Y_bool0-kulsinski]
648s FAILED ../../../../usr/lib/python3/dist-packages/sklearn/metrics/tests/test_dist_metrics.py::test_cdist_bool_metric[X_bool1-Y_bool1-kulsinski]
648s FAILED ../../../../usr/lib/python3/dist-packages/sklearn/metrics/tests/test_dist_metrics.py::test_pdist_bool_metrics[X_bool0-kulsinski]
648s FAILED ../../../../usr/lib/python3/dist-packages/sklearn/metrics/tests/test_dist_metrics.py::test_pdist_bool_metrics[X_bool1-kulsinski]
648s FAILED ../../../../usr/lib/python3/dist-packages/sklearn/metrics/tests/test_pairwise.py::test_pairwise_boolean_distance[kulsinski]
648s FAILED ../../../../usr/lib/python3/dist-packages/sklearn/neighbors/tests/test_neighbors.py::test_kneighbors_brute_backend[float64-kulsinski]
648s FAILED ../../../../usr/lib/python3/dist-packages/sklearn/neighbors/tests/test_neighbors.py::test_radius_neighbors_brute_backend[kulsinski]
648s FAILED ../../../../usr/lib/python3/dist-packages/sklearn/preprocessing/tests/test_data.py::test_power_transformer_yeojohnson_any_input[X3]
648s FAILED ../../../../usr/lib/python3/dist-packages/sklearn/tests/test_base.py::test_clone_sparse_matrices
648s FAILED ../../../../usr/lib/python3/dist-packages/sklearn/tests/test_common.py::test_estimators[PowerTransformer()-check_fit2d_1sample]
648s = 13 failed, 24496 passed, 2459 skipped, 2 deselected, 116 xfailed, 43 xpassed, 2600 warnings in 577.48s (0:09:37) =

scipy 1.11 is currently in experimental, but I'd like to upload soon to unstable to resolve Bug#1061605, which would make this bug serious. It's likely fixed in the newer upstream releases.
Bug#1063584: python-skbio: tests fail with scipy 1.11
Source: python-skbio
Version: 0.5.9-3
Severity: normal
Control: block 1061605 by -1

python-skbio is failing various tests with scipy 1.11 from experimental:

150s FAILED skbio/diversity/alpha/tests/test_base.py::BaseTests::test_fisher_alpha
150s FAILED skbio/diversity/tests/test_driver.py::BetaDiversityTests::test_available_metrics
150s FAILED skbio/diversity/tests/test_driver.py::BetaDiversityTests::test_qualitative_bug_issue_1549
150s FAILED skbio/stats/tests/test_composition.py::AncomTests::test_ancom_fail_multiple_groups

scipy 1.11 is currently in experimental, but I'd like to upload soon to unstable to resolve Bug#1061605, which would make this bug serious.
Bug#1063567: dh-python: documentation is unclear for setting env variables to control python version
Package: dh-python
Version: 6.20231223
Severity: normal

pybuild operations can be controlled to some extent with environment variables, which is often tidier than using override_dh_auto_... targets in debian/rules. The control I want to apply is to run the build only for the default python (adios2, for instance, is built via cmake, which only detects the default python version).

It's not clear how to use pybuild's environment variables to do this. The pybuild man page discusses PYBUILD_DISABLE=python3.9 for excluding a particular python version, but this is the opposite of what I need. I want something like

PYTHON3_DEFAULT = $(shell py3versions -d)
export PYBUILD_ENABLE=$(PYTHON3_DEFAULT)

but there's no such option. The man page also mentions PYBUILD_OPTION_VERSIONED_INTERPRETER (e.g. PYBUILD_CLEAN_ARGS_python3.2), but that's only for setting arguments for a specific python version, not for building only with a specific python version.

How should debian/rules set up the pybuild environment to build only for the default python version? It's not clear from the man page how to do this using pybuild's environment variables.

-- System Information:
Debian Release: trixie/sid
APT prefers unstable-debug
APT policy: (500, 'unstable-debug'), (500, 'unstable'), (1, 'experimental')
Architecture: amd64 (x86_64)
Foreign Architectures: i386
Kernel: Linux 6.6.13-amd64 (SMP w/8 CPU threads; PREEMPT)
Kernel taint flags: TAINT_PROPRIETARY_MODULE, TAINT_OOT_MODULE
Locale: LANG=en_AU.UTF-8, LC_CTYPE=en_AU.UTF-8 (charmap=UTF-8), LANGUAGE=en_AU:en
Shell: /bin/sh linked to /usr/bin/dash
Init: systemd (via /run/systemd/system)
LSM: AppArmor: enabled

Versions of packages dh-python depends on:
ii  python3  3.11.6-1
ii  python3-distutils  3.11.5-1
ii  python3-setuptools  68.1.2-2

dh-python recommends no packages.

Versions of packages dh-python suggests:
ii  dpkg-dev  1.22.4
ii  flit  3.9.0-2
ii  libdpkg-perl  1.22.4
ii  python3-build  1.0.3-2
ii  python3-installer  0.7.0+dfsg1-2
ii  python3-wheel  0.42.0-1

-- no debconf information
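One workaround I can imagine (an untested sketch; I'm not certain PYBUILD_DISABLE accepts a space-separated list of interpreters like this) is to invert the logic and disable every supported python3 version except the default:

```make
# debian/rules fragment (untested sketch): build only with the default
# python3 by disabling all other supported versions via PYBUILD_DISABLE.
PYTHON3_DEFAULT := $(shell py3versions -d)
PYTHON3_SUPPORTED := $(shell py3versions -s)
export PYBUILD_DISABLE := $(filter-out $(PYTHON3_DEFAULT),$(PYTHON3_SUPPORTED))
```

If PYBUILD_DISABLE doesn't accept a list, the equivalent per-version assignments would be needed instead.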
Bug#1063527: einsteinpy: test_plotting fails to converge with scipy 1.11
Source: einsteinpy Version: 0.4.0-3 Severity: important Control: block 1061605 by -1

einsteinpy is failing test_plotting with scipy 1.11 from experimental:

143s         if disp:
143s             msg = ("Failed to converge after %d iterations, value is %s."
143s                    % (itr + 1, p))
143s >           raise RuntimeError(msg)
143s E           RuntimeError: Failed to converge after 50 iterations, value is nan.
143s
143s /usr/lib/python3/dist-packages/scipy/optimize/_zeros_py.py:381: RuntimeError
143s __ ERROR at setup of test_plot_calls_plt_plot __

likewise test_plotter_has_correct_attributes.

scipy 1.11 is currently in experimental, but I'd like to upload it soon to unstable to resolve Bug#1061605, which would make this bug serious.
Bug#1063526: astroml: test_iterative_PCA fails with scipy 1.11: unexpected keyword argument 'sym_pos'
Source: astroml Version: 1.0.2-2 Severity: important Control: block 1061605 by -1

astroml is failing test_iterative_PCA with scipy 1.11 from experimental:

82s         for i in range(n_samples):
82s             VWV = np.dot(VT[:n_ev], (notM[i] * VT[:n_ev]).T)
82s >           coeffs[i] = solve(VWV, VWx[:, i], sym_pos=True, overwrite_a=True)
82s E           TypeError: solve() got an unexpected keyword argument 'sym_pos'
82s
82s /usr/lib/python3/dist-packages/astroML/dimensionality/iterative_pca.py:127: TypeError

From the error description it's probably easy to fix; we just need to work out what is going on with the sym_pos argument.

scipy 1.11 is currently in experimental, but I'd like to upload it soon to unstable to resolve Bug#1061605, which would make this bug serious.
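[For reference: the sym_pos keyword was deprecated and then removed from scipy.linalg.solve in scipy 1.11; the documented replacement is assume_a='pos'. A minimal sketch of the kind of one-line change this suggests, using illustrative arrays rather than astroml's actual data:]

```python
import numpy as np
from scipy.linalg import solve

a = np.array([[4.0, 1.0],
              [1.0, 3.0]])  # symmetric positive definite
b = np.array([1.0, 2.0])

# Old call, removed in scipy 1.11:
#   x = solve(a, b, sym_pos=True, overwrite_a=True)
# Documented replacement (copy a first, since overwrite_a may destroy it):
x = solve(a.copy(), b, assume_a="pos", overwrite_a=True)

print(np.allclose(a @ x, b))  # True
```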
Bug#1060971: mdtraj: FTBFS: dpkg-buildpackage: error: dpkg-source -b . subprocess returned exit status 2
Source: mdtraj Followup-For: Bug #1060971 X-Debbugs-Cc: 1060971-d...@bugs.debian.org Control: fixed 1060971 1.9.9-1 Fixed with cython3-legacy at the same time the bug was filed.
Bug#1024276: ITP: golang-github-googleapis-enterprise-certificate-proxy -- Google Proxies for Enterprise Certificates
Hi Maytham, golang-github-googleapis-enterprise-certificate-proxy is now built on the main architectures. https://buildd.debian.org/status/package.php?p=golang-github-googleapis-enterprise-certificate-proxy It's still not building on the auxiliary architectures. Can you see a way of extending or altering the patch for them as well? Drew
Bug#1063352: ITP: ngspetsc -- a PETSc interface for NGSolve
Package: wnpp Severity: wishlist Owner: Drew Parsons X-Debbugs-Cc: debian-de...@lists.debian.org, debian-scie...@lists.debian.org, francesco.balla...@unicatt.it * Package name: ngspetsc Version : git HEAD Upstream Contact: Umberto Zerbinati * URL : https://github.com/NGSolve/ngsPETSc * License : MIT Programming Lang: Python Description : a PETSc interface for NGSolve ngsPETSc is a PETSc interface for NGSolve. It extends the utility of meshes generated by netgen and interfaces with finite element solvers such as dolfinx (fenicsx) as well as NGSolve, Firedrake. To be maintained by the Debian Science team alongside netgen. Co-maintained with Francesco Ballarin.
Bug#1062356: adios2: flaky autopkgtest (host dependent): times out on big host
Source: adios2 Followup-For: Bug #1062356

The flaky test is adios2-mpi-examples. debian/tests is building and running the examples manually, on only 3 processors (mpirun -n 3), so the problem can't be overload of the machine. I'll just skip insituGlobalArraysReaderNxN_mpi. For reference, upstream is making some changes to make it more reliable to run tests against the installed library: https://github.com/ornladios/ADIOS2/pull/3906 and also https://github.com/ornladios/ADIOS2/pull/3820. I'm not certain that those directly make insituGlobalArraysReaderNxN_mpi more reliable, though.
Bug#1062356: adios2: flaky autopkgtest (host dependent): times out on big host
Source: adios2 Followup-For: Bug #1062356

It can't be quite as simple as just the host machine. https://ci.debian.net/data/autopkgtest/unstable/amd64/a/adios2/41403641/log.gz completed in 9 minutes, while https://ci.debian.net/data/autopkgtest/unstable/amd64/a/adios2/41496866/log.gz failed with a timeout. But that was ci-worker13 in both cases. Maybe it's a race condition. It might be simplest to just skip insituGlobalArraysReaderNxN_mpi, though I can also review how many CPUs are invoked by the test. It's usually safer not to run tests on all available CPUs, for instance if there are 64 of them on the machine.
Bug#1061605: scipy: tests skipped during build and autopkgtest not in sync
Source: scipy Followup-For: Bug #1061605

Note that debci tests are passing on all arches (where built) for scipy 1.11. I'm inclined to accept this as the solution, i.e. update the list of build-time tests to skip for scipy 1.11 rather than reorganise the debian/tests skips for scipy 1.10.
Bug#1061386: libxtensor-dev: Fails to install for arm64 arch on amd64
Package: libxtensor-dev Followup-For: Bug #1061386 Control: fixed 1061386 0.24.7-1 I'm a bit confused why you're installing libxtensor-dev:arm64 on amd64. Wouldn't it make more sense to install libxtensor-dev:amd64? In any case this was fixed in libxtensor-dev 0.24.7 (actually in 0.24.4-1exp1), making libxtensor-dev arch:all. Since the package is header-only, you can probably install 0.24.7-5 from testing. Let me know if that works successfully. Drew
Bug#1062827: RFP: pydevtool -- CLI dev tools powered by pydoit
On 2024-02-05 18:44, Drew Parsons wrote:
> building and testing. The scipy team have created pybuild, which uses

That should read "The scipy team have created dev.py", of course. (Debian created pybuild.)
Bug#1062827: RFP: pydevtool -- CLI dev tools powered by pydoit
On 2024-02-03 21:07, c.bu...@posteo.jp wrote:
> I checked upstream README.md. I have no clue what this is. Can someone
> explain please?
>
> Am 03.02.2024 18:05 schrieb dpars...@debian.org:
>> Package: wnpp
>> * Package name: pydevtool
>> * URL : https://github.com/pydoit/pydevtool
>> Description : CLI dev tools powered by pydoit
>> Python dev tool. doit powered, integration with:
>> - click and rich for custom CLI
>> - linters: pycodestlye, pyflakes

I can only explain phenomenologically, based on what scipy is trying to do with it. scipy uses it in a dev.py script: https://github.com/scipy/scipy/blob/eb6d8e085087ce9854e92d6b0cdc6d70f0ff0074/dev.py#L125

dev.py is a developers' tool used to run scipy's tests, see https://scipy.github.io/devdocs/dev/contributor/devpy_test.html and also https://scipy.github.io/devdocs/building/index.html

pydevtool provides a cli submodule, from which scipy uses UnifiedContext, CliGroup and Task. These are used to create and organise a Test class, keeping track of parameters used when running tests, such as verbosity level or install prefix. pydevtool itself is built on top of doit, which is a framework for managing general tasks. The dev tasks in question here are building and testing the package (we want to use it for the test tasks). dev.py also has other tasks.

Probably the best way to think about it is that dev.py is scipy's counterpart to Debian's pybuild, providing a solution to the challenge of building a complex python package. We've created pybuild, which uses dhpython as a framework for managing the tasks involved in building and testing. The scipy team have created pybuild, which uses pydevtool (together with click and doit) as a framework for managing the tasks.

`grep Task dev.py` gives a list of other tasks that dev.py can handle: Test RefguideCheck Build Test Bench Mypy Doc RefguideCheck

So pydevtool is used to help manage the execution of these various tasks.
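[A rough illustration of the doit layer underneath, using a hypothetical task rather than anything from scipy's dev.py: doit collects functions named task_* from a dodo.py file, and each returns a dict describing its actions and metadata. pydevtool's CliGroup/UnifiedContext then wrap such tasks in a click command-line interface.]

```python
# Hypothetical doit-style task definition (not taken from scipy's dev.py).
# doit discovers functions named task_* and runs the listed actions.
def task_test():
    return {
        "actions": ["python3 -m pytest --pyargs scipy -m 'not slow'"],
        "doc": "run the test suite",
    }
```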
Bug#1053939: pymatgen: test failure with pandas 2.1
Source: pymatgen Followup-For: Bug #1053939 [apologies for the spam. testing mail server configuration now]
Bug#1053939: pymatgen: test failure with pandas 2.1
Source: pymatgen Followup-For: Bug #1053939 Looks like the latest release should be fine with pandas 2. Currently building in experimental.
Bug#1052028: pydantic
block 1061609 by 1052028 1052619
affects 1061609 python3-emmet-core
thanks

The latest version of python-emmet-core (used by pymatgen) requires pydantic 2.
Bug#1061605: scipy: tests skipped during build and autopkgtest not in sync
Source: scipy Followup-For: Bug #1061605

Easy enough to relax tolerances or skip tests if we need to. test_maxiter_worsening was giving problems on other arches. But it's strange that the test only started failing when pythran was deactivated. I've reactivated pythran in 1.10.1-8; we'll see if that restores success.
Bug#1020561: python3-scipy: Scipy upgrade requires c++ compiler
Package: python3-scipy Followup-For: Bug #1020561

Confirmed that tests still pass (amd64) if python3-pythran is forcibly not installed.

Making an audit of where pythran is actually used (in v1.10), at runtime that is:

scipy/interpolate: _rbfinterp_pythran.py (see also setup.py, meson.build)
scipy/optimize: _group_columns.py, used by ._numdiff.group_columns
scipy/linalg: _matfuncs_sqrtm_triu.py (not clear that this is used; meson.build refers to the cython variant _matfuncs_sqrtm_triu.pyx)
scipy/stats: _stats_pythran.py, used by _hypotests.py and _stats_mstats_common.py
scipy/signal: _spectral.py

The pythran definitions are supplied as "# pythran export ...", so they are enclosed in comments. If pythran is not present, each definition is handled as a normal comment, i.e. ignored. At build time, python extensions are built from these definitions via meson.build, e.g. interpolate/_rbfinterp_pythran.cpython-39-x86_64-linux-gnu.so. But once these are built, pythran is not needed again until the next rebuild. This did confuse me: I thought the advantage of pythran was a JIT optimisation at runtime. In this case pythran just provides an automated means of compiling python code (much as cython does), rather than an optimisation for the runtime cpu. It's not clear then what the advantage of optimize/_group_columns.py is over optimize/_group_columns.pyx; perhaps the pythran variant is better tuned.

So what pythran is doing is essentially replacing the .py file with a .so library. It's an ahead-of-time compiler, not a just-in-time compiler.

Conclusion: we want to use pythran at build time, but there's no further reason to depend on it at runtime (not even as Recommends).
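[To illustrate why the modules still work without pythran installed, here is a toy kernel (illustrative only, not one of scipy's actual files): the export directive is an ordinary comment, so plain Python simply ignores it and falls back to interpreting the function, while pythran reads it at build time to compile the module ahead of time into a native .so.]

```python
# Toy pythran kernel (illustrative; not a scipy source file).
# "# pythran export" is a plain comment to the Python interpreter,
# but a type signature to the pythran ahead-of-time compiler.

# pythran export dprod(float list, float list)
def dprod(xs, ys):
    return sum(x * y for x, y in zip(xs, ys))

print(dprod([1.0, 2.0], [3.0, 4.0]))  # 11.0
```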
Bug#1001105: can I help with pyvista packaging?
On 2024-01-27 18:28, Francesco Ballarin wrote: OK Andreas, I'll push to master. Let me take the lead on that, and I'll come back to you and Drew with progress and questions. I think I have some ideas on how to get started on the basic package. Thanks Francesco and Andreas. It will be interesting to see the dolfinx demos running in their full pyvista livery. The full package (i.e., all optional components that one can install with "pip install pyvista[all]") will be much more complex, because it depends on trame, which comes split in five interdependent packages ... and who knows how many dependencies each one of those have. ... but let's start with a less ambitious goal ;) I agree, get the basic functionality in place first :) I think that the error you see is because python3-vtk9 is only built for python 3.11, but unit tests are getting run with python 3.12. This is an annoying problem, constrained by cmake limitations. Other packages are also affected, like spglib, which then constrains pymatgen. The problem is that cmake does not allow for building python modules over multiple python versions. The FEniCS project has been smarter about it, keeping the C++ library build and the python module build separate (the first using cmake, the latter using setup/pyproject). Not much we can do about it with vtk9 in the short term. Complaints should be pushed to kitware, though. They seem to think it should be done in your source, which is pretty weird: https://gitlab.kitware.com/cmake/cmake/-/issues/21797 Different source dirs make little sense, but possibly cmake could be run multiple times, once for each python, in separate build dirs.
Bug#1061609: ITP: pydantic-settings -- settings management using pydantic
Package: wnpp Severity: wishlist Owner: Drew Parsons X-Debbugs-Cc: debian-de...@lists.debian.org, debian-pyt...@lists.debian.org * Package name: pydantic-settings Version : 2.1.0 Upstream Contact: Samuel Colvin * URL : https://github.com/pydantic/pydantic-settings * License : MIT Programming Lang: Python Description : settings management using pydantic Settings management using Pydantic, this is the new official home of Pydantic's BaseSettings. Pydantic Settings provides optional Pydantic features for loading a settings or config class from environment variables or secrets files. pydantic-settings is used by latest versions of python-mp-api To be maintained by the Debian Python Team alongside pydantic.
Bug#1020561: python3-scipy: Scipy upgrade requires c++ compiler
On 2024-01-27 09:30, Graham Inggs wrote: Hi It seems (at least in scipy/1.10.1-6), that python3-pythran was a build-dependency for all architectures [1], yet, on armhf, python3-scipy did not have a runtime dependency on python3-pythran [2]. The build log of scipy/1.10.1-6 on armhf [3], confirms: Building scipy with SCIPY_USE_PYTHRAN=1 I do not recall seeing any bug reports or autopkgtest failures due to this. Is it possible that scipy can be built with Pythran support, and python3-pythran can be an optional dependency at runtime? If this is true, then we can downgrade python3-pythran from a Depends to a Recommends. A good question. We can look into this and check.
Bug#1024276: ITP: golang-github-googleapis-enterprise-certificate-proxy -- Google Proxies for Enterprise Certificates
On 2024-01-23 14:39, Maytham Alsudany wrote: Hi Drew, On Tue, 2024-01-23 at 11:24 +0100, Drew Parsons wrote: > > Hi Maytham, I can upload it. But note how pkcs11 is failing on 32 bit > > arches. That needs to be sorted out. I had been waiting for that > > before uploading enterprise-certificate-proxy. > > https://salsa.debian.org/go-team/packages/golang-github-google-go-pkcs11/-/merge_requests/2 > > go-pkcs11 builds successfully and passes autopkgtest, lintian, and > piuparts on > both amd64 and i386. The problem is on debci. See the failing tests at https://ci.debian.net/packages/g/golang-github-google-go-pkcs11/ summarised also at https://tracker.debian.org/pkg/golang-github-google-go-pkcs11 I'm aware, and the PR I've linked is a fix, please have a look. You can look at the patch file itself at [1] (have a look at the description to understand what the PR/patch does). Thanks Maytham. The patch handling it via malloc_arg makes sense. I left a review commenting about supporting other 32 bit architectures, not just 386 and arm. I can see how to adapt your patch to control it at build time. Let me know if you're happy with that idea or if you can see another way to do it. (an alternative could be checking bits, along the lines of "const PtrSize = 32 << uintptr(^uintptr(0)>>63)" But I wouldn't necessarily trust that to always give the right indication. Your idea of handling two separate definitions should work fine) Drew
Bug#1061385: golang-github-google-go-pkcs11: tests fail on 32 bit architectures
Source: golang-github-google-go-pkcs11 Version: 0.3.0+dfsg-1 Severity: serious Justification: debci Control: block 1024276 by -1

go-pkcs11 tests are failing on 32-bit architectures (armel, armhf, i386), see https://ci.debian.net/packages/g/golang-github-google-go-pkcs11/ This is preventing migration to testing, see https://tracker.debian.org/pkg/golang-github-google-go-pkcs11 and therefore blocking processing of golang-github-googleapis-enterprise-certificate-proxy.

Excerpt from the test log for i386:

79s crypto/x509
79s github.com/google/go-pkcs11/pkcs11
82s # github.com/google/go-pkcs11/pkcs11
82s src/github.com/google/go-pkcs11/pkcs11/pkcs11.go:300:33: cannot use (_Ciconst_sizeof_CK_UTF8CHAR) * _Ctype_ulong(len(s)) (value of type _Ctype_ulong) as _Ctype_uint value in argument to (_Cfunc__CMalloc)
82s src/github.com/google/go-pkcs11/pkcs11/pkcs11.go:1019:38: cannot use _Ctype_ulong(n) (value of type _Ctype_ulong) as _Ctype_uint value in argument to (_Cfunc__CMalloc)
82s src/github.com/google/go-pkcs11/pkcs11/pkcs11.go:1100:33: cannot use attrs[0].ulValueLen * (_Ciconst_sizeof_CK_BYTE) (value of type _Ctype_ulong) as _Ctype_uint value in argument to (_Cfunc__CMalloc)
82s src/github.com/google/go-pkcs11/pkcs11/pkcs11.go:1104:33: cannot use attrs[1].ulValueLen (variable of type _Ctype_ulong) as _Ctype_uint value in argument to (_Cfunc__CMalloc)
82s src/github.com/google/go-pkcs11/pkcs11/pkcs11.go:1137:37: cannot use attrs[0].ulValueLen (variable of type _Ctype_ulong) as _Ctype_uint value in argument to (_Cfunc__CMalloc)
82s src/github.com/google/go-pkcs11/pkcs11/pkcs11.go:1141:37: cannot use attrs[1].ulValueLen (variable of type _Ctype_ulong) as _Ctype_uint value in argument to (_Cfunc__CMalloc)
82s src/github.com/google/go-pkcs11/pkcs11/pkcs11.go:1440:35: cannot use _Ctype_ulong(n) (value of type _Ctype_ulong) as _Ctype_uint value in argument to (_Cfunc__CMalloc)
82s dh_auto_build: error: cd _build && go install -trimpath -v -p 2 github.com/google/go-pkcs11/pkcs11 returned exit code 1
Bug#1060401: ITP: python-scooby -- A lightweight tool for easily reporting your Python environment's package versions and hardware resources
On 2024-01-23 12:20, Andreas Tille wrote: Hi, I've seen your commits to DPT on salsa. Do you need any help to create a debian/ dir which does not exist yet? Kind regards Andreas. Hi Andreas, Francesco has prepared a debian dir in an MR on salsa. Would be great if you could review the MR and merge (for my part it looks fine to me) Drew
Bug#1024276: ITP: golang-github-googleapis-enterprise-certificate-proxy -- Google Proxies for Enterprise Certificates
On 2024-01-23 11:08, Maytham Alsudany wrote: Hi Drew, On Tue, 2024-01-23 at 09:12 +0100, Drew Parsons wrote: On 2024-01-23 07:51, Maytham Alsudany wrote: > Hi Drew, > > Now that golang-github-google-go-pkcs11 has been uploaded and accepted, > is it > possible for you to now upload > golang-github-googleapis-enterprise-certificate- > proxy? > Hi Maytham, I can upload it. But note how pkcs11 is failing on 32 bit arches. That needs to be sorted out. I had been waiting for that before uploading enterprise-certificate-proxy. https://salsa.debian.org/go-team/packages/golang-github-google-go-pkcs11/-/merge_requests/2 go-pkcs11 builds successfully and passes autopkgtest, lintian, and piuparts on both amd64 and i386. The problem is on debci. See the failing tests at https://ci.debian.net/packages/g/golang-github-google-go-pkcs11/ summarised also at https://tracker.debian.org/pkg/golang-github-google-go-pkcs11
Bug#1024276: ITP: golang-github-googleapis-enterprise-certificate-proxy -- Google Proxies for Enterprise Certificates
On 2024-01-23 07:51, Maytham Alsudany wrote: Hi Drew, Now that golang-github-google-go-pkcs11 has been uploaded and accepted, is it possible for you to now upload golang-github-googleapis-enterprise-certificate- proxy? Hi Maytham, I can upload it. But note how pkcs11 is failing on 32 bit arches. That needs to be sorted out. I had been waiting for that before uploading enterprise-certificate-proxy. Drew
Bug#1061357: python3-spglib: spglib python package not identified by dh_python3
Package: python3-spglib Version: 2.2.0-2 Severity: normal

We have dh_python3 to help automatically identify python package dependencies when building a package. It's unable to identify spglib, however. For instance a pymatgen build log shows:

dh_python3 -a -O--buildsystem=pybuild
I: dh_python3 pydist:302: Cannot find package that provides spglib. Please add package that provides it to Build-Depends or add "spglib python3-spglib" line to debian/py3dist-overrides or add proper dependency to Depends by hand and ignore this info.

I'm not sure exactly why dh_python3 can't identify it. I would have thought it could find out which package provides /usr/lib/python3/dist-packages/spglib. But it's not doing that. Maybe it's a bug in dh-python. But I notice that python3-spglib doesn't provide the .dist-info directory that most other python packages provide. I guess dh-python is using the dist-info mechanism to identify packages. Perhaps we should file a bug against dh-python for it to run something like "dpkg -S /usr/lib/python3/dist-packages/" if it doesn't find a dist-info entry. In the meantime, I figure spglib's lack of dist-info occurs because it's a cmake build rather than a python setuptools build, in which case the problem sits alongside Bug#1061263.

-- System Information: Debian Release: trixie/sid APT prefers unstable-debug APT policy: (500, 'unstable-debug'), (500, 'unstable'), (1, 'experimental') Architecture: amd64 (x86_64) Foreign Architectures: i386 Kernel: Linux 6.6.9-amd64 (SMP w/8 CPU threads; PREEMPT) Locale: LANG=en_AU.UTF-8, LC_CTYPE=en_AU.UTF-8 (charmap=UTF-8), LANGUAGE=en_AU:en Shell: /bin/sh linked to /usr/bin/dash Init: systemd (via /run/systemd/system) LSM: AppArmor: enabled Versions of packages python3-spglib depends on: ii libc6 2.37-13 ii libsymspg2 2.2.0-2 ii python3 3.11.6-1 ii python3-numpy 1:1.24.2-2 python3-spglib recommends no packages. python3-spglib suggests no packages. -- no debconf information
Bug#1061263: python3-spglib: fails with python3.12 (extension not built)
On 2024-01-22 09:28, Andrius Merkys wrote:
> Hi Drew,
>> spglib should be configured to build for all available python
>> versions. In other libraries (e.g. fenics-basix) this is done by
>> building the C library separately from the python module.
> Thanks for a pointer, I will give fenics-basix a look. I did not manage
> to figure out a way to build spglib for all available Python versions.

Thinking about it, it could be tricky, since spglib handles the python build from cmake with no setup.py. The others run the two parts separately, cmake for the library and python setup for the python module. It seems to be a flaw in cmake that it can't handle multiple pythons. We could possibly run cmake multiple times, specifying the python version for each run.

>> Not sure, should this be severity: serious?
> I think no. Supporting default Python only is not RC-critical, AFAIR.
> However, having spglib support all Python versions would fix other
> issues, like #1056457.

True. For now I'll configure the client packages to run their tests only on default python then.

Drew
Bug#1061263: python3-spglib: fails with python3.12 (extension not built)
Package: python3-spglib Version: 2.2.0-2 Severity: normal

python3-spglib fails with python3.12, since the python extension is built only for python3.11. spglib should be configured to build for all available python versions. In other libraries (e.g. fenics-basix) this is done by building the C library separately from the python module. Not sure, should this be severity: serious?

-- System Information: Debian Release: trixie/sid APT prefers unstable-debug APT policy: (500, 'unstable-debug'), (500, 'unstable'), (1, 'experimental') Architecture: amd64 (x86_64) Foreign Architectures: i386 Kernel: Linux 6.6.9-amd64 (SMP w/8 CPU threads; PREEMPT) Locale: LANG=en_AU.UTF-8, LC_CTYPE=en_AU.UTF-8 (charmap=UTF-8), LANGUAGE=en_AU:en Shell: /bin/sh linked to /usr/bin/dash Init: systemd (via /run/systemd/system) LSM: AppArmor: enabled Versions of packages python3-spglib depends on: ii libc6 2.37-13 ii libsymspg2 2.2.0-2 ii python3 3.11.6-1 ii python3-numpy 1:1.24.2-2 python3-spglib recommends no packages. python3-spglib suggests no packages. -- no debconf information
Bug#1061255: ITP: custodian -- flexible just-in-time job management framework in Python
Package: wnpp Severity: wishlist Owner: Drew Parsons X-Debbugs-Cc: debian-de...@lists.debian.org, debian-pyt...@lists.debian.org, debian-scie...@lists.debian.org, debichem-de...@lists.alioth.debian.org * Package name: custodian Version : 2024.1.9 Upstream Contact: Shyue Ping Ong * URL : https://github.com/materialsproject/custodian * License : MIT/X Programming Lang: Python Description : flexible just-in-time job management framework in Python Custodian is a simple, robust and flexible just-in-time (JIT) job management framework written in Python. Using custodian, you can create wrappers that perform error checking, job management and error recovery. It has a simple plugin framework that allows you to develop specific job management workflows for different applications. Error recovery is an important aspect of many high-throughput projects that generate data on a large scale. When you are running on the order of hundreds of thousands of jobs, even an error-rate of 1% would mean thousands of errored jobs that would be impossible to deal with on a case-by-case basis. The specific use case for custodian is for long running jobs, with potentially random errors. For example, there may be a script that takes several days to run on a server, with a 1% chance of some IO error causing the job to fail. Using custodian, one can develop a mechanism to gracefully recover from the error, and restart the job with modified parameters if necessary. The current version of Custodian also comes with several sub-packages for error handling for Vienna Ab Initio Simulation Package (VASP), NwChem, QChem, FEFF, Lobster and CP2K calculations. Custodian has been developed by the Materials Project team responsible for pymatgen, and is used to manage tests for emmet-core etc. It is a general python package, but designed for computational chemistry. It could arguably be managed by the Debian Python Team, but probably best to keep it alongside pymatgen managed by the Debichem team.
Bug#1061243: FTBFS: needs update for xsimd 12
Package: libxtensor-dev Version: 0.24.7-4 Severity: important Tags: upstream Control: forwarded -1 https://github.com/xtensor-stack/xtensor/issues/2769 xtensor 0.24.7 is configured to use xsimd 10. xtensor upstream head supports xsimd 11. But xsimd 12 has been recently released and is required to provide stability on less common architectures like armhf. And xsimd major releases are not backwards compatible. We'll need to remove xsimd support from xtensor until it can be updated for xsimd 12. -- System Information: Debian Release: trixie/sid APT prefers unstable-debug APT policy: (500, 'unstable-debug'), (500, 'unstable'), (1, 'experimental') Architecture: amd64 (x86_64) Foreign Architectures: i386 Kernel: Linux 6.6.9-amd64 (SMP w/8 CPU threads; PREEMPT) Locale: LANG=en_AU.UTF-8, LC_CTYPE=en_AU.UTF-8 (charmap=UTF-8), LANGUAGE=en_AU:en Shell: /bin/sh linked to /usr/bin/dash Init: systemd (via /run/systemd/system) LSM: AppArmor: enabled Versions of packages libxtensor-dev depends on: ii nlohmann-json3-dev 3.11.3-1 ii xtl-dev 0.7.5-3 Versions of packages libxtensor-dev recommends: ii libxsimd-dev 12.1.1-1 Versions of packages libxtensor-dev suggests: ii xtensor-doc 0.24.7-4 -- no debconf information
Bug#1056841: pymatgen: ftbfs with cython 3.0.x
On 2024-01-19 18:52, Drew Parsons wrote:
> Hi Andreas, could you push your upstream and pristine-tar branches?
> Otherwise we can't use your 2023.12.18 branch.

I see what you mean. The tag is there; the orig tarball can be regenerated with gbp export-orig.
Bug#1058040: pymatgen: FTBFS with Python 3.12
Source: pymatgen Followup-For: Bug #1058040 No, other way around. The new monty is causing the problem. Will need to patch or upgrade pymatgen.
Bug#1058040: pymatgen: FTBFS with Python 3.12
Source: pymatgen Followup-For: Bug #1058040 Sounds like it will need the new version of monty.
Bug#1056841: pymatgen: ftbfs with cython 3.0.x
On 2024-01-16 17:55, Andreas Tille wrote: Control: tags -1 pending Hi, I've applied the patch in Git and also tried to upgrade to latest upstream since there is a chance that other Python3.12 issues might be fixed. Unfortunately the upgrade is all but straightforward and I gave up finally over the changes in the sphinx documention where finally some files are missing. I've created a branch 2023.12.18 which fails with Sphinx error: root file /build/pymatgen-2023.12.18+dfsg1/docs/apidoc/index.rst not found I hope that my preliminary work might be helpful for the package but I have to give up now due to time constraints. Hope this helps Andreas. Hi Andreas, could you push your upstream and pristine-tar branches? Otherwise we can't use your 2023.12.18 branch. Drew
Bug#1061063: armhf: h5py's tests expose unaligned memory accesses during the build
Source: h5py Followup-For: Bug #1061063 Control: forwarded 1061063 https://github.com/h5py/h5py/issues/1927

The problem was raised upstream at https://github.com/h5py/h5py/issues/1927 It makes testing difficult that we can't reproduce the failure in all armhf environments. A patch suggested in the upstream report has already been applied (fix-unaligned-access.patch). I'm not sure that we can do much more than that, since we can't reproduce the bug on debian armhf systems.
Bug#1058122: python-griddataformats: FTBFS: AttributeError: module 'configparser' has no attribute 'SafeConfigParser'. Did you mean: 'RawConfigParser'?
Source: python-griddataformats Followup-For: Bug #1058122

Thanks for the patch, Yogeswaran. Upstream has fixed it in their latest release, however.
Bug#1056841: pymatgen: ftbfs with cython 3.0.x
On 2024-01-16 17:55, Andreas Tille wrote: Control: tags -1 pending Hi, I've applied the patch in Git and also tried to upgrade to latest upstream since there is a chance that other Python3.12 issues might be fixed. Unfortunately the upgrade is all but straightforward and I gave up finally over the changes in the sphinx documention where finally some files are missing. I've created a branch 2023.12.18 which fails with Sphinx error: root file /build/pymatgen-2023.12.18+dfsg1/docs/apidoc/index.rst not found I hope that my preliminary work might be helpful for the package but I have to give up now due to time constraints. Hope this helps Andreas. Thanks Andreas. It has a few moving parts, all should be upgraded together. That's partly why I haven't got round to it yet. Upgrading to latest upstream versions is probably the way to do it. We'll get it done before next stable release. Drew
Bug#1057949: nbconvert: needs update for new version of pandoc: PDF creating failed
Source: nbconvert Version: 6.5.3-4 Followup-For: Bug #1057949 An nbconvert update (>= 7.6) is also needed to support the latest version of pandoc 3.1.3. That is, to avoid the warning that pandoc 3.1.3 is "unsupported". cf. https://github.com/spatialaudio/nbsphinx/issues/750#issuecomment-1613973946
Bug#1060804: hickle: failing tests with h5py 3.10
Source: hickle Version: 5.0.2-7 Severity: serious Justification: debci

h5py 3.10 is triggering a test failure in hickle:

54s test_H5NodeFilterProxy
54s
54s h5_data = 
54s
54s     def test_H5NodeFilterProxy(h5_data):
54s         """
54s         tests H5NodeFilterProxy class. This class allows to temporarily rewrite
54s         attributes of h5py.Group and h5py.Dataset nodes before being loaded by
54s         hickle._load method.
54s         """
54s
54s         # load data and try to directly modify 'type' and 'base_type' Attributes
54s         # which will fail cause hdf5 file is opened for read only
54s         h5_node = h5_data['somedata']
54s         pytest_errclass = KeyError if h5py.__version__ >= '3.9.0' else OSError
54s         with pytest.raises(pytest_errclass):
54s             try:
54s >               h5_node.attrs['type'] = pickle.dumps(list)
54s
54s /tmp/autopkgtest-lxc.72qwkxrs/downtmp/build.c7h/src/hickle/tests/test_01_hickle_helpers.py:127:
54s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
54s h5py/_debian_h5py_serial/_objects.pyx:54: in h5py._debian_h5py_serial._objects.with_phil.wrapper
54s     ???
54s h5py/_debian_h5py_serial/_objects.pyx:55: in h5py._debian_h5py_serial._objects.with_phil.wrapper
54s     ???
54s /usr/lib/python3/dist-packages/h5py/_debian_h5py_serial/_hl/attrs.py:104: in __setitem__
54s     self.create(name, data=value)
54s /usr/lib/python3/dist-packages/h5py/_debian_h5py_serial/_hl/attrs.py:200: in create
54s     h5a.delete(self._id, name)
54s h5py/_debian_h5py_serial/_objects.pyx:54: in h5py._debian_h5py_serial._objects.with_phil.wrapper
54s     ???
54s h5py/_debian_h5py_serial/_objects.pyx:55: in h5py._debian_h5py_serial._objects.with_phil.wrapper
54s     ???
54s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
54s
54s >   ???
54s E   KeyError: 'Unable to delete attribute (no write intent on file)'
54s
54s h5py/_debian_h5py_serial/h5a.pyx:145: KeyError

Sounds like an issue that should be raised upstream, unless it just means that hickle needs rebuilding against h5py 3.10. 
-- System Information: Debian Release: trixie/sid APT prefers unstable-debug APT policy: (500, 'unstable-debug'), (500, 'unstable'), (1, 'experimental') Architecture: amd64 (x86_64) Foreign Architectures: i386 Kernel: Linux 6.6.9-amd64 (SMP w/8 CPU threads; PREEMPT) Locale: LANG=en_AU.UTF-8, LC_CTYPE=en_AU.UTF-8 (charmap=UTF-8), LANGUAGE=en_AU:en Shell: /bin/sh linked to /usr/bin/dash Init: systemd (via /run/systemd/system) LSM: AppArmor: enabled
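A guess at the root cause, not confirmed against hickle upstream: the test picks its expected exception with a plain string comparison on h5py.__version__, and string comparison misorders two-digit minor versions. A minimal sketch (vtuple is an illustrative helper, not hickle code):

```python
# The test does:
#   pytest_errclass = KeyError if h5py.__version__ >= '3.9.0' else OSError
# Lexically "3.10.0" sorts before "3.9.0" ('1' < '9'), so with h5py 3.10 the
# test expects OSError, while h5py actually raises KeyError.
def vtuple(version):
    """Parse 'X.Y.Z' into a tuple of ints for a numeric comparison."""
    return tuple(int(part) for part in version.split("."))

print("3.10.0" >= "3.9.0")                  # False: string comparison misorders
print(vtuple("3.10.0") >= vtuple("3.9.0"))  # True: numeric comparison is correct
```

If that's the issue, the fix belongs in the test's version check rather than in a rebuild.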
Bug#1058944: transition: petsc
On 2023-12-20 23:31, Sebastian Ramacher wrote: Control: tags -1 confirmed On 2023-12-18 19:02:51 +0100, Drew Parsons wrote: I'd like to upgrade PETSc (and SLEPc, and their python packages) from 3.18 to 3.19. All packages are now built or rebuilt with tests passing (except for deal.ii, which has boost 1.83 issues).
Bug#837796: paraview: segfaults when performing query-based selections
Package: paraview Followup-For: Bug #837796 Control: fixed -1 5.11.0+dfsg-1 I think this problem is now dealt with (paraview 5.11). The query-based selection no longer segfaults. The values are found as described in the tutorial, and the 3D image gets highlighted, indicating the selected points. There is a warning message if python3-paraview is not installed, though the selection nevertheless proceeds without segfault. Selection proceeds cleanly if python3-paraview is installed.
Bug#894462: paraview: edges are blotted [regression]
Package: paraview Followup-For: Bug #894462 The png generated by the procedure in this bug looks fine to me, using current paraview (5.11). It's slightly blurry, but no more than you'd expect for a png file (it's a raster image format, not a vector format; you can't expect it to maintain high resolution if you zoom into the pixels). Can you confirm paraview 5.11 is meeting your image quality expectations? As far as I can tell, we can close this bug now.
Bug#959677: paraview: Python calculator does not work
Package: paraview Followup-For: Bug #959677 The calculator seems to be working fine e.g. tutorial steps at https://docs.paraview.org/en/latest/UsersGuide/filteringData.html#python-calculator change the colour of the sphere depending on the value entered in the calculator. Can you confirm you installed python3-paraview? It is required for the paraview python functionality. I think this bug can be closed.
Bug#754968: paraview: cmake crashes when find paraview
Package: paraview Followup-For: Bug #754968 find_package(ParaView) is now operating successfully in cmake. (needs libgdal-dev) That's with paraview-dev 5.11.2+dfsg-4 and cmake 3.28.1-1. Can you confirm paraview is now working for you from cmake, and we can close this bug?
Bug#753685: dist-packages/paraview/ColorMaps.xml missing
Package: paraview Followup-For: Bug #753685

I suspect the problem with ColorMaps is fixed in paraview 5.11, but I can't confirm since paraview.simple has other issues. It wants to use inspect.getargspec, but there's no such function anymore (python 3.11).

$ python3 -c "import paraview.simple"
( 0.963s) [paraview]vtkPVPythonAlgorithmPlu:184 WARN| Failed to load Python plugin: Failed to import `paraview.detail.pythonalgorithm`.
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/paraview/detail/pythonalgorithm.py", line 6, in <module>
    from inspect import getargspec
ImportError: cannot import name 'getargspec' from 'inspect' (/usr/lib/python3.11/inspect.py)
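For reference, a minimal sketch of the incompatibility and the usual fix: getfullargspec is the stdlib replacement for getargspec. The shim below is illustrative, not paraview's actual code.

```python
# Python 3.11 removed inspect.getargspec, so modules that still import it
# (like paraview/detail/pythonalgorithm.py above) fail at import time.
# A common compatibility shim falls back to getfullargspec:
try:
    from inspect import getargspec          # gone since Python 3.11
except ImportError:
    from inspect import getfullargspec as getargspec

def example(a, b=1):
    pass

print(getargspec(example).args)  # ['a', 'b']
```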
Bug#954329: paraview: opacity incorrectly handled by the debian bullseye(testing) package paraview-5.7
Package: paraview Followup-For: Bug #954329 I can't see a problem with the test file using latest paraview 5.11. It displays a translucent sphere coloured red on one side, blue on the other. Can you confirm this bug has now been fixed?
Bug#688875: paraview: cannot select and copy current time
Package: paraview Followup-For: Bug #688875 I can't see a "Current Time Controls" toolbar in paraview 5.11. Is it the "Time Inspector" tool bar? Are your requirements now met in paraview 5.11? Can we close this bug?
Bug#1058007: python-wsaccel: runtime dependency on cython
Source: python-wsaccel Followup-For: Bug #1058007 Control: severity 1058007 serious

I'd say the problem in Bug #1058007 is stronger than Matthias indicated. The dependency on cython3 is making other packages uninstallable on systems which have cython3-legacy installed. And cython3-legacy is needed to build many packages, since the upgrade to cython3 v3 is still new. For instance, paraview doesn't even use python3-wsaccel, but it does use python3-autobahn, which uses python-wsaccel. So this bug makes paraview uninstallable when cython3-legacy is installed.

cython3 is not required as a package dependency in python-wsaccel's setup.py. It's only used to build the package. As far as I can see, once the .so python extensions are built, cython3 is not used directly after that. So cython3 needs to be kept as a Build-Depends (or changed to cython3-legacy, I'm not certain about that), but needs to be removed as an explicit python3-wsaccel package dependency.

There is a new release v0.6.6 which enables support for Python 3.12.

Marking severity: serious since the bug is affecting installation of other packages.
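A small sketch supporting the build-time-only claim: importing a compiled extension module does not pull Cython in at runtime. A stdlib C extension stands in here for wsaccel's built .so.

```python
# Once Cython has generated the C source and it has been compiled, the
# resulting extension is a plain shared object. Importing it never touches
# Cython. _socket is a stdlib C extension used as a stand-in for wsaccel's.
import sys
import _socket  # a compiled extension module, like wsaccel's built .so

print("_socket" in sys.modules)  # True: the extension imported fine
print("cython" in sys.modules)   # False: nothing needed Cython at import time
print("Cython" in sys.modules)   # False
```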
Bug#1058007: python-wsaccel: runtime dependency on cython
On 2024-01-04 11:07, Drew Parsons wrote: For instance, paraview doesn't even use python3-wsaccel, but it does use python3-autobahn which uses python-wsaccel. So this bug makes paraview uninstallable when cython3-legacy is installed. In paraview's case, it's already stopped using python3-autobahn. So python-wsaccel won't affect paraview installations once paraview is updated. Feel free to lower the severity back to important if you feel I set it too high.
Bug#1059842: FTBFS: test fails: ZFP lib not compiled with -DBIT_STREAM_WORD_TYPE=uint8
Package: python3-hdf5plugin Version: 4.0.1-3 Severity: serious Justification: FTBFS

python3-hdf5plugin has started failing testZfp, causing both FTBFS and debci failure. The error message from debci is

ERROR: testZfp (__main__.TestHDF5PluginRW.testZfp) (options={'lossless': False}, dtype=)
Write/read test with zfp filter plugin
--
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/hdf5plugin/test.py", line 245, in testZfp
    self._test('zfp', dtype=dtype, **options)
  File "/usr/lib/python3/dist-packages/hdf5plugin/test.py", line 86, in _test
    f.create_dataset("data", data=data, chunks=data.shape, **args)
  File "/usr/lib/python3/dist-packages/h5py/_debian_h5py_serial/_hl/group.py", line 183, in create_dataset
    dsid = dataset.make_new_dset(group, shape, dtype, data, name, **kwds)
  File "/usr/lib/python3/dist-packages/h5py/_debian_h5py_serial/_hl/dataset.py", line 163, in make_new_dset
    dset_id = h5d.create(parent.id, name, tid, sid, dcpl=dcpl, dapl=dapl)
  File "h5py/_debian_h5py_serial/_objects.pyx", line 54, in h5py._debian_h5py_serial._objects.with_phil.wrapper
  File "h5py/_debian_h5py_serial/_objects.pyx", line 55, in h5py._debian_h5py_serial._objects.with_phil.wrapper
  File "h5py/_debian_h5py_serial/h5d.pyx", line 138, in h5py._debian_h5py_serial.h5d.create
ValueError: Unable to create dataset (ZFP lib not compiled with -DBIT_STREAM_WORD_TYPE=uint8)

There is a "BIT_STREAM_WORD_TYPE uint8" build definition in setup.py. Sounds like it might not have been activated but is needed.
Versions of packages python3-hdf5plugin depends on:
 ii python3 3.11.6-1
 ii python3-h5py 3.9.0-5

Versions of packages python3-hdf5plugin recommends:
 ii hdf5-filter-plugin [hdf5-filter-plugin-lz4-serial] 0.0~git2022.49e3b65-4
 ii hdf5-filter-plugin-blosc-serial 0.0~git20220616.9683f7d-5
 pn hdf5-filter-plugin-bz2-serial
 ii hdf5-filter-plugin-zfp-serial 1.1.0+git20221021-4
 ii hdf5-plugin-lzf 3.9.0-5

python3-hdf5plugin suggests no packages.
Bug#1059791: python3-mpi4py: testPackUnpackExternal alignment error on sparc64
Package: python3-mpi4py Version: 3.1.5-2 Severity: normal Control: forwarded -1 https://github.com/mpi4py/mpi4py/issues/147

sparc64 has started giving a Bus Error (Invalid address alignment) in testPackUnpackExternal (test_pack.TestPackExternal):

testProbeRecv (test_p2p_obj_matched.TestP2PMatchedWorldDup) ... ok
testPackSize (test_pack.TestPackExternal) ... ok
testPackUnpackExternal (test_pack.TestPackExternal) ... [sompek:142729] *** Process received signal ***
[sompek:142729] Signal: Bus error (10)
[sompek:142729] Signal code: Invalid address alignment (1)
[sompek:142729] Failing at address: 0x800100ea2821
[sompek:142729] *** End of error message ***
Bus error
make[1]: *** [debian/rules:91: override_dh_auto_test] Error 1

Full log at https://buildd.debian.org/status/fetch.php?pkg=mpi4py=sparc64=3.1.5-2=1704105171=0

It previously passed with 3.1.1. Upstream recommends just skipping the test.
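A hedged sketch of what "just skipping the test" could look like, using a unittest platform guard. The class and test names mirror the log above; the body is a placeholder, not mpi4py's real test.

```python
# Guard the failing test with a platform skip rather than carry an alignment
# fix, as recommended upstream. On sparc64 the test is skipped; elsewhere it
# runs normally.
import platform
import unittest

class TestPackExternal(unittest.TestCase):
    @unittest.skipIf(platform.machine() == "sparc64",
                     "Bus error: invalid address alignment (mpi4py issue #147)")
    def testPackUnpackExternal(self):
        self.assertTrue(True)  # placeholder for the real pack/unpack round-trip

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestPackExternal))
print(result.wasSuccessful())  # True: test passes, or is skipped on sparc64
```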
Bug#1059621: sphinx-common: dh_sphinxdoc fails to run: "search.html does not load searchindex.js"
On 2023-12-30 19:55, Dmitry Shachnev wrote: Hi Drew! Hi Dmitry :) ... In lammps case, it looks like the search page is very unusual and relies on Google's Custom Search Engine. So, in fact, it searches not in the local documentation, but on the docs.lammps.org website. This is exactly what the Lintian warning privacy-breach-google-cse is about. I would recommend replacing upstream search.html with the pristine version from sphinx-rtd-theme [1] (which the lammps theme is based on). Sphinx's default search relies on source RST files (_sources), so you will also need to stop removing them in doc/Makefile. Thanks for the analysis. I'll give it a try using the pristine search.html, and see if I can get it to work. I'll update (or close) this bug once I have results to report. Drew
Bug#1059699: python-propka-doc: mathjax not configured to display TeX equations in docs
Package: python-propka-doc Version: 3.5.0-3 Severity: normal Tags: patch

The propka docs are not fully configured to display maths equations (TeX). You can see the problem by opening file:///usr/share/doc/python3-propka/html/index.html in the browser: it displays "Heuristic \(\text{p}K_\text{a}\)" in the title. The problem is the mathjax configuration. Following the Debian privacy policy, we strive to make our documentation independent of internet access, which you've done in the debian patch no-sphinx-sitemap.patch with

-mathjax_path = 'https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.0/MathJax.js?config=TeX-AMS-MML_HTMLorMML'
+mathjax_path = 'file:///usr/share/javascript/mathjax/MathJax.js'

However this is not enough for display. The behaviour of MathJax.js is sensitive to the mathjax config requested. You can see upstream used "?config=TeX-AMS-MML_HTMLorMML". So the maths in the docs should render correctly if the patch is updated to

-mathjax_path = 'https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.0/MathJax.js?config=TeX-AMS-MML_HTMLorMML'
+mathjax_path = 'file:///usr/share/javascript/mathjax/MathJax.js?config=TeX-AMS-MML_HTMLorMML'
Bug#1059640: pnetcdf: upstream URL is lost
Source: pnetcdf Version: 1.12.3-2 Severity: normal The pnetcdf Homepage in debian/control is set to https://trac.mcs.anl.gov/projects/parallel-netcdf but this url no longer exists. Is there a new URL to replace it with?
Bug#1059621: sphinx-common: dh_sphinxdoc fails to run: "search.html does not load searchindex.js"
Package: sphinx-common Version: 7.2.6-3 Severity: normal

I'm trying to run dh_sphinxdoc on new docs generated for lammps. But dh_sphinxdoc fails:

$ dh_sphinxdoc
dh_sphinxdoc: error: debian/lammps-doc/usr/share/doc/lammps/html/search.html does not load searchindex.js

with exit code 255. Well, no kidding: search.html really does not reference searchindex.js. Why is this causing dh_sphinxdoc to fail? dh_sphinxdoc should be replacing existing .js references in the html files, not complaining about ones which are not there. Adding an -Xsearch.html option does not fix the problem.

Versions of packages sphinx-common depends on:
 ii libjs-sphinxdoc 7.2.6-3
 ii libjson-perl 4.1-1
 ii perl 5.36.0-10

Versions of packages sphinx-common recommends:
 ii python3-sphinx 7.2.6-3

sphinx-common suggests no packages.
Bug#1058876: libopenmpi-dev: paths missing /usr/include...(for fortran mpi.mod)
Hi Alastair, given the complexity around hacking openmpi to accommodate placing the mod files under /usr/include, I'm starting to wonder whether it's the best way of resolving Bug#1058526 in the first place. I did a bit of background reading on the fortran mod files. There's a fair bit of dissent about them, and no consensus on a proper location, e.g. https://fortranwiki.org/fortran/show/Library+distribution The files are binary dependent (and compiler version dependent), and it's not clear that /usr/include is the best place for them anyway. mpich seems to be fine placing them in /usr/lib/x86_64-linux-gnu/fortran/gfortran-mod-15/mpich, and openmpi seemed to be happy enough doing the same up until Bug#1058526. Is there a different way of resolving Bug#1058526 without moving the mod files to /usr/include? Drew
Bug#1058876: libopenmpi-dev: paths missing /usr/include...(for fortran mpi.mod)
On 2023-12-27 09:51, Alastair McKinstry wrote: On 27/12/2023 08:45, Drew Parsons wrote: ... I guess the problem must be the common files from openmpi-common in /usr/share/openmpi/. They're not actually arch-independent. Do mpif90.openmpi and the other components actively read them at runtime? .. This appears to be it. I've been building on arm64 recently (a VM on a mac) and don't see this. There appears to be a mechanism for including ${includedir} and ${libdir} and evaluating the wrapper-data files at runtime. My hacking on these files in d/rules is causing the errors. I'll work on a better solution. I can see at the lowest level the location is pkgdatadir at l.110 (and elsewhere) in ompi/tools/wrappers/Makefile.am Not clear if hacking it at that point will interfere with the orterun binary finding them. If not, then it could in principle be replaced with something like $(pkglibdir)/$(datadir) (i.e. in a share subdir under the openmpi libdir). Might call it "pkglibdatadir". The default value for pkgdatadir is set as $(datadir)/@PACKAGE@, l.129 in toplevel Makefile.in datadir is the autotool default ${prefix}/share (i.e. /usr/share), https://www.gnu.org/software/automake/manual/html_node/Standard-Directory-Variables.html If orterun can be trained to look for the wrapper txt files in pkglibdatadir (presumably as well as pkgdatadir, not instead of), then setting and using "pkglibdatadir" instead of pkgdatadir in ompi/tools/wrappers/Makefile.am "might" be simple and reliable. Reliability depends on whether any other component uses these wrapper txt files.
Bug#1058876: libopenmpi-dev: paths missing /usr/include...(for fortran mpi.mod)
On 2023-12-26 12:45, Drew Parsons wrote: I can manually reproduce the error trivially on an arm64 chroot (amdahl.debian.org). Copying hello.f90 from openmpi's debian/tests and manually running mpif90 -o hello hello.f90 reproduces the error reference to the x86_64 include path on the arm64 machine. `mpif90.openmpi -print-search-dirs` only shows aarch64 paths though.

I guess the problem must be the common files from openmpi-common in /usr/share/openmpi/. They're not actually arch-independent. Do mpif90.openmpi and the other components actively read them at runtime? For instance, /usr/share/openmpi/mpif90.openmpi-wrapper-data.txt contains

fmoddir=/usr/include/x86_64-linux-gnu/fortran/gfortran-mod-15

Since openmpi-common is marked Arch: all, it's only built once, on amd64, hence x86_64-linux-gnu gets carried to the other arches. The compiler_flags variable is also affected, as well as fmoddir. It looks like only the mpi fortran wrapper txts are affected:

mpif77-wrapper-data.txt
mpif77.openmpi-wrapper-data.txt
mpif90-wrapper-data.txt
mpif90.openmpi-wrapper-data.txt
mpifort-wrapper-data.txt
mpifort.openmpi-wrapper-data.txt

Should these be moved from openmpi-common to libopenmpi-dev (or openmpi-bin) at /usr/lib//openmpi/share ?
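To make the breakage concrete: the fmoddir line quoted above bakes in the build machine's multiarch triplet, so an Arch: all package ships the amd64 path to every architecture. A small illustration (the sample line is copied from the report; the triplets are the standard Debian ones):

```python
# openmpi-common is Arch: all, so the wrapper-data file generated at build
# time on amd64 ships unchanged to arm64 and every other architecture.
fmoddir_line = "fmoddir=/usr/include/x86_64-linux-gnu/fortran/gfortran-mod-15"
build_triplet = "x86_64-linux-gnu"   # triplet of the amd64 machine that built it
host_triplet = "aarch64-linux-gnu"   # triplet an arm64 install actually needs

print(build_triplet in fmoddir_line)  # True: amd64 path shipped everywhere
print(host_triplet in fmoddir_line)   # False: no arm64 path, so mpi.mod is not found
```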
Bug#1058876: libopenmpi-dev: paths missing /usr/include...(for fortran mpi.mod)
On 2023-12-26 12:31, Drew Parsons wrote: ... It's not just adios2 and sundials though. openmpi's own arm64 tests are failing on debci with a reference to x86_64-linux-gnu ... openmpi's compile_run_mpif90 test doesn't use pkgconfig anyway. It builds directly with mpif90. Could the problem be inside the mpif90.openmpi binary? That would be strange though. arm64's mpif90.openmpi oughtn't be referring to x86_64 any more than the pkgconfig file. I can manually reproduce the error trivially on an arm64 chroot (amdahl.debian.org). Copying hello.f90 from openmpi's debian/tests and manually running mpif90 -o hello hello.f90 reproduces the error reference to the x86_64 include path on the arm64 machine. `mpif90.openmpi -print-search-dirs` only shows aarch64 paths though.
Bug#1058876: libopenmpi-dev: paths missing /usr/include...(for fortran mpi.mod)
On 2023-12-26 11:00, Alastair McKinstry wrote: On 24/12/2023 10:50, Drew Parsons wrote: reopen 1058876 block 1058944 by 1058876 thanks Alas, the fix in openmpi 4.1.6-3 for the include path to the openmpi fortran modules has hardcoded x86_64-linux-gnu This is preventing builds and tests on other architectures, e.g. rebuilding sundials for the petsc transition. I think openmpi's debian/tests will also need Depends: pkg-config for the new compile_run_cc_pkgconfig test. The problem appears to be the heuristics in upstream/FindMPI.cmake in adios2 (and sundials). It happens in sid tests but not my arm64 devel environment. Debugging slowly.

It's not just adios2 and sundials though. openmpi's own arm64 tests are failing on debci with a reference to x86_64-linux-gnu, e.g.

Setting up libopenmpi-dev:arm64 (4.1.6-3) ...
update-alternatives: using /usr/lib/aarch64-linux-gnu/openmpi/include to provide /usr/include/aarch64-linux-gnu/mpi (mpi-aarch64-linux-gnu) in auto mode
Setting up autopkgtest-satdep (0) ...
Processing triggers for libc-bin (2.37-12) ...
(Reading database ... 17753 files and directories currently installed.)
Removing autopkgtest-satdep (0) ...
autopkgtest [03:14:37]: test compile_run_mpif90: [---
f951: Warning: Nonexistent include directory ‘/usr/include/x86_64-linux-gnu/fortran/gfortran-mod-15/openmpi’ [-Wmissing-include-dirs]
hello.f90:3:11:

    3 | use mpi
      |         1
Fatal Error: Cannot open module file ‘mpi.mod’ for reading at (1): No such file or directory

It's a strange error to be sure. From that error message, I thought x86_64-linux-gnu might have gotten hardcoded into the include path in ompi-f90.pc for arm64. But downloading libopenmpi-dev_4.1.6-3_arm64.deb and inspecting manually, I can see that arm64's ompi-f90.pc contains -I/usr/include/aarch64-linux-gnu/fortran/gfortran-mod-15/openmpi which would be the correct path.
I unpacked libopenmpi-dev_4.1.6-3_arm64.deb manually, but I can't find any reference to include/x86_64 inside its files. openmpi's compile_run_mpif90 test doesn't use pkgconfig anyway. It builds directly with mpif90. Could the problem be inside the mpif90.openmpi binary? That would be strange though. arm64's mpif90.openmpi oughtn't be referring to x86_64 any more than the pkgconfig file. Best of luck with debugging. Drew
Bug#1058876: libopenmpi-dev: paths missing /usr/include...(for fortran mpi.mod)
reopen 1058876 block 1058944 by 1058876 thanks Alas, the fix in openmpi 4.1.6-3 for the include path to the openmpi fortran modules has hardcoded x86_64-linux-gnu This is preventing builds and tests on other architectures, e.g. rebuilding sundials for the petsc transition. I think openmpi's debian/tests will also need Depends: pkg-config for the new compile_run_cc_pkgconfig test.
Bug#1058876: libopenmpi-dev: configured header paths don't provide /usr/include
Source: openmpi Followup-For: Bug #1058876 Control: retitle 1058876 libopenmpi-dev: paths missing /usr/include...(for fortran mpi.mod) We can see from debci that openmpi's own tests are affected by the same underlying problem. That indicates that the configuration for mpif90.openmpi itself is missing the required include paths, not just the pkgconfig files. Adjusting the bug title to match.
Bug#1059067: golang-github-hirochachacha-go-smb2: superseded by golang-github-cloudsoda-go-smb2
On 2023-12-20 00:06, Maytham Alsudany wrote: Source: golang-github-hirochachacha-go-smb2 Severity: normal X-Debbugs-Cc: dpars...@debian.org -BEGIN PGP SIGNED MESSAGE- Hash: SHA512 Dear Maintainer, In the latest upstream version of rclone, golang-github-hirochachacha-go-smb2 will be superseded by golang-github-cloudsoda-go-smb2 (in NEW queue at the time of writing). As rclone is the only package that depends on hirochachacha-go-smb2, are you fine with filing a RoM RM request as soon as the new version of rclone is uploaded? Fine by me. Supporting rclone is the important thing. Drew
Bug#1058944: transition: petsc
Package: release.debian.org Severity: normal User: release.debian@packages.debian.org Usertags: transition X-Debbugs-Cc: pe...@packages.debian.org Control: affects -1 + src:petsc

I'd like to upgrade PETSc (and SLEPc, and their python packages) from 3.18 to 3.19. This is expected to fix the python 3.12 build error in slepc4py, Bug#1057863. dolfin has now been patched to work with petsc 3.19.

We can't yet upgrade hypre to 2.29 since it is not compatible with petsc 3.19:

/projects/petsc/build/petsc/src/ksp/pc/impls/hypre/hypre.c: In function ‘PCApply_HYPRE’:
/projects/petsc/build/petsc/src/ksp/pc/impls/hypre/hypre.c:446:31: error: incompatible types when assigning to type ‘hypre_Error’ from type ‘int’
  446 | hypre__global_error = 0;

hypre upgrades will have to wait for petsc 3.20 next time.

auto-transition: https://release.debian.org/transitions/html/auto-petsc.html

Ben file:

title = "petsc";
is_affected = .depends ~ "libpetsc-real3.18" | .depends ~ "libpetsc-real3.19";
is_good = .depends ~ "libpetsc-real3.19";
is_bad = .depends ~ "libpetsc-real3.18";
Bug#1024311: ITP: golang-github-google-go-pkcs11 -- Go package for loading PKCS #11 modules
On 2023-12-18 12:32, Maytham Alsudany wrote: Hi Drew, Maytham, have you got push rights to the repo to push the updates to the upstream branch and the tags for the final upstream source commit? Yep, and I have pushed the commits. Looks good, thanks again Maytham. Uploading now. Drew
Bug#1024311: Bug#1024276: #1024276: ITP: golang-github-googleapis-enterprise-certificate-proxy -- Google Proxies for
On 2023-12-17 15:12, Drew Parsons wrote: On 2023-12-17 03:30, Maytham Alsudany wrote: Hi Drew, We've got a mechanism using debian/watch and debian/copyright to automate removing third_party/pkcs11/* and creating the 0.*.0+dfsg tarball. I suggest making a Merge Request with your fix so we can review that, and then merge it all together. Done, see https://salsa.debian.org/go-team/packages/golang-github-google-go-pkcs11/-/merge_requests/1 Thanks Maytham. I've reviewed it. Please don't erase the existing package history in debian/changelog. I had a small question about the include path for pkcs11.h (how does the build system know about nss?). The last revision of the MR looks good, I've merged it now. Maytham, have you got push rights to the repo to push the updates to the upstream branch and the tags for the final upstream source commit? Drew
Bug#1056888: scipy: ftbfs with cython 3.0.x
Source: scipy Followup-For: Bug #1056888 Using cython3-legacy sounds like a reasonable workaround. I presume the latest version of scipy has been updated to use cython 3, but we won't know until https://salsa.debian.org/salsa/support/-/issues/360 is dealt with. cf. https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1042459
Bug#1055338: Bug #1055338 override: dolfin:oldlibs/optional
For more context, note that both src:dolfin and src:fenics-dolfinx are still current (and distinct) as source packages, in the sense that they generate a different set of binary packages. src:dolfin generates python3-dolfin, while src:fenics-dolfinx generates python3-dolfinx (the next generation version of the library). It is true that python3-dolfin is considered legacy (deprecated), nevertheless the old source is still maintained for the purpose of supporting users with existing projects that have been using the old version of the library.
Bug#1057863: slepc4py ftbfs with Python 3.12
Source: slepc4py Followup-For: Bug #1057863 Correction: curexc_traceback is not used in slepc4py 3.19. Maybe it's time to upgrade to PETSc 3.19.
Bug#1057863: slepc4py ftbfs with Python 3.12
Source: slepc4py Followup-For: Bug #1057863 slepc4py does not use curexc_traceback. Perhaps this is a bug in cython?
Bug#1024311: Bug#1024276: #1024276: ITP: golang-github-googleapis-enterprise-certificate-proxy -- Google Proxies for
On 2023-12-17 03:30, Maytham Alsudany wrote: Hi Drew, We've got a mechanism using debian/watch and debian/copyright to automate removing third_party/pkcs11/* and creating the 0.*.0+dfsg tarball. I suggest making a Merge Request with your fix so we can review that, and then merge it all together. Done, see https://salsa.debian.org/go-team/packages/golang-github-google-go-pkcs11/-/merge_requests/1 Thanks Maytham. I've reviewed it. Please don't erase the existing package history in debian/changelog. I had a small question about the include path for pkcs11.h (how does the build system know about nss?). Drew
Bug#1058876: libopenmpi-dev: cmake builds do not find fortran mpi mod in /usr/include
Package: libopenmpi-dev Version: 4.1.6-2 Severity: serious Justification: ftbfs (other)

openmpi 4.1.6-2 moved the fortran mod files from /usr/lib to /usr/include. This is probably correct, but has some consequences we need to sort out. Applications using cmake for configuration are still looking for the mod files in /usr/lib, so their builds now fail. An example is adios2, with error message:

[384/1029] /usr/bin/gfortran -I/build/adios2-2.9.2+dfsg1/examples/hello/bpWriter -I/build/adios2-2.9.2+dfsg1/build-mpi/bindings/Fortran -I/usr/lib/x86_64-linux-gnu/fortran/gfortran-mod-15/openmpi -I/usr/lib/x86_64-linux-gnu/openmpi/lib -g -O2 -ffile-prefix-map=/build/adios2-2.9.2+dfsg1=. -fstack-protector-strong -fstack-clash-protection -fcf-protection -Jexamples/hello/bpWriter -fpreprocessed -c examples/hello/bpWriter/CMakeFiles/hello_bpWriter_f_mpi.dir/helloBPWriter.F90-pp.f90 -o examples/hello/bpWriter/CMakeFiles/hello_bpWriter_f_mpi.dir/helloBPWriter.F90.o
FAILED: examples/hello/bpWriter/CMakeFiles/hello_bpWriter_f_mpi.dir/helloBPWriter.F90.o
/usr/bin/gfortran -I/build/adios2-2.9.2+dfsg1/examples/hello/bpWriter -I/build/adios2-2.9.2+dfsg1/build-mpi/bindings/Fortran -I/usr/lib/x86_64-linux-gnu/fortran/gfortran-mod-15/openmpi -I/usr/lib/x86_64-linux-gnu/openmpi/lib -g -O2 -ffile-prefix-map=/build/adios2-2.9.2+dfsg1=. -fstack-protector-strong -fstack-clash-protection -fcf-protection -Jexamples/hello/bpWriter -fpreprocessed -c examples/hello/bpWriter/CMakeFiles/hello_bpWriter_f_mpi.dir/helloBPWriter.F90-pp.f90 -o examples/hello/bpWriter/CMakeFiles/hello_bpWriter_f_mpi.dir/helloBPWriter.F90.o
/build/adios2-2.9.2+dfsg1/examples/hello/bpWriter/helloBPWriter.F90:3:9:

    3 | use mpi
      |         1
Fatal Error: Cannot open module file ‘mpi.mod’ for reading at (1): No such file or directory
compilation terminated.
We can see that the compilation got configured to use

-I/usr/lib/x86_64-linux-gnu/fortran/gfortran-mod-15/openmpi

which would be why it can't find mpi.mod in /usr/include. As far as I can tell, adios2 is not making assumptions itself about the location of the mod files. I suspect the configuration is coming from cmake's FindMPI.cmake.

I guess we don't want openmpi 4.1.6-2 to migrate to testing until this issue is resolved, which is why I've marked Severity: serious.

Then there are openmpi's pkgconfig files:

$ pkg-config --cflags mpi-fort
-I/usr/lib/x86_64-linux-gnu/openmpi/include -I/usr/lib/x86_64-linux-gnu/openmpi/include/openmpi -I/usr/lib/x86_64-linux-gnu/openmpi/lib

The fortrandir variable set in ompi-fort.pc is also located in /usr/lib (perhaps it should be so). So /usr/include is not used in the include path flags. But /usr/lib is. This must be the origin of the problem. cmake's FindMPI.cmake does indeed use pkgconfig to extract the paths: "if(_MPI_PKG AND PKG_CONFIG_FOUND)"

Versions of packages libopenmpi-dev depends on:
 ii gfortran [gfortran-mod-15] 4:13.2.0-2
 ii gfortran-11 [gfortran-mod-15] 11.4.0-7
 ii gfortran-12 [gfortran-mod-15] 12.3.0-13
 ii gfortran-13 [gfortran-mod-15] 13.2.0-9
 ii libevent-dev 2.1.12-stable-8
 ii libhwloc-dev 2.10.0-1
 ii libibverbs-dev 48.0-1
 ii libjs-jquery 3.6.1+dfsg+~3.5.14-1
 ii libjs-jquery-ui 1.13.2+dfsg-1
 ii libopenmpi3 4.1.6-2
 ii libpmix-dev 5.0.1-4
 ii openmpi-bin 4.1.6-2
 ii openmpi-common 4.1.6-2
 ii zlib1g-dev 1:1.3.dfsg-3

Versions of packages libopenmpi-dev recommends:
 ii libcoarrays-openmpi-dev 2.10.1-1+b1

Versions of packages libopenmpi-dev suggests:
 pn openmpi-doc
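A sketch of why FindMPI.cmake ends up without /usr/include: it derives include directories from the pkg-config --cflags output, and every -I path there sits under /usr/lib. The flags string below is copied from the `pkg-config --cflags mpi-fort` output quoted in the report.

```python
# FindMPI splits the pkg-config cflags into -I paths; none of the resulting
# directories live under /usr/include, so mpi.mod there is never found.
cflags = ("-I/usr/lib/x86_64-linux-gnu/openmpi/include "
          "-I/usr/lib/x86_64-linux-gnu/openmpi/include/openmpi "
          "-I/usr/lib/x86_64-linux-gnu/openmpi/lib")
include_dirs = [flag[2:] for flag in cflags.split() if flag.startswith("-I")]

print(include_dirs)
print(any(d.startswith("/usr/include") for d in include_dirs))  # False
```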
Bug#1058353: libadios2-mpi-c-dev: ships headers already shipped in libadios2-common-c-dev
Source: adios2 Followup-For: Bug #1058353 Damn, something went badly wrong in debian/rules. Sorry about that. I'll fix it as soon as possible, in the meantime use the version in testing. Drew