Your message dated Wed, 18 Jun 2025 10:31:05 +0000
with message-id <[email protected]>
and subject line Re: Bug#1107726: unblock: pytorch{,-cuda}/2.6.0+dfsg-8
(pre-approval)
has caused the Debian Bug report #1107726,
regarding unblock: pytorch-cuda/2.6.0+dfsg-7+b1
to be marked as done.
This means that you claim that the problem has been dealt with.
If this is not the case it is now your responsibility to reopen the
Bug report if necessary, and/or fix the problem forthwith.
(NB: If you are a system administrator and have no idea what this
message is talking about, this may indicate a serious mail system
misconfiguration somewhere. Please contact [email protected]
immediately.)
--
1107726: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1107726
Debian Bug Tracking System
Contact [email protected] with problems
--- Begin Message ---
Package: release.debian.org
Severity: normal
User: [email protected]
Usertags: unblock
X-Debbugs-Cc: [email protected]
Please unblock package pytorch{,-cuda}
We need CUDA >= 12.4 to fix the src:pytorch-cuda FTBFS #1105066, which
is due to CUDA's gcc support being too old. But since this is a contrib
package with non-free dependencies, we have to bump the revision,
manually rebuild the binaries for all architectures, and upload them.
Anbe plans to get CUDA 12.4 into trixie, so I presume CUDA 12.4 will
enter testing soon.
src:pytorch is the CPU version and has nothing to do with this CUDA
transition. But since src:pytorch (main) and src:pytorch-cuda (contrib)
are completely identical source packages, I need to do a no-change
upload of src:pytorch to avoid the weird situation where pytorch is at
2.6.0+dfsg-7 while pytorch-cuda is at 2.6.0+dfsg-8.
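As background on why these version strings interact the way they do:
the archive orders versions with dpkg's comparison algorithm, under
which 2.6.0+dfsg-7 < 2.6.0+dfsg-7+b1 < 2.6.0+dfsg-8, and a trailing
`~` sorts before everything else. The following is a minimal Python
sketch of that ordering (a simplified illustration per Debian Policy
5.6.12, not dpkg's actual code; epochs are not handled):

```python
def _order(c):
    """Character weight per Debian Policy 5.6.12: '~' sorts before
    everything (even end-of-string); letters before other symbols."""
    if c == '~':
        return -1
    if c == '' or c.isdigit():
        return 0
    if c.isalpha():
        return ord(c)
    return ord(c) + 256

def _verrevcmp(a, b):
    """Compare one upstream-version or revision string, dpkg-style."""
    i = j = 0
    while i < len(a) or j < len(b):
        # compare runs of non-digit characters by character weight
        while (i < len(a) and not a[i].isdigit()) or \
              (j < len(b) and not b[j].isdigit()):
            ca = _order(a[i]) if i < len(a) else 0
            cb = _order(b[j]) if j < len(b) else 0
            if ca != cb:
                return ca - cb
            i += 1
            j += 1
        # then compare runs of digits numerically
        di, dj = i, j
        while i < len(a) and a[i].isdigit():
            i += 1
        while j < len(b) and b[j].isdigit():
            j += 1
        na, nb = int(a[di:i] or '0'), int(b[dj:j] or '0')
        if na != nb:
            return na - nb
    return 0

def dpkg_compare(v1, v2):
    """Negative/zero/positive like a C comparator (no epoch support)."""
    u1, _, r1 = v1.rpartition('-')
    u2, _, r2 = v2.rpartition('-')
    if not u1:            # no '-': the whole string is the upstream part
        u1, r1 = r1, ''
    if not u2:
        u2, r2 = r2, ''
    return _verrevcmp(u1, u2) or _verrevcmp(r1, r2)
```

Under this ordering a binNMU suffix like +b1 sorts above the plain
revision but below the next revision, and 12.4.0~ sorts below every
real 12.4.0 upload, which is what a versioned dependency on
`(>= 12.4.0~)` relies on.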
[ Reason ]
A manual rebuild of src:pytorch-cuda against a reasonable CUDA version.
The CPU version src:pytorch gets a no-change manual re-upload to keep
its Debian revision number in sync with src:pytorch-cuda and avoid
confusion.
[ Impact ]
This is a nearly no-risk manual binary rebuild, needed only because of
the annoyances of the non-free dependency.
[ Tests ]
It should work. If there is a regression, it is most likely a problem
in CUDA itself.
[ Risks ]
Very small: this is a manual rebuild of the package with no code
changes.
[ Checklist ]
[x] all changes are documented in the d/changelog
[x] I reviewed all changes and I approve them
[x] attach debdiff against the package in testing
[ DebDiff ]
The only change needed is to mandate the use of the latest CUDA,
preventing sbuild from picking up the old CUDA version (12.3).

diff --git a/debian/control.cuda b/debian/control.cuda
index e47874fb..3d0c9077 100644
--- a/debian/control.cuda
+++ b/debian/control.cuda
@@ -57,7 +57,7 @@ Build-Depends: cmake,
  python3-setuptools,
  python3-yaml,
  patchelf,
- nvidia-cuda-toolkit,
+ nvidia-cuda-toolkit (>= 12.4.0~),
  nvidia-cuda-toolkit-gcc,
  nvidia-cudnn (>= 8.7.0.84~cuda11.8),
  libcudnn-frontend-dev (>= 0.9.2~),
unblock pytorch{,-cuda}/2.6.0+dfsg-8
Thank you for using reportbug
--- End Message ---
--- Begin Message ---
Hi,
On Tue, Jun 17, 2025 at 11:29:31PM -0400, M. Zhou wrote:
> retitle -1 unblock: pytorch-cuda/2.6.0+dfsg-7+b1
>
> I have manually binNMU'ed the pytorch-cuda package:
> $ rmadison python3-torch-cuda
>
> python3-torch-cuda | 2.6.0+dfsg-7    | testing/contrib  | ppc64el
> python3-torch-cuda | 2.6.0+dfsg-7    | unstable/contrib | ppc64el
> python3-torch-cuda | 2.6.0+dfsg-7+b1 | testing/contrib  | amd64, arm64
> python3-torch-cuda | 2.6.0+dfsg-7+b1 | unstable/contrib | amd64, arm64
>
> It seems that the +b1 version already migrated. Is it really necessary to file
> the unblock? BTW, since binNMU works, src:pytorch (CPU-only) can be left
> intact.
The binNMU was unblocked and migrated, so there is nothing left to do
here; I'm closing this bug.
> And can we remove the ppc64el architecture for src:pytorch-cuda from testing?
> The libcuda1 dependency is no longer available on ppc64el, so pytorch-cuda
> is not installable anyway.
Once the RM bug in unstable is processed, the ppc64el binary will be
automatically removed from testing as well.
Thanks,
Ivo
--- End Message ---