Quoting Michael Tokarev (2025-08-14 01:24:00)
> On 14.08.2025 00:01, Michael Roth wrote:
> > Hello,
> >
> > On behalf of the QEMU Team, I'd like to announce the availability of the
> > fourth release candidate for the QEMU 10.1 release. This release is meant
> > for testing purposes and should not be used in a production environment.
> >
> >   http://download.qemu.org/qemu-10.1.0-rc3.tar.xz
> >   http://download.qemu.org/qemu-10.1.0-rc3.tar.xz.sig
>
> Hi Michael!
Hi Michael,

Thanks for catching this!

> This file (qemu-10.1.0-rc3.tar.xz) is incomplete - xz does not
> uncompress it. But the signature is ok. Something went wrong
> in the release process.
>
> The .bz2 one seems to be okay though.
>
> If there's some error checking missing (like we don't check for
> errors from tar - which might as well be the case in make-release.sh),
> we can fix it.. But if it were make-release.sh, it'd fail earlier
> and bz2 were bad too.

It looks like make-release.sh did its job okay and this is a PEBKAC
situation. =\

> How do you make the releases?

Roughly:

  pushd $build_dir
  $src_dir/configure --extra-cflags=-Wall
  make qemu-10.1.0-rc3.tar.xz
  popd

Then I decompress/build the tarball and run 'make check' and the iotests
on the resulting build to sanity-check the tarball, and that all was
okay.

However, I then copy the tarballs to a separate system to do the signing
and upload them to the hosting site. I saw the bz2 tarball transfer
okay, but stepped away once the xz transfer started and didn't notice
that it errored out due to running out of disk space. Subsequently, the
bz2 signing was successful, but the xz signing also triggered ENOSPC, so
that was noticed and fixed - but not the fact that the xz upload had
already failed due to ENOSPC before that. That's why the signature is
okay but the xz tarball itself is corrupted.

I'll add more error handling and a final md5 check before signing to
close that gap in testing/publishing (rough sketch at the bottom of this
mail), and hopefully that should avoid similar issues in the future.

I've reuploaded the tarball as 10.1.0-rc3-reupload to avoid conflicts
with CDN caching, which is how we've handled reuploads in the past.

Thanks,

Mike

> Thanks,
>
> /mjt
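
P.S. Here's roughly what I have in mind for the pre-signing check. This
is just a sketch - the filenames and the gpg invocation are placeholders
rather than the actual make-release.sh/publishing commands:

  # on the build box: record checksums next to the tarballs
  md5sum qemu-10.1.0-rc3.tar.xz qemu-10.1.0-rc3.tar.bz2 > md5sums.txt

  # on the signing box: verify the transferred copies before signing.
  # md5sum -c exits non-zero on any mismatch, so a truncated transfer
  # (e.g. one cut short by ENOSPC) stops things here instead of the
  # corrupted tarball getting signed and published
  md5sum -c md5sums.txt || exit 1

  for f in qemu-10.1.0-rc3.tar.xz qemu-10.1.0-rc3.tar.bz2; do
      gpg --detach-sign "$f"    # placeholder for the real signing step
  done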