Bug#1070358: clang-16: -fsanitize=fuzzer not working on trixie (but working on bookworm)
Package: clang-16
Version: 1:16.0.6-26
Severity: normal

Dear Maintainer,

I was trying to track down a regression reported by the ClusterFuzz service, and when I try to build the fuzzing reproducer on trixie, it fails. However, it works on Bookworm. So while I can work around the problem by using the Bookworm chroot, this is not a great long-term solution.

There are two problems; the first is that clang isn't finding the <algorithm> header file. Using the e2fsprogs sources with:

configure CC=clang-16 CFLAGS=-g CXX=clang++-16 --enable-fuzzing --enable-ubsan

% cd tests/fuzz
% make V=1 ext2fs_image_read_write_fuzzer
clang++-16 -c -I. -I../../lib -I/usr/projects/e2fsprogs/e2fsprogs/lib -fsanitize=undefined -g -I/usr/include/fuse3 -pthread -DHAVE_CONFIG_H -fsanitize=fuzzer /usr/projects/e2fsprogs/e2fsprogs/tests/fuzz/ext2fs_image_read_write_fuzzer.cc -o ext2fs_image_read_write_fuzzer.o
In file included from /usr/projects/e2fsprogs/e2fsprogs/tests/fuzz/ext2fs_image_read_write_fuzzer.cc:27:
/usr/lib/llvm-16/lib/clang/16/include/fuzzer/FuzzedDataProvider.h:16:10: fatal error: 'algorithm' file not found
#include <algorithm>
         ^~~~~~~~~~~
1 error generated.

The second problem is that it's not finding -lstdc++:

% make V=1 ext2fs_check_directory_fuzzer
clang++-16 -fsanitize=undefined -pthread -fsanitize=fuzzer -o ext2fs_check_directory_fuzzer ext2fs_check_directory_fuzzer.o ../../lib/libsupport.a ../../lib/libe2p.a ../../lib/libext2fs.a ../../lib/libcom_err.a -lpthread
/bin/ld: cannot find -lstdc++: No such file or directory
/bin/ld: cannot find -lstdc++: No such file or directory
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make: *** [Makefile:392: ext2fs_check_directory_fuzzer] Error 1

Now, this works on bookworm, using clang-14. I've also tried using clang-14 on trixie, and it's failing there too. I'm guessing it's some kind of missing package, but it's really unclear what the package should be.
I do have libc++-16-dev and libc++-dev installed, which is what I *think* should be the right package to get stdc++, but it's apparently not being found. Help?

-- System Information:
Debian Release: trixie/sid
  APT prefers testing
  APT policy: (900, 'testing')
Architecture: amd64 (x86_64)
Kernel: Linux 6.6.15-amd64 (SMP w/20 CPU threads; PREEMPT)
Locale: LANG=en_US.UTF-8, LC_CTYPE=en_US.UTF-8 (charmap=UTF-8), LANGUAGE not set
Shell: /bin/sh linked to /usr/bin/dash
Init: systemd (via /run/systemd/system)
LSM: AppArmor: enabled

Versions of packages clang-16 depends on:
ii  binutils                2.42-4
ii  libc6                   2.37-19
ii  libc6-dev               2.37-19
ii  libclang-common-16-dev  1:16.0.6-26
ii  libclang-cpp16t64       1:16.0.6-26
ii  libclang1-16t64         1:16.0.6-26
ii  libgcc-13-dev           13.2.0-24
ii  libllvm16t64            1:16.0.6-26
ii  libobjc-13-dev          13.2.0-24
ii  libstdc++-13-dev        13.2.0-24
ii  libstdc++6              14-20240330-1
ii  llvm-16-linker-tools    1:16.0.6-26

Versions of packages clang-16 recommends:
ii  llvm-16-dev  1:16.0.6-26
ii  python3      3.11.8-1

Versions of packages clang-16 suggests:
pn  clang-16-doc
pn  wasi-libc

-- no debconf information
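A follow-up note on how one might diagnose this. This is a sketch, not a confirmed fix; the package names at the end are my guesses:

```shell
# Show the include search path the compiler actually uses; 'algorithm'
# should show up under a libstdc++ (e.g. g++-13) or libc++ directory:
echo '#include <algorithm>' | clang++-16 -x c++ - -fsyntax-only -v

# Ask the driver where it thinks libstdc++ lives; a bare "libstdc++.so"
# echoed back (no path) means it was not found on the search path:
clang++-16 -print-file-name=libstdc++.so

# If both come up empty, install the C++ standard library development
# files; on trixie that would presumably be one of (package names are
# assumptions):
#   apt install libstdc++-13-dev                  # GNU libstdc++
#   apt install libc++-16-dev libc++abi-16-dev    # LLVM libc++ (build with -stdlib=libc++)
```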
Bug#1060221: linux-image-6.5.0-5-arm64: gcc13 crashes in parallel builds; fixed with the 6.6.9 kernel
Package: src:linux
Version: 6.5.13-1
Severity: normal

Dear Maintainer,

I am running Debian testing in a Parallels VM (6 cores, 8GB memory) running on a MacBook Air M2 15" (10 cores, 24GB memory). gcc13 is segfaulting when building xfsprogs or the Linux kernel using make -j6. Building xfsprogs using make -j2 doesn't crash. It crashes both using Debian testing or a Debian bookworm chroot.

The fact that it crashes less often when the system is more lightly loaded, and that the gcc crash is not deterministic (that is, if I rerun the make, gcc will crash on a *different* source file), aroused my suspicions. Hence, I decided to try installing linux-image-6.6.9-arm64 from Debian unstable. This made the problem go away.

The fact that there is some kind of non-deterministic crash of a userspace program (gcc13's cc1) which is resolved when using the 6.6.9 LTS kernel may mean that there is some kind of issue with the mm subsystem under memory pressure, or with the swap codepath, etc. So this may be causing some other kinds of potential silent data corruption with 6.5.13 when running under load. (No, it's not the ext4 corruption issue, since (a) that's not applicable to 6.5.13, and (b) that corruption issue involved O_SYNC / O_DIRECT writes when extending the file, and gcc uses neither O_SYNC nor O_DIRECT writes. That might be an issue for SQL Server, but not for gcc, mysql, nor postgres.)

I normally wouldn't bother filing a bug, but the Debian testing kernel is currently being blocked by a transition, and I don't know how long it's going to take to resolve the transition issue. Also, this bug may be affecting more people than just me, so I figured it would be good to give a heads up, especially since whatever the transition bug might happen to be (I don't pretend to understand it, and I wasn't able to find any information after doing some web searches), I didn't have any problem installing a newer kernel from sid.
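To make the non-determinism concrete, the reproduction loop I describe above can be sketched like this (the -j level, iteration count, and the error patterns grepped for are assumptions; adjust for the machine under test):

```shell
# Rebuild repeatedly at high parallelism and record which file the
# compiler dies on each time; a kernel/mm problem tends to show a
# different victim file on every run.
for i in $(seq 1 10); do
    make clean >/dev/null 2>&1
    if ! make -j6 >"build-$i.log" 2>&1; then
        echo "run $i failed:"
        grep -m1 -E 'internal compiler error|Segmentation fault' "build-$i.log"
    fi
done
```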
-- Package-specific info:
** Kernel log:
Boot messages should be attached, but I don't have them handy since journald has a non-persistent boot. If you really want them, though, I can boot back to the 6.5 kernel and get them extracted out.

** Model information
Parallels VM running on a MacBook Air M2 15"

** Network interface configuration:
*** /etc/network/interfaces:
source /etc/network/interfaces.d/*
auto lo
iface lo inet loopback

** PCI devices:
00:01.0 Audio device [0403]: Intel Corporation 82801I (ICH9 Family) HD Audio Controller [8086:293e]
	Subsystem: Parallels, Inc. 82801I (ICH9 Family) HD Audio Controller [1ab8:0400]
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- SERR-
	Kernel driver in use: snd_hda_intel
	Kernel modules: snd_hda_intel

00:02.0 USB controller [0c03]: Intel Corporation 82801FB/FBM/FR/FW/FRW (ICH6 Family) USB2 EHCI Controller [8086:265c] (rev 02) (prog-if 20 [EHCI])
	Subsystem: Parallels, Inc. 82801FB/FBM/FR/FW/FRW (ICH6 Family) USB2 EHCI Controller [1ab8:0400]
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV+ VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- SERR- TAbort- SERR-
	Kernel driver in use: xhci_hcd
	Kernel modules: xhci_pci

00:05.0 Ethernet controller [0200]: Red Hat, Inc. Virtio network device [1af4:1000]
	Subsystem: Parallels, Inc. Virtio network device [1ab8:0001]
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- SERR-
	Kernel driver in use: virtio-pci

00:09.0 Unassigned class [ff00]: Parallels, Inc. Virtual Machine Communication Interface [1ab8:4000]
	Subsystem: Parallels, Inc. Virtual Machine Communication Interface [1ab8:0400]
	Control: I/O- Mem+ BusMaster- SpecCycle- MemWINV+ VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- SERR-
	Kernel driver in use: prl_tg
	Kernel modules: prl_tg

00:0a.0 VGA compatible controller [0300]: Red Hat, Inc. Virtio 1.0 GPU [1af4:1050] (rev 01) (prog-if 00 [VGA controller])
	Subsystem: Parallels, Inc. Virtio 1.0 GPU [1ab8:0010]
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- SERR-
	Kernel driver in use: virtio-pci

** USB devices:
Bus 003 Device 003: ID 203a:fffb PARALLELS Virtual Keyboard
Bus 003 Device 002: ID 203a:fffc PARALLELS Virtual Mouse
Bus 003 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 001 Device 004: ID 203a:fffa PARALLELS Virtual Printer (snap)
Bus 001 Device 003: ID
Bug#1034000: snapshot.debian.org: Unusual number of 503s on snapshot.d.o
Package: snapshot.debian.org
Followup-For: Bug #1034000

I'm seeing similar failures for a number of packages in the last three snapshots listed at:

   http://snapshot.debian.org/archive/debian/?year=2023&month=8

Namely for:

 * 2023-08-06 09:19:12
 * 2023-08-07 15:08:23
 * 2023-08-09 03:23:42

As noted in [1] and in the Debian bug #1031628 [2], we are also missing snapshots, but this is different --- even for the snapshots that are listed as present, they are missing packages, which is this bug (#1034000).

[1] https://lists.debian.org/debian-devel/2023/08/msg00014.html
[2] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1031628

Quoting from [2],

>> Not sure if/how this relates to #1029744 about debian-ports (0 in
>> January, 1 in February).
>
> Yes, it's the same thing; not much we can do about it at the moment.

However, when I looked at Debian bug #1029744 there wasn't an explanation about what might be causing this spate of missing snapshots and missing packages in snapshots. If someone on the snapshot.debian.org team could provide some color commentary about what might be going wrong (are we running out of disk space? CPU? Memory? Personnel?), any explanation, with suggestions of what help might be needed to address these problems, would be greatly appreciated.

Many thanks!!
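For what it's worth, the three snapshots above can be spot-checked with a loop like this (the timestamp-based URL layout is snapshot.debian.org's standard scheme; the Release file is just one example path, and intermittent 503s show up as a mix of 200s and 503s across runs):

```shell
# Probe one file in each affected snapshot and report the HTTP status.
base=http://snapshot.debian.org/archive/debian
for ts in 20230806T091912Z 20230807T150823Z 20230809T032342Z; do
    code=$(curl -s -o /dev/null -w '%{http_code}' "$base/$ts/dists/sid/Release")
    echo "$ts -> HTTP $code"
done
```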
Bug#1040622: systemd-sysv: reboot doesn't honor the grub-reboot settings; reboot -f does
Package: systemd-sysv
Version: 252.6-1
Severity: normal

Dear Maintainer,

* What led up to the situation?

I was updating the gce-xfstests[1] test appliance to Debian Bookworm from Debian Bullseye.

[1] https://thunk.org/gce-xfstests

* What exactly did you do (or not do) that was effective (or ineffective)?

Unfortunately kexec has not been reliable ever since sometime after the 5.4 kernel, at least on Google Compute Engine VMs. (About 30-40% of the time, the VM hangs after the kexec; about 10% of the time, the machine is up, but it is very slow and limping, and /proc/interrupts shows that some interrupt channel is going wild. This is no doubt a kernel bug interacting with some virtual hardware in the GCE VM, but I've never been able to debug it.)

Because of the issues with kexec, the primary way that I reboot into the kernel that I want to test is to install the kernel as a dpkg package, then examine /boot/grub/grub.cfg to find out where it was inserted into grub's menu listing, and then run a command like "grub-reboot 1>4", where the number is found by examining the grub.cfg file. An example of how this works can be found here[2].

[2] https://github.com/tytso/xfstests-bld/blob/9bae3253d57456987d995cf85379e9165e054381/test-appliance/files/usr/local/lib/gce-load-kernel#L169

This works *just* *fine* when using Debian Bullseye (which is using systemd 247.3-7+deb11u2).

* What was the outcome of this action?

Unfortunately, this no longer works in Debian Bookworm (with systemd 252.6-1). In Debian Bookworm, the grub-reboot(8) setting is ignored after triggering a reboot via /sbin/reboot.
Connecting to the serial console, it appears that "reboot" is going through some code path that does NOT involve triggering a BIOS message and going through grub, where (assuming GRUB_TIMEOUT is set to some non-zero value like 15) the grub menu would be displayed, and after 15 seconds, it would boot the kernel specified by grub-reboot, and then clear the next_entry entry in /boot/grub/grubenv.

Instead, systemd appears to boot the default kernel, ignoring /boot/grub/grubenv, so I don't enter the kernel which I had just installed and had selected via the grub-reboot(8) command. Looking at /boot/grub/grubenv, it still has the "next_entry=1>4" set by grub-reboot(8), so it appears that /boot/grub/grubenv is being completely ignored by reboot.

*However*, it is properly handled if I use reboot -f. The "reboot -f" command will briefly show the BIOS version information, and then the grub menu, and then will boot the kernel selected by grub-reboot(8), and afterwards the next_entry field is cleared from /boot/grub/grubenv. So this is not a "grub" problem, but a problem in the reboot path selected by systemd when doing a clean shutdown via the "reboot" command. As near as I can tell, grub is being bypassed, or at least grub is getting called in some way which causes it to ignore the /boot/grub/grubenv parameter. In Debian Bullseye, "reboot" works like "reboot -f", in that we go through the BIOS initialization step, printing the BIOS version, and then grub is invoked in such a way that /boot/grub/grubenv is honored.

* What outcome did you expect instead?

The "reboot" command should go through the normal, full reboot sequence, such that grub-reboot(8) works correctly. As a workaround, I've replaced

    reboot
    sleep 60

with

    sync /boot
    sync
    sleep 1
    reboot -f
    sleep 60

However, it would be desirable to let the system go through a full, clean shutdown, with file systems properly unmounted or remounted read-only in the case of the root file system.
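For reference, the one-shot entry state around a reboot can be checked like this (grub-editenv is the standard tool for inspecting the environment block; the "1>4" entry id is just the example from the report):

```shell
# Arm the one-shot boot entry and confirm it was recorded:
grub-reboot '1>4'
grub-editenv /boot/grub/grubenv list    # expect: next_entry=1>4

reboot    # on Bookworm this leaves next_entry set; "reboot -f" honors and clears it

# After the machine comes back up:
grub-editenv /boot/grub/grubenv list    # next_entry should be gone
uname -r                                # confirm which kernel actually booted
```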
-- System Information:
Debian Release: 12.0
  APT prefers stable
  APT policy: (900, 'stable')
Architecture: amd64 (x86_64)
Kernel: Linux 6.1.0-9-amd64 (SMP w/20 CPU threads; PREEMPT)
Locale: LANG=en_US.UTF-8, LC_CTYPE=en_US.UTF-8 (charmap=UTF-8), LANGUAGE not set
Shell: /bin/sh linked to /usr/bin/dash
Init: systemd (via /run/systemd/system)
LSM: AppArmor: enabled

Versions of packages systemd-sysv depends on:
ii  systemd  252.6-1

Versions of packages systemd-sysv recommends:
ii  libnss-systemd  252.6-1
ii  libpam-systemd  252.6-1

systemd-sysv suggests no packages.

-- no debconf information
Bug#1006253: iwd is crashing with a segfault
Package: iwd
Version: 1.24-1
Severity: important

Dear Maintainer,

After upgrading to iwd 1.24-1, it is crashing on my system. (Dell XPS-13 with an ath10k/QCA6174/hw3.0 wireless card.) Downgrading to iwd 1.21-3 allows things to work. Reinstalling iwd 1.24-1 causes it to crash again.

If I run /usr/libexec/iwd -debug, it displays the following, and then immediately dies with a segmentation fault:

Wireless daemon version 1.24
Loaded configuration from /etc/iwd/main.conf
station: Network configuration is disabled.
Wiphy: 0, Name: phy0
	Permanent Address: 9c:b6:d0:88:b6:0d
	2.4Ghz Band:
		Bitrates (non-HT): 1.0 Mbps 2.0 Mbps 5.5 Mbps 11.0 Mbps 6.0 Mbps 9.0 Mbps 12.0 Mbps 18.0 Mbps 24.0 Mbps 36.0 Mbps 48.0 Mbps 54.0 Mbps
		HT Capabilities: HT40 Short GI for 20Mhz Short GI for 40Mhz
		HT RX MCS indexes: 0-15
	5Ghz Band:
		Bitrates (non-HT): 6.0 Mbps 9.0 Mbps 12.0 Mbps 18.0 Mbps 24.0 Mbps 36.0 Mbps 48.0 Mbps 54.0 Mbps
		HT Capabilities: HT40 Short GI for 20Mhz Short GI for 40Mhz
		HT RX MCS indexes: 0-15
		VHT Capabilities: Short GI for 80Mhz
		Max RX MCS: 0-9 for NSS: 2
		Max TX MCS: 0-9 for NSS: 2
	Ciphers: CCMP TKIP BIP
	Supported iftypes: ad-hoc station ap p2p-client p2p-go p2p-device
Segmentation fault

I will attach the output of running iwmon while trying to start iwd.

-- System Information:
Debian Release: bookworm/sid
  APT prefers testing
  APT policy: (900, 'testing')
Architecture: amd64 (x86_64)
Kernel: Linux 5.16.0-1-amd64 (SMP w/8 CPU threads; PREEMPT)
Locale: LANG=en_US.UTF-8, LC_CTYPE=en_US.UTF-8 (charmap=UTF-8), LANGUAGE not set
Shell: /bin/sh linked to /usr/bin/dash
Init: systemd (via /run/systemd/system)
LSM: AppArmor: enabled

Versions of packages iwd depends on:
ii  init-system-helpers  1.62
ii  libc6                2.33-6
ii  libell0              0.48-0.1
ii  libreadline8         8.1.2-1

Versions of packages iwd recommends:
ii  dbus [dbus-system-bus]  1.12.20-3
ii  wireless-regdb          2021.08.28-1

iwd suggests no packages.

-- no debconf information

iwmon.log.gz
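In case it helps with triage, a backtrace for the crash could presumably be captured along these lines (a sketch; the iwd-dbgsym package name assumes the debian-debug archive is enabled):

```shell
# Install gdb (and ideally iwd-dbgsym for symbols), then run iwd under
# gdb and print a full backtrace when the SIGSEGV hits:
gdb -batch -ex run -ex 'bt full' --args /usr/libexec/iwd -debug
```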
Bug#992469: debhelper: dh_installsystemd installs services in /usr/lib/systemd/system/
Package: debhelper
Version: 13.4+nmu1
Followup-For: Bug #992469

I can confirm Samuel's analysis. When I tried debugging why building e2fsprogs was causing a new lintian error:

E: e2fsprogs: systemd-service-file-outside-lib usr/lib/systemd/system/e2scrub@.service
N:
E: systemd-service-file-outside-lib
N:
N:   The package ships a systemd service file outside /lib/systemd/system/
N:
N:   Systemd in Debian searches for unit files in /lib/systemd/system/ and
N:   /etc/systemd/system. Notably, it does *not* look in
N:   /usr/lib/systemd/system/ for service files.
N:
N:   System administrators should have the possibility to overwrite a
N:   service file (or parts of it, in newer systemd versions) by placing a
N:   file in /etc/systemd/system, so the canonical location used for
N:   service files is /lib/systemd/system/.
N:
N:   Severity: error
N:
N:   Check: systemd
N:

My service files were *originally* in debian/e2fsprogs/lib/systemd, as they should have been:

{/tmp/gbp/e2fsprogs-1.46.4} 1008% dh_install -v -pe2fsprogs
	cp --reflink=auto -a debian/tmp/etc debian/e2fsprogs//
	install -d debian/e2fsprogs//lib/systemd
	cp --reflink=auto -a debian/tmp/lib/systemd/system debian/e2fsprogs//lib/systemd/
	...

But then dh_installsystemd is doing something cra-cra-crazy:

{/tmp/gbp/e2fsprogs-1.46.4} 1009% dh_installsystemd -v
	cp --reflink=auto -a debian/e2fsprogs/usr/lib/systemd/system debian/e2fsprogs/lib/systemd
	rm -fr debian/e2fsprogs/usr/lib/systemd
	mv -f debian/e2fsprogs/lib/systemd debian/e2fsprogs/usr/lib/systemd
	    ^^^ ?!?

Please fix ASAP. This is going to block anyone from being able to build and upload packages that contain systemd service files.

Many thanks,

- Ted
Bug#989630: mke2fs with size limit and default discard will discard data after size limit
tags 989630 +pending
thanks

I finally had time to investigate this problem. It turns out the only time this bug manifests is when creating a file system smaller than (blocksize)**2 bytes (e.g., 16 megabytes when the block size is 4k). The bug was introduced almost ten years ago (September 2011), and apparently no one noticed until you did!

Thanks for providing a repro, BTW; I had initially tried reproducing this with a file system size larger than 16MB, and I couldn't see the problem. But when I used your precise reproduction instructions, I finally figured out what was going on. The patch to fix this is attached below.

- Ted

commit 6568ba325e54a2ae1d2617c5175936c819ab4c8c
Author: Theodore Ts'o
Date:   Sun Jul 18 09:15:28 2021 -0400

    mke2fs: only try discarding a single block to test if discard works

    Commit d2bfdc7ff15c ("Use punch hole as "discard" on regular files")
    added a test to see if the storage device actually supports discard.
    The intent was to try discarding the first block, but since
    io_channel_discard() interprets the offset and count arguments in
    blocks, and not bytes, mke2fs was actually discarding the first 16
    megabytes (when the block size is 4k).

    This is normally not a problem, since most file systems are larger
    than that, and requests to discard beyond the end of the block device
    are ignored. However, when creating a small file system as part of an
    image containing multiple partitions, the initial test discard can
    end up discarding data beyond the file system being created.
    Addresses-Debian-Bug: #989630
    Reported-by: Josh Triplett
    Fixes: d2bfdc7ff15c ("Use punch hole as "discard" on regular files")
    Signed-off-by: Theodore Ts'o

diff --git a/misc/mke2fs.c b/misc/mke2fs.c
index 9fa6eaa7..5a35e9ef 100644
--- a/misc/mke2fs.c
+++ b/misc/mke2fs.c
@@ -2794,7 +2794,7 @@ static int mke2fs_discard_device(ext2_filsys fs)
 	struct ext2fs_numeric_progress_struct progress;
 	blk64_t blocks = ext2fs_blocks_count(fs->super);
 	blk64_t count = DISCARD_STEP_MB;
-	blk64_t cur;
+	blk64_t cur = 0;
 	int retval = 0;
 
 	/*
@@ -2802,10 +2802,9 @@
 	 * we do not print numeric progress resulting in failure
 	 * afterwards.
 	 */
-	retval = io_channel_discard(fs->io, 0, fs->blocksize);
+	retval = io_channel_discard(fs->io, 0, 1);
 	if (retval)
 		return retval;
-	cur = fs->blocksize;
 	count *= (1024 * 1024);
 	count /= fs->blocksize;
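The factor involved is easy to see with a little shell arithmetic: since io_channel_discard() takes its count in blocks, passing fs->blocksize (intended as a byte count) discards blocksize*blocksize bytes:

```shell
blocksize=4096
# Buggy call: a count of $blocksize *blocks*, i.e. blocksize^2 bytes:
buggy=$((blocksize * blocksize))
echo "$buggy bytes = $((buggy / 1024 / 1024)) MiB discarded"   # 16777216 bytes = 16 MiB
# Fixed call: a count of 1 block:
echo "$((1 * blocksize)) bytes discarded"                      # 4096 bytes
```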
Bug#987641: Bug#988830: [pre-approval] unblock e2fsprogs [Was: Bug#987641: e2fsprogs: FTBFS on armel/armhf with a 64-bit kernel]
On Thu, May 20, 2021 at 05:55:34PM +0200, Cyril Brulebois wrote:
> Paul Gevers (2021-05-20):
> > On 20-05-2021 00:11, Theodore Y. Ts'o wrote:
> >
> > Unfortunately, there was no release.debian.org bug to track this. Due
> > to the current high volume to our list, this fell from the radar. To
> > avoid this I now generate a pre-approval unblock request to discuss
> > this, because then it shows up in our tools. Please follow up there.
>
> Yes, and I see a question was raised for the Installer team but
> debian-boot@ wasn't cc'd, and we aren't psychic. :)

My apologies for not getting the process right by opening a pre-approval bug earlier!

> > Can you elaborate where you see the *risks* of the patch? Is this
> > patch backwards compatible? I.e. does it work correctly on data
> > generated with the old e2fsprogs? If not, what must the user do to
> > avoid issues? Should it be mentioned in the release notes?

That patch is rather long, but it's mostly all of the form:

-	tail = (struct ext4_fc_tail *)ext4_fc_tag_val(tl);
+	memcpy(&tail, ext4_fc_tag_val(tl), sizeof(tail));

So the risks are very low.

> > Apart from the failing test cases, I see in the patch description that
> > there's also real use cases impacted (corner cases if I interpret them
> > right). IIUC these are no regressions but I'd like to be sure. And
> > what's the impact for users of those corner cases (especially the new
> > Linux feature, I would expect that some users would be going to use
> > those).

Ext4 fast commits is a relatively new feature which is not enabled by mke2fs by default. It's a pretty cool feature in that it can result in some very impressive performance increases (75-130% improvements on some benchmarks), but there are still some rough edges. So in general it's not something that an "enterprise distro" would be supporting, although I imagine there will be some intrepid Debian Stable users who might want to try using it.
The real-world corner cases are if you are using a 32-bit arm binary on a 64-bit kernel, or if you are using a sparc64 system (not an officially supported Debian arch). I'm not sure if misaligned pointer accesses are allowed in arm-32 kernel code, but they are definitely not supported on sparc64, so there is also a kernel-side patch needed for those platforms that will be in 5.13 (landing upstream in 2-3 weeks).

There are a number of other minor bug fixes that I might want to include at the same time, but none of them are ones that I can honestly call "release-critical". Perhaps the one that I would want to pull in, and which is very low risk, is:

    libext2fs: fix missing mutex unlock in an error path of the Unix I/O manager

    Originally from https://github.com/tytso/e2fsprogs/pull/68

    Signed-off-by: Alexander Kanavin
    Signed-off-by: Theodore Ts'o

Cheers,

- Ted
Bug#987641: [PING to debian-release] Re: Bug#987641: e2fsprogs: FTBFS on armel/armhf with a 64-bit kernel
Ping to the debian-release bug.

Do you want me to upload a fix for this bug, where e2fsprogs fails its regression test (and thus its package build) when armhf and armel are running on a 64-bit ARM platform, even though it was built successfully when run on a 32-bit ARM builder? No question this is a real bug, and it is fixed upstream already. But do you want me to upload a fix *now*, during the hard freeze, given the impact on the installer, et al.?

Thanks!

- Ted

On Mon, May 03, 2021 at 06:24:54PM -0400, Theodore Ts'o wrote:
> On Mon, May 03, 2021 at 11:00:37PM +0200, Aurelien Jarno wrote:
> >
> > Maybe I should give a bit of context here. First of all, there is one armhf
> > buildd, arm-arm-01, setup as an arm64 machine with a 32-bit armhf chroot. It
> > has been setup following a study from Steve McIntyre [1]. It appears
> > that e2fsprogs first failed to build there [2] and got requeued on another
> > buildd where it succeeded.
> >
> > Now with my DSA and buildd maintainer hat on, we have been experiencing
> > quite a lot of VM crashes when building packages in 32-bit armhf/armel VMs
> > on arm64 machines, so we have recently stopped using VMs to build them and
> > instead rely on chroots.
>
> Thanks for the context. I had indeed noticed shortly after 1.46.2-1
> was released that it had failed on the first armhf buildd, and then
> when it was retried, it got successfully built. Given that this was
> right before the Bullseye release freeze hardened, this had been on my
> radar screen to fix, since it was clearly non-optimal, but I had
> assumed that it would be OK to let things slide until after the
> Bullseye release, since after all e2fsprogs 1.46.2-1 *did*
> successfully get built on armhf.
>
> For me, this is really a question of timing. It will definitely be
> the case that the next source upload of e2fsprogs will have the
> armhf/armel build fix. The question I have is whether I should upload
> the fix before Bullseye releases, or after the Bullseye release.
>
> What is the impact on the buildd and DSA support effort if we wait
> until after the Debian 11.0 release? What is the pain if we leave
> this unfixed until Bullseye releases (I'm assuming that it's going to
> be released soon)? The buildds aren't going to be rebuilding
> e2fsprogs until the next source upload, I would think.
>
> Contrariwise, what is the impact on the Debian Release and Debian
> Installer teams if I push out a bug-fix-only e2fsprogs source package
> in the next week or so?
>
> I'll do what is least disruptive for all of the relevant teams. Let
> me know what's preferred.
>
> Cheers,
>
> - Ted
Bug#987353: CVE-2020-8903 CVE-2020-8907 CVE-2020-8933
On Thu, May 13, 2021 at 09:56:53PM +0100, Marcin Kulisz wrote:
>
> I hope that we're be able to change it, but for me fundamental
> question is if Google is interested in participating in effort to
> keep those packages in Debian main and if so what resources can be
> committed to do so. From my side I can say that I'll try to find
> time to work on the relevant packages or to sponsor uploads if
> somebody else want to take on this task.

I'd be interested in helping; while I happen to work for Google, this would only be in my personal capacity. One caveat, though, which is why I've hesitated in replying, is that I don't have any experience packaging Python applications. In particular, if the Google SDK requires Python packages that are either newer or older than what is packaged in Debian, Debian's prohibition of private copies of dependencies could make this quite painful (if nothing else, simply testing to make sure things still work with the variations in the Python packages available in Debian).

- Ted
Bug#970176: aka new upstream release
On Tue, Oct 27, 2020 at 12:24:51AM +0100, Diederik de Haas wrote:
> Package: f2fs-tools
> Version: 1.11.0-1.1
> Followup-For: Bug #970176
>
> On salsa.d.o it looks like version 1.13 was ready to go, but it appears
> that it just never got uploaded to the archives/sid.
> It would be really welcome to have a more recent version of f2fs-tools
> in Bullseye and as 1.14 has been released/tagged upstream, it would be
> great to have that one. (But 1.13 would be an improvement as well)
>
> The freeze is only a couple of months away...

Unfortunately, it can take **months** for the package to go through the NEW queue, since f2fs-tools is constantly making ABI-breaking changes, so we have to bump the shared library version, causing it to have to go through ftp-masters. As such, the super-long latency is super-demotivating, so I've just not bothered in a while.

I suspect what we should do is drop the shared libraries altogether, and just ship statically linked executables, since the f2fs upstream is constitutionally incapable of keeping shared library ABIs stable. So it's on my todo list, but that's probably what needs to happen so we can actually get something posted to sid without being stalled for months and months in the NEW queue.

I'll try to get to it at some point, but if someone else wants to work on making the necessary changes to the package, help is welcome. Things are just a bit busy at the moment, and I've only been working on f2fs for gce/kvm-xfstests, and testing f2fs hasn't been high on my priority list. If someone else wants to be a co-maintainer, that would be great, since I was only stepping up because no one *else* was updating f2fs-tools, and f2fs-tools was super-out-of-date when I first uploaded an f2fs-tools update.

Cheers,

- Ted
Bug#969495: Unable to recover file with extundelete
reassign 969495 extundelete
thanks

extundelete works by looking at the jbd2 journal file and trying to find old metadata blocks left over in older transactions. It's not part of e2fsprogs, but its own separate package.

The extundelete program is a massive abstraction violation, and whether or not it works is essentially an accident. The ext4 developers don't consider themselves bound by any kind of guarantee that extundelete will continue to work in the future. We aren't going to deliberately break it, but if we add new features to make ext4 more flexible or robust (which would be the case with the metadata checksum feature), and extundelete happens to break, our reaction will be: ¯\_(ツ)_/¯

My suggestion is that you use regular backups and/or userspace solutions such as the trash-cli package, which implements the Freedesktop.org Trash Can specification:

   https://specifications.freedesktop.org/trash-spec/trashspec-1.0.html

It may be possible to teach extundelete to deal with the combination of 64-bit and metadata_csum, but then as we add new features such as the Fast Commit[1] feature, which improves ext4's benchmark performance by 21% to 192% depending on the workload (and we are looking to see if we can use transaction batching to further improve Fast Commit's numbers), extundelete is going to break again.

[1] https://lwn.net/Articles/826620/

If the extundelete upstream author and/or Debian package maintainer wants to update extundelete to keep up with new ext4 features, that would be great. If not: ¯\_(ツ)_/¯

Cheers,

- Ted

On Thu, Sep 03, 2020 at 09:10:12PM +0200, Jonas Jensen wrote:
> Package: e2fsprogs
> Version: 1.44.5-1+deb10u3 (armhf)
>
> A deleted file can not be recovered on EXT4 partitions when created
> using the "64bit" and "metadata_csum" flags.
> See test case below, I run Debian 10 buster on a BananaPi BPI-M1.
> The EXT4 partition is re-created on the same drive and partition.
> I re-tried this multiple times, always with the same result: the test
> file could always be recovered when both "64bit" and "metadata_csum"
> were unset in the file system, and it could not be recovered when they
> were set.
>
> mkfs.ext4 -O has_journal,ext_attr,resize_inode,dir_index,filetype,extent,flex_bg,sparse_super,large_file,huge_file,uninit_bg,dir_nlink,extra_isize,^64bit,^metadata_csum /dev/sdl1
> mke2fs 1.44.5 (15-Dec-2018)
> /dev/sdl1 contains a ext4 file system
> 	last mounted on /mnt on Thu Sep  3 20:15:24 2020
> Proceed anyway? (y,N) y
> Creating filesystem with 366284390 4k blocks and 91578368 inodes
> Filesystem UUID: 6072e511-7f38-4685-9e5b-e168bfef5ed4
> Superblock backups stored on blocks:
> 	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
> 	4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
> 	102400000, 214990848
>
> Allocating group tables: done
> Writing inode tables: done
> Creating journal (262144 blocks): done
> Writing superblocks and filesystem accounting information: done
>
> mount /dev/sdl1 /mnt && echo "delete" > /mnt/un && sync && rm /mnt/un && sync && umount /mnt
>
> rm restore/* && extundelete --restore-all -o restore /dev/sdl1 && cat restore/un
> NOTICE: Extended attributes are not restored.
> Loading filesystem metadata ... 11179 groups loaded.
> Loading journal descriptors ... 15 descriptors loaded.
> Searching for recoverable inodes in directory / ...
> 1 recoverable inodes found.
> Looking through the directory structure for deleted files ...
> 0 recoverable inodes still lost.
> delete
>
> mkfs.ext4 -O has_journal,ext_attr,resize_inode,dir_index,filetype,extent,flex_bg,sparse_super,large_file,huge_file,uninit_bg,dir_nlink,extra_isize,64bit,metadata_csum /dev/sdl1
> mke2fs 1.44.5 (15-Dec-2018)
> /dev/sdl1 contains a ext4 file system
> 	last mounted on /mnt on Thu Sep  3 20:18:22 2020
> Proceed anyway? (y,N) y
> Creating filesystem with 366284390 4k blocks and 91578368 inodes
> Filesystem UUID: 5e744287-578e-4dc5-adf0-5e780bee2fdf
> Superblock backups stored on blocks:
> 	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
> 	4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
> 	102400000, 214990848
>
> Allocating group tables: done
> Writing inode tables: done
> Creating journal (262144 blocks): done
> Writing superblocks and filesystem accounting information: done
>
> mount /dev/sdl1 /mnt && echo "delete" > /mnt/un && sync && rm /mnt/un && sync && umount /mnt
>
> rm restore/* && extundelete --restore-all -o restore /dev/sdl1 && cat restore/un
> NOTICE: Extended attributes are not restored.
> Loading filesystem metadata ... 11179 groups loaded.
> Loading journal descriptors ... 0 descriptors loaded.
> Searching for recoverable inodes in directory / ...
> 0 recoverable inodes found.
Bug#574292: resize2fs still risky
On Thu, May 21, 2020 at 10:04:54AM +0200, Wilmer van der Gaast wrote: > I just started a downsizing resize2fs operation over an SSH session > without screen and then realised how bad an idea that was.. > > Then found this bug as a confirmation. :-) > > It looks like resize2fs (still) doesn't install any signal handlers so > I've just disabled auto-suspend on my laptop and will hope for the best. > > tytso@ mentions the danger of -M by the way, I guess that danger applies > to *any* downsizing operation, not just to bare-minimum resizes done by -M? The reason why we've never bothered with signal handlers is that it won't help against unclean/uncommanded shutdowns --- such as auto-suspend on your laptop. In *general* things are mostly safe unless you need to do an overlapping move of the inode table, in which case it is very hard to make it 100% safe. You can use resize2fs -z undo_file, but that's currently not super-safe, because we aren't forcing a flush between every I/O operation --- because that would be disastrous in terms of performance, and users would then be complaining about how slow the resize operation was, and how it was increasing their SSD write wear. Basically, it's hard to keep everyone happy. If you would like to help, you could try running resize2fs -p on a test file system, randomly interrupt the resize at various points, and then run e2fsck -fy to see during which phase of the resize things get corrupted. It's not a high priority for me, since I have way too many other things to worry about; if it is high priority for *you*, some contributions to the effort would be much appreciated. After all, open source means you get to help fix the things you care about. :-) - Ted
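Ted's suggested experiment is easy to script. A rough sketch (the image path, sizes, and sleep interval here are invented, and a scratch image file stands in for a real block device):

```shell
# Shrink a freshly made scratch file system, kill resize2fs partway
# through, then see whether e2fsck can pick up the pieces.
IMG=$(mktemp /tmp/resizetest.XXXXXX)
dd if=/dev/zero of="$IMG" bs=1M count=64 status=none
mke2fs -q -F "$IMG"
e2fsck -fy "$IMG" >/dev/null          # shrinking requires a fresh fsck

resize2fs -p "$IMG" 32M >/dev/null 2>&1 &
sleep 0.05                            # interrupt at an arbitrary point
kill -9 $! 2>/dev/null
wait $! 2>/dev/null

e2fsck -fy "$IMG" >/dev/null 2>&1
echo "e2fsck exit code: $?"           # 0-2: repaired; 4 or more: real trouble
rm -f "$IMG"
```

Running this in a loop while varying the sleep is the "randomly interrupt the resize at various points" part; any run where the final e2fsck exit code is 4 or higher is a data point worth reporting.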
Bug#958982: Fwd: Slow disk resize during boot on VMs with Debian 10 Buster images
On Mon, Apr 27, 2020 at 09:01:56AM -0700, Igor Dvorzhak wrote: > Package: e2fsprogs > > resize2fs takes extra 90 seconds to resize 2TB boot disk during boot on > Debian 10 than on Debian 9. > > Note that time to create/provision VM instance (gcloud compute instances > create ...) is the same (around 10 seconds) for both Debian 9 and Debian > 10, but time to successful SSH is different (see while loop in my test > command) - this is when VM is actually booted, not when gcloud instances > create ... returns. How the root file system is resized on Debian 9 and Debian 10 is very different, and the difference seems to stem from Google's initramfs scripts. In Debian 9, the root file system is resized after it is mounted. We can see this in the kernel messages. In Debian 10, the root file system is resized in the initramfs, and the root file system isn't even mounted until 168 seconds after the system is booted. There are two scripts in Debian 10's initramfs on GCE, and they are both marked "Copyright 2018 Google Inc."; they are scripts/expand-lib.sh and local-premount/expand_rootfs. These scripts are not in stock Debian's /usr/share/initramfs-tools directory, so they are a Google special. What the bash function resize_filesystem in scripts/expand-lib.sh does is run e2fsck on the root partition and then do an off-line resize. It is the off-line resize which appears to take a long time. Looking at the off-line resize, it does do a bit more I/O. In particular, it's going to do 16k or so extra 4k random writes which aren't done with an on-line resize. That shouldn't translate to PD taking 90 seconds to do those writes, though! That being said, the differences do appear to be in how Google does the file system resizing before the file system is mounted. I'm not sure why someone decided it would be a good idea to do an off-line resize in Debian 10, but that appears to be a Google decision, not a Debian decision. 
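For comparison, the two resize styles can be sketched like this (all paths and sizes are invented; a scratch image plays the role of the GCE persistent disk, and the on-line variant is shown only as a comment because it needs a mounted file system and root privileges):

```shell
# Off-line resize, the way the Debian 10 GCE initramfs does it:
# fsck the unmounted fs first, then run resize2fs on it.
IMG=$(mktemp /tmp/growtest.XXXXXX)
truncate -s 16M "$IMG"
mke2fs -q -F "$IMG"
truncate -s 64M "$IMG"                # the "disk" just got bigger
e2fsck -fy "$IMG" >/dev/null          # mandatory before an off-line resize
resize2fs "$IMG" >/dev/null 2>&1      # grow the fs to fill the image
dumpe2fs -h "$IMG" 2>/dev/null | grep 'Block count'
rm -f "$IMG"

# On-line resize, the Debian 9 way, happens after mount and needs no
# prior fsck:   mount /dev/sda1 /mnt && resize2fs /dev/sda1
```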
> I also tested this with the newer 1.45.5-2 e2fsprogs version on Debian 10 > (updated from buster-backports repo) and Ubuntu 20.04 LTS. But only Debian > 10 VM still have this regression, Ubuntu 20.04 LTS doesn't have it, so it > seems that this is a Debian 10-specific issue. > > Are there any configuration options that allow to restore Debian 9 behavior > in Debian 10 for resize2fs during VM boot time? No, it looks like it would require making some changes to the GCE init scripts in the Debian 10 image. I suspect it means needing to dig around at: https://github.com/GoogleCloudPlatform/compute-image-packages Cheers, - Ted
Bug#956509: e2fsck with shared_blocks doesn't like shared bitmap blocks
On Sun, Apr 12, 2020 at 01:25:49AM -0700, Josh Triplett wrote: > Package: e2fsprogs > Version: 1.45.6-1 > Severity: wishlist > File: /sbin/e2fsck > > With a read-only filesystem and the shared_blocks option set, e2fsck > allows multiply referenced blocks; however, it doesn't like multiple > references to inode bitmap blocks or block bitmap blocks. I suppose, but I'm not sure it's worth the effort. If the block group is not used, then we won't have an allocation bitmap at all. And in other cases, it is highly unlikely that we can share the allocation bitmap unless 100% of the blocks (or inodes) in that block group are in use. In that case, we'll save 1/32768, netting a space savings of 0.003%. The way we actually create a shared_blocks file system today, the only thing that can get shared is data blocks --- we keep a hashmap of each block and its checksum, and so we can easily dedup data blocks. But for allocation bitmaps, we would need to construct the file system first, and then dedup all blocks, instead of finding duplicates while we are writing the file system. Do you have a use case where you really do want a new way of creating a shared_blocks file system, where the 0.003% savings would be worthwhile? - Ted
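The dedup idea Ted describes --- keep a checksum for every block, and blocks that hash the same are sharing candidates --- can be illustrated crudely from the shell (a toy sketch, not what the e2fsprogs tooling actually does; all paths are invented):

```shell
# Carve a file into 4k "blocks", checksum each one, and report how many
# distinct block contents occur more than once (i.e. could be shared).
f=$(mktemp) && d=$(mktemp -d)
dd if=/dev/zero of="$f" bs=4096 count=4 status=none   # 4 identical blocks
split -b 4096 -d "$f" "$d/blk."
md5sum "$d"/blk.* | awk '{print $1}' | sort | uniq -d | wc -l   # prints 1
rm -rf "$f" "$d"
```

Because every 4k chunk here is identical, exactly one duplicated checksum shows up; doing this while writing the image (instead of afterwards) is what limits the sharing to data blocks.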
Bug#955549: f2fs-tools: fsck.f2fs segfaults
On Thu, Apr 02, 2020 at 02:01:26PM +0200, Adam Borowski wrote: > > After a lot of output on a damaged filesystem (SD card copied to an image) > fsck.f2fs dies with: > > - File name : mkfs.ext3.dpkg-new > - File size : 6 (bytes) > > Program received signal SIGSEGV, Segmentation fault. > 0x93ec in memcpy (__len=18446744073323892736, > __src=0x5560760c, __dest=0x7fffe000) at > /usr/include/x86_64-linux-gnu/bits/string_fortified.h:34 > warning: Source file is more recent than executable. > 34 return __builtin___memcpy_chk (__dest, __src, __len, __bos0 (__dest)); > (gdb) bt > #0 0x93ec in memcpy (__len=18446744073323892736, > __src=0x5560760c, __dest=0x7fffe000) at > /usr/include/x86_64-linux-gnu/bits/string_fortified.h:34 > #1 convert_encrypted_name (name=name@entry=0x5560760c " ", > len=-385658880, new=new@entry=0x7fffe000 " ", enc_name=) > at fsck.c:1132 > #2 0x55562286 in print_inode_info (sbi=0x5557db20 , > node=0x556075b0, name=1) at mount.c:183 > #3 0x55562a46 in print_node_info (sbi=, > node_block=, verbose=) at mount.c:277 > #4 0x55560d3f in dump_node (sbi=sbi@entry=0x5557db20 , > nid=nid@entry=24274, force=force@entry=1) at dump.c:520 > #5 0xe94c in fsck_verify (sbi=0x5557db20 ) at > fsck.c:2568 > #6 0x699b in do_fsck (sbi=0x5557db20 ) at main.c:569 > #7 main (argc=, argv=) at main.c:726 > > > I tried building current upstream git, also segfaults. > > I have a copy of the filesystem in question from before any repair attempts. > It has no sensitive data on it, thus I can share if needed -- 14GB. Thanks for the bug report. Can you make the file system image available somehow? Maybe for download at some URL? How well does it compress? - Ted
Bug#954820: postfix's autopkgtest on arm64 is flaky; this is blocking e2fsprogs
On Mon, Mar 23, 2020 at 11:47:58PM -0400, Scott Kitterman wrote: > > If you can't fix postfix's flaky autopkg test, can you please revert the > > explicit dependency on e2fsprogs? > > I've asked in #debci to see if they have any suggestions about the pty > shortage on arm64. Thanks; it looks like the CI folks have managed to fix the postfix failure, so at least the short-term failure is fixed. The reason why I was concerned was that it looks like autopkgtest doesn't retry after failures, so a maintainer that tries to add autopkgtests, in the face of flaky CI infrastructure, is in effect punished for their attempt to make things better, since it will significantly slow (or permanently bar) testing migration --- no good deed goes unpunished... > The dependency was added due to bug #887277. Do you not support the goal of > making e2fsprogs non-essential at some point? While I realize dropping the > dependency solves your particular problem, it doesn't actually make things > more reliable. I don't think making e2fsprogs non-essential is worth a lot of effort. If it causes pain, my personal preference is to, well, Not. Part of that is I suspect there are plenty of (user/administrator) shell scripts which use chattr, so just adding e2fsprogs as a dependency is not going to make things more reliable *anyway*. And there are plenty of other places where debootstrap's minbase size can be reduced (e.g., simply separating out the translation files from a large number of packages, including shellutils) that would save a heck of a lot more space. So why not go after the low-hanging fruit instead of inflicting pain on other maintainers? If the CI infrastructure were reliable, I wouldn't care about people adding dependencies on e2fsprogs. But when it isn't, it just drives home the point that trying to make e2fsprogs non-essential isn't "free", and past a certain point, we need to ask whether it's worth the effort. > The only option used by postfix is S. 
> Do you have a suggestion for an alternative approach that would not suffer from the same limitations? I inherited the current setup from lamont. I'm not personally wedded to it, but I think the function is important, and I'm totally open to alternative ways to achieve it.

So first of all, are you sure 'S' (the sync flag) is what you really want? I assume postfix, like most MTAs, will use fsync(2) to guarantee that file data are properly pushed out to disk when it is necessary for reliability. And if you set the sync flag on the directory, it gets inherited by all files created in that directory, which might not be what you want. What you might want instead is the 'D' (dirsync) flag, which applies only to directories. If all you want to do is set one of these flags, you can create a small C program which basically does:

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>

int main(int argc, char **argv)
{
	int fd, f;

	fd = open(argv[1], O_RDONLY);
	ioctl(fd, FS_IOC_GETFLAGS, &f);
	f |= FS_DIRSYNC_FL;	/* the 'D' flag; use FS_SYNC_FL for 'S' */
	ioctl(fd, FS_IOC_SETFLAGS, &f);
	close(fd);
	return 0;
}

To clear the flag, just use "f &= ~FS_DIRSYNC_FL" instead. Or use FS_SYNC_FL if that's really what you want. It might be possible to use perl instead, but unfortunately, sys/ioctl.ph isn't in perl-base, so unless you want to add a dependency on libperl5.30 it just means adding another dependency instead of e2fsprogs. Cheers, - Ted
Bug#954820: postfix's autopkgtest on arm64 is flaky; this is blocking e2fsprogs
Package: postfix Version: 3.5.0-1 Severity: normal Postfix's autopkgtest on arm64 seems to be super flaky. This is currently blocking e2fsprogs: https://qa.debian.org/excuses.php?package=e2fsprogs https://ci.debian.net/data/autopkgtest/testing/arm64/p/postfix/4630231/log.gz This seems to be a fairly common occurrence. See the history here: https://ci.debian.net/packages/p/postfix/testing/arm64/ The cause appears to be running out of pty's on the arm64 CI system, plus postfix explicitly declaring a dependency on e2fsprogs due to its use of chattr. If you can't fix postfix's flaky autopkg test, can you please revert the explicit dependency on e2fsprogs? Many thanks! - Ted
Bug#954428: lintian 2.58.0 seems to not correctly suppress warnings/errors for udeb packages
Package: lintian Version: 2.58.0 Severity: normal Dear Maintainer, Lintian is reporting a number of spurious failures:

E: e2fsprogs-udeb udeb: debian-changelog-file-missing
E: e2fsprogs buildinfo: field-too-long Installed-Build-Depends (5090 chars > 5000)
E: e2fsprogs-udeb udeb: file-in-etc-not-marked-as-conffile etc/mke2fs.conf
E: e2fsprogs-udeb udeb: no-copyright-file
W: e2fsprogs-udeb udeb: binary-without-manpage sbin/badblocks
W: e2fsprogs-udeb udeb: binary-without-manpage sbin/e2fsck
W: e2fsprogs-udeb udeb: binary-without-manpage sbin/e2label
W: e2fsprogs-udeb udeb: binary-without-manpage sbin/e2mmpstatus
W: e2fsprogs-udeb udeb: binary-without-manpage sbin/fsck.ext2
W: e2fsprogs-udeb udeb: binary-without-manpage sbin/fsck.ext3
W: e2fsprogs-udeb udeb: binary-without-manpage sbin/fsck.ext4
W: e2fsprogs-udeb udeb: binary-without-manpage sbin/mke2fs
W: e2fsprogs-udeb udeb: binary-without-manpage sbin/mkfs.ext2
W: e2fsprogs-udeb udeb: binary-without-manpage sbin/mkfs.ext3
W: e2fsprogs-udeb udeb: binary-without-manpage sbin/mkfs.ext4
W: e2fsprogs-udeb udeb: binary-without-manpage sbin/resize2fs
W: e2fsprogs-udeb udeb: binary-without-manpage sbin/tune2fs
N: 5 tags overridden (2 errors, 3 info)

By *definition* udebs aren't supposed to have changelogs, copyright files, or man pages. (This was found when running sbuild, in a fully up-to-date sid chroot last night.) 
-- System Information:
Debian Release: bullseye/sid
  APT prefers testing
  APT policy: (900, 'testing'), (900, 'stable')
Architecture: amd64 (x86_64)
Kernel: Linux 5.5.0-rc7-00153-gb22518177d26 (SMP w/8 CPU cores)
Locale: LANG=en_US.utf8, LC_CTYPE=en_US.utf8 (charmap=UTF-8), LANGUAGE=en_US.utf8 (charmap=UTF-8)
Shell: /bin/sh linked to /usr/bin/dash
Init: systemd (via /run/systemd/system)

Versions of packages lintian depends on:
ii  binutils                     2.34-4
ii  bzip2                        1.0.8-2
ii  diffstat                     1.63-1
ii  dpkg                         1.19.7
ii  dpkg-dev                     1.19.7
ii  file                         1:5.38-4
ii  gettext                      0.19.8.1-10
ii  gpg                          2.2.19-3
ii  intltool-debian              0.35.0+20060710.5
ii  libapt-pkg-perl              0.1.36+b3
ii  libarchive-zip-perl          1.68-1
ii  libcapture-tiny-perl         0.48-1
ii  libcgi-pm-perl               4.46-1
ii  libclass-xsaccessor-perl     1.19-3+b3
ii  libclone-perl                0.43-2
ii  libdevel-size-perl           0.83-1+b1
ii  libdpkg-perl                 1.19.7
ii  libemail-valid-perl          1.202-1
ii  libfile-basedir-perl         0.08-1
ii  libfile-find-rule-perl       0.34-1
ii  libfont-ttf-perl             1.06-1
ii  libio-async-loop-epoll-perl  0.20-1
ii  libio-async-perl             0.75-1
ii  libipc-run-perl              20180523.0-2
ii  libjson-maybexs-perl         1.004000-1
ii  liblist-compare-perl         0.53-1
ii  liblist-moreutils-perl       0.416-1+b5
ii  libmoo-perl                  2.003006-1
ii  libmoox-aliases-perl         0.001006-1
ii  libnamespace-clean-perl      0.27-1
ii  libpath-tiny-perl            0.108-1
ii  libsereal-decoder-perl       4.011+ds-1
ii  libsereal-encoder-perl       4.011+ds-1
ii  libtext-levenshtein-perl     0.13-1
ii  libtimedate-perl             2.3200-1
ii  libtry-tiny-perl             0.30-1
ii  libtype-tiny-perl            1.008001-2
ii  liburi-perl                  1.76-2
ii  libxml-libxml-perl           2.0134+dfsg-2
ii  libyaml-libyaml-perl         0.80+repack-2+b1
ii  man-db                       2.9.1-1
ii  patchutils                   0.3.4-2+b1
ii  perl [libdigest-sha-perl]    5.30.0-9
ii  t1utils                      1.41-3
ii  xz-utils                     5.2.4-1+b1

Versions of packages lintian recommends:
ii  libperlio-gzip-perl  0.19-1+b6

Versions of packages lintian suggests:
ii  binutils-multiarch     2.34-4
ii  libhtml-parser-perl    3.72-5
ii  libtext-template-perl  1.58-1

-- no debconf information
Bug#953926: e2fsprogs: Build-Depends on unused libattr1-dev
tags 953926 +pending thanks On Sat, Mar 14, 2020 at 07:46:18PM +0100, Guillem Jover wrote: > > This package used to use libattr for its xattr support, but got > switched to use the native support from glibc, but the Build-Depends > got left behind. Thanks for the report. The following will be in the next release of e2fsprogs. - Ted

commit b3f9df9f1ba5ded7031566c94a7a9dfdcbd38aa6
Author: Theodore Ts'o
Date:   Sun Mar 15 00:56:01 2020 -0400

    debian: drop libattr1-dev from the build dependencies list

    The libattr has stopped providing attr/xattr.h; we now use
    sys/xattr.h.  So there is no longer any reason to require that the
    libattr1-dev package be present when building e2fsprogs, so drop it.

    Addresses-Debian-Bug: #953926

    Signed-off-by: Theodore Ts'o

diff --git a/debian/control b/debian/control
index 71613e11..69471f45 100644
--- a/debian/control
+++ b/debian/control
@@ -2,7 +2,7 @@ Source: e2fsprogs
 Section: admin
 Priority: required
 Maintainer: Theodore Y. Ts'o
-Build-Depends: gettext, texinfo, pkg-config, libfuse-dev [linux-any kfreebsd-any] , libattr1-dev, debhelper (>= 12.0), dh-exec, libblkid-dev, uuid-dev, m4, udev [linux-any], systemd [linux-any], cron [linux-any]
+Build-Depends: gettext, texinfo, pkg-config, libfuse-dev [linux-any kfreebsd-any] , debhelper (>= 12.0), dh-exec, libblkid-dev, uuid-dev, m4, udev [linux-any], systemd [linux-any], cron [linux-any]
 Standards-Version: 4.4.1
 Homepage: http://e2fsprogs.sourceforge.net
 Vcs-Browser: https://git.kernel.org/pub/scm/fs/ext2/e2fsprogs.git
Bug#953494: Document 2038 migration method
On Tue, Mar 10, 2020 at 05:57:33AM +0800, 積丹尼 Dan Jacobson wrote: > Package: e2fsprogs > Version: 1.45.5-2 > > Idea: make a new file > /usr/share/doc/e2fsprogs/Year2038warnings > that would say: > > If you get > ext4 filesystem being mounted at ... supports timestamps until 2038 > (0x7fff) > warnings, here is what to do. > > As there is no way to simply "tune" the old filesystem, we must copy the > files to a new filesystem, and then make the UUID numbers the same (for > those people who mention them in their fstabs and don't want to have to > change them on each machine they use.) Well, there is in theory a way to adjust a file system, using "tune2fs -I 256 /dev/sdXX". ***However*** this code doesn't get much use in real life, and if you crash while it is running, you could lose all or most of the data in the file system. So I strongly recommend that you back up your file system first, and then try to use "tune2fs -I 256" --- and if it works, huzzah! If it doesn't, at least you had a backup of your file system. Also note that recreating the file system will result in a better file system with better performance, so in general it's the better thing to do. Finally, realistically speaking, most storage media will wear out well before we get to 2038, so I really would just relax about this whole thing and chill out. Cheers, - Ted
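A sketch of that workflow, rehearsed on a throwaway image before touching a real disk (the paths and sizes are examples; on real hardware the file system must be unmounted and, as Ted says, backed up first):

```shell
# Convert a file system from 128-byte to 256-byte inodes in place.
IMG=$(mktemp /tmp/inodetest.XXXXXX)
dd if=/dev/zero of="$IMG" bs=1M count=16 status=none
mke2fs -q -F -I 128 "$IMG"               # old-style 128-byte inodes
tune2fs -l "$IMG" | grep 'Inode size'    # Inode size: 128
e2fsck -fy "$IMG" >/dev/null             # tune2fs -I wants a clean fs
tune2fs -I 256 "$IMG"                    # the risky in-place conversion
tune2fs -l "$IMG" | grep 'Inode size'    # Inode size: 256
rm -f "$IMG"
```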
Bug#953493: Add "Year 2038-OK certified version" statement to man page
On Tue, Mar 10, 2020 at 05:10:29AM +0800, 積丹尼 Dan Jacobson wrote: > Package: e2fsprogs > Version: 1.45.5-2 > File: /usr/share/man/man8/mke2fs.8.gz > > Please add to the mke2fs man page: > > ** This version of mke2fs is guaranteed to make filesystems that >support timestamps *beyond* 2038. ** > (The user will be looking for "2038" in the man page. Please be sure he > finds something. Thanks.) > > "You can use it in full confidence that you will not get such kernel > warnings upon mounting. No matter what the physical device the > filesystem will be created on, no matter what version of Linux you are > using." That's actually not a true statement. Supporting timestamps beyond 2038 requires using inodes larger than 128 bytes. That is the default with this version of mke2fs, but #1, the user can specify an inode size of 128 bytes using a command line option, and #2, the system administrator can configure a different default using mke2fs.conf. > "The previous unfortunate behavior will never happen again, we promise." It's not even an unfortunate behavior. It's just a file system with an inode size of 128 bytes. Long enough ago, that was the only supported file system. Long enough ago, we didn't support file systems larger than 2GB, and then we didn't support file systems with more than 2**32 blocks. Now we support file systems with up to 2**64 blocks. - Ted
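For reference, the mke2fs.conf override Ted mentions in #2 is a stanza along these lines (a sketch; 256 is already the usual built-in default, and an administrator could equally set it to 128, which is exactly how a pre-2038-limited default could come back):

```
[defaults]
	inode_size = 256
```

mke2fs consults this file only when no -I option is given on the command line, so the effective inode size is whatever wins between the command line, this file, and the built-in default.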
Bug#948550: buster-pu: package e2fsprogs/1.44.5-1+deb10u2
On Wed, Jan 22, 2020 at 09:27:01AM +0100, Cyril Brulebois wrote: > You can upload. And no, it will stay in p-u-new until it's approved by > some SRM, at which point it will be made available in > stable-proposed-updates (note word order), until the point release. Great, thanks! And thanks for correcting me on the package queue names. Is the fact that the package won't be made available until the point release because of the debian-installer dependency? I seem to recall that in some cases packages get made available to people installing updates before the point release happens? My apologies if this is all written up somewhere and I'm asking stupid questions. :-) Cheers, - Ted
Bug#948550: buster-pu: package e2fsprogs/1.44.5-1+deb10u2
Oh, one more question. Is a source-only upload OK? I'm still a bit confused about when a source-only upload is required, and when a binary upload is required. Is the latter only for the NEW queue? - Ted
Bug#948550: buster-pu: package e2fsprogs/1.44.5-1+deb10u2
On Tue, Jan 21, 2020 at 07:57:54PM +, Adam D. Barratt wrote: > Control: tags -1 + confirmed d-i > > On Thu, 2020-01-09 at 22:34 -0500, Theodore Y. Ts'o wrote: > > +e2fsprogs (1.44.5-1+deb10u3) buster; urgency=medium > > + > > + * Fix CVE-2019-5188: potential stack underflow in e2fsck (Closes: > > #948508) > > + * Fix use after free in e2fsck (Closes: #948517) > > This looks OK to me, but will also need a d-i ACK as e2fsprogs produces > a udeb; CCing and tagging to reflect that. Thanks! Should I go ahead and upload, or should we wait for the d-i ACK first? It'll just stay in the proposed-stable-updates queue until final approval as I understand things, correct? Cheers, - Ted
Bug#948550: buster-pu: package e2fsprogs/1.44.5-1+deb10u2
Package: release.debian.org Severity: normal Tags: buster User: release.debian@packages.debian.org Usertags: pu The reason is to fix two security issues which are fixed in 1.45.5. The debdiff is attached. Let me know if this looks good for uploading. Thanks!! diff -Nru e2fsprogs-1.44.5/debian/changelog e2fsprogs-1.44.5/debian/changelog --- e2fsprogs-1.44.5/debian/changelog 2019-09-25 13:37:44.0 -0400 +++ e2fsprogs-1.44.5/debian/changelog 2020-01-09 20:19:57.0 -0500 @@ -1,3 +1,10 @@ +e2fsprogs (1.44.5-1+deb10u3) buster; urgency=medium + + * Fix CVE-2019-5188: potential stack underflow in e2fsck (Closes: #948508) + * Fix use after free in e2fsck (Closes: #948517) + + -- Theodore Y. Ts'o Thu, 09 Jan 2020 20:19:57 -0500 + e2fsprogs (1.44.5-1+deb10u2) buster-security; urgency=high * Fix CVE-2019-5094: potential buffer overrun in e2fsck (Closes: #941139) diff -Nru e2fsprogs-1.44.5/debian/patches/e2fsck-abort-if-there-is-a-corrupted-directory-block.patch e2fsprogs-1.44.5/debian/patches/e2fsck-abort-if-there-is-a-corrupted-directory-block.patch --- e2fsprogs-1.44.5/debian/patches/e2fsck-abort-if-there-is-a-corrupted-directory-block.patch 1969-12-31 19:00:00.0 -0500 +++ e2fsprogs-1.44.5/debian/patches/e2fsck-abort-if-there-is-a-corrupted-directory-block.patch 2020-01-09 20:19:57.0 -0500 @@ -0,0 +1,53 @@ +From: Theodore Ts'o +Date: Thu, 19 Dec 2019 19:37:34 -0500 +Subject: e2fsck: abort if there is a corrupted directory block when rehashing + +In e2fsck pass 3a, when we are rehashing directories, at least in +theory, all of the directories should have had corruptions with +respect to directory entry structure fixed. However, it's possible +(for example, if the user declined a fix) that we can reach this stage +of processing with a corrupted directory entries. + +So check for that case and don't try to process a corrupted directory +block so we don't run into trouble in mutate_name() if there is a +zero-length file name. 
+ +Addresses-Debian-Bug: 948508 +Addresses: TALOS-2019-0973 +Addresses: CVE-2019-5188 +Signed-off-by: Theodore Ts'o +(cherry picked from commit 8dd73c149f418238f19791f9d666089ef9734dff) +--- + e2fsck/rehash.c | 9 + + 1 file changed, 9 insertions(+) + +diff --git a/e2fsck/rehash.c b/e2fsck/rehash.c +index 7c4ab083..27e1429b 100644 +--- a/e2fsck/rehash.c b/e2fsck/rehash.c +@@ -159,6 +159,10 @@ static int fill_dir_block(ext2_filsys fs, + dir_offset += rec_len; + if (dirent->inode == 0) + continue; ++ if ((name_len) == 0) { ++ fd->err = EXT2_ET_DIR_CORRUPTED; ++ return BLOCK_ABORT; ++ } + if (!fd->compress && (name_len == 1) && + (dirent->name[0] == '.')) + continue; +@@ -398,6 +402,11 @@ static int duplicate_search_and_fix(e2fsck_t ctx, ext2_filsys fs, + continue; + } + new_len = ext2fs_dirent_name_len(ent->dir); ++ if (new_len == 0) { ++ /* should never happen */ ++ ext2fs_unmark_valid(fs); ++ continue; ++ } + memcpy(new_name, ent->dir->name, new_len); + mutate_name(new_name, _len); + for (j=0; j < fd->num_array; j++) { +-- +2.24.1 + diff -Nru e2fsprogs-1.44.5/debian/patches/e2fsck-don-t-try-to-rehash-a-deleted-directory.patch e2fsprogs-1.44.5/debian/patches/e2fsck-don-t-try-to-rehash-a-deleted-directory.patch --- e2fsprogs-1.44.5/debian/patches/e2fsck-don-t-try-to-rehash-a-deleted-directory.patch 1969-12-31 19:00:00.0 -0500 +++ e2fsprogs-1.44.5/debian/patches/e2fsck-don-t-try-to-rehash-a-deleted-directory.patch 2020-01-09 20:19:57.0 -0500 @@ -0,0 +1,47 @@ +From: Theodore Ts'o +Date: Thu, 19 Dec 2019 19:45:06 -0500 +Subject: e2fsck: don't try to rehash a deleted directory + +If directory has been deleted in pass1[bcd] processing, then we +shouldn't try to rehash the directory in pass 3a when we try to +rehash/reoptimize directories. 
+ +Addresses-Debian-Bug: 948508 +Signed-off-by: Theodore Ts'o +(cherry picked from commit 71ba13755337e19c9a826dfc874562a36e1b24d3) +--- + e2fsck/pass1b.c | 4 + e2fsck/rehash.c | 2 ++ + 2 files changed, 6 insertions(+) + +diff --git a/e2fsck/pass1b.c b/e2fsck/pass1b.c +index 5693b9cf..bca701ca 100644 +--- a/e2fsck/pass1b.c b/e2fsck/pass1b.c +@@ -705,6 +705,10 @@ static void delete_file(e2fsck_t ctx, ext2_ino_t ino, + fix_problem(ctx, PR_1B_BLOCK_ITERATE, ); + if (ctx->inode_bad_map) + ext2fs_unmark_inode_bitmap2(ctx->inode_bad_map, ino); ++ if (ctx->inode_reg_map) ++ ext2fs_unmark_inode_bitmap2(ctx->inode_reg_map, ino); ++ ext2fs_unmark_inode_bitmap2(ctx->inode_dir_map, ino); ++
Bug#948517: e2fsprogs: malicious fs can cause use after free in e2fsck
Package: e2fsprogs Version: Severity: grave Tags: security Justification: user security hole E2fsprogs 1.45.5 contains a bug fix for a use after free which could potentially be used to run malicious code if a user can be tricked into running e2fsck on a maliciously crafted file system. The following commit should be backported to Debian Buster (it is not applicable to older versions of e2fsprogs): 101e73e9 - e2fsck: fix use after free in calculate_tree() No exploit exists today as far as I know, but we should backport this fix while we are addressing CVE-2019-5188 (Bug: #948508).
Bug#948508: CVE-2019-5188: malicious fs can cause stack underflow in e2fsck
Package: e2fsprogs Version: 1.43.4-2+deb9u1 Severity: grave Tags: security Justification: user security hole E2fsprogs 1.45.5 contains a bug fix for CVE-2019-5188 / TALOS-2019-0973. The following commits need to be backported to address this vulnerability in Debian Buster and Debian Stretch: 8dd73c14 - e2fsck: abort if there is a corrupted directory block when rehashing 71ba1375 - e2fsck: don't try to rehash a deleted directory The impact of this bug is that if an attacker can trick the system into running e2fsck on an untrustworthy file system, a maliciously crafted file system could result in a stack underflow. The primary concern is on 32-bit systems; due to limitations in the kind of stack corruption which can be triggered by this bug, it is probably not exploitable on 64-bit systems.
Bug#948193: e2scrub services delay boot by 2 seconds
On Mon, Jan 06, 2020 at 12:39:52AM -0800, Josh Triplett wrote: > That's an *additional* delay, on top of the sleeps above. The two-second > sleep in the "exitcode" function seems like the primary culprit. Note > that I don't even have lvm2-tools installed. Ah, yes, sorry, I had missed the sleep in the exitcode function. Actually it's not needed in e2scrub_all at all; it was there due to a copy/paste oversight. The commit below should address your concern. Cheers, - Ted

commit 0b3208958eb63df6cd8b38ee63f3bc4266a683e7
Author: Theodore Ts'o
Date:   Mon Jan 6 16:01:23 2020 -0500

    e2scrub, e2scrub_all: don't sleep unnecessarily in exitcode

    The two second sleep is only needed in e2scrub, and when there is a
    failure, so that systemd has a chance to gather the log output before
    e2scrub exits.  It's not needed if the script is exiting successfully,
    and it's never needed for e2scrub_all ever.

    Addresses-Debian-Bug: #948193

    Signed-off-by: Theodore Ts'o

diff --git a/scrub/e2scrub.in b/scrub/e2scrub.in
index f21499b6..30ab7cbd 100644
--- a/scrub/e2scrub.in
+++ b/scrub/e2scrub.in
@@ -66,7 +66,7 @@ exitcode() {
 	# for capturing all the log messages if the scrub fails, because the
 	# fail service uses the service name to gather log messages for the
 	# error report.
-	if [ -n "${SERVICE_MODE}" ]; then
+	if [ -n "${SERVICE_MODE}" -a "${ret}" -ne 0 ]; then
 		test "${ret}" -ne 0 && ret=1
 		sleep 2
 	fi
diff --git a/scrub/e2scrub_all.in b/scrub/e2scrub_all.in
index f0336711..4288b969 100644
--- a/scrub/e2scrub_all.in
+++ b/scrub/e2scrub_all.in
@@ -56,14 +56,8 @@ exitcode() {
 	# section 22.2) and hope the admin will scan the log for what
 	# actually happened.
 
-	# We have to sleep 2 seconds here because journald uses the pid to
-	# connect our log messages to the systemd service.  This is critical
-	# for capturing all the log messages if the scrub fails, because the
-	# fail service uses the service name to gather log messages for the
-	# error report.
-	if [ -n "${SERVICE_MODE}" ]; then
+	if [ -n "${SERVICE_MODE}" -a "${ret}" -ne 0 ]; then
 		test "${ret}" -ne 0 && ret=1
-		sleep 2
 	fi
 
 	exit "${ret}"
Bug#948193: e2scrub services delay boot by 2 seconds
On Sat, Jan 04, 2020 at 07:57:16PM -0800, Josh Triplett wrote: > Package: e2fsprogs > Version: 1.45.4-1 > Severity: important > > The e2fsprogs package installs a service and timer to run e2scrub. That > service sleeps for 2 seconds before exiting, delaying the boot by 2 > seconds. It's not necessarily 2 seconds, and it's not directly sleeping. It's however long it takes to spin up any storage devices, caused by running lvs. The bulk of the time of running "e2scrub_all -A -r" is the time to run 'lsblk' and 'lvs'. I've already queued up a change (see below) so that we won't attempt to clean up any left-over LVM snapshot volumes if e2scrub is not enabled via /etc/e2scrub.conf, and even if scrubbing is enabled, we check for snapshots via scanning /dev/mapper instead of using lvs. This commit will be part of e2fsprogs 1.45.5, to be released in the next few days. > Second, please use ConditionPathExists or similar to check for the tools > e2scrub needs (lsblk and lvcreate), rather than running a script that > checks for them and then exits. That's not the cause of most of the time needed to run e2scrub_all. We also need to run these sanity checks when e2scrub_all is run by hand, or run out of cron. > And third, please consider *not* enabling this by default. It wasn't enabled by default. And the issue of lvs being slow is fixed by: commit 333268d65d26fbb2d22f7a8b6ac797babcc69543 Author: Darrick J. Wong Date: Mon Nov 4 17:54:14 2019 -0800 e2scrub_all: don't even reap if the config file doesn't allow it Dave Chinner complains that the automated on-boot e2scrub reaping takes a long time (because the lvs command can take a while to run) even though the automated e2scrub is disabled via e2scrub.conf on his systems. We still need the reaping service to kill off stale e2scrub snapshots after a crash, but it's unnecessary to annoy everyone with slow bootup. 
    Because we can look for the e2scrub snapshots in /dev/mapper, let's
    skip reaping if periodic e2scrub is disabled, unless we find
    evidence of e2scrub snapshots in /dev.

    Reported-by: Dave Chinner
    Signed-off-by: Darrick J. Wong

Signed-off-by: Theodore Ts'o

					- Ted
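The /dev/mapper scan described in the commit message above can be sketched roughly as follows. This is a sketch based on the bug log, not the upstream e2scrub_all code; the function name and the `*.e2scrub` device-name pattern are assumptions:

```shell
#!/bin/sh
# Sketch: detect leftover e2scrub LVM snapshots by globbing the
# device-mapper directory instead of invoking the (slow) lvs command.
# The *.e2scrub naming convention is an assumption from the bug log.
scan_stale_snapshots() {
	dir="$1"
	for dev in "$dir"/*.e2scrub; do
		test -e "$dev" || continue
		echo "stale e2scrub snapshot: $dev"
	done
}

scan_stale_snapshots /dev/mapper
```

Because this only touches a directory of device nodes, it avoids spinning up storage devices the way lvs can, which is the boot-time cost being complained about.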
Bug#946639: iwd 1.2-1 is crashing making WiFi access impossible
Package: iwd
Version: 1.2-1
Severity: grave
Justification: renders package unusable

Dear Maintainer,

I upgraded my Debian testing system to the latest version of Bullseye, and I was completely unable to connect to wireless. In the logs, I see that iwd is crashing:

Dec 12 09:40:11 lambda kernel: [55486.381334] iwd[202645]: segfault at 38 ip 55b1995e2056 sp 7ffc966c5360 error 6 in iwd[55b1995c4000+84000]
Dec 12 09:40:11 lambda kernel: [55486.381374] Code: 48 83 c4 20 e9 58 fe ff ff 0f 1f 00 3c 21 0f 85 70 ff ff ff 31 c0 80 7c 24 10 00 0f 95 c0 83 c0 01 41 89 47 08 48 8b 44 24 18 <49> 89 46 38 e9 51 ff ff ff 90 41 8b 77 08 85 f6 0f 84 44 ff ff ff

The iwd crash happens when a wireless network is selected via NetworkManager. The NetworkManager icon will then show that it is trying to connect to a network, and after a few seconds, it will go back to the "I'm not connected to any network" state --- and the logs show that iwd has crashed again. :-(

Reverting to iwd 1.1-1 makes my system usable again. I was able to download iwd 1.1-1 by using a USB-attached ethernet adapter on a Verizon MiFi hotspot. That confirms it's not a DHCP failure, but rather a WiFi association failure, and between that and the fact that reverting to the previous version fixes it, the finger of blame points pretty squarely at iwd 1.2-1.

This failure occurred when connecting to three different networks: (a) GoogleGuest at GoogleNYC, (b) a hotspot using a Pixel 4 XL handset, and (c) a Verizon LTE MiFi hotspot.

I am running a 5.3.0 kernel with minimal changes (the ext4 patches that were pushed to Linus during the recent pre-5.4-rc1 merge window) that should not be relevant to this failure. The hardware is a Dell XPS 13 model 9370, using the ath10k_pci driver.

Please let me know if there is anything I can do to help debug this failure.
-- System Information: Debian Release: bullseye/sid APT prefers testing APT policy: (900, 'testing'), (900, 'stable') Architecture: amd64 (x86_64) Kernel: Linux 5.3.0-00068-g7ec6dbcda3db (SMP w/8 CPU cores) Locale: LANG=en_US.utf8, LC_CTYPE=en_US.utf8 (charmap=UTF-8), LANGUAGE=en_US.utf8 (charmap=UTF-8) Shell: /bin/sh linked to /usr/bin/dash Init: systemd (via /run/systemd/system) Versions of packages iwd depends on: ii libc6 2.29-3 ii libreadline8 8.0-3 iwd recommends no packages. iwd suggests no packages. -- no debconf information
Bug#944649: e2fsprogs: FTBFS on hurd-i386
On Wed, Nov 13, 2019 at 11:38:55AM +0100, Svante Signell wrote:
> Source: e2fsprogs
> Version: 1.45.4-1
> Severity: important
> Tags: patch
> Usertags: hurd
> User: debian-h...@lists.debian.org
>
> Hello,
>
> The latest version of e2fsprogs in sid, 1.45.4-1, FTBFS on GNU/Hurd due to two
> reasons:
> 1) <sys/mount.h> is not available.
> 2) PATH_MAX is not defined.
>
> 1) is fixed with the attached patch, configure.ac.diff where a variable MOUNT is
> introduced and checked for.

That's not the correct patch. The right fix is to drop the test for sys/mount.h in AX_CHECK_MOUNT_OPT in acinclude.m4. If sys/mount.h doesn't exist, this will be interpreted as the system not supporting nosuid and nodev. (Which is fine, since as I recall Hurd doesn't support fuse, and HAVE_MOUNT_{NOSUID,NODEV} is only used for fuse2fs.)

> 2) is fixed by lib_ext2fs_dirhash.c.diff, which simply defines PATH_MAX, if not
> defined. Instead of using PATH_MAX this could probably be solved more elegantly
> by dynamic allocation of the needed size of buff, but grepping for PATH_MAX in
> the code shows plenty of occurrences already, solved in various ways, so I did
> not find the motivation for doing that.

Thanks, that looks good. I'll apply that for the next release.

> Additionally, in order to test the packages on a GNU/Linux system, not using
> systemd, patches debian_control.diff and debian_e2fsprogs.install were needed.

Yeah, but that's not something I can apply, because we need systemd to be present on the build boxes so we can properly create the e2fsprogs debs to support systemd systems --- and the only way to enforce this is with a build dependency.

> Regarding the tests for GNU/Hurd they have to be disabled for now. Plenty of
> failing tests are not applicable to Hurd. Patches for the failing tests are in
> the works, the list is now 38 entries with some more to fix. That patch is
> pending and will be reported in a separate bug report.
Please ensure that any changes to the tests do not cause them to break on Linux, and if possible it would be nice if you could check that they don't cause test regressions on FreeBSD as well.

Thanks!

					- Ted
Bug#944033: /usr/lib/x86_64-linux-gnu/e2fsprogs/e2scrub_all_cron: mail from weekly cronjob
On Sun, Nov 03, 2019 at 11:09:00AM -0800, Darrick J. Wong wrote: > > Because if you don't do that, the e2scrub process gets started with fd 0 > mapped to stdout of ls_targets on account of the "ls_targets | while > read tgt" loop. Yay bash. I guess the problem here is that > e2scrub_all's stdin is itself a pipe, so /dev/stdin maps to > /proc/self/fd/0, is a symlink to "pipe:[]" which doesn't help us > any. > > We could amend the e2scrub_all script to do: > > stdin="$(realpath /dev/stdin)" > test -w "${stdin}" || stdin=/dev/null Shouldn't that be 'test -r "${stdin}"'? Or we could just always redirect the input to /dev/null, perhaps? Cheers, - Ted
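The amendment under discussion, with the -r test suggested above, would look something like this. This is a sketch of the proposed shell fragment, not the final committed code; the trailing echo stands in for the real e2scrub invocation:

```shell
#!/bin/sh
# Resolve /dev/stdin to its real target; if that target is not readable
# (e.g. a dead pipe inherited from the "ls_targets | while read tgt"
# loop), fall back to /dev/null so e2scrub never inherits a bogus fd.
stdin="$(realpath /dev/stdin 2>/dev/null)"
test -r "${stdin}" || stdin=/dev/null

# Placeholder for the real invocation in e2scrub_all:
echo "would run: e2scrub ... < ${stdin}"
```

Always redirecting from /dev/null, as suggested at the end of the message, would make the realpath dance unnecessary entirely.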
Bug#944033: /usr/lib/x86_64-linux-gnu/e2fsprogs/e2scrub_all_cron: mail from weekly cronjob
On Sun, Nov 03, 2019 at 05:07:22AM +0100, gregor herrmann wrote: > > Cron sends me the following mail once per week: > > /sbin/e2scrub_all: line 173: /proc/8234/fd/pipe:[90083173]: No such file or > directory Gregor, thanks for the bug report! This is coming from: stdin="$(realpath /dev/stdin)" ... ${DBG} "@root_sbindir@/e2scrub" ${scrub_args} "${tgt}" < "${stdin}" I'm not sure why this hack is there at all. Darrick, can you shed any light? What was the original intent of redirecting stdin to the realpath of /dev/stdin? Thanks, - Ted
Bug#942121: f2fs-tools: Please do not force to FSCK when changing kernel.
On Mon, Oct 14, 2019 at 10:12:34AM -0700, Jaegeuk Kim wrote:
> On 10/14, Theodore Y. Ts'o wrote:
> > Control: tag 942121 +upstream
> >
> > Hi Chao, Jaegeuk,
> >
> > Could you take a look at this complaint and let me know if I should
> > close the bug as Working As Intended or not?
>
> We can bypass kernel check by adding an option "--no-kernel-check".
> Like this?

The challenge is that for desktop and server installations of Linux, in general some program like /sbin/fsck parses /etc/fstab, and then runs the appropriate file-system-specific fsck driver, e.g., /sbin/fsck.ext4, /sbin/fsck.f2fs, etc., for each particular file system. On more modern systems systemd will run the /sbin/fsck.<fstype> program for each file system, but the issue remains the same: there is in general no good way to configure the OS to pass file-system-specific options, such as --no-kernel-check, to the /sbin/fsck.<fstype> program.

The solution that I have for this is to create a config file. See "man e2fsck.conf" for its documentation. The code to parse this WIN.INI-style "profile" format is quite small; it was originally written for Kerberos, and then imported into e2fsprogs. There is a single C file[1] and a single header file[2], licensed under an MIT-style permissive free software license, so feel free to take and use it if it's helpful.

[1] https://git.kernel.org/pub/scm/fs/ext2/e2fsprogs.git/tree/lib/support/profile.c
[2] https://git.kernel.org/pub/scm/fs/ext2/e2fsprogs.git/tree/lib/support/profile.h

Cheers,

- Ted

P.S. In case it isn't obvious, this is the same library code used to parse /etc/mke2fs.conf. See the man page for mke2fs.conf to see why we find it very useful to have a configuration file for mkfs.ext4.

P.P.S. I also notice that f2fs-tools seems to bump the major version number of its shared libraries at every single release. There are ways this can be avoided; in fact, I haven't needed to bump the shared library versions for e2fsprogs in over a decade.
Because new shared libraries require new Debian packages, my upload of the Debian package for f2fs-tools version 1.12.0 has been hung up in the NEW queue[1] for manual Debian ftpmaster review for over two months. So there is some advantage in trying to avoid bumping the shared library version unnecessarily (although it does require more care to consider API/ABI backwards compatibility in your design and development).

[1] https://ftp-master.debian.org/new.html
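To make the config-file suggestion above concrete, here is what such a file could look like in the WIN.INI-style "profile" format that e2fsck.conf and mke2fs.conf use. The file name /etc/f2fsck.conf and the no_kernel_check option name are invented for illustration; f2fs-tools does not currently ship such a file:

```ini
# Hypothetical /etc/f2fsck.conf -- invented for illustration only.
# A persistent setting here would spare users from having to arrange
# for --no-kernel-check to be passed on every fsck.f2fs invocation.
[options]
	no_kernel_check = true
```

The point is that fsck.f2fs could consult this file at startup, sidestepping the problem that /sbin/fsck and systemd provide no per-filesystem-type way to pass extra command-line flags.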
Bug#942121: f2fs-tools: Please do not force to FSCK when changing kernel.
Control: tag 942121 +upstream

Hi Chao, Jaegeuk,

Could you take a look at this complaint and let me know if I should close the bug as Working As Intended or not? The concern seems to be that for desktop distros (which I know was not f2fs's original target), some users update their kernels frequently, and for them, the overhead of running fsck.f2fs on every kernel upgrade is overly burdensome.

Thanks,

- Ted

On Fri, Oct 11, 2019 at 01:30:30AM +0900, Kyuma Ohta wrote:
> Package: f2fs-tools
> Version: 1.11.0-1.1
> Severity: wishlist
>
> Dear Maintainer,
>
> When boot with changing kernel version, force starting FSCK for F2FS
> partitions.
> So, spend a lot of time (some minutes or longer) at boot time.
>
> This is upstream's feature issue still not fixed.
> See https://bbs.archlinux.org/viewtopic.php?id=245702 .
>
> Regards,
> Ohta
>
> -- System Information:
> Debian Release: bullseye/sid
> APT prefers unstable-debug
> APT policy: (500, 'unstable-debug'), (500, 'unstable'), (500, 'stable'),
> (1, 'experimental-debug'), (1, 'experimental')
> Architecture: amd64 (x86_64)
> Foreign Architectures: i386
>
> Kernel: Linux 5.3.5-homebrew-amd64 (SMP w/12 CPU cores)
> Kernel taint flags: TAINT_OOT_MODULE, TAINT_UNSIGNED_MODULE
> Locale: LANG=ja_JP.UTF-8, LC_CTYPE=ja_JP.UTF-8 (charmap=UTF-8) (ignored:
> LC_ALL set to ja_JP.UTF-8), LANGUAGE=ja_JP.UTF-8 (charmap=UTF-8) (ignored:
> LC_ALL set to ja_JP.UTF-8)
> Shell: /bin/sh linked to /bin/dash
> Init: systemd (via /run/systemd/system)
>
> Versions of packages f2fs-tools depends on:
> ii libblkid1        2.34-0.1
> ii libc6            2.29-2
> ii libf2fs-format4  1.11.0-1.1
> ii libf2fs5         1.11.0-1.1
> ii libselinux1      2.9-2+b2
> ii libuuid1         2.34-0.1
>
> f2fs-tools recommends no packages.
>
> f2fs-tools suggests no packages.
>
> -- debconf-show failed
Bug#941139: CVE-2019-5094: malicious fs can cause buffer overrun in e2fsck
Package: e2fsprogs
Version: 1.44.5-1+deb10u1
Severity: grave
Tags: security
Justification: user security hole

E2fsprogs 1.45.4 contains a bugfix for CVE-2019-5094 / TALOS-2019-0887. We need to backport commit 8dbe7b475ec5: "libsupport: add checks to prevent buffer overrun bugs in quota code" to the versions of e2fsprogs found in Debian Buster and Stretch.

The impact of this bug is that if an attacker can trick the system into running e2fsck on an untrustworthy file system as root, a maliciously crafted file system could result in a buffer overflow leading to arbitrary userspace memory modification.

-- System Information:
Debian Release: bullseye/sid
APT prefers testing
APT policy: (900, 'testing'), (900, 'stable')
Architecture: amd64 (x86_64)
Kernel: Linux 5.3.0-00068-g7ec6dbcda3db (SMP w/8 CPU cores)
Kernel taint flags: TAINT_WARN
Locale: LANG=en_US.utf8, LC_CTYPE=en_US.utf8 (charmap=UTF-8), LANGUAGE=en_US.utf8 (charmap=UTF-8)
Shell: /bin/sh linked to /usr/bin/dash
Init: systemd (via /run/systemd/system)
Bug#940240: e2scrub_all: File descriptor 3 […] leaked on lvs invocation
On Fri, Sep 20, 2019 at 11:40:30PM +0200, Francesco Poli wrote:
>
> Hello Thorsten, hello Theodore,
> I am another user who began receiving the same error messages from cron
> on a box with LVM, but without systemd.

It's only going to happen if you aren't using systemd, since if you are using systemd, e2scrub_all gets run out of a systemd timer unit file. The cron job is the fallback which is only used on non-systemd systems.

> While waiting for a more radical fix (in cron or in lvm), do you think
> this workaround should be included in the next Debian revision of
> e2fsprogs?

Yes, this will be in the next revision of e2fsprogs. It's unclear if this is a serious enough bug to justify an update to Debian stable (that's up to the release managers, not me). It will be in the releases for Debian testing and Debian backports, however.

Cheers,

- Ted
Bug#940240: e2scrub_all: File descriptor 3 […] leaked on lvs invocation
On Sat, Sep 14, 2019 at 02:42:29PM +0200, Thorsten Glaser wrote:
> Package: e2fsprogs
> Version: 1.45.3-4
> Severity: minor
>
> From: Cron Daemon
> Message-ID: <20190914011004.3afc6220...@tglase.lan.tarent.de>
> To: r...@tglase.lan.tarent.de
> Date: Sat, 14 Sep 2019 03:10:04 +0200 (CEST)
> Subject: Cron test -e /run/systemd/system || SERVICE_MODE=1 /sbin/e2scrub_all -A -r
>
> File descriptor 3 (pipe:[24666004]) leaked on lvs invocation. Parent PID
> 17610: /bin/bash

I believe this is a bug in cron or lvm, in that cron leaves fd 3 open for some unknown reason, and then LVM whines about it. See:

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=581339
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=466138
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=432986

The simplest way to work around this should be to add the following command to the beginning of /sbin/e2scrub_all:

exec 3<&-

Can you confirm this fixes things for you?

Thanks,

- Ted
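The effect of that one-line workaround can be sketched as follows. This is a minimal demonstration of closing an inherited descriptor, not the actual e2scrub_all code:

```shell
#!/bin/sh
# Simulate an fd 3 inherited from cron by duplicating stderr onto it,
# then close it the way the proposed e2scrub_all workaround does.
exec 3>&2
exec 3<&-

# Any use of fd 3 now fails, so lvm no longer sees a leaked descriptor.
if { echo probe >&3; } 2>/dev/null; then
	echo "fd 3 still open"
else
	echo "fd 3 closed"
fi
# prints: fd 3 closed
```

Because `exec 3<&-` simply closes descriptor 3 in the current shell, it is harmless when fd 3 was never open to begin with, which makes it a safe unconditional first line for the script.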
Bug#940092: flowblade doesn't start
Package: flowblade
Version: 2.2-1
Severity: grave
Justification: renders package unusable

Dear Maintainer,

I installed flowblade, and then tried to start it. It failed to start; I expected it to start and be usable. :-)

% flowblade
FLOWBLADE MOVIE EDITOR 2.2
-- Launch script dir: /bin
Running from filesystem...
MLT found, version: 6.16.0
Failed to import module app.py to launch Flowblade!
ERROR: No module named processutils
Installation was assumed to be at: /bin/Flowblade

-- System Information:
Debian Release: bullseye/sid
APT prefers testing
APT policy: (900, 'testing'), (900, 'stable')
Architecture: amd64 (x86_64)
Kernel: Linux 5.3.0-rc4-15023-g572feee69df4 (SMP w/8 CPU cores)
Locale: LANG=en_US.utf8, LC_CTYPE=en_US.utf8 (charmap=UTF-8), LANGUAGE=en_US.utf8 (charmap=UTF-8)
Shell: /bin/sh linked to /usr/bin/dash
Init: systemd (via /run/systemd/system)

Versions of packages flowblade depends on:
ii frei0r-plugins        1.6.1-3
ii gir1.2-gdkpixbuf-2.0  2.38.1+dfsg-1
ii gir1.2-glib-2.0       1.58.3-2
ii gir1.2-gtk-3.0        3.24.10-1
ii gir1.2-pango-1.0      1.42.4-7
ii gmic                  2.4.5-1+b1
ii libmlt-data           6.16.0-3
ii librsvg2-common       2.44.14-1
ii python                2.7.16-1
ii python-cairo          1.16.2-1+b1
ii python-dbus           1.2.8-3
ii python-gi             3.32.2-1
ii python-gi-cairo       3.32.2-1
ii python-mlt            6.16.0-3
ii python-numpy          1:1.16.2-1
ii python-pil            6.1.0-1
ii swh-plugins           0.4.17-2

flowblade recommends no packages.
flowblade suggests no packages.

-- no debconf information
Bug#878927: fuse2fs: add norecovery option to fuse2fs
Control: tags -1 +pending

On Wed, Oct 18, 2017 at 10:31:31AM -0400, Theodore Ts'o wrote:
> On Tue, Oct 17, 2017 at 01:56:32PM -0400, Michael Stone wrote:
> > I'd like the fuse2fs package to gain the norecovery option so that read-only
> > sources with dirty journals can be mounted via fuse. This seems fairly
> > straightforward, and I could prepare a patch if desired, but was curious if not
> > having that option was a conscious decision.

This will be in the next feature release of e2fsprogs (1.46).

					- Ted

commit 75e3a9ef4c7a638b91b26dfbfcfc43e5770e9aa2
Author: Theodore Ts'o
Date:   Sun Aug 18 20:25:53 2019 -0400

    fuse2fs: add a norecovery option which suppresses journal replay

    Teach fuse2fs the "-o norecovery" option, which will suppress any
    journal replay that might be necessary, and mounts the file system
    read-only.

    Addresses-Debian-Bug: #878927

    Signed-off-by: Theodore Ts'o

diff --git a/misc/fuse2fs.1.in b/misc/fuse2fs.1.in
index 3bc7ada3..1a0c9d54 100644
--- a/misc/fuse2fs.1.in
+++ b/misc/fuse2fs.1.in
@@ -48,6 +48,9 @@ pretend to be root for permission checks
 \fB-o\fR no_default_opts
 do not include default fuse options
 .TP
+\fB-o\fR norecovery
+do not replay the journal and mount the file system read-only
+.TP
 \fB-o\fR fuse2fs_debug
 enable fuse2fs debugging
 .SS "FUSE options:"
diff --git a/misc/fuse2fs.c b/misc/fuse2fs.c
index be2cd1db..dc7a0392 100644
--- a/misc/fuse2fs.c
+++ b/misc/fuse2fs.c
@@ -324,6 +324,7 @@ struct fuse2fs {
 	int minixdf;
 	int fakeroot;
 	int alloc_all_blocks;
+	int norecovery;
 	FILE *err_fp;
 	unsigned int next_generation;
 };
@@ -3662,6 +3663,7 @@ static struct fuse_opt fuse2fs_opts[] = {
 	FUSE2FS_OPT("fakeroot",		fakeroot,	1),
 	FUSE2FS_OPT("fuse2fs_debug",	debug,		1),
 	FUSE2FS_OPT("no_default_opts",	no_default_opts, 1),
+	FUSE2FS_OPT("norecovery",	norecovery,	1),

 	FUSE_OPT_KEY("-V",		FUSE2FS_VERSION),
 	FUSE_OPT_KEY("--version",	FUSE2FS_VERSION),
@@ -3700,6 +3702,7 @@ static int fuse2fs_opt_proc(void *data, const char *arg,
 	"-o minixdf            minix-style df\n"
 	"-o fakeroot           pretend to be root for permission checks\n"
 	"-o no_default_opts    do not include default fuse options\n"
+	"-o norecovery         don't replay the journal (implies ro)\n"
 	"-o fuse2fs_debug      enable fuse2fs debugging\n"
 	"\n",
 			outargs->argv[0]);
@@ -3741,6 +3744,8 @@ int main(int argc, char *argv[])
 		exit(1);
 	}

+	if (fctx.norecovery)
+		fctx.ro = 1;
 	if (fctx.ro)
 		printf("%s", _("Mounting read-only.\n"));
@@ -3788,7 +3793,11 @@ int main(int argc, char *argv[])
 	ret = 3;
 	if (ext2fs_has_feature_journal_needs_recovery(global_fs->super)) {
-		if (!fctx.ro) {
+		if (fctx.norecovery) {
+			printf(_("%s: mounting read-only without "
+				 "recovering journal\n"),
+			       fctx.device);
+		} else if (!fctx.ro) {
 			printf(_("%s: recovering journal\n"), fctx.device);
 			err = ext2fs_run_ext3_journal(&global_fs);
 			if (err) {
Bug#935009: e2fsprogs: e2scrub_all doesn't work when vg_free>${snap_size_mb}
Control: tags -1 +pending
Control: severity -1 normal

On Sun, Aug 18, 2019 at 04:16:25AM +0200, Mikhail Morfikov wrote:
> There's no error indicating what could be wrong.
>
> Looking through the /usr/sbin/e2scrub_all file, I noticed the following line:
>
> local devices=$(lvs -o lv_path --noheadings -S "lv_active=active,lv_role=public,lv_role!=snapshot,vg_free>${snap_size_mb}")
>
> There's vg_free>${snap_size_mb} which causes the problem -- the above lvs
> command returns nothing. It should be ">=" instead of ">", or the value in
> /etc/e2scrub.conf should be less than what pvscan returns as "free" to make it
> work.

Thanks for the bug report. The following patch will be in the next release of e2fsprogs.

					- Ted

commit 2e8cb3bebfd72c35922ddd5229fe0117b61ff19d
Author: Theodore Ts'o
Date:   Sun Aug 18 19:23:07 2019 -0400

    e2scrub_all: allow scrubbing in vg's whose free space == snapshot size

    If the volume group's free space is exactly the same as the snapshot
    size, e2scrub_all will skip the logical volumes in those volume
    groups. Fix this by changing the test from '>' to '>='.

    Fixes: c120312253 ("e2scrub_all: make sure there's enough free space...")
    Addresses-Debian-Bug: #935009

    Signed-off-by: Theodore Ts'o

diff --git a/scrub/e2scrub_all.in b/scrub/e2scrub_all.in
index 5bdbd116..2c563672 100644
--- a/scrub/e2scrub_all.in
+++ b/scrub/e2scrub_all.in
@@ -103,7 +103,7 @@ fi
 # Find scrub targets, make sure we only do this once.
 ls_scan_targets() {
-	local devices=$(lvs -o lv_path --noheadings -S "lv_active=active,lv_role=public,lv_role!=snapshot,vg_free>${snap_size_mb}")
+	local devices=$(lvs -o lv_path --noheadings -S "lv_active=active,lv_role=public,lv_role!=snapshot,vg_free>=${snap_size_mb}")

 	if [ -z "$devices" ]; then
 		return 0;
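The boundary condition is easy to see with plain shell arithmetic. lvs evaluates the vg_free comparison internally; the snippet below just illustrates why a strict '>' skips the case where free space exactly equals the snapshot size:

```shell
#!/bin/sh
# Illustration of the '>' vs '>=' boundary from the patch above.
snap_size_mb=256
vg_free=256    # exactly enough free space for the snapshot

[ "$vg_free" -gt "$snap_size_mb" ] && echo "gt: selected" || echo "gt: skipped"
# prints: gt: skipped

[ "$vg_free" -ge "$snap_size_mb" ] && echo "ge: selected" || echo "ge: skipped"
# prints: ge: selected
```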
Bug#850916: xzgv FTCBFS: uses build architecture build tools (gcc, pkg-config)
On Wed, Jan 11, 2017 at 09:27:26AM +0100, Helmut Grohne wrote:
> Source: xzgv
> Version: 0.9.1-4
> Tags: patch
> User: helm...@debian.org
> Usertags: rebootstrap
>
> xzgv fails to cross build from source, because it uses build
> architecture build tools. Simply adding the host architecture triplet as
> a prefix to gcc and pkg-config fixes the cross build. Please consider
> applying the attached patch.

xzgv 0.9.2-2 has been uploaded, and it's been converted to use dh. Can you verify whether or not the cross build problems are still present with xzgv? I *think* dh/debhelper should handle fixing CC automatically, right? I suppose we might need to use special handling for pkg-config?

Thanks,

- Ted
Bug#933764: buster-pu: package e2fsprogs/1.44.5-1+deb10u1
Thanks, Adam! My apologies for screwing up the first build/upload. I've just pushed e2fsprogs/1.44.5-1+deb10u1. I've attached the debdiff below.

					- Ted

diff -Nru e2fsprogs-1.44.5/debian/changelog e2fsprogs-1.44.5/debian/changelog
--- e2fsprogs-1.44.5/debian/changelog	2018-12-15 22:46:49.0 -0500
+++ e2fsprogs-1.44.5/debian/changelog	2019-08-02 23:49:00.0 -0400
@@ -1,3 +1,9 @@
+e2fsprogs (1.44.5-1+deb10u1) buster; urgency=medium
+
+  * Fix e4defrag crashes on 32-bit architectures (Closes: #920767)
+
+ -- Theodore Y. Ts'o  Fri, 02 Aug 2019 23:49:00 -0400
+
 e2fsprogs (1.44.5-1) unstable; urgency=medium

   * New upstream version
diff -Nru e2fsprogs-1.44.5/debian/gbp.conf e2fsprogs-1.44.5/debian/gbp.conf
--- e2fsprogs-1.44.5/debian/gbp.conf	2018-12-15 22:46:49.0 -0500
+++ e2fsprogs-1.44.5/debian/gbp.conf	2019-08-02 23:49:00.0 -0400
@@ -1,4 +1,4 @@
 [DEFAULT]
 pristine-tar = True
 upstream-tag='v%(version)s'
-debian-branch=debian/master
+debian-branch=debian/stable
diff -Nru e2fsprogs-1.44.5/debian/.gitignore e2fsprogs-1.44.5/debian/.gitignore
--- e2fsprogs-1.44.5/debian/.gitignore	1969-12-31 19:00:00.0 -0500
+++ e2fsprogs-1.44.5/debian/.gitignore	2019-08-02 23:49:00.0 -0400
@@ -0,0 +1 @@
+!patches
diff -Nru e2fsprogs-1.44.5/debian/patches/revert-e4defrag-use-64-bit-counters-to-t.patch e2fsprogs-1.44.5/debian/patches/revert-e4defrag-use-64-bit-counters-to-t.patch
--- e2fsprogs-1.44.5/debian/patches/revert-e4defrag-use-64-bit-counters-to-t.patch	1969-12-31 19:00:00.0 -0500
+++ e2fsprogs-1.44.5/debian/patches/revert-e4defrag-use-64-bit-counters-to-t.patch	2019-08-02 23:49:00.0 -0400
@@ -0,0 +1,66 @@
+From: Theodore Ts'o
+Date: Thu, 3 Jan 2019 22:27:37 -0500
+X-Dgit-Generated: 1.44.5-1 622e62942104d357912480e49c5b5524588cf45f
+Subject: Revert "e4defrag: use 64-bit counters to track # files defragged"
+
+This reverts commit 3293ea9ecbe1d622f9cf6c41d705d82fbae6a3e3.
+
+This wasn't really the right fix, since there can't be more than 2**32
+files in a file system.  The real issue is when the number of files in
+a directory change during the e4defrag run.
+
+Signed-off-by: Theodore Ts'o
+
+---
+
+--- e2fsprogs-1.44.5.orig/misc/e4defrag.c
++++ e2fsprogs-1.44.5/misc/e4defrag.c
+@@ -169,13 +169,13 @@ static int	block_size;
+ static int	extents_before_defrag;
+ static int	extents_after_defrag;
+ static int	mode_flag;
+-static uid_t	current_uid;
+-static unsigned long long	defraged_file_count;
+-static unsigned long long	frag_files_before_defrag;
+-static unsigned long long	frag_files_after_defrag;
+-static unsigned long long	regular_count;
+-static unsigned long long	succeed_cnt;
+-static unsigned long long	total_count;
++static unsigned int	current_uid;
++static unsigned int	defraged_file_count;
++static unsigned int	frag_files_before_defrag;
++static unsigned int	frag_files_after_defrag;
++static unsigned int	regular_count;
++static unsigned int	succeed_cnt;
++static unsigned int	total_count;
+ static __u8 log_groups_per_flex;
+ static __u32 blocks_per_group;
+ static __u32 feature_incompat;
+@@ -1912,9 +1912,9 @@ int main(int argc, char *argv[])
+ 	}
+ 	/* File tree walk */
+ 	nftw64(dir_name, file_defrag, FTW_OPEN_FD, flags);
+-	printf("\n\tSuccess:\t\t\t[ %llu/%llu ]\n",
+-	       succeed_cnt, total_count);
+-	printf("\tFailure:\t\t\t[ %llu/%llu ]\n",
++	printf("\n\tSuccess:\t\t\t[ %u/%u ]\n", succeed_cnt,
++	       total_count);
++	printf("\tFailure:\t\t\t[ %u/%u ]\n",
+ 		total_count - succeed_cnt, total_count);
+ 	if (mode_flag & DETAIL) {
+ 		printf("\tTotal extents:\t\t\t%4d->%d\n",
+@@ -1923,10 +1923,12 @@ int main(int argc, char *argv[])
+ 		printf("\tFragmented percentage:\t\t"
+ 			"%3llu%%->%llu%%\n",
+ 			!regular_count ? 0 :
+-			(frag_files_before_defrag * 100) /
++			((unsigned long long)
++			 frag_files_before_defrag * 100) /
+ 			regular_count,
+ 			!regular_count ? 0 :
+-			(frag_files_after_defrag * 100) /
++			((unsigned long long)
++			 frag_files_after_defrag * 100) /
+ 			regular_count);
+ 		}
+ 		break
Bug#933764: stretch-pu: package e2fsprogs/1.44.5-1+deb9u1
Oh, one more question --- should I be doing a source-only or a binary upload when I push to buster-proposed-updates? I'm a bit confused about whether it will be going into the NEW queue, and hence require a binary upload, or whether it should be a source-only build, because that's the new hotness and it's required for promotions to testing.

Thanks!

- Ted
Bug#933764: stretch-pu: package e2fsprogs/1.44.5-1+deb9u1
On Sat, Aug 03, 2019 at 04:08:14PM +0100, Adam D. Barratt wrote:
>
> I assume this is simply a case of an outdated chroot pointing at
> "stable" or similar. The net effect is that the upload ended up in NEW
> (presumably as buster's e2fsprogs builds additional binary packages
> relative to stretch). I've asked ftp-master to reject that upload.
>
> I'm not sure whether you were intending to fix this in stretch or
> buster, but this should either be 1.43.4-2+deb9u1 for stretch, or
> 1.44.5-1+deb10u1 targetted at buster.

It's an outdated chroot plus me being confused. It was supposed to be 1.44.5-1+deb10u1 targeted at buster. That's actually what the *sources* are; but the changelog, and the chroot it was built against, were stretch. *Sigh*. I'll go away, fix the changelog, and rebuild it now.

Would you prefer that we just close this bug as invalid and I open a new one, or should we retitle this bug and append to it? I don't have strong preferences either way.

Cheers,

- Ted
Bug#933764: stretch-pu: package e2fsprogs/1.44.5-1+deb9u1
Package: release.debian.org
Severity: normal
Tags: stretch
User: release.debian@packages.debian.org
Usertags: pu

This upload is to fix the important bug, #920767. The debdiff is attached below.

diff -Nru e2fsprogs-1.44.5/debian/changelog e2fsprogs-1.44.5/debian/changelog
--- e2fsprogs-1.44.5/debian/changelog	2018-12-15 22:46:49.0 -0500
+++ e2fsprogs-1.44.5/debian/changelog	2019-08-02 23:49:00.0 -0400
@@ -1,3 +1,9 @@
+e2fsprogs (1.44.5-1+deb9u1) stretch; urgency=medium
+
+  * Fix e4defrag crashes on 32-bit architectures (Closes: #920767)
+
+ -- Theodore Y. Ts'o  Fri, 02 Aug 2019 23:49:00 -0400
+
 e2fsprogs (1.44.5-1) unstable; urgency=medium

   * New upstream version
diff -Nru e2fsprogs-1.44.5/debian/gbp.conf e2fsprogs-1.44.5/debian/gbp.conf
--- e2fsprogs-1.44.5/debian/gbp.conf	2018-12-15 22:46:49.0 -0500
+++ e2fsprogs-1.44.5/debian/gbp.conf	2019-08-02 23:49:00.0 -0400
@@ -1,4 +1,4 @@
 [DEFAULT]
 pristine-tar = True
 upstream-tag='v%(version)s'
-debian-branch=debian/master
+debian-branch=debian/stable
diff -Nru e2fsprogs-1.44.5/debian/.gitignore e2fsprogs-1.44.5/debian/.gitignore
--- e2fsprogs-1.44.5/debian/.gitignore	1969-12-31 19:00:00.0 -0500
+++ e2fsprogs-1.44.5/debian/.gitignore	2019-08-02 23:49:00.0 -0400
@@ -0,0 +1 @@
+!patches
diff -Nru e2fsprogs-1.44.5/debian/patches/revert-e4defrag-use-64-bit-counters-to-t.patch e2fsprogs-1.44.5/debian/patches/revert-e4defrag-use-64-bit-counters-to-t.patch
--- e2fsprogs-1.44.5/debian/patches/revert-e4defrag-use-64-bit-counters-to-t.patch	1969-12-31 19:00:00.0 -0500
+++ e2fsprogs-1.44.5/debian/patches/revert-e4defrag-use-64-bit-counters-to-t.patch	2019-08-02 23:49:00.0 -0400
@@ -0,0 +1,66 @@
+From: Theodore Ts'o
+Date: Thu, 3 Jan 2019 22:27:37 -0500
+X-Dgit-Generated: 1.44.5-1 622e62942104d357912480e49c5b5524588cf45f
+Subject: Revert "e4defrag: use 64-bit counters to track # files defragged"
+
+This reverts commit 3293ea9ecbe1d622f9cf6c41d705d82fbae6a3e3.
+
+This wasn't really the right fix, since there can't be more than 2**32
+files in a file system.  The real issue is when the number of files in
+a directory change during the e4defrag run.
+
+Signed-off-by: Theodore Ts'o
+
+---
+
+--- e2fsprogs-1.44.5.orig/misc/e4defrag.c
++++ e2fsprogs-1.44.5/misc/e4defrag.c
+@@ -169,13 +169,13 @@ static int	block_size;
+ static int	extents_before_defrag;
+ static int	extents_after_defrag;
+ static int	mode_flag;
+-static uid_t	current_uid;
+-static unsigned long long	defraged_file_count;
+-static unsigned long long	frag_files_before_defrag;
+-static unsigned long long	frag_files_after_defrag;
+-static unsigned long long	regular_count;
+-static unsigned long long	succeed_cnt;
+-static unsigned long long	total_count;
++static unsigned int	current_uid;
++static unsigned int	defraged_file_count;
++static unsigned int	frag_files_before_defrag;
++static unsigned int	frag_files_after_defrag;
++static unsigned int	regular_count;
++static unsigned int	succeed_cnt;
++static unsigned int	total_count;
+ static __u8 log_groups_per_flex;
+ static __u32 blocks_per_group;
+ static __u32 feature_incompat;
+@@ -1912,9 +1912,9 @@ int main(int argc, char *argv[])
+ 	}
+ 	/* File tree walk */
+ 	nftw64(dir_name, file_defrag, FTW_OPEN_FD, flags);
+-	printf("\n\tSuccess:\t\t\t[ %llu/%llu ]\n",
+-	       succeed_cnt, total_count);
+-	printf("\tFailure:\t\t\t[ %llu/%llu ]\n",
++	printf("\n\tSuccess:\t\t\t[ %u/%u ]\n", succeed_cnt,
++	       total_count);
++	printf("\tFailure:\t\t\t[ %u/%u ]\n",
+ 		total_count - succeed_cnt, total_count);
+ 	if (mode_flag & DETAIL) {
+ 		printf("\tTotal extents:\t\t\t%4d->%d\n",
+@@ -1923,10 +1923,12 @@ int main(int argc, char *argv[])
+ 		printf("\tFragmented percentage:\t\t"
+ 			"%3llu%%->%llu%%\n",
+ 			!regular_count ? 0 :
+-			(frag_files_before_defrag * 100) /
++			((unsigned long long)
++			 frag_files_before_defrag * 100) /
+ 			regular_count,
+ 			!regular_count ? 0 :
+-			(frag_files_after_defrag * 100) /
++			((unsigned long long)
++			 frag_files_after_defrag * 100) /
+ 			regular_count);
+ 		}
+ 		break
Bug#933697: libuuid1 declared to replace e2fsprogs
On Fri, Aug 02, 2019 at 02:09:03PM +1000, Ralph Ronnquist wrote: > Well, > > when I then "agt-get install libuuid1:i386" (on this multiarch) > I get advice about a page full of packages to be removed, and the > following (plus a bit more): > --- > WARNING: The following essential packages will be removed. > This should NOT be done unless you know exactly what you are doing! > e2fsprogs libblkid1 (due to e2fsprogs) libuuid1 (due to e2fsprogs) fdisk > libfdisk1 (due to fdisk) libmount1 (due to fdisk) init > sysvinit-core (due to init) mount util-linux (due to mount) sysvinit-utils > What version of e2fsprogs did you have on the system at that time? - Ted
Bug#933697: libuuid1 declared to replace e2fsprogs
On Fri, Aug 02, 2019 at 11:25:10AM +1000, Ralph Ronnquist wrote:
> Package: libuuid1
> Version: 2.34-0.1
>
> The package is declared to replace e2fsprogs, which it doesn't do.
> Rather, installing it has a fair few ramifications on the installed system.
>
> The package belongs to the util-linux source, and it seems to be the
> same issue with uuid-runtime and uuid-dev.

The Replaces line in question is:

Replaces: e2fsprogs (<< 1.34-1)

This was because libuuid1 used to be shipped as part of e2fsprogs, and it was split out in 1.34-1:

e2fsprogs (1.34-1) unstable; urgency=low

  * Split shared libraries out of the e2fsprogs package into separate
    packages: libss2, libcomerr2, libuuid1, and e2fslibs.
    (Closes: #201155, #201164)

This happened in 2003, so this split landed in Debian 3.1 (Sarge). Later on, the uuid and blkid libraries got moved to util-linux, and the Replaces line carried over. Given that Debian 10 (Buster) is now stable, that Replaces line is O-B-S-O-L-E-T-E. That being said, it's harmless.

Cheers,

- Ted
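The version guard means the Replaces only fires against pre-split e2fsprogs. On a Debian system, `dpkg --compare-versions` is the authoritative way to check this; the sketch below approximates the ordering with GNU `sort -V` so it can run anywhere (an illustration only, not a substitute for dpkg's version algorithm):

```shell
#!/bin/sh
# older A B: succeeds if version A sorts strictly before version B.
# sort -V approximates dpkg ordering well enough for these strings.
older() {
	test "$1" != "$2" &&
	test "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n 1)" = "$1"
}

older 1.33-1 1.34-1 && echo "1.33-1: Replaces applies"
# prints: 1.33-1: Replaces applies

older 1.44.5-1 1.34-1 || echo "1.44.5-1: Replaces is a no-op"
# prints: 1.44.5-1: Replaces is a no-op
```

So on any system running a post-Sarge e2fsprogs, the declaration has no effect, which is why it is obsolete but harmless.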
Bug#921146: Program mksquashfs from squashfs-tools 1:4.3-11 does not make use all CPU cores
Phillip,

Peace. You may not like the fact that David Oberhollenzer (GitHub username AgentD) started an effort to implement a new set of tools to generate squashfs images on April 30th, 2019, and called it squashfs-tools-ng. However, it's really not fair to complain that there is a "violation of copyright" given that all of squashfs-tools was released under the GPL. Using some text from squashfs-tools in the package description or documentation of squashfs-tools-ng is totally allowed under the GPL. You could complain that they didn't include an acknowledgement that text was taken from your program. But then give them time to fix up the acknowledgements. Assuming good faith is always a good default.

The other thing that you've complained about is that some folks have (inaccurately, in your view) described squashfs-tools as not being maintained. I'd encourage you to take a step back, and consider how it might be quite understandable that they got that impression.

First of all, let's look at the documentation in the kernel source tree, located at Documentation/filesystems/squashfs.txt. It states that squashfs-tools's web site is www.squashfs.org, and the git tree is at git://git.kernel.org/pub/scm/fs/squashfs/squashfs-tools.git. The web site www.squashfs.org is not currently responding, but according to the Internet Archive, it was redirecting to http://squashfs.sourceforge.net/. This web site describes the latest version of squashfs-tools as 4.2, released in February 2011. It apparently wasn't updated when squashfs-tools 4.3 was released in May 2014.

The git.kernel.org tree is identical to sourceforge.net's git tree. That tree's most recent commit is from August 2017, e38956b9 ("squashfs-tools: Add zstd support"). Now, the fascinating thing is that the github tree has a completely different commit-id, 61133613, for the same commit ("squashfs-tools: Add zstd support"). The most recent git commit that the two trees have in common is 9c1db6d1 from September 2014.
Reconstructing the git history, you didn't make any commits between September 2014 and March 2017. At that point, you merged a number of github pull requests from between 2014 and 2017, but exported them as patches and applied them to the kernel.org/sourceforge git trees. Why, I'm not sure. In August 2017, you stopped updating the kernel.org and sourceforge git trees, and abandoned them. After that, for the rest of 2017, you merged one more pull request, and applied one commit to add the -nold option. In 2018, there were only two commits, in February and June. And then, after nothing until April 2019 (about the time that squashfs-tools-ng was started/announced), there was a flurry of activity, including merging github pull requests from 2017 and 2018, and you've done a lot of work since then. I say this not to criticize the amount of attention you've paid to squashfs-tools, but to point out that when David started work on squashfs-tools-ng, it's not unreasonable that he might have gotten the impression that development had ceased --- especially if he followed the documentation in the kernel sources, and found an extremely cobwebby website, and a git tree on git.kernel.org that hadn't been updated since 2017, with substantive heavy development basically ending in 2014 (which is also when the last release of squashfs-tools was cut). You don't need to ascribe malice to what might simply be an impression formed by looking in the official locations named in the official kernel documentation! As a fellow kernel file system developer, let me make a few suggestions. * Don't worry about "competing" software projects. For a while, a multi-billion dollar company attempted to maintain a BSD-licensed "competitor" to some of the programs in e2fsprogs. This was because Andy Rubin was highly allergic to the GPL way back when. 
I pointed out that the independent implementation was creating invalid file systems, and was buggy, and in general was making that multi-billion dollar company's life harder, not easier. They eventually gave up on it, and Android uses e2fsprogs these days. The whole point of open source is "may the best code win". If you're convinced that you, as the upstream kernel developer, can do a better job maintaining the userspace tools, then instead of complaining and threatening to sue, just keep your head down, and keep improving your code, and in the end, the best code will win. * I'd suggest that you make sure there is a single canonical git tree. It appears it's the github version of your git tree. So... starting with your github tree, do a "git merge" of the master branch from git.kernel.org, and then push updates to github, git.kernel.org, and git.code.sf.net. It's fine to have multiple mirrors of your git tree. I maintain multiple copies of the e2fsprogs git repo on git.kernel.org, github, and sourceforge. * Please consider tagging your releases. There are git tags for squashfs 3.1 and 3.2,
Bug#726953: dgit and submodules
On Wed, Jul 31, 2019 at 04:22:35PM +0100, Ian Jackson wrote: > Submodules are intensely frustrating[1]. One way they are frustrating is > that it is not clear even what it means for a .dsc to be identical to > a git tree which has submodule references. Are the submodules > supposed to be populated ? My inclination is to say the answer is > "yes", but your own practice here seems to be "no" ? Well, from the perspective of the upstream author (in this case dwarves-dfsg), I think what's going on is they want to reuse code, but young'uns these days don't understand how to maintain API stability (never mind the ABI compatibility required for shared libraries). So what they do is to say, "ok, I'm going to use *this* version of lib/bpf" for vN of libdwarves, and at some point, for vN+x of libdwarves, "I'll do a "git pull" of lib/bpf, discover that various functions have changed arguments, so I'll have to fix up my source code to deal with this new version of lib/bpf." Because API stability is too hard(tm), they can't depend on having a particular version of libbpf.a installed in a distribution library. So instead, particular versions of lib/bpf are associated with particular versions of dwarves, and a "git pull" of the top-level dwarves-dfsg git repository will also update the lib/bpf submodule to the version tied to that version of the top-level git repo. From the perspective of the source tarball, they distribute the source files for lib/bpf in dwarves-dfsg's source tar.xz file. And they statically link lib/bpf into the binaries in the distro package, so even though modern open source programs have no idea how to achieve API or ABI stability, it all works. Mostly. So yeah, it's frustrating, and it means that we're shipping 576k of lib/bpf with dwarves-dfsg, and if there is some other source package that also uses lib/bpf, they will also ship their own version. 
It also means that if there is a security bug fix needed for lib/bpf, each user will have to update to the fixed version of lib/bpf, fix any API breakage, and then do a new source and binary release. The problem is, if we want to build upstream kernels with compressed type information for BPF, we need to use dwarves-dfsg. And the fact that it has the bad taste to use a completely unstable lib/bpf is what it is. But if dgit is supposed to be able to support *all* packages, even packages like lib/bpf and their users, such as dwarves-dfsg, then it's going to have to figure out how to deal with this. And git created submodules to be able to support this workflow; so if dgit is going to be a universal system, it needs to deal with packages that have decided to use this particular mechanism of code reuse. :-/ > [1] I think they are nearly always the wrong answer. Usually they are > the worst answer. Even (especially) to the situation they were > specifically intended to address. They are simply too broken. Of > course this is of no help to you as a downstream if your upstream has > drunk the poison kool-aid. Exactly. Compare and contrast this with e2fsprogs, where I've maintained ABI compatibility for over a decade. Discipline! The young'uns don't understand Discipline! And they should also get off my d*mned lawn. :-) - Ted
Bug#931142: Bts#931142: dwarves: New upstream version should be packaged
OK, I've uploaded 1.15-1, so let's not rewrite the history on Salsa any more. :-) Thanks!! - Ted
Bug#931142: Bts#931142: dwarves: New upstream version should be packaged
On Tue, Jul 30, 2019 at 05:54:37PM +0200, Domenico Andreoli wrote: > Edited history again, just fixed the typo in the only patch. I could > not find any other low hanging fruit, anything else requires some > investigation (included the warnings on debian/copyright) Well, debian-watch-uses-insecure-uri is really trivial. That's just s/http/https/ on the URL. The standards update is also not hard; it's just a matter of following the checklist at: https://www.debian.org/doc/debian-policy/upgrading-checklist.html and looking at the sections starting at: https://www.debian.org/doc/debian-policy/upgrading-checklist.html#version-3-9-4 and moving upwards. The debian/copyright warnings are just outdated because some of the files it lists simply aren't there any more. I'm also not convinced many of them are needed given the wildcard match. > I'd like to upload this first 1.15-1 so to not block anybody else > playing with recent kernels on unstable. But sure, we can do the cleanup later. I'll do the upload today. Thanks! - Ted
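For reference, the debian-watch-uses-insecure-uri fix really is just the scheme change in debian/watch. A hypothetical fragment (the actual upstream URL and tarball pattern for dwarves-dfsg may differ from what is sketched here):

```
# debian/watch (illustrative -- real site and pattern may differ)
version=4
# The lintian tag goes away by changing http:// to https://:
https://fedorapeople.org/~acme/dwarves/ dwarves-([\d.]+)\.tar\.xz
```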
Bug#931142: Bts#931142: dwarves: New upstream version should be packaged
On Mon, Jul 29, 2019 at 10:32:38AM -0400, Theodore Y. Ts'o wrote: > On Mon, Jul 29, 2019 at 02:51:49PM +0200, Domenico Andreoli wrote: > > > > Here it is: https://mentors.debian.net/package/dwarves-dfsg > > > > I found not useful to reuse your git history as-is, although I could > > not drop your changelog entry ;) > > It looks like you did merge in my git changes, though? Hopefully it > *was* useful. :-) Oh, never mind. I got confused when I looked at the git log, and I saw some commits that had me listed as the Author. I see what you did; I guess you figured there was no point keeping my interim packaging of 1.13. - Ted
Bug#931142: Bts#931142: dwarves: New upstream version should be packaged
On Mon, Jul 29, 2019 at 02:51:49PM +0200, Domenico Andreoli wrote: > > Here it is: https://mentors.debian.net/package/dwarves-dfsg > > I found not useful to reuse your git history as-is, although I could > not drop your changelog entry ;) It looks like you did merge in my git changes, though? Hopefully it *was* useful. :-) Did you take a look at the lintian reports on the mentors page? The Warnings and Info reports all look pretty simple to resolve. Any chance you could fix them up? Thanks, - Ted
Bug#933247: e2fsprogs FTCBFS: DEB_BUILD_OPTIONS=nocheck no longer works
On Sun, Jul 28, 2019 at 06:21:57PM +0200, Helmut Grohne wrote: > Hi Ted, > > On Sun, Jul 28, 2019 at 10:04:48AM -0400, Theodore Y. Ts'o wrote: > > Yes, I had noticed that this was breaking some of the ports build as > > well, and so I have a similar patch in my tree already. I was going > > to wait a few days to see if there were any other issues, and to allow > > 1.45.3-3 to enter testing before I was going to do another upload. Is > > it urgent for you such that you would prefer an upload sooner? > > If you defer it, I'll have to add the patch to rebootstrap.git and later > revert it. That's unfortunate, but certainly possible. I'll need some > fix (either in e2fsprogs or rebootstrap), because otherwise I'm blind to > later failures. Knowing your plans is key here. I'll assume that you > defer it unless you mail back with 12h. When I say "defer" I mean for less than a week. 3 days so e2fsprogs can enter testing, and then usually with the greater exposure, more bugs get reported, and then a handful of days for me to fix up problems and re-upload. If I re-upload now, it resets the unstable->testing countdown timer, and it further bloats things like snapshots.debian.org. I've already uploaded 3 releases in 3 days, so I figured it might be considered polite if I batched up some changes instead of continuing to follow a "add a git commit, push a debian package release" pattern. How critical is it for rebootstrap.git to be returning a problem for a handful of days? I guess I'm missing something about why a few days of it reporting breakage is such a big issue? > It just occured to me that there might be a simpler implementation > (untested): > > override_dh_auto_test: > dh_auto_test -- V=1 Possibly. I'm still waiting to hear back from the ia-64 porting folks about why the m_hugefile test is failing when run on the ia-64 buildd, but it works just fine when I run it on an ia-64 development machine. 
So I may need to do something special for ia-64, and I haven't yet decided how to pass in some kind of request (probably using an environment variable, but I haven't decided for sure yet) so that we skip the m_hugefile test on ia-64. (I'm guessing it's due to lack of disk space, since it requires 1.1G of space in /tmp to run the test, but I'm not 100% sure.) So that's another issue that's pending a debian package release, and another reason why breaking cross builds for a few days didn't seem like such an unacceptable tragedy. I do like to batch up fixes, instead of rolling new releases every day or two. Cheers, - Ted
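Both fixes discussed in this thread hinge on the same dpkg convention: honoring "nocheck" in DEB_BUILD_OPTIONS. A sketch of a debian/rules override combining the nocheck guard with the V=1 verbosity that blhc wants (illustrative only; the actual e2fsprogs rules file invokes make on its own build directory rather than dh_auto_test):

```make
# Illustrative debian/rules fragment: keep V=1 for blhc, but skip
# the test suite entirely when DEB_BUILD_OPTIONS contains "nocheck".
override_dh_auto_test:
ifeq (,$(findstring nocheck,$(DEB_BUILD_OPTIONS)))
	dh_auto_test -- V=1
endif
```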
Bug#933247: e2fsprogs FTCBFS: DEB_BUILD_OPTIONS=nocheck no longer works
control: -1 tags +pending On Sun, Jul 28, 2019 at 08:09:22AM +0200, Helmut Grohne wrote: > Source: e2fsprogs > Version: 1.45.3-3 > Severity: important > Tags: patch > User: debian-cr...@lists.debian.org > Usertags: ftcbfs > > e2fsprogs fails to cross build from source, since the -3 upload, because > its support for DEB_BUILD_OPTIONS=nocheck is broken. Setting severity to > important, because e2fsprogs is required for bootstrapping. Please > consider applying the attached patch. Yes, I had noticed that this was breaking some of the ports build as well, and so I have a similar patch in my tree already. I was going to wait a few days to see if there were any other issues, and to allow 1.45.3-3 to enter testing before I was going to do another upload. Is it urgent for you such that you would prefer an upload sooner? - Ted commit 20a18d54746731704d2d2dfc28edd9660eb1a296 Author: Theodore Ts'o Date: Sat Jul 27 12:17:06 2019 -0400 debian: skip running "make check" if DEB_BUILD_OPTIONS contains nocheck This was done automatically by debhelper, but it got dropped when override_dh_auto_test was added by commit 7f4c3bb120 ("debian: run "make check" with V=1 to keep blhc happy"). Signed-off-by: Theodore Ts'o diff --git a/debian/rules b/debian/rules index e957d754..2499f6a7 100755 --- a/debian/rules +++ b/debian/rules @@ -170,7 +170,9 @@ override_dh_gencontrol: dh_gencontrol --remaining-packages override_dh_auto_test: +ifeq (,$(findstring nocheck,$(DEB_BUILD_OPTIONS))) $(MAKE) -C ${stdbuilddir} V=1 check +endif test_printenv: printenv | sort
Bug#931142: Bts#931142: dwarves: New upstream version should be packaged
On Thu, Jul 25, 2019 at 07:03:50PM +0200, Domenico Andreoli wrote: > Hi Theodore, > > apologies, I'll prepare a new upload. Would you mind sponsoring it? Sure, I'd be happy to sponsor it. - Ted
Bug#932906: e2fsprogs: FTBFS on x32: Tests failed: f_pre_1970_date_encoding
Control: tags -1 +pending On Wed, Jul 24, 2019 at 05:17:28PM +0200, Thorsten Glaser wrote: > > 355 tests succeeded 1 tests failed > Tests failed: f_pre_1970_date_encoding > > I assume this is because x32 is a 32-bit (ILP32) architecture > with 64-bit time_t. Thanks for the bug report. This will be fixed in the next version of e2fsprogs. The root cause was how we mocked setting the system time into the far future by using the E2FSCK_TIME environment variable during the f_pre_1970_date_encoding test. - Ted commit a368e0cbfb33d3050dcf0bf5a5539d3dac39 Author: Theodore Ts'o Date: Wed Jul 24 22:25:11 2019 -0400 e2fsck: set E2FSCK_TIME correctly on a 32-bit arch with a 64-bit time_t Addresses-Debian-Bug: #932906 Signed-off-by: Theodore Ts'o diff --git a/e2fsck/e2fsck.c b/e2fsck/e2fsck.c index 3770bfcb..929bd78d 100644 --- a/e2fsck/e2fsck.c +++ b/e2fsck/e2fsck.c @@ -37,7 +37,7 @@ errcode_t e2fsck_allocate_context(e2fsck_t *ret) time_env = getenv("E2FSCK_TIME"); if (time_env) - context->now = strtoul(time_env, NULL, 0); + context->now = (time_t) strtoull(time_env, NULL, 0); else { context->now = time(0); if (context->now < 1262322000) /* January 1 2010 */
Bug#932874: logsave: Insufficient Breaks/Replaces on e2fsprogs
On Wed, Jul 24, 2019 at 05:46:50AM +0200, Sven Joachim wrote: > Package: logsave > Version: 1.45.3-1 > Severity: serious > > Installing logsave without upgrading e2fsprogs fails: > > , > | Preparing to unpack .../logsave_1.45.3-1_amd64.deb ... > | Unpacking logsave (1.45.3-1) ... > | dpkg: error processing archive > /var/cache/apt/archives/logsave_1.45.3-1_amd64.deb (--install): > | trying to overwrite '/sbin/logsave', which is also in package e2fsprogs > 1.45.2-1 > ` > > There are a Replaces/Breaks relationships on e2fsprogs (<< 1.45.2-1) > which need to be bumped to (<< 1.45.3-1). ... and I missed this in the blizzard of Debian bug e-mails re: logsave as well. Will fix in the next upload... - Ted
Bug#932876: logsave: should be Multi-Arch: foreign
On Wed, Jul 24, 2019 at 05:52:48AM +0200, Sven Joachim wrote: > Package: logsave > Version: 1.45.3-1 > > Like the package it partially replaces (e2fsprogs), logsave should be > Multi-Arch: foreign. The initramfs-tools-core package will have to > depend on logsave, which makes cross-grades (say, from i386 to amd64) > difficult, as logsave and e2fsprogs cannot be cross-graded before dpkg. Ack. Sorry, I missed this before doing the upload for 1.45.3-2; I was focused on making sure I fixed the problem of people upgrading e2fsprogs and then not being able to reboot their system, and this one slipped through. - Ted
Bug#932855: Bug#932881: add dependency on logsave
Control: tags 932855 +pending Control: tags 932859 +pending Control: tags 932861 +pending Control: tags 932881 +pending Control: tags 932888 +pending My apologies, I hadn't realized initramfs had a dependency on logsave. I guess I should have known that, but it had slipped my mind. This will be fixed in the next release of e2fsprogs. - Ted commit f3223c5fa2b7e0e3e10c96dea0fce2048910ff98 Author: Theodore Ts'o Date: Wed Jul 24 12:20:11 2019 -0400 debian: add a hard dependency on logsave to e2fsprogs The initramfs created by the initramfs-tools package needs logsave and assumes it comes along with e2fsprogs. If it is not present, the result is systems which fail to boot. Fix this by adding the dependency. In the future initramfs-tools will explicitly ask for logsave (tracked in Debian Bug: #932854), but we'll need to keep this dependency until the next stable release of Debian. Addresses-Debian-Bug: #932855 Addresses-Debian-Bug: #932859 Addresses-Debian-Bug: #932861 Addresses-Debian-Bug: #932881 Addresses-Debian-Bug: #932888 Signed-off-by: Theodore Ts'o diff --git a/debian/control b/debian/control index 3ea0b404..f20e9228 100644 --- a/debian/control +++ b/debian/control @@ -200,8 +200,9 @@ Description: ext2/ext3/ext4 file system libraries - headers and static libraries Package: e2fsprogs XB-Important: yes Pre-Depends: ${shlibs:Depends}, ${misc:Depends}, libblkid1, libuuid1 +Depends: logsave Multi-Arch: foreign -Suggests: gpart, parted, fuse2fs, e2fsck-static, logsave +Suggests: gpart, parted, fuse2fs, e2fsck-static Recommends: e2fsprogs-l10n Architecture: any Description: ext2/ext3/ext4 file system utilities
Bug#932622: Cronjobs doesn't check the presence of the executable
control: -1 +pending On Sun, Jul 21, 2019 at 01:27:38PM +0200, Laurent Bigonville wrote: > > The cronjob (/etc/cron.d/e2scrub_all) file is a conffile, that means > that if the package is removed without being purge, the cronjob will > still be installed, but the executable will not. > > The cronjob should test the presence of the executable. Thanks for pointing this out. The following will be in the next e2fsprogs release. - Ted commit 2e0ad4432898e13a21db3b8f76c629b19e01cadc Author: Theodore Ts'o Date: Sun Jul 21 13:13:24 2019 -0400 e2scrub_all_cron: check to make sure e2scrub_all Since e2scrub_all.cron is marked as a config file, it can hang around after the package is removed, in which case e2scrub_all might not be present. So check to make sure e2scrub_all exists before trying to execute it. Addresses-Debian-Bug: #932622 Signed-off-by: Theodore Ts'o Reported-by: Laurent Bigonville diff --git a/scrub/e2scrub_all_cron.in b/scrub/e2scrub_all_cron.in index f9cff878..fcfe415f 100644 --- a/scrub/e2scrub_all_cron.in +++ b/scrub/e2scrub_all_cron.in @@ -62,6 +62,7 @@ on_ac_power() { return 0 } +test -e @root_sbindir@/e2scrub_all || exit 0 test -e /run/systemd/system && exit 0 on_ac_power || exit 0
Bug#931847: Bogus package-supports-alternative-init-but-no-init.d-script test?
This Lintian check is also a false positive for e2fsprogs, where it's triggering 4 false positive Lintian errors. Per https://lintian.debian.org/tags/package-supports-alternative-init-but-no-init.d-script.html: This lintian tag has: Emitted (non-overridden): 787, overridden: 22, total: 809 An update to override these 4 false positives for e2fsprogs is currently stuck in the NEW queue (due to a new binary package), and is not included in these statistics. Certainly Lintian's claim that this Lintian error is "certain" is completely false. - Ted
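Mechanically, the update stuck in NEW is just an entry per false positive in the package's lintian-overrides file; a hypothetical sketch (the real override entries may carry additional per-binary context):

```
# debian/e2fsprogs.lintian-overrides (illustrative)
e2fsprogs: package-supports-alternative-init-but-no-init.d-script
```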
Bug#932181: e2tools: New potential upstream for e2tools available?
Package: e2tools Version: 0.0.16-6.1+b2 Severity: wishlist The original source for e2tools has disappeared (the home.earthlink.net page is dead), but it looks like there is activity at: https://github.com/ndim/e2tools Hans has modernized the package, fixing the FSF's address in the copyright notices, updating the autoconf file, fixing various compiler warnings, adding man pages (taken from Debian, in fact), etc. Also note that e2tools actually *does* work on ext4 file systems (although e2cp is creating indirect-mapped blocks by default, which is unfortunate). So the package description should be updated as well. I'd suggest reaching out to Hans and Keith Sheffield (assuming his pobox.com e-mail is still working) to make sure they are amenable to an upstream switch. -- System Information: Debian Release: bullseye/sid APT prefers testing APT policy: (900, 'testing'), (900, 'stable') Architecture: amd64 (x86_64) Kernel: Linux 5.1.0-00062-gc804857673ae (SMP w/8 CPU cores) Locale: LANG=en_US.utf8, LC_CTYPE=en_US.utf8 (charmap=UTF-8), LANGUAGE=en_US.utf8 (charmap=UTF-8) Shell: /bin/sh linked to /usr/bin/dash Init: systemd (via /run/systemd/system) Versions of packages e2tools depends on: ii e2fslibs1.45.3-1 ii libc6 2.28-10 ii libcomerr2 1.45.3-1 e2tools recommends no packages. e2tools suggests no packages. -- no debconf information
Bug#923372: e2fsprogs: sharing of /sbin/logsave
Control: tags -1 +pending On Wed, Feb 27, 2019 at 03:01:52AM +, Dmitry Bogatov wrote: > > Package: e2fsprogs > Version: 1.44.5-1 > Severity: normal > > Dear Maintainer, > > for long time e2fsprogs were essential, and bin:initscripts used > /sbin/logsave from e2fsprogs. Nowdays, e2fsprogs are not essential, > so initscripts switched to using /sbin/logsave on best-effort basis (use > if present), which is suboptimal. > > Upstream maintainer of sysvinit imported sources of logsave from > e2fsprogs into sysvinit, and now initscripts can again be sure, that > logsave is present, if we settle question, who will provide > /sbin/logsave. > > So here I propose, that e2fsprogs no longer installs logsave, and it > switch owner to bin:sysvinit-utils. Not the best solution, given there > is desire (but not much of action) of dropping essential flag from > bin:sysvinit-utils. > > Alternatively, one of us could build bin:logsave. I've separated logsave out into its own (Priority: optional) package[1]; it's currently pending in the NEW queue as part of the e2fsprogs 1.45.3 release. [1] https://browse.dgit.debian.org/e2fsprogs.git/commit/?id=bb788de9b021d21686d08366f02415a4c0a91f5e - Ted
Bug#932098: gcc-8: LTO with -fdebug-prefix-map results in unreproducible build
Package: gcc-8 Version: 8.3.0-19 Severity: normal Dear Maintainer, The e2fsprogs package is currently not reproducible. See: https://tests.reproducible-builds.org/debian/rb-pkg/unstable/amd64/e2fsprogs.html This is caused by an unfortunate interaction between LTO and the flags from dpkg-buildflags which are meant to try to create reproducible builds in the debug symbols. % dpkg-buildflags --get CFLAGS -g -O2 -fdebug-prefix-map=/tmp/gbp/e2fsprogs-1.45.3=. -fstack-protector-strong -Wformat -Werror=format-security While this does a good thing in that it maps away the build pathname in the debugsym files, to prevent reproducible build problems, it results in an extremely perverse result for LTO builds, because this debug option --- complete with build pathname --- is encoded in the gnu.lto_.opts section. :-( This is leaving me in a difficult position, since it means I have to decide which is more important to Debian --- LTO builds, or reproducible builds. I suspect I'll eventually decide that reproducible builds are more important, since the bloat to e2fsprogs is only 60k or so. But I figure I'll file a bug against gcc-8 in the hopes that it's not too hard to fix this problem. Thanks!! -- System Information: Debian Release: 10.0 APT prefers stable APT policy: (900, 'stable') Architecture: amd64 (x86_64) Kernel: Linux 5.1.0-00062-gc804857673ae (SMP w/8 CPU cores) Locale: LANG=en_US.utf8, LC_CTYPE=en_US.utf8 (charmap=UTF-8), LANGUAGE=en_US.utf8 (charmap=UTF-8) Shell: /bin/sh linked to /usr/bin/dash Init: systemd (via /run/systemd/system)
Bug#931142: dwarves: New upstream version should be packaged
Package: dwarves Version: 1.12-2 Severity: normal Tags: patch Dear Maintainer, There is a newer version of dwarves upstream (1.14). Version 1.13 is needed to build the latest kernels with CONFIG_DEBUG_INFO_BTF. Please consider merging the debian/master branch from: https://salsa.debian.org/tytso/dwarves.git It builds using gbp-buildpackage for me to create dwarves 1.14-1 and it can be used to successfully build Linux v5.2-rc2 with CONFIG_DEBUG_INFO_BTF. -- System Information: Debian Release: 10.0 APT prefers testing APT policy: (900, 'testing') Architecture: amd64 (x86_64) Kernel: Linux 5.1.0-00062-gc804857673ae (SMP w/8 CPU cores) Locale: LANG=en_US.utf8, LC_CTYPE=en_US.utf8 (charmap=UTF-8), LANGUAGE=en_US.utf8 (charmap=UTF-8) Shell: /bin/sh linked to /usr/bin/dash Init: systemd (via /run/systemd/system) Versions of packages dwarves depends on: ii libc62.28-10 ii libdw1 0.176-1.1 ii libelf1 0.176-1.1 ii zlib1g 1:1.2.11.dfsg-1 dwarves recommends no packages. dwarves suggests no packages. -- no debconf information
Bug#901289: New upstream home?
Regarding in-kernel recovery "being good enough". The reason why some file systems and system administrators prefer to run fsck at boot, even when there is "in-kernel recovery", is that journal/log replay only works on an unclean shutdown. However, sometimes there can be file system inconsistency errors caused by hardware problems, software bugs (e.g., an Nvidia binary-only driver dereferencing a wild pointer and causing random memory corruption leading to file system damage), etc. The question is what to do then? Some file systems will just fail the boot, leaving the server down until a system administrator can look at things. Other file systems will try to automatically repair "obvious" problems for which there is only one thing a human would have done anyway, so the system can be brought back to life much faster. In those cases, it is useful if a record of the repairs that were done to fix the file system can be logged automatically. "logsave" was an initial attempt at doing this. Programs like "logsave" are also really useful if you are trying to run the repair from an initial ramdisk, since you want to save the output of the boot log someplace useful, before the root file system has been mounted. Of course, systemd's journald daemon also serves this need, although logsave predated systemd by a decade. For people who aren't worried about large numbers of servers in production, perhaps logsave really isn't that necessary. This is especially true if they are choosing to use a file system that does not attempt automated recovery in the presence of hardware or software failures, not just automated log replays; those systems will just stop the boot dead in the water if there is any kind of unexpected file system corruption, so logsave won't buy those sysadmins anything, anyway. So my recommendation for sysvinit is to make logsave optional; if logsave is not installed, it's not critical for the functioning of sysvinit to save the output somewhere. 
I'd also suggest that sysvinit might want to consider a mode where, if logsave *is* available, it is used to save the output of the full init.d boot sequence, and not just the fsck output. This will give sysvinit roughly similar debuggability to journald, which is something that system administrators could also find extremely useful. If sysvinit *really* wants to take over logsave, I won't really object. For one thing, if you only care about ext4, we actually now have a much more sophisticated way of saving fsck logs. See the LOGGING section of the e2fsck.conf man page. Logsave was designed back when I was worried about enterprise-grade Reliability, Availability, and Serviceability, and I worked at a company that cared about such things at scales up to a handful of mainframes. What we now have built into fsck.ext4 (aka e2fsck) was designed after I started working for a company that has to deal with several orders of magnitude more servers and file systems in data centers all over the world, with a very small staff of Site Reliability Engineers to take care of them all. :-) However, I really don't think migrating logsave between packages so it's provided by sysvinit is worth it. Just make it optional; for most people running a desktop or a handful of servers, especially if they are using file systems that don't try to do automated recovery, it's not going to buy them much anyway. Cheers, - Ted
Bug#911768: pinentry-gnome3 fails to open a window with 'No Gcr System Prompter available, falling back to curses'
On Thu, Dec 20, 2018 at 03:17:03PM -0500, Daniel Kahn Gillmor wrote: > > I wonder whether we can rule out any interaction with gpg-agent itself > -- does "echo getpin | pinentry-gnome3" itself fall back to curses on > your system when nfs-kernel-server is installed? I can confirm that I did this experiment before I uninstalled nfs-kernel-server --- and it fell back to curses. The next experiment to do would be to reinstall nfs-kernel-server and reboot --- and see if it falls back to curses again. - Ted
Bug#914087: fixed in e2fsprogs 1.44.5-1
On Sun, Dec 16, 2018 at 10:35:55AM +, Simon McVittie wrote: > >* Fix mk_cmds so it works on a usrmerge system when e2fsprogs is built > > on a non-usrmerge system (Closes: #914087) > > In the interests of avoiding inaccurate information, this bug was actually > the other way round: the previous e2fsprogs version didn't work on a > non-usrmerge system if built on a usrmerge system. Oops, thanks for the correction. I'll adjust the debian changelog so it will be correct in future releases. - Ted
Bug#915094: Additional information.
On Sat, Dec 01, 2018 at 12:23:51AM +, Gong S. wrote: > -BEGIN PGP SIGNED MESSAGE- > Hash: SHA256 > > >Can you tell what version of the kernel you are using? > 4.19.0-trunk-amd64. I guess that I am not getting any semblance of support > with an experimental kernel. > However, when I upgrade from 4.19.0-rc7 to 4.19.0-trunk, the problem is still > there, so it does not look like a kernel problem. That certainly doesn't mean that it's not a kernel bug; it just means that it's a kernel bug that wasn't fixed between 4.19-rc7 and 4.19. The inline_data feature isn't enabled by default because we're not 100% confident that it's fully rock solid. We are regularly running regression testing on that configuration, and while there are some test failures with inline_data enabled that aren't there with the default options, none of them have caused processes to get stuck: ext4/4k: 444 tests, 2 failures, 42 skipped, 4442 seconds Failures: ext4/034 generic/388 ext4/encrypt: 512 tests, 1 failures, 123 skipped, 2638 seconds Failures: ext4/034 ext4/dioread_nolock: 443 tests, 2 failures, 42 skipped, 4338 seconds Failures: ext4/034 generic/388 vs ext4/adv: 448 tests, 4 failures, 48 skipped, 4149 seconds Failures: ext4/034 generic/399 generic/477 generic/519 So basically, at the moment I can't really recommend this feature for general use --- although this particular failure which you've reported is a new one for me. > >How/when did you enable this feature? > Just me poking around options in the man page. I assume that I can save some > space with small files. > >It looks like you were trying to upgrade some packages when you ran into > >this issue. > It happened to random processes. It happened to "mandb", "perl" and > "chromium" most frequently. Sometimes it happens to very basic processes like > "ls" or "rm" (happened once when I try to remove Chromium's cache folder). > Also, if a process is stuck, the processes will be stuck again if I invoke > that program again. 
Is this true even if you reboot? What if you force an fsck check on the file system? - Ted
Bug#916188: misc-manuals: Fixes
tags 916188 +pending thanks Thanks for sending the patch. I've applied it to the e2fsprogs maint branch. - Ted
Bug#915942: libext2fs2: ships empty directory /usr/share/doc/libext2fs
tags 915942 +pending thanks On Sat, Dec 08, 2018 at 04:18:38AM +0100, Andreas Beckmann wrote: > Package: libext2fs2 > Version: 1.44.4-2 > Severity: minor > User: debian...@lists.debian.org > Usertags: piuparts > > due to a typo in debian/rules, line 388, the libext2fs2 package ships > the empty directory /usr/share/doc/libext2fs (missing the SOVERSION > in the name). Oops!! Thanks for pointing this out! > PS: and now I still need to understand why this disappears on some > stretch->buster upgrade paths ... For stretch these documents were installed in /usr/share/e2fslibs. This was a screw up as part of the rename of e2fslibs to libext2fs2. - Ted
Bug#915204: ss-dev: reproducible build (usrmerge): embeds path of sed found via PATH
On Sat, Dec 01, 2018 at 06:57:08PM +0100, Andreas Henriksson wrote:
> 
> Debdiff adding SED=/bin/sed in debian/rules attached for your
> convenience.

Thanks for your patch. I already have a fix queued up for the next release of e2fsprogs:

commit b7bb80dc7033776149bb1f33c81a753fe21a2f89
Author: Theodore Ts'o
Date:   Thu Nov 22 18:01:56 2018 -0500

    mk_cmds: don't use explicit pathname for sed

    $AWK doesn't use an explicit pathname, and it's perfectly fine to
    assume that awk and sed are in the user's PATH.  The problem with
    using an explicit pathname is that Debian currently allows merged
    and non-merged /usr.  Avoid using an explicit pathname to prevent
    potential problems.

    Addresses-Debian-Bug: #914087

    Signed-off-by: Theodore Ts'o

diff --git a/lib/ss/mk_cmds.sh.in b/lib/ss/mk_cmds.sh.in
index 6d4873582..53282f4dd 100644
--- a/lib/ss/mk_cmds.sh.in
+++ b/lib/ss/mk_cmds.sh.in
@@ -4,7 +4,7 @@
 DIR=@datadir@/ss
 AWK=@AWK@
-SED=@SED@
+SED=sed

Cheers,

- Ted
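For anyone wanting to see the merged-/usr wrinkle that motivates this patch, here's a quick sketch (the paths shown are the standard Debian ones; exact output varies by system):

```shell
# On a merged-/usr system, /bin is a symlink to /usr/bin, so a build
# chroot may record SED=/usr/bin/sed even though a non-merged target
# system only has /bin/sed.  Resolving sed through PATH at run time,
# as the patch does, works on both layouts.
command -v sed                   # whatever this system provides
ls -ld /bin 2>/dev/null || true  # on merged-/usr: a symlink to usr/bin
```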
Bug#915094: Processes stuck in "D" state with inline_data feature enabled.
This is a kernel bug, not an e2fsprogs bug. Can you tell what version of the kernel you are using?

Inline_data is a feature I generally don't recommend using unless you have a specific reason. It's not on by default. How/when did you enable this feature? And what were you hoping to achieve by enabling it?

It looks like you were trying to upgrade some packages when you ran into this issue. Was that something you were doing manually? Are you automatically running "apt-get upgrade" out of cron?

Regards,

- Ted
Bug#914087: mk_cmds: wrong SED path when built on a merged-/usr system and run on a non-merged-/usr system
tags 914087 +pending
thanks

Thanks for reporting this; the following patch will be in the next version of e2fsprogs.

- Ted

commit b7bb80dc7033776149bb1f33c81a753fe21a2f89
Author: Theodore Ts'o
Date:   Thu Nov 22 18:01:56 2018 -0500

    mk_cmds: don't use explicit pathname for sed

    $AWK doesn't use an explicit pathname, and it's perfectly fine to
    assume that awk and sed are in the user's PATH.  The problem with
    using an explicit pathname is that Debian currently allows merged
    and non-merged /usr.  Avoid using an explicit pathname to prevent
    potential problems.

    Addresses-Debian-Bug: #914087

    Signed-off-by: Theodore Ts'o

diff --git a/lib/ss/mk_cmds.sh.in b/lib/ss/mk_cmds.sh.in
index 6d4873582..53282f4dd 100644
--- a/lib/ss/mk_cmds.sh.in
+++ b/lib/ss/mk_cmds.sh.in
@@ -4,7 +4,7 @@
 DIR=@datadir@/ss
 AWK=@AWK@
-SED=@SED@
+SED=sed
 for as_var in \
   LANG LANGUAGE LC_ADDRESS LC_ALL LC_COLLATE LC_CTYPE LC_IDENTIFICATION \
Bug#408954: checkroot.sh: should not skip running fsck with JFS root
On Tue, Nov 13, 2018 at 11:46:31PM +0100, Adam Borowski wrote:
> what would you say about getting rid of fsck at boot for most filesystems?

The reason why it's important to run fsck at boot is that for many file systems, if a file system consistency problem is detected at run time (which might be caused by a kernel bug, or a hardware problem, or a cosmic ray), a flag is set in the superblock indicating that the file system really needs to be checked.

For ext4, what happens after the flag is set in the superblock depends on how the file system is configured (via mount options, or by flags set via "tune2fs -e"). We can either ignore the fact that there was an error (the "don't worry, be happy" mode), we can remount the file system read-only --- or we can immediately force a reboot. At which point, when the system reboots, the file system checker will run, and in preen mode, will automatically force a full check.

So the assertion in the bug report, "running fsck at boot is harmful for any modern file system", falls into the same trap as ZFS did when they asserted, "we're a modern file system, we don't need a fsck program at all!" They very quickly learned that in the real world, there are cosmic rays hitting DRAM; there are hardware bugs; there are kernel bugs. And sending angry customers to ZFS developers to manually fix corrupted file systems (because ZFS didn't have an fsck) didn't scale. :-) So running fsck at boot is absolutely required.

> For the few that actually need it, being on battery shouldn't skip it.

It was never a good idea for checkroot.sh to be checking whether or not it was on battery. That check needs to be done in the file system checker. So for ext4, if you do want to enable time-based or mount-count-based checks, e2fsck will check whether or not the system is on battery, and skip the check if the only reason for the check was that the last check time or mount count triggered it.
HOWEVER, if the file system is marked as having some corruption found by the kernel, e2fsck will always try to fix the problem, on the assumption that most users care about the data not getting lost more than they care about battery life. :-) Regards, - Ted
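The per-filesystem error behavior described above can be demonstrated without root on a throwaway file-backed image. A minimal sketch, assuming e2fsprogs is installed (the image name is arbitrary; the three "tune2fs -e" modes are continue, remount-ro, and panic):

```shell
# Create a small ext4 image in a regular file and set its error
# behavior; dumpe2fs shows the setting recorded in the superblock.
command -v mke2fs >/dev/null || { echo "e2fsprogs not installed; skipping"; exit 0; }
dd if=/dev/zero of=demo.img bs=1M count=8 status=none
mke2fs -q -F -t ext4 demo.img
tune2fs -e remount-ro demo.img >/dev/null
dumpe2fs -h demo.img 2>/dev/null | grep 'Errors behavior'
rm -f demo.img
```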
Bug#912087: openssh-server: Slow startup after the upgrade to 7.9p1
On Fri, Nov 02, 2018 at 01:24:25AM +0100, Kurt Roeckx wrote:
> Anyway, on my laptop I get:
> [   12.675935] random: crng init done
> 
> If the TPM is enabled, I also have an /etc/hwrng, but rng-tools is
> started later after the init is done.
> 
> On my desktop (with a chaos key attached)
> [    3.844484] random: crng init done
> [    5.312406] systemd[1]: systemd 239 running in system mode.

Starting with the 3.17 kernel, the kernel will automatically pull from hardware random number generators without needing to install a user space daemon, such as rng-tools. For most hardware devices, this is not enabled by default, so you have to enable it by adding something like "rng_core.default_quality=700" to the kernel boot line.

There are *two* devices which are an exception to this rule. The first is virtio_rng, since the assumption is that if you are using a VM, you had better trust the host infrastructure or you have much worse problems. The second is the driver for the Chaos Key. That appears to be because the author of the driver for the Chaos Key wasn't aware of the general policy that hardware RNGs shouldn't be trusted by default, and the driver was coded in violation of that policy.

This is why (with a Chaos Key attached) you see the "crng init done" message so early, *before* the root file system is mounted. (The root file system gets mounted after the "systemd running in system mode" message is logged.) This is better than relying on rng-tools, since we can initialize the CRNG much earlier in the boot process. (It should have been the case that this would only happen if you configured it by setting the rng_core.default_quality parameter, but see above about how the Chaos Key driver is currently violating policy.)

In the future I should change the kernel so you can explicitly specify something like tpm.rng_quality=500 and chaos_key.rng_quality=1000 on the boot command line.
That way the system administrator can be very explicit about which hwrng they trust; right now what we have is not ideal, since it's not clear which hwrng the system administrator wanted to configure as trusted, and if you have more than one hwrng in the system (say, a closed-source, proprietary TPM, and an open hardware Chaos Key) you can't say which one you want to have trusted.

Cheers,

- Ted
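If you want to see what the hw_random framework currently knows about on a given machine, the sysfs nodes below are where to look. This is just a sketch: the nodes exist only when a hwrng driver is bound, so everything is guarded.

```shell
# The hw_random framework exposes the active and available devices in
# sysfs; rng_core.default_quality on the kernel command line is what
# makes a device's output count toward initializing the CRNG.
d=/sys/class/misc/hw_random
if [ -r "$d/rng_current" ]; then
    echo "current hwrng:   $(cat "$d/rng_current")"
    echo "available hwrng: $(cat "$d/rng_available")"
else
    echo "no hw_random driver registered on this system"
fi
```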
Bug#912087: openssh-server: Slow startup after the upgrade to 7.9p1
On Thu, Nov 01, 2018 at 11:18:14PM +0100, Sebastian Andrzej Siewior wrote:
> Okay. So you wrote what can be done for a system with HW-RNG/kvm. On
> bare metal with nothing fancy I have:
> [    3.544985] systemd[1]: systemd 239 running in system mode. (+PAM…
> [   10.363377] r8169 :05:00.0 eth0: link up
> [   41.966375] random: crng init done
> 
> which means I have to wait about half a minute until I can ssh in. And
> there is no way to speed it up?

So that surprises me. Can you tell me more about the hardware? Is it something like a Raspberry Pi? Or is it an x86 server or desktop? In my experience, for most x86 platforms this isn't an issue. The main reason why I've talked about VM systems is because that is where most of the problems people have reported to me come from.

Here's the problem: if we "speed it up" inappropriately, you're risking the security of ssh. If the people who are making a print server or Wi-Fi router screw it up, they're the ones who are at fault. (And this isn't hypothetical. See https://factorable.net) So if I make a blanket recommendation, and it causes Debian to ship some kind of default that causes Debian users to be insecure, I'm going to feel really bad. This is why I'm very cautious about what I say. If you want to do whatever you want on your own system, hey, consenting adults can do whatever they want. :-)

> You did not oppose RNDADDTOENTCNT/RNDADDENTROPY but you wanted to make
> it configurable and not default, correct?

I'd want to see a full design doc, or a git repository, or a set of changes before I give it an unqualified endorsement, but there *are* configurations where such a thing would be sane.

That's the problem with security recommendations. It's much like a lawyer giving legal advice. They're very careful about doing that in unstructured circumstances. If it gets taken in the wrong way, they could be legally liable and people might blame/sue them. And then on top of that, there are the political considerations.
Suppose I told you, "just use RDRAND and be happy". Some people who are sure that RDRAND has been backdoored would claim that I'm in the pocket of the NSA and/or Intel. That's why all I'm going to say is, "I'm comfortable turning RDRAND on for my own systems; you can do what you want."

Cheers,

- Ted

P.S. Although if I were going to generate a high-value key, I *would* plug in my handy-dandy Chaos Key[1] first. Keith gave a presentation[2] about it at DebConf 16.

[1] https://keithp.com/blogs/chaoskey/
[2] https://debconf16.debconf.org/talks/94/

And certainly if you were doing something where you had millions of dollars at risk, or where the EU might fine you into oblivion for millions of Euros due to some privacy exposure of your users, I certainly would recommend that you spend the $40 USD to get a Chaos Key and just be *done* with it.
Bug#912087: openssh-server: Slow startup after the upgrade to 7.9p1
On Wed, Oct 31, 2018 at 11:21:59AM +, Sebastian Andrzej Siewior wrote:
> On October 30, 2018 8:51:36 PM UTC, "Theodore Y. Ts'o" wrote:
> >
> > So it's complicated. It's not a binary trusted/untrusted sort of
> > thing.
> 
> What about RNDRESEEDCRNG? Would it be reasonable to issue it after writing
> the seed as part of the boot process?

No, that's for debugging purposes only. When there is sufficient entropy added (either through the hw_random subsystem, or because RDRAND is trusted, or via the RNDADDENTROPY ioctl), the crng is automatically reseeded by credit_entropy_bits(). So it's not needed to use RNDRESEEDCRNG.

- Ted
Bug#912087: openssh-server: Slow startup after the upgrade to 7.9p1
On Tue, Oct 30, 2018 at 07:37:23PM +0100, Kurt Roeckx wrote:
> 
> So are you saying that the /var/lib/random/seed is untrusted, and
> should never be used, and we should always wait for fresh entropy?
> 
> Anyway, I think if an attacker somehow has access to that file,
> you have much more serious problems.

So it's complicated. It's not a binary trusted/untrusted sort of thing. We should definitely use it, and the fact that we have it saved us (at least after the system is installed) when there is a kernel bug such as CVE-2018-1108, where we screwed up and treated the DMI table as 100% random and counted it towards the required 256 bits of entropy needed to consider the CRNG fully initialized.

If the attacker has access to the file, whether or not it matters really depends on how the rest of the system is put together. So for example, if you have secure boot (via a secured bootloader and a signed kernel), and the root file system is protected using dm-verity, the fact that the seed file might be compromisable by an external attacker is bad, but it's not necessarily catastrophic. (This is essentially the situation for ChromeOS and modern Android handsets, BTW.)

OTOH, there are definitely scenarios where you are correct, and if the attacker has access to the files, you probably are toast, and so relying on the seed file makes sense. Whether or not you think that is more or less safe than relying on RDRAND is going to be a judgement call, and very much depends on your assumptions about the threat environment. (Suppose in the future the Chinese come up with a 100% Chinese-made CPU that has an RDRAND equivalent; the US military might not be comfortable relying on that CPU or its RDRAND unit, but the Chinese military might be perfectly comfortable relying on it; what a Debian-provided kernel should do when we're trying to be a "Universal Operating System" is a very interesting question --- and that's why random.trust_cpu is a boot command line option.)
In any case, if Debian wants to ship a program which reads a seed file and uses it to initialize the random pool, assuming that it's trustworthy, via the RNDADDENTROPY ioctl, that's not an insane thing to do. My recommendation would be to make it configurable, however, just as whether RDRAND should be trusted (in isolation) to initialize the CRNG is configurable.

The point is that everyone is going to have a different opinion about what entropy source is fully trusted, by itself, to initialize the kernel's CRNG. We should mix in everything; but what we should consider trustworthy enough to give entropy credit is going to vary from one sysadmin/system designer/system security officer to another. Personally, I'm comfortable running my personal kernel with CONFIG_RANDOM_TRUST_CPU. I'm not willing to impose my beliefs on all Linux users, however.

Cheers,

- Ted
Bug#912087: openssh-server: Slow startup after the upgrade to 7.9p1
On Tue, Oct 30, 2018 at 01:18:08AM +0100, Sebastian Andrzej Siewior wrote:
> Using ioctl(/dev/urandom, RNDADDENTROPY, ...) instead of writing to
> /dev/urandom would do the trick. Or using RNDADDTOENTCNT to increment
> the entropy count after it was written. Those two are documented in
> random(4). Or RNDRESEEDCRNG could be used to force the crng to be
> reseeded. That does the job, too.
> 
> Ted, is there any best practice for what to do with the seed which was
> extracted from /dev/urandom on system shutdown? Using RNDADDTOENTCNT to
> speed up init, or just write it back to urandom and issue RNDRESEEDCRNG?

The reason why writing to /dev/[u]random via something like:

   cat /var/lib/random/seed > /dev/random

doesn't bump the entropy counter is because it's possible that an attacker could read /var/lib/random/seed. Even if the seed file is refreshed on shutdown, (a) the attacker could have read the file while the system was down, or (b) the system could have crashed so the seed file was not refreshed, and the attacker could have read the file before the crash.

If you are using a VM, and the host has virtio-rng, using a kernel that has virtio-rng support will solve the problem. For qemu, this means you can enable it via something like this:

   -object rng-random,filename=/dev/urandom,id=rng0 \
   -device virtio-rng-pci,rng=rng0

If you are using Google Compute Engine, I can't comment about future product features, but I would encourage you to file a feature request bug with Google requesting virtio-rng support ASAP.

On any VM (cloud or on-prem), since you have to trust the host *anyway*, with v4.19 you can add random.trust_cpu=on to the boot command line, or build the kernel with CONFIG_RANDOM_TRUST_CPU. For the Debian 4.18 kernel, this can be backported via commits 39a8883a2b98 and 9b25436662d5.

- Ted
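To make the "writing doesn't bump the entropy counter" point concrete, here's an unprivileged sketch. The temp file merely stands in for /var/lib/random/seed; actually crediting the bytes would additionally require the RNDADDENTROPY ioctl (root only) from a small C helper, which a plain shell redirect cannot issue.

```shell
# Writing bytes into /dev/urandom mixes them into the pool but credits
# no entropy toward CRNG initialization.
seed=$(mktemp)
head -c 512 /dev/urandom > "$seed"
cat /proc/sys/kernel/random/entropy_avail   # counter before the write
cat "$seed" > /dev/urandom                  # mixed in, zero credit
cat /proc/sys/kernel/random/entropy_avail   # counter not credited
rm -f "$seed"
```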
Bug#907634: [PATCH] e4defrag: handle failure to open the file system gracefully
If e4defrag is run by root, it will try to open the underlying file system for files that it is trying to defrag so it can get the file system parameters. It's currently doing this by searching /etc/mtab. This isn't the best way to go about doing things, but we'll leave it for now, at least for a maintenance release. (The better way to do things would be to look up the device using the blkid library, but that's a more involved change.)

Since the file system parameters aren't strictly speaking necessary (after all, we get by without them when not running as root), we'll allow e4defrag to continue running if we can't find the file system. This can happen if /etc/mtab is pointing at /proc/mounts and the kernel can't properly identify the root file system, in which case it is reported as "/dev/root".

Addresses-Debian-Bug: #907634
Signed-off-by: Theodore Ts'o
---
 misc/e4defrag.c | 28 +++++++++-------------
 1 file changed, 13 insertions(+), 15 deletions(-)

diff --git a/misc/e4defrag.c b/misc/e4defrag.c
index 5ac251dc5..9d237da24 100644
--- a/misc/e4defrag.c
+++ b/misc/e4defrag.c
@@ -1016,7 +1016,9 @@ static int get_best_count(ext4_fsblk_t block_count)
 	int ret;
 	unsigned int flex_bg_num;
 
-	/* Calculate best extents count */
+	if (blocks_per_group == 0)
+		return 1;
+
 	if (feature_incompat & EXT4_FEATURE_INCOMPAT_FLEX_BG) {
 		flex_bg_num = 1 << log_groups_per_flex;
 		ret = ((block_count - 1) /
@@ -1508,10 +1510,7 @@ static int file_defrag(const char *file, const struct stat64 *buf,
 		goto out;
 	}
 
-	if (current_uid == ROOT_UID)
-		best = get_best_count(blk_count);
-	else
-		best = 1;
+	best = get_best_count(blk_count);
 
 	if (file_frags_start <= best)
 		goto check_improvement;
@@ -1805,17 +1804,16 @@ int main(int argc, char *argv[])
 				  block_size, unix_io_manager, &fs);
 		if (ret) {
 			if (mode_flag & DETAIL)
-				com_err(argv[1], ret,
-					"while trying to open file system: %s",
-					dev_name);
-			continue;
+				fprintf(stderr,
+					"Warning: couldn't get file "
+					"system details for %s: %s\n",
+					dev_name, error_message(ret));
+		} else {
+			blocks_per_group = fs->super->s_blocks_per_group;
+			feature_incompat = fs->super->s_feature_incompat;
+			log_groups_per_flex = fs->super->s_log_groups_per_flex;
+			ext2fs_close_free(&fs);
 		}
-
-		blocks_per_group = fs->super->s_blocks_per_group;
-		feature_incompat = fs->super->s_feature_incompat;
-		log_groups_per_flex = fs->super->s_log_groups_per_flex;
-
-		ext2fs_close_free(&fs);
 	}
 
 	switch (arg_type) {
-- 
2.18.0.rc0
Bug#907634: e2fsprogs: e4defrag fails on ARM, works on AMD64
On Fri, Aug 31, 2018 at 01:06:00AM +1200, Stuart Richards wrote:
> 
> Attempting to defrag a file with e4defrag -v filetobedefragged on a Raspberry
> Pi gives the error "-v: No such file or directory while trying to open file
> system: /dev/root". The same command works as expected on an AMD64 system.

Thanks for reporting this bug! When e4defrag is run as root, it tries to open the file system to get some file system parameters; this isn't strictly needed; it uses the information to improve the decision about whether or not the file is already optimally defragged. The problem is that it does this by trying to parse /etc/mtab (which is often a symlink to /proc/self/mounts). If on that particular system the kernel reports the root file system as "/dev/root", then e4defrag will try to use it. If /dev/root does not exist (which is likely), then e4defrag will print the above-mentioned error message and bail out.

There are a couple of bugs hiding here. (a) The error message which is printed uses argv[1] instead of argv[0], which is where "-v: No such file or directory" is coming from. (b) E4defrag should not bail out, since it's not fatal if the file system parameters can't be fetched (in fact e4defrag never tries if not running as root). (c) The way e4defrag is trying to find the file system is not really the most intelligent; for one thing it will get misled by bind mounts, and it could instead use the blkid library to find the device name. (d) If a large number of files are specified on the command line, e4defrag does not cache the file system parameters, but instead fetches them over and over again.

I'll fix (a) and (b) for now, but e4defrag really needs to be cleaned up eventually.

Regards,

- Ted
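The failure mode described above is easy to see on a live system. A quick sketch (device names will of course differ from machine to machine):

```shell
# e4defrag finds the containing filesystem by scanning /etc/mtab, which
# is normally a symlink to /proc/self/mounts -- and some kernels report
# the root filesystem's device there as "/dev/root", a node that often
# does not exist in /dev.
ls -l /etc/mtab 2>/dev/null || echo "no /etc/mtab"
awk '$2 == "/" { print "root mounted from:", $1 }' /proc/self/mounts
```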
Bug#910086: correction / additional info
On Wed, Oct 03, 2018 at 07:08:41PM +0200, ro...@seffner.de wrote:
> 
> Using user or session based keys suggests to me no other session/user is able
> to take advantage of them. It seems to me as if the following holds:
> - permissions/ACLs control the access rights to en-/decrypted filesystem
>   objects
> - each object (file/directory) has to be decrypted by the key owner before
>   other (permission/ACL-enabled) users can access encrypted content
> Did I understand it right now?

That's about how things work right now, but the truer answer is that fscrypt was *not* designed for the use case where encrypted files are shared between multiple users. And the keyring infrastructure in the kernel doesn't have the concept of global keys (again because that doesn't actually make sense from a keying perspective --- what use are keys if everyone on the system can use them, at least in the general case?).

> My usecase is a crypted folder on an external storage shared by local and
> remote samba users. So I have to add the decryption key to one user and link
> it to all the others.

For that use case, I'd argue that fscrypt is simply not the right solution. What actually are you trying to protect? Since it's on a file server, the keys have to be available any time the file server is up. So what is your security model? Who are the potential attackers, what capabilities do they have, and what do you hope to have the file system encryption provide?

Using dm-crypt to encrypt the entire file system is probably a closer match, but again, what do you hope to achieve by using encryption in the first place? If the file server has to come up automatically after a reboot, and the keys are located permanently on the file server --- what is the point of the encryption?
Especially since CIFS/SMB doesn't have any protocol-level encryption, sending the file data unencrypted across your network is probably a **much** bigger threat than whatever security properties you might get from keeping the bits on the platter encrypted (with the key permanently installed in the server's memory, if not in some server boot files).

I don't have the whole story, but from what you've told me, the picture appears to be one of vault doors and papier-mâché walls. Was the encryption only to provide paper-level certification for "encryption at rest", without actually trying to provide any real security? And I don't say that as a criticism; we have security theater every time we fly out of airports; the security measures don't really provide *real* security, but they make the passengers feel good, which is an important business objective for the airlines, even if it isn't really all that security relevant. :-)

- Ted
Bug#904286: f2fs-tools: Please package f2fs-tools v1.11.0
I've submitted version 1.11.0-1 of f2fs-tools; it's currently in the NEW queue. The git tree for f2fs-tools is in Salsa:

    https://salsa.debian.org/debian/f2fs-tools

The branches in the git tree are laid out to be DEP-14 compliant. The master branch contains the Debian packaging, and there is a gbp.conf file in the debian directory. The f2fs-tools_1.11.0.orig.tar.gz was generated using:

    git archive --format tar --prefix f2fs-tools-1.11.0/ v1.11.0 | \
        gzip -9n > f2fs-tools_1.11.0.orig.tar.gz

and is stored using pristine-tar in the pristine-tar branch.

The 1.11.0-1 f2fs-tools packages were built using git-buildpackage, using the command "gbp buildpackage". I'm using sbuild as my builder, so I have in my ~/.gbp.conf:

    [buildpackage]
    builder = sbuild -A -s -v -d unstable
    export-dir = /tmp/gbp
    purge = False

    [tag]
    sign-tags = True

And I have in my ~/.sbuildrc:

    $build_arch_all = 1;
    $distribution = 'unstable';
    $source_only_changes = 1;

I fixed quite a few packaging issues while I was at it:

f2fs-tools (1.11.0-1) unstable; urgency=medium

  * New upstream release. (Closes: #904286)
    - add sg_write_buffer for UFS firmware update in Android
    - wanted_sector_size to specify sector size explicitly
    - support fsverity feature bit
    - support lost+found feature
  * Install the library link files in /usr/lib where they belong
  * Replace the libf2fs0 package with libf2fs5 and libf2fs-format4
  * Fixed missing libblkid dependency in the shared library
  * Updated Standards compliance to 4.2.0
  * Added Theodore Ts'o as an uploader for the package

 -- Theodore Y. Ts'o  Fri, 24 Aug 2018 03:32:49 -0400

Cheers,

- Ted
Bug#906900: libcom-err2 breaks fsck.ext4 (1.43.4) on upgrade
On Wed, Aug 22, 2018 at 03:51:23AM +0200, hi...@abwesend.de wrote:
> Package: libcom-err2
> Version: 1.44.3-1
> 
> When updating libcomerr2 to the testing version, libcom-err2 (1.44.3-1)
> gets installed. This removes libcom_err.so.2 from the initramfs and
> breaks fsck.ext4 for the root file system, unless e2fsprogs is upgraded.
> Which is difficult, if the system does not boot.

How does that happen? libcom-err2 *supplies* libcom_err.so.2 and installs it in the tree. And it has no postinst scripts that would touch the initramfs. So who or what is rebuilding the initramfs? And why can't it just get the copy of libcom_err.so.2 that is installed on the system? Do you have a dpkg.log that shows what happened?

- Ted
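A diagnostic sketch along the lines of Ted's questions, meant to be run on the affected machine (the paths are the usual Debian ones; everything is guarded since both files are system specific):

```shell
# Check whether the running kernel's initramfs actually contains
# libcom_err, and what dpkg did around the library upgrade.
img=/boot/initrd.img-$(uname -r)
if command -v lsinitramfs >/dev/null 2>&1 && [ -r "$img" ]; then
    lsinitramfs "$img" | grep libcom_err || echo "libcom_err missing from $img"
fi
grep libcom /var/log/dpkg.log 2>/dev/null | tail -n 5
```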
Bug#904286: f2fs-tools: Please package f2fs-tools v1.11.0
On Tue, Aug 21, 2018 at 01:01:10PM -0700, Vincent Cheng wrote:
> 
> I can't find a reference right now, but I seem to recall that one of
> the Alioth admins pointed out that mailing lists specifically for
> package/bug tracking purposes (i.e. not used for discussion) shouldn't
> be migrated to lists.d.o. I don't know what other alternatives there
> are, however. I haven't really kept up with the Alioth
> migration/deprecation as you can probably tell. :)

I thought there was a difference between package-specific mailing lists and groups that maintain a large number of packages (e.g., python, X, etc.), but I could be wrong. I thought lists.alioth.debian.org was only guaranteed to be around for a year, but we do have time to figure out what to do.

> Well, the shared library being split into a separate package was
> intentional (#793863), but having never updated the package name is
> not (I must have overlooked this somehow...). I wonder how I never got
> any bug reports about this, because in theory that should mean that
> android-libf2fs-utils (src:android-platform-system-extras) is flat out
> broken (I never initiated any transitions or binNMU requests for
> android-platform-system-extras after f2fs-tools updates).

I think the right thing to do is to create separate packages for libf2fs and libf2fs_format (and separate -dev packages, of course). They have different soname version numbers, and so there is no guarantee they will both be incremented in a particular release. I'll work on that as part of the f2fs-tools 1.11 release.

- Ted
Bug#904286: f2fs-tools: Please package f2fs-tools v1.11.0
OK, I've created a new project in Salsa for f2fs-tools:

    https://salsa.debian.org/debian/f2fs-tools

and I've uploaded a git repo with a work-in-progress for the f2fs-tools v1.11.0 packaging. A couple of things which I've noted:

1) The maintainer is listed as: filesystems-de...@lists.alioth.debian.org

I wonder if it's worth it to migrate the mailing list over to lists.debian.org, and leave a forwarding pointer behind at the lists.alioth.debian.org address? It will take a while to update all of the various file system utility packages to use a non-Alioth address, but I wonder if we should get started. Not high priority, so we don't have to do this on this particular upload.

2) In f2fs-tools 1.10.0-1, there is the shared library package libf2fs0, which contains the shared libraries:

    libf2fs.so.4.0.0
    libf2fs_format.so.3.0.0

In f2fs-tools 1.11.0 upstream, these have been bumped to:

    libf2fs.so.5.0.0
    libf2fs_format.so.4.0.0

I very much doubt any other packages are actually depending on libf2fs0, but it seems wrong that we're using libf2fs0, as opposed to, say, libf2fs4 and now libf2fs5. Was this intentional, or just one of those things that had never been noticed/fixed earlier?

Cheers,

- Ted
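For readers unfamiliar with the convention being discussed: a Debian runtime-library package name tracks the major number embedded in the library's SONAME, which you can inspect with readelf. A toy sketch (all names here are made up, and the block skips itself if no compiler is available):

```shell
# Build a throwaway shared library whose SONAME is libtoy.so.5; the "5"
# is the number a Debian package name like libf2fs5 would track.
command -v gcc >/dev/null && command -v readelf >/dev/null \
    || { echo "gcc/readelf not installed; skipping"; exit 0; }
cat > toy.c <<'EOF'
int toy(void) { return 42; }
EOF
gcc -shared -fPIC -Wl,-soname,libtoy.so.5 -o libtoy.so.5.0.0 toy.c
readelf -d libtoy.so.5.0.0 | grep SONAME    # shows [libtoy.so.5]
rm -f toy.c libtoy.so.5.0.0
```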
Bug#904286: f2fs-tools: Please package f2fs-tools v1.11.0
On Sun, Aug 12, 2018 at 02:19:29AM -0700, Vincent Cheng wrote:
> 
> Sorry, I haven't had time lately to properly care for my packages.
> Please go ahead with the NMU (bonus points if you have time to move
> everything to salsa, extra bonus points if you're willing to
> co-maintain the package too).

Thanks! Was there a previous git repo for f2fs-tools on Alioth? I didn't base my last upload of f2fs-tools to stretch-backports on one (I just started with the tree from "apt-get source f2fs-tools"), but if you want me to move it to Salsa, it would probably be a good idea to preserve the git repo (if any) you were using for the Debian packaging. Unfortunately I can't seem to find where the Alioth backups of the repos that were stored there can be found, and while I can try to ask for them, if you have a local git repo that you can push up to GitHub or GitLab, that would be great.

Otherwise I can just start a new git repo --- I already have f2fs-tools v1.11.0 packaged up for {kvm,gce,android}-xfstests, and it was a desire to allow others to reproduce the VM image completely from sources and Debian snapshots without having to manually compile f2fs-tools which is why I've been interested in keeping f2fs-tools updated in stable backports.

As far as co-maintenance goes, I'm happy to help, although I don't actually do much with f2fs myself (other than as part of the kernel file systems regression testing).

Cheers,

- Ted
Bug#904286: f2fs-tools: Please package f2fs-tools v1.11.0
On Sun, Jul 22, 2018 at 01:49:17PM -0400, Theodore Y. Ts'o wrote: > > Please consider packaginging f2fs-tools 1.11.0 from upstream. This > release includes: > > - add sg_write_buffer for UFS firmware update in Android > - wanted_sector_size to specify sector size explicity > - support fsverity feature bit > - support lost+found feature > - some critical bug fixes Hi Vincent, Do you think you will have time to update f2fs-tools to 1.11? If not, would you be OK if I were to submit an NMU to update f2fs-tools? Many thanks!! - Ted
Bug#905478: Fwd: Re: Debian´s change of "su" to the one in util-linux
On Fri, Aug 10, 2018 at 01:00:03PM +0200, Martin Steigerwald wrote:
> > As it turns out, I do something very different, which is that my .bashrc
> > will run ~/.ssh-setup, which looks for existing ssh-agents or gpg-agents,
> > and if one doesn't exist, it will start one, e.g.:
> 
> I would not do this for the root user. I do not think it is wise to run
> an ssh-agent or gpg-agent as root. To avoid that is the whole point of my
> change to tell sudo to take over the environment for the SSH agent of
> the user. I don't even know why I would want to search for any running
> SSH agent.

Starting an ssh-agent in practice never happens for the root user. There's an ssh-agent or a gpg-agent running before I su or sudo as root, so that part of the script never runs. The reason why searching for an SSH agent makes sense is for when I ssh into my desktop, and I want to use the pre-existing ssh-agent running on my desktop.

Cheers,

- Ted
Bug#905478: Fwd: Re: Debian´s change of "su" to the one in util-linux
On Thu, Aug 09, 2018 at 09:10:57PM +0200, Martin Steigerwald wrote:
> Thing is here: It breaks existing workloads. And I have the gut feeling,
> not *just* mine. So no matter what long-standing, under-communicated,
> probably mostly undocumented best practices are in place in your
> opinion, it IMO is likely to produce an uproar with users once next
> Debian version is released.

Lots of changes break workloads. The question is how common a particular change is. Heck, people tolerate random perl and python scripts breaking when new versions are released, and that's considered... OK. Given that other Linux distributions have been using the "new" su, I very much doubt that many people will notice.

For that matter, I set my PATH in .bashrc, so the PATH is *always* reset in a new shell, and in fact, I make sure I know I'm root because my .bashrc sets the prompt like this:

    {/usr/projects/e2fsprogs/e2fsprogs} (next) 1130#

(And in fact it does this whether I use su or sudo su. So I didn't notice at all.)

Anyway, it's ultimately going to be up to Andreas as the maintainer, but perhaps you should try to craft some suggested changes to the NEWS.Debian.gz file, keeping in mind that it needs to be *short*. You may find that it is harder than it seems to write something that is generally applicable and useful for most users.

> For example how to make available certain environment variables via
> other means:
>
> % cat /etc/sudoers.d/defaults
> Defaults env_keep+=SSH_AUTH_SOCK

This doesn't belong in the documentation for util-linux, and is *extremely* specific to what you are trying to do. As it turns out, I do something very different: my .bashrc runs ~/.ssh-setup, which looks for existing ssh-agents or gpg-agents, and if one doesn't exist, it will start one, e.g.:

    ssh-add -l >& /dev/null
    if test $? = 2 ; then
        echo "Starting gpg-agent"
        /bin/rm -rf /tmp/ssh-$USER
        gpg-agent --daemon --enable-ssh-support --sh > $HOME/.gpg-agent-info
        . $HOME/.gpg-agent-info 2>&1 > /dev/null
    fi

(This is only part of a 40+ line script; just to give you a flavor.)

So there are lots and lots of different ways of solving these sorts of problems, depending on what sort of requirements you might have. (Mine are designed to work in a very large set of environments, not all of them running Debian, and for that matter, not all of them running Linux.) We can't really give these sorts of tips in the util-linux documentation.

Cheers,

- Ted
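The logic Ted's snippet relies on is the exit status of `ssh-add -l`: 0 means an agent is reachable and has keys loaded, 1 means an agent is reachable but holds no identities, and 2 means no agent could be contacted. A minimal sketch of that dispatch (the `agent_state` helper name is invented for illustration, not taken from Ted's script):

```shell
#!/bin/sh
# Interpret the exit status of `ssh-add -l`:
#   0 = agent running, keys loaded
#   1 = agent running, but no identities
#   2 = no agent reachable (the case where the script starts gpg-agent)
# agent_state is a hypothetical helper, not part of the original ~/.ssh-setup.
agent_state() {
    case "$1" in
        0) echo "agent running, keys loaded" ;;
        1) echo "agent running, no keys" ;;
        2) echo "no agent; would start gpg-agent here" ;;
        *) echo "unexpected status $1" ;;
    esac
}

# Probe whatever agent (if any) is present in this environment.
ssh-add -l >/dev/null 2>&1
agent_state $?
```

The original script only branches on status 2 (no agent at all), which is why an already-running but empty agent is left alone.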
Bug#905195: e2fslibs-dev: unhandled symlink to directory conversion: /usr/share/doc/PACKAGE
Thanks for the report. I've checked in a fix for this into the e2fsprogs git repository, and it will be in the next release of e2fsprogs. - Ted
Bug#904286: f2fs-tools: Please package f2fs-tools v1.11.0
Package: f2fs-tools
Version: 1.10.0-1
Severity: normal

Dear Maintainer,

Please consider packaging f2fs-tools 1.11.0 from upstream. This release includes:

- add sg_write_buffer for UFS firmware update in Android
- wanted_sector_size to specify sector size explicitly
- support fsverity feature bit
- support lost+found feature
- some critical bug fixes

-- System Information:
Debian Release: buster/sid
  APT prefers unstable-debug
  APT policy: (500, 'unstable-debug'), (500, 'testing-debug'), (500, 'unstable'), (500, 'testing'), (1, 'experimental')
Architecture: amd64 (x86_64)
Kernel: Linux 4.18.0-rc2-00147-ga1bc5014edfd (SMP w/8 CPU cores)
Locale: LANG=en_US.utf8, LC_CTYPE=en_US.utf8 (charmap=UTF-8), LANGUAGE=en_US.utf8 (charmap=UTF-8)
Shell: /bin/sh linked to /bin/dash
Init: systemd (via /run/systemd/system)

Versions of packages f2fs-tools depends on:
ii  libc6        2.27-5
ii  libf2fs0     1.10.0-1
ii  libselinux1  2.8-1+b1
ii  libuuid1     2.32-0.1

f2fs-tools recommends no packages.
f2fs-tools suggests no packages.

-- no debconf information
Bug#803744: Still not fixed in Raspbian GNU/Linux 9 (stretch)
On Fri, Jul 20, 2018 at 09:35:53PM +0200, Alexander Dahl wrote:
> Hei hei,
>
> I can confirm that behaviour for Raspbian GNU/Linux 9 (stretch), quote
> from syslog:
>
> Jul 20 17:17:04 darcy systemd-fsck[96]: fsck.f2fs: invalid option -- 'y'
>
> If you have any hints on how to solve that, I would happily test it.

f2fs-tools in Debian Stretch is version 1.07 (released July 2016). This was fixed in f2fs-tools version 1.10, which is in Debian Testing and Debian Unstable. f2fs-tools version 1.10 is also available in Debian Backports, if you want to use it with Debian Stretch. For more information, please see:

https://backports.debian.org/Instructions/

- Ted
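Following the backports instructions linked above boils down to adding one apt source line and then installing with the target release selected explicitly; a sketch under the assumption of a stock stretch system (the file path is a common convention, not mandated):

```
# /etc/apt/sources.list.d/stretch-backports.list  (conventional path)
deb http://deb.debian.org/debian stretch-backports main
```

After an `apt-get update`, the backported package must be requested explicitly, e.g. `apt-get -t stretch-backports install f2fs-tools`, since backports are pinned to not install by default.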
Bug#903150: tune2fs: No method to change MMP parameter(s)
On Sat, Jul 07, 2018 at 11:22:27PM -0700, Elliott Mitchell wrote:
> It is easy to build wildly incorrect mental models of how a feature works
> if you haven't looked at the code, but observed problems due to bugs...
>
> Particularly if the documentation doesn't talk about the implementation.
> At which point confirming details identifies which are the crucial bits
> to report.

Again, it's much easier if you tell me what you *see*, and not what your mental model might happen to be. A detailed reproduction is the most useful thing that can be in a bug report. In fact, it's what *I* do when I am first trying to diagnose and fix a bug, since a reliable repro means I can test whether or not I understand the problem, and whether or not a proposed fix solves it. So if you give me a detailed, reliable repro, it saves me time, and it doesn't require a detailed mental model of how things work. In fact, if you had tried to create a reliable repro, with detailed instructions on how to reproduce the problem, you probably would have found the flaws in your proposed model.

> I'll be waiting for that news. I'm a bit unsure of where the bug should
> be reassigned to... (hrmm, that is more kernel source packages than I
> knew of)

That was my point when I was trying to set expectations. You'll probably be better off applying the patch yourself, and probably, while you're at it, using a newer kernel than 4.9. Assigning bugs and expecting other people to do work for you may not work that well when you are trying to use exotic features like MMP. You're free to do that, of course, but don't expect fast turnaround.

> > (For example, if you want the fix right away, you may need to compile
> > a new kernel yourself. And while an enterprise distro might be willing
> > to backport the MMP status feature in dumpe2fs to an ancient e2fsprogs,
> > that's not going to happen for Debian Stable. You'll probably have to
> > build e2fsprogs 1.44.3 for yourself and install it on your system.)
>
> Yawn. Been there. Done that. Not like I've built 1.2.13 and examples
> of most kernel series since then (oh wait! I *have*!). Nor that I've
> been paid to do system administration or software development (oh
> wait...).
>
> I may end up fighting libtool, at which point a number of library
> dependancies may start getting downgraded from "Depends" to "Recommends"
> or "Suggests".

Fortunately, building your own is pretty easy for both e2fsprogs and the kernel, precisely because I don't use libtool. :-P Just check out the latest version from git, and run "sudo apt-get build-dep e2fsprogs" followed by "dpkg-buildpackage -us -uc -b". That's all that is necessary if you are using Debian.

For the kernel, "make bindeb-pkg" will create Debian package files for you. They aren't built *precisely* the same way the Debian kernels are set up, but it's how kernel developers who use Debian build their own kernels. Sample kernel configuration files to build basic kernels sufficient for GCE and KVM can be found here:

https://github.com/tytso/xfstests-bld/tree/master/kernel-configs

See step (3) here if you need more details, but it's quite straightforward:

https://github.com/tytso/xfstests-bld/blob/master/Documentation/kvm-quickstart.md

These configs are for non-modular kernels, which means you don't even need to install them if you are using KVM. You can just pass the qemu/kvm binary an argument such as:

    --kernel /build/ext4-64/arch/x86/boot/bzImage

and it will boot without needing to install a kernel package (or messing about with grub). Or you can use "make bindeb-pkg" if you want a more traditional installation.

Cheers,

- Ted

P.S. You may find that if you do need to create reliable repros, using "kvm-xfstests shell" (see the kvm-quickstart instructions above) is a great way to experiment in a simple and fast sandbox.
Also, if you are doing a lot of work with VMs, the technique for generating VM appliance images in a completely reproducible fashion may also be of interest. There are turn-key scripts and documentation for creating test appliance VM images for KVM and GCE. (See slide 7 at https://thunk.org/gce-xfstests, entitled "Interesting technology bits inside gce-xfstests (or, 'I'm not a file system developer, why should I care?')".)
Bug#903150: tune2fs: No method to change MMP parameter(s)
On Sat, Jul 07, 2018 at 09:21:47AM -0700, Elliott Mitchell wrote:
>
> This then suggests this bug should be against linux-source-4.9 instead of
> e2fsprogs.
>
> A few command sequences and notes on how fast the command executes on the
> machine:
>
> mount / -o rw,remount => slow
> mount / -o ro,remount => fast
> mount / -o rw,remount => slow
> mount / -o ro,remount => fast
>
> By your explanation, the first ro,remount should set the sequence to 0
> (or appropriate equivalent value).

OK, thanks for more clearly describing what it was you were doing. (Pro tip: it saves a lot of time if you describe in great detail how you reproduced the problem, instead of explaining what you think the problem is. The original bug report had all sorts of misconceptions about how MMP works. If you had given directions for a clean reproduction, you would have saved yourself and me a lot of time.)

Yes, the bug is that the kernel doesn't actually clear the MMP sequence when the file system is remounted read-only. The original use case for MMP was for high-end HPC clusters using Lustre, and they never used it on the root file system, nor did they remount file systems read-only. So you found an issue that the original author and users of the MMP feature hadn't considered or tripped up against. The fix is relatively simple[1], but it may be a while before it gets backported to the 4.9 kernel. In particular, I want to wait for Andreas Dilger, who originally implemented the MMP feature, to review the patch.

[1] http://patchwork.ozlabs.org/patch/940892/

> > MMP_block:
> >     mmp_magic: 0x4d4d50
> >     mmp_check_interval: 5
> >     mmp_sequence: 0xff4d4d50
> >     mmp_update_date: Sat Jul  7 10:50:26 2018
> >     mmp_update_time: 1530975026
> >     mmp_node_name: cwcc
> >     mmp_device_name: loop0
> >
> > is purely for debugging purposes. All of the MMP mechanism is done
> > via the mmp_sequence number.
>
> `dumpe2fs` 1.43.4-2 fails to produce that output. "MMP block number" and
> "MMP update interval" appear in the output, but none of the other values.

This is a new feature that will be in dumpe2fs 1.44.3. Debian unstable has e2fsprogs 1.44.3~rc2-1 uploaded as of July 3rd. 1.44.3 will be released shortly.

> I'm very suspicious e2fsprogs still has some sort of issue lurking. Why
> does `tune2fs -f -E clear_mmp` clear the issue for 1 and only 1 rw mount?

Because -E clear_mmp simply resets the MMP sequence number. When you remount the file system read-write, it sets the MMP sequence number again. The bug, as explained in [1] above, is that when the file system is remounted read-only, the MMP sequence number isn't getting reset. Using tune2fs -f -E clear_mmp is simply doing what the kernel should have been doing automatically.

Finally, please note that MMP is an advanced feature, and Debian Stable (sometimes not-so-fondly referred to as "Debian Obsolete") is maintained by volunteers and not by paid engineers who are willing to backport all manner of features and bug fixes to ancient, obsolete kernels such as Linux 4.9. There is a *reason* why enterprise companies pay Red Hat and SuSE the big $$$. If you want to use advanced features, you may find that it is better to use newer kernels and engage directly with the upstream developers.

I just want to make sure your expectations are set correctly. This was an easy bug, so I didn't mind spending a few minutes while I am on vacation to investigate it. In general, though, especially when you try to use advanced features on a community distro, there may be more self-help required. (For example, if you want the fix right away, you may need to compile a new kernel yourself. And while an enterprise distro might be willing to backport the MMP status feature in dumpe2fs to an ancient e2fsprogs, that's not going to happen for Debian Stable. You'll probably have to build e2fsprogs 1.44.3 for yourself and install it on your system.)

Regards,

- Ted
Bug#901427: Unable to enable ext4 journaled quota
On Fri, Jun 15, 2018 at 07:37:19PM -0700, Elliott Mitchell wrote:
>
> During the process I ended up running `e2fsck -f` multiple times. I
> ended up running `e2fsck -f` after enabling each feature (`tune2fs`
> really didn't like enabling multiple features at once).

OK, so I really need a reliable repro to be able to do anything with this report.

I will note that if you are starting with an ext2/ext3 file system (one without extents and the uninit_bg or metadata_csum features), you will get much better performance in the long run if you create a new ext4 file system and then copy all of the data from the ext2 file system to ext4. Some six or seven years ago, when we upgraded from ext2 to ext4 (in no-journal mode) for our workload, a cluster file system, we compared an existing file system upgraded to ext4 in place against a freshly created ext4 file system: the upgrade-in-place approach yielded roughly half the performance improvement of a fresh ext4 file system.

So if you are going to need to run tune2fs and e2fsck -f multiple times, what are you trying to do? What's the goal here?

Don't get me wrong; if you can get me a reproducible test case, I'm happy to look at it. But Red Hat doesn't support upgrading file systems via tune2fs, and that's because there are all sorts of corner cases and it's very hard to do regression testing or bug reproductions. We have used tune2fs to upgrade file systems at Google, but it's something we tested extensively, and we only did it where it made sense.

> > And do you really need to enable user, group, and project quota
> > tracking? Please keep in mind there is a performance cost for
> > enabling quota tracking; it is definitely not free.
>
> I definitely want user quota with journal. I'm unsure of group and
> project.
For each quota type that you enable, you basically end up paying for an extra random write, and potentially a random read, associated with block or inode allocation for a given user or group id, every five seconds. So if you don't need group or project tracking, don't turn it on. We only track usage on a group basis, so we only enable group quotas.

As for turning on the quota feature and that terrible error message: did the file system have any quota files before, and did you check to see whether the file system was full (e.g., not enough blocks to allocate the quota file)?

- Ted
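Enabling only the quota types you need, on an unmounted filesystem, can be sketched as follows. This assumes a reasonably recent e2fsprogs whose tune2fs supports the `-Q` quota-options flag; the image path is a scratch name invented for this example:

```shell
# Enable only user quota tracking on an unmounted ext4 image, leaving
# group and project quota off to avoid their per-allocation update cost.
# /tmp/quota-test.img is a hypothetical scratch path.
dd if=/dev/zero of=/tmp/quota-test.img bs=1M count=16 2>/dev/null
mke2fs -q -F -t ext4 /tmp/quota-test.img

# -Q takes a comma-separated list; a leading ^ clears that quota type.
tune2fs -Q usrquota,^grpquota,^prjquota /tmp/quota-test.img

# The quota feature should now appear in the superblock listing:
dumpe2fs -h /tmp/quota-test.img 2>/dev/null | grep -i quota
```

The same `-Q` invocation works on a real (unmounted) block device; operating on an image file just makes it safe to experiment without root privileges.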