Bug#1019545: closed by Michael Tokarev (Re: Bug#1019545: samba: Permission/ownership issue in /var/lib/samba results in repeated panic or segfault after upgrading from Buster to Bullseye)
One more factor that may be relevant: this is a really old Debian install. It was first installed sometime in 2014 and thus would have started on Wheezy, so it has been upgraded Wheezy > Jessie > Stretch > Buster > Bullseye. /var/lib/samba/usershares contains files dated from January 2015 through September 13, 2022:

-rw-r--r-- 1 root sambashare 104 Jan 17  2015 backups

[The rest of this message quoted the earlier exchange in full; the quoted copy is trimmed here, as the original messages appear later in this thread.]
Bug#1019545: closed by Michael Tokarev (Re: Bug#1019545: samba: Permission/ownership issue in /var/lib/samba results in repeated panic or segfault after upgrading from Buster to Bullseye)
-- Forwarded message --
> From: Michael Tokarev
> To: Jason Cohen, 1019545-d...@bugs.debian.org
> Date: Thu, 15 Sep 2022 13:17:28 +0300
> Subject: Re: Bug#1019545: samba: Permission/ownership issue in
>  /var/lib/samba results in repeated panic or segfault after upgrading
>  from Buster to Bullseye
>
> 11.09.2022 19:28, Jason Cohen wrote:
> > Package: samba
> > Version: 2:4.16.4+dfsg-2~bpo11+1
> > Severity: normal
> >
> > Dear Maintainer,
> >
> > *** Reporter, please consider answering these questions, where
> > appropriate ***
> >
> > * What led up to the situation?
> >
> > I upgraded my system from Buster to Bullseye. As part of that process,
> > Samba was upgraded to 4.16.4. After the upgrade, I began receiving
> > emails reporting a panic or segfault in Samba every time a user tried
> > to access a file share after going idle.
>
> Just to clarify: 4.16[.4] is not part of bullseye, but is available in
> bullseye-backports. It's okay, it is just that from your statement one
> might conclude that the upgrade to bullseye caused samba to be updated
> to 4.16.4 - no, it did not; you explicitly installed samba from
> backports.
>
> Also note that cross-release upgrades have never been supported in
> Debian. You can upgrade from the buster version to the bullseye
> version, and only after that can you upgrade to the bookworm version
> (which is what the bullseye-backport essentially is), but not from
> buster directly to bookworm.

Yes, that's correct. When I upgraded from Buster to Bullseye, the version
in bullseye-security was installed (2:4.13.13+dfsg-1~deb11u5). I manually
installed the bullseye-backports version to see if it would rectify the
issue, but it didn't.

Start-Date: 2022-08-31 22:50:37
Commandline: apt install -t bullseye-backports samba
Requested-By: jason (1000)
Upgrade: python3-samba:amd64 (2:4.13.13+dfsg-1~deb11u5, 2:4.16.4+dfsg-2~bpo11+1),
 libldb2:amd64 (2:2.2.3-2~deb11u2, 2:2.5.2+samba4.16.4-2~bpo11+1),
 libtevent0:amd64 (0.10.2-1, 0.11.0-1~bpo11+1),
 samba-vfs-modules:amd64 (2:4.13.13+dfsg-1~deb11u5, 2:4.16.4+dfsg-2~bpo11+1),
 samba:amd64 (2:4.13.13+dfsg-1~deb11u5, 2:4.16.4+dfsg-2~bpo11+1),
 libwbclient0:amd64 (2:4.13.13+dfsg-1~deb11u5, 2:4.16.4+dfsg-2~bpo11+1),
 libsmbclient:amd64 (2:4.13.13+dfsg-1~deb11u5, 2:4.16.4+dfsg-2~bpo11+1),
 samba-dsdb-modules:amd64 (2:4.13.13+dfsg-1~deb11u5, 2:4.16.4+dfsg-2~bpo11+1),
 samba-common-bin:amd64 (2:4.13.13+dfsg-1~deb11u5, 2:4.16.4+dfsg-2~bpo11+1),
 python3-talloc:amd64 (2.3.1-2+b1, 2.3.3-4~bpo11+1),
 libtdb1:amd64 (1.4.3-1+b1, 1.4.6-3~bpo11+1),
 python3-ldb:amd64 (2:2.2.3-2~deb11u2, 2:2.5.2+samba4.16.4-2~bpo11+1),
 python3-tdb:amd64 (1.4.3-1+b1, 1.4.6-3~bpo11+1),
 samba-libs:amd64 (2:4.13.13+dfsg-1~deb11u5, 2:4.16.4+dfsg-2~bpo11+1),
 samba-common:amd64 (2:4.13.13+dfsg-1~deb11u5, 2:4.16.4+dfsg-2~bpo11+1),
 ctdb:amd64 (2:4.13.13+dfsg-1~deb11u5, 2:4.16.4+dfsg-2~bpo11+1),
 libtalloc2:amd64 (2.3.1-2+b1, 2.3.3-4~bpo11+1)
End-Date: 2022-08-31 22:52:40

> Tho, I don't think this matters here - it is just a side note.
>
> > * What exactly did you do (or not do) that was effective (or
> > ineffective)?
> >
> > After enabling debug logging, I saw that the panic/segfault was
> > preceded by the following error: "stat of /var/lib/samba/usershare/data
> > failed. Permission denied."
> >
> > In order to fix the issue, I changed the file ownership of all files
> > in the above directory to root:sambashare and added my user (jason) to
> > the sambashare group. After making these changes, the errors went away.
> > I am reporting this as it's a change in behavior. I did not experience
> > these segfaults in Buster. It appears that the expected ownership of
> > this directory changed, causing my issue.
>
> That's lovely. It is definitely a bug that samba *crashes* when the
> usershare dir is not accessible.
>
> But I don't know if this is actually a bug in the behavior change as
> you describe. It seems to be, but the thing is that the usershare
> permission model has always worked like this, at least as far as I know.
>
> In 4.16 I did change the way the usershare directory is handled during
> install, with this commit:
>
> https://salsa.debian.org/samba-team/samba/-/commit/5f67d36ff617fa7e9609ff2e3baa6ed1a533f5a5
>
> This means I create this dir and specify its permissions only at first
> install, not on every install. But again, this is not really relevant.
>
> Now, I don't know which permissions/ownership your files *had* before
> you changed them. Please note how this directory is being created:
>
>   install -d -m 1770 -g sambashare /var/lib/samba/usershares
>
> The "1" "sticky" bit tells the kernel to use the same group for all
> subdirectories and files created within. So you should not need to
> change the group ownership in the first place. If you had to, it means
> this sticky bit hasn't been there in your case. Which, in turn, means
> some local modification you did, probably.
>
> Either way, after you actually changed the
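The `install -d -m 1770 -g sambashare /var/lib/samba/usershares` command quoted above can be checked and reproduced mechanically. Below is a minimal sketch, assuming GNU coreutils `install` and `stat`; it demonstrates on a temporary directory with the caller's own group, since the `sambashare` group only exists where samba is installed. On a real system you would use the path and group from the thread and run as root.

```shell
#!/bin/sh
# Recreate the packaging's usershares setup on a scratch path:
# mode 1770 (sticky bit plus rwx for owner and group), group-owned.
set -e
scratch=$(mktemp -d)
dir="$scratch/usershares"
grp=$(id -gn)            # stand-in for "sambashare" in this demo

# Same one-step command the thread quotes from the Debian packaging:
install -d -m 1770 -g "$grp" "$dir"

# Verify: numeric mode and group of the directory.
mode=$(stat -c '%a' "$dir")
group=$(stat -c '%G' "$dir")
echo "mode=$mode group=$group"
```

On an affected system, `stat -c '%a %U %G' /var/lib/samba/usershares` should report `1770 root sambashare` if the packaging's setup is intact.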
Bug#1019544: Additional Information
As there were several changes to the kernel I/O code in 5.10.140 that could be the cause of my issue, I downloaded the vanilla source for the 5.10.139 kernel and built it using my Debian kernel config from /boot. I installed the resulting kernel and kernel headers, and DKMS built the required ZFS modules. Upon rebooting, all my ZFS pools are working as expected. In particular, the main pool that consistently showed six missing drives (A1-A6) under 5.10.140 now shows all drives as online, just as it does with 5.10.136-1.

In total, this system has 48 3.5" 7200 RPM SATA drives, two 1.92TB Samsung enterprise SATA SSDs, and three NVMe SSDs. The impacted drives are 16TB 3.5" drives in two 4U DS4246 JBOD enclosures, attached to a Dell R730xd server via an LSI 9207-8e HBA running P20 firmware in IT mode. I'm running ZFS 2.1.5 from bullseye-backports. Note that SMART data for the impacted drives is normal, with no bad sectors. The only change I made was booting into a different kernel; otherwise, the system is running all the updates from the 11.5 point release.

I will try to bisect 5.10.140 tomorrow to determine more precisely which commit(s) are causing my issue.

        NAME                                             STATE     READ WRITE CKSUM
        data                                             ONLINE       0     0     0
          raidz2-0                                       ONLINE       0     0     0
            A1                                           ONLINE       0     0     0
            A2                                           ONLINE       0     0     0
            A3                                           ONLINE       0     0     0
            A4                                           ONLINE       0     0     0
            A5                                           ONLINE       0     0     0
            A6                                           ONLINE       0     0     0
            A7                                           ONLINE       0     0     0
            A8                                           ONLINE       0     0     0
            A9                                           ONLINE       0     0     0
            A10                                          ONLINE       0     0     0
            A11                                          ONLINE       0     0     0
            A12                                          ONLINE       0     0     0
        special
          mirror-1                                       ONLINE       0     0     0
            nvme-HP_SSD_EX920_1TB_HBSE48481800144-part1  ONLINE       0     0     0
            nvme-HP_SSD_EX920_1TB_HBSE48481800847-part1  ONLINE       0     0     0
        logs
          mirror-2                                       ONLINE       0     0     0
            nvme-HP_SSD_EX920_1TB_HBSE48481800144-part2  ONLINE       0     0     0
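The bisect plan mentioned above follows the standard `git bisect` workflow. Here is a self-contained sketch on a throwaway repository; it is illustrative only, not taken from the report. In the kernel case the endpoints would be the v5.10.139 (good) and v5.10.140 (bad) tags of the stable tree, and the test command would build, install, and boot-test the kernel instead of the trivial `grep` used here.

```shell
#!/bin/sh
# Build a tiny repo where one commit introduces a "regression",
# then let `git bisect run` find the first bad commit automatically.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.invalid
git config user.name demo

echo GOOD > status
git add status && git commit -qm "baseline"
git tag known-good                  # kernel analogue: v5.10.139

git commit -qm "harmless change" --allow-empty
echo BAD > status                   # the "regression"
git add status && git commit -qm "regressing change"
git commit -qm "later change" --allow-empty
git tag known-bad                   # kernel analogue: v5.10.140

# The exit status of the run command decides good (0) vs bad (non-zero);
# for the kernel this would be "build, reboot, check the zpool".
git bisect start known-bad known-good
git bisect run sh -c 'grep -q GOOD status' > /dev/null
culprit=$(git log -1 --format=%s refs/bisect/bad)
git bisect reset > /dev/null
echo "first bad commit: $culprit"
```

With roughly 300 commits between two stable releases, `git bisect` narrows the culprit in about eight or nine build/boot cycles.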
Bug#879900: apparmor-profiles-extra: Totem segfaults when apparmor profile is enforced
Woops. The second line should read: "As for the totem profile on Stretch, simply adding #include to /etc/apparmor.d/local/usr.bin.totem and reloading the profile did not fix the issue:"
Bug#879900: apparmor-profiles-extra: Totem segfaults when apparmor profile is enforced
Hi, I would be happy to help. I have several machines running Stretch with a variety of hardware and uses (desktop/server, Intel/NVIDIA GPUs, etc.). Are there specific apparmor profiles you wish to test?

As for the totem profile on Stretch, simply adding #include to /etc/apparmor.d/local/usr.bin/totem and reloading the profile did not fix the issue:

jason@jason-desktop:/etc/apparmor.d$ /usr/bin/totem
(totem:9153): Cogl-WARNING **: driver/gl/cogl-util-gl.c:96: GL error (1281): Invalid value
(totem:9153): Cogl-WARNING **: driver/gl/cogl-util-gl.c:96: GL error (1281): Invalid value
(totem:9153): Cogl-WARNING **: driver/gl/cogl-util-gl.c:96: GL error (1281): Invalid value
(totem:9153): Cogl-WARNING **: driver/gl/cogl-util-gl.c:96: GL error (1281): Invalid value
(totem:9153): Cogl-WARNING **: driver/gl/cogl-util-gl.c:96: GL error (1281): Invalid value
Segmentation fault

The audit log shows continued errors related to the NVIDIA driver:

Oct 31 10:26:56 kernel: audit: type=1400 audit(1509460016.329:300): apparmor="DENIED" operation="open" profile="/usr/bin/totem" name="/dev/nvidia-modeset" pid=9153 comm="totem" requested_mask="rw" denied_mask="rw" fsuid=1000 ouid=0
Oct 31 10:26:56 kernel: audit: type=1400 audit(1509460016.329:301): apparmor="DENIED" operation="open" profile="/usr/bin/totem" name="/dev/nvidia-modeset" pid=9153 comm="totem" requested_mask="rw" denied_mask="rw" fsuid=1000 ouid=0
Oct 31 10:26:56 kernel: audit: type=1400 audit(1509460016.349:302): apparmor="DENIED" operation="file_mmap" profile="/usr/bin/totem" name="/tmp/.glVcerPq" pid=9153 comm="totem" requested_mask="m" denied_mask="m" fsuid=1000 ouid=1000
Oct 31 10:26:56 kernel: audit: type=1400 audit(1509460016.349:303): apparmor="DENIED" operation="file_mmap" profile="/usr/bin/totem" name="/tmp/.glVcerPq" pid=9153 comm="totem" requested_mask="m" denied_mask="m" fsuid=1000 ouid=1000
Oct 31 10:26:56 kernel: audit: type=1400 audit(1509460016.349:304): apparmor="DENIED" operation="mkdir" profile="/usr/bin/totem" name="/home/jason.nv/" pid=9153 comm="totem" requested_mask="c" denied_mask="c" fsuid=1000 ouid=1000
Oct 31 10:26:56 kernel: audit: type=1400 audit(1509460016.353:305): apparmor="DENIED" operation="file_mmap" profile="/usr/bin/totem" name="/tmp/.gl6sStVi" pid=9153 comm="totem" requested_mask="m" denied_mask="m" fsuid=1000 ouid=1000
Oct 31 10:26:56 kernel: audit: type=1400 audit(1509460016.353:306): apparmor="DENIED" operation="file_mmap" profile="/usr/bin/totem" name="/tmp/.gl6sStVi" pid=9153 comm="totem" requested_mask="m" denied_mask="m" fsuid=1000 ouid=1000
Oct 31 10:26:56 kernel: audit: type=1400 audit(1509460016.353:307): apparmor="DENIED" operation="mkdir" profile="/usr/bin/totem" name="/home/jason.nv/" pid=9153 comm="totem" requested_mask="c" denied_mask="c" fsuid=1000 ouid=1000
Oct 31 10:26:56 kernel: audit: type=1400 audit(1509460016.397:308): apparmor="DENIED" operation="open" profile="/usr/bin/totem" name="/var/lib/flatpak/exports/share/icons/hicolor/index.theme" pid=9153 comm="totem" requested_mask="r" denied_mask="r" fsuid=1000 ouid=0
Oct 31 10:26:56 kernel: audit: type=1400 audit(1509460016.397:309): apparmor="DENIED" operation="open" profile="/usr/bin/totem" name="/var/lib/flatpak/exports/share/icons/hicolor/icon-theme.cache" pid=9153 comm="totem" requested_mask="r" denied_mask="r" fsuid=1000 ouid=0
jason@jason-desktop:/etc/apparmor.d$

I also tried using the usr.bin.totem profile from sid, but that also failed:

jason@jason-desktop:/etc/apparmor.d/local$ /usr/bin/totem
(totem:11884): Cogl-WARNING **: driver/gl/cogl-util-gl.c:96: GL error (1281): Invalid value
(totem:11884): Grilo-WARNING **: [bookmarks] grl-bookmarks.c:255: Could not open database '/home/jason/.local/share/grilo-plugins/grl-bookmarks.db': Failed to open database at /home/jason/.local/share/grilo-plugins/grl-bookmarks.db
(totem:11884): GVFS-WARNING **: can't init metadata tree /home/jason/.local/share/gvfs-metadata/root: open: Permission denied
(totem:11884): GVFS-WARNING **: can't init metadata tree /home/jason/.local/share/gvfs-metadata/root: open: Permission denied
(totem:11884): GrlPodcasts-CRITICAL **: Failed to open database '': unable to open database file
(totem:11884): Grilo-WARNING **: [thetvdb] grl-thetvdb.c:390: Could not open database '/home/jason/.local/share/grilo-plugins/grl-thetvdb.db': Failed to open database at /home/jason/.local/share/grilo-plugins/grl-thetvdb.db
Segmentation fault

The audit log still contains NVIDIA-related errors:

Oct 31 10:41:52 kernel: audit: type=1400 audit(1509460912.787:317): apparmor="DENIED" operation="open" profile="/usr/bin/totem" name="/dev/nvidia-modeset" pid=11884 comm="totem" requested_mask="rw" denied_mask="rw" fsuid=1000 ouid=0
Oct 31 10:41:52 kernel: audit: type=1400 audit(1509460912.787:318): apparmor="DENIED" operation="open" profile="/usr/bin/totem" name="/dev/nvidia-modeset" pid=11884 comm="totem" requested_mask="rw" denied_mask="rw" fsuid=1000 ouid=0
Oct 31 10:41:52 kernel: audit:
Bug#879900: apparmor-profiles-extra: Totem segfaults when apparmor profile is enforced
Accidentally replied rather than replying all.

On Fri, Oct 27, 2017 at 10:30 AM, Jason Wittlin-Cohen <jwittlinco...@gmail.com> wrote:

> Thanks for the quick reply!
>
> Adding #include to /etc/apparmor.d/local/usr.bin.totem fixed the issue.
> I am now able to open Totem and play videos. I still see some apparmor
> DENY messages in the logs, but they seem unrelated.
>
> Oct 27 10:09:45 kernel: audit: type=1400 audit(1509113385.373:2948): apparmor="DENIED" operation="file_mmap" profile="/usr/bin/totem" name="/tmp/.glE98VL2" pid=6719 comm="totem" requested_mask="m" denied_mask="m" fsuid=1000 ouid=1000
> Oct 27 10:09:45 kernel: audit: type=1400 audit(1509113385.373:2949): apparmor="DENIED" operation="file_mmap" profile="/usr/bin/totem" name="/tmp/.glE98VL2" pid=6719 comm="totem" requested_mask="m" denied_mask="m" fsuid=1000 ouid=1000
> Oct 27 10:09:45 kernel: audit: type=1400 audit(1509113385.373:2950): apparmor="DENIED" operation="mkdir" profile="/usr/bin/totem" name="/home/jason.nv/" pid=6719 comm="totem" requested_mask="c" denied_mask="c" fsuid=1000 ouid=1000
> Oct 27 10:09:45 kernel: audit: type=1400 audit(1509113385.377:2951): apparmor="DENIED" operation="file_mmap" profile="/usr/bin/totem" name="/tmp/.gldPWDHt" pid=6719 comm="totem" requested_mask="m" denied_mask="m" fsuid=1000 ouid=1000
> Oct 27 10:09:45 kernel: audit: type=1400 audit(1509113385.377:2952): apparmor="DENIED" operation="file_mmap" profile="/usr/bin/totem" name="/tmp/.gldPWDHt" pid=6719 comm="totem" requested_mask="m" denied_mask="m" fsuid=1000 ouid=1000
> Oct 27 10:09:45 kernel: audit: type=1400 audit(1509113385.377:2953): apparmor="DENIED" operation="mkdir" profile="/usr/bin/totem" name="/home/jason.nv/" pid=6719 comm="totem" requested_mask="c" denied_mask="c" fsuid=1000 ouid=1000
> Oct 27 10:09:45 kernel: audit: type=1400 audit(1509113385.447:2954): apparmor="DENIED" operation="exec" profile="/usr/bin/totem" name="/bin/dash" pid=6778 comm="totem" requested_mask="x" denied_mask="x" fsuid=1000 ouid=0
> Oct 27 10:16:04 kernel: audit: type=1400 audit(1509113764.487:2956): apparmor="DENIED" operation="file_mmap" profile="/usr/bin/totem" name="/tmp/.glph14DP" pid=12243 comm="totem" requested_mask="m" denied_mask="m" fsuid=1000 ouid=1000
> Oct 27 10:16:04 kernel: audit: type=1400 audit(1509113764.487:2957): apparmor="DENIED" operation="file_mmap" profile="/usr/bin/totem" name="/tmp/.glph14DP" pid=12243 comm="totem" requested_mask="m" denied_mask="m" fsuid=1000 ouid=1000
> Oct 27 10:16:04 kernel: audit: type=1400 audit(1509113764.487:2958): apparmor="DENIED" operation="mkdir" profile="/usr/bin/totem" name="/home/jason.nv/" pid=12243 comm="totem" requested_mask="c" denied_mask="c" fsuid=1000 ouid=1000
> Oct 27 10:16:04 kernel: audit: type=1400 audit(1509113764.492:2959): apparmor="DENIED" operation="file_mmap" profile="/usr/bin/totem" name="/tmp/.glnEQ3yX" pid=12243 comm="totem" requested_mask="m" denied_mask="m" fsuid=1000 ouid=1000
> Oct 27 10:16:04 kernel: audit: type=1400 audit(1509113764.492:2960): apparmor="DENIED" operation="file_mmap" profile="/usr/bin/totem" name="/tmp/.glnEQ3yX" pid=12243 comm="totem" requested_mask="m" denied_mask="m" fsuid=1000 ouid=1000
> Oct 27 10:16:04 kernel: audit: type=1400 audit(1509113764.492:2961): apparmor="DENIED" operation="mkdir" profile="/usr/bin/totem" name="/home/jason.nv/" pid=12243 comm="totem" requested_mask="c" denied_mask="c" fsuid=1000 ou
>
> As an aside, I think I am hitting a similar issue when attempting to add
> apparmor integration to the google-chrome profile in Firejail (firejail
> ships with its own apparmor profile which allows for additional hardening
> that is not possible when running firejail alone). When I enable apparmor
> integration in the Chrome profile, GPU rendering and accelera
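The fix quoted above uses Debian apparmor's local-override mechanism: packaged profiles end with an include of a file under /etc/apparmor.d/local/, so site-specific additions survive package upgrades. Note an assumption here: the archived mail stripped the angle-bracketed include target, so the `<abstractions/nvidia>` line below is inferred from the NVIDIA denials in the log, not taken from the original message. The sketch writes the override to a temporary copy; the reload commands are shown as comments since they require root and a running apparmor.

```shell
#!/bin/sh
# Stage a local override for the totem profile (to a temp dir here;
# the real path is /etc/apparmor.d/local/usr.bin.totem).
set -e
stage=$(mktemp -d)
override="$stage/usr.bin.totem"

cat > "$override" <<'EOF'
# Site-specific additions to the usr.bin.totem profile.
# ASSUMPTION: the include the reporter added was most likely the
# NVIDIA abstraction; adjust to whatever the audit denials call for.
#include <abstractions/nvidia>
EOF

# On the real system, reload the profile so the override takes effect:
#   sudo apparmor_parser -r /etc/apparmor.d/usr.bin.totem
#   sudo aa-enforce /etc/apparmor.d/usr.bin.totem
cat "$override"
```

Keeping additions in local/ rather than editing the shipped profile is the design the package expects; dpkg will not prompt about a modified conffile on upgrade.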
Bug#879900: Acknowledgement (apparmor-profiles-extra: Totem segfaults when apparmor profile is enforced)
I failed to mention earlier, but I saw the same behavior on my Buster system running versions 1.14 and 1.15. I am also seeing the same behavior on my Stretch install:

jason@jason-desktop:/etc/apparmor.d$ /usr/bin/totem
(totem:14579): GLib-CRITICAL **: g_strsplit: assertion 'string != NULL' failed
Segmentation fault

Syslog:

Oct 27 00:29:25 jason-desktop /usr/lib/gdm3/gdm-x-session[3332]: (--) NVIDIA(GPU-0): DFP-0: disconnected
Oct 27 00:29:25 jason-desktop /usr/lib/gdm3/gdm-x-session[3332]: (--) NVIDIA(GPU-0): DFP-0: Internal TMDS
Oct 27 00:29:25 jason-desktop /usr/lib/gdm3/gdm-x-session[3332]: (--) NVIDIA(GPU-0): DFP-0: 330.0 MHz maximum pixel clock
Oct 27 00:29:25 jason-desktop /usr/lib/gdm3/gdm-x-session[3332]: (--) NVIDIA(GPU-0):
Oct 27 00:29:25 jason-desktop /usr/lib/gdm3/gdm-x-session[3332]: (--) NVIDIA(GPU-0): DFP-1: disconnected
Oct 27 00:29:25 jason-desktop /usr/lib/gdm3/gdm-x-session[3332]: (--) NVIDIA(GPU-0): DFP-1: Internal TMDS
Oct 27 00:29:25 jason-desktop /usr/lib/gdm3/gdm-x-session[3332]: (--) NVIDIA(GPU-0): DFP-1: 330.0 MHz maximum pixel clock
Oct 27 00:29:25 jason-desktop /usr/lib/gdm3/gdm-x-session[3332]: (--) NVIDIA(GPU-0):
Oct 27 00:29:25 jason-desktop /usr/lib/gdm3/gdm-x-session[3332]: (--) NVIDIA(GPU-0): Acer XB271HU (DFP-2): connected
Oct 27 00:29:25 jason-desktop /usr/lib/gdm3/gdm-x-session[3332]: (--) NVIDIA(GPU-0): Acer XB271HU (DFP-2): Internal DisplayPort
Oct 27 00:29:25 jason-desktop /usr/lib/gdm3/gdm-x-session[3332]: (--) NVIDIA(GPU-0): Acer XB271HU (DFP-2): 1440.0 MHz maximum pixel clock
Oct 27 00:29:25 jason-desktop /usr/lib/gdm3/gdm-x-session[3332]: (--) NVIDIA(GPU-0):
Oct 27 00:29:25 jason-desktop /usr/lib/gdm3/gdm-x-session[3332]: (--) NVIDIA(GPU-0): DFP-3: disconnected
Oct 27 00:29:25 jason-desktop /usr/lib/gdm3/gdm-x-session[3332]: (--) NVIDIA(GPU-0): DFP-3: Internal TMDS
Oct 27 00:29:25 jason-desktop /usr/lib/gdm3/gdm-x-session[3332]: (--) NVIDIA(GPU-0): DFP-3: 330.0 MHz maximum pixel clock
Oct 27 00:29:25 jason-desktop /usr/lib/gdm3/gdm-x-session[3332]: (--) NVIDIA(GPU-0):
Oct 27 00:29:25 jason-desktop /usr/lib/gdm3/gdm-x-session[3332]: (--) NVIDIA(GPU-0): DELL U2713HM (DFP-4): connected
Oct 27 00:29:25 jason-desktop /usr/lib/gdm3/gdm-x-session[3332]: (--) NVIDIA(GPU-0): DELL U2713HM (DFP-4): Internal DisplayPort
Oct 27 00:29:25 jason-desktop /usr/lib/gdm3/gdm-x-session[3332]: (--) NVIDIA(GPU-0): DELL U2713HM (DFP-4): 1440.0 MHz maximum pixel clock
Oct 27 00:29:25 jason-desktop /usr/lib/gdm3/gdm-x-session[3332]: (--) NVIDIA(GPU-0):
Oct 27 00:29:25 jason-desktop /usr/lib/gdm3/gdm-x-session[3332]: (--) NVIDIA(GPU-0): DFP-5: disconnected
Oct 27 00:29:25 jason-desktop /usr/lib/gdm3/gdm-x-session[3332]: (--) NVIDIA(GPU-0): DFP-5: Internal TMDS
Oct 27 00:29:25 jason-desktop /usr/lib/gdm3/gdm-x-session[3332]: (--) NVIDIA(GPU-0): DFP-5: 330.0 MHz maximum pixel clock
Oct 27 00:29:25 jason-desktop /usr/lib/gdm3/gdm-x-session[3332]: (--) NVIDIA(GPU-0):
Oct 27 00:29:25 jason-desktop /usr/lib/gdm3/gdm-x-session[3332]: (--) NVIDIA(GPU-0): DFP-6: disconnected
Oct 27 00:29:25 jason-desktop /usr/lib/gdm3/gdm-x-session[3332]: (--) NVIDIA(GPU-0): DFP-6: Internal DisplayPort
Oct 27 00:29:25 jason-desktop /usr/lib/gdm3/gdm-x-session[3332]: (--) NVIDIA(GPU-0): DFP-6: 1440.0 MHz maximum pixel clock
Oct 27 00:29:25 jason-desktop /usr/lib/gdm3/gdm-x-session[3332]: (--) NVIDIA(GPU-0):
Oct 27 00:29:25 jason-desktop /usr/lib/gdm3/gdm-x-session[3332]: (--) NVIDIA(GPU-0): DFP-7: disconnected
Oct 27 00:29:25 jason-desktop /usr/lib/gdm3/gdm-x-session[3332]: (--) NVIDIA(GPU-0): DFP-7: Internal TMDS
Oct 27 00:29:25 jason-desktop /usr/lib/gdm3/gdm-x-session[3332]: (--) NVIDIA(GPU-0): DFP-7: 330.0 MHz maximum pixel clock
Oct 27 00:29:25 jason-desktop /usr/lib/gdm3/gdm-x-session[3332]: (--) NVIDIA(GPU-0):
Oct 27 00:29:25 jason-desktop kernel: [ 96.503531] audit_printk_skb: 10 callbacks suppressed
Oct 27 00:29:25 jason-desktop kernel: [ 96.503533] audit: type=1400 audit(1509078565.921:86): apparmor="DENIED" operation="open" profile="/usr/bin/totem" name="/proc/modules" pid=5467 comm="totem" requested_mask="r" denied_mask="r" fsuid=1000 ouid=0
Oct 27 00:29:25 jason-desktop kernel: [ 96.504412] audit: type=1400 audit(1509078565.921:87): apparmor="DENIED" operation="exec" profile="/usr/bin/totem" name="/usr/bin/nvidia-modprobe" pid=5470 comm="totem" requested_mask="x" denied_mask="x" fsuid=1000 ouid=0
Oct 27 00:29:25 jason-desktop kernel: [ 96.507159] audit: type=1400 audit(1509078565.925:88): apparmor="DENIED" operation="open" profile="/usr/bin/totem" name="/proc/modules" pid=5467 comm="totem" requested_mask="r" denied_mask="r" fsuid=1000 ouid=0
Oct 27 00:29:25 jason-desktop kernel: [ 96.507855] audit: type=1400 audit(1509078565.925:89): apparmor="DENIED" operation="exec" profile="/usr/bin/totem" name="/usr/bin/nvidia-modprobe" pid=5471 comm="totem" requested_mask="x" denied_mask="x" fsuid=1000 ouid=0
Oct 27 00:29:25 jason-desktop
Bug#879900: apparmor-profiles-extra: Totem segfaults when apparmor profile is enforced
Package: apparmor-profiles-extra
Version: 1.15
Severity: important

Dear Maintainer,

Totem suffers a segmentation fault upon startup when its respective apparmor profile is set to enforce mode. It starts fine when the apparmor profile is set to complain mode. I have not modified the /etc/apparmor.d/usr.bin.totem profile.

*** Reporter, please consider answering these questions, where appropriate ***

* What led up to the situation?

I set /usr/bin/totem to "enforce" mode and then attempted to start /usr/bin/totem from a terminal in order to display the error. I see the same behavior if I open Totem from my GNOME menu.

jason@debian-testing:~$ /usr/bin/totem
(totem:29696): GLib-CRITICAL **: g_strsplit: assertion 'string != NULL' failed
Segmentation fault

* What exactly did you do (or not do) that was effective (or ineffective)?

Placing /usr/bin/totem in "complain" mode resolves the issue.

* What outcome did you expect instead?

I expected Totem to work properly with its apparmor profile in enforce mode.
Relevant output from syslog:

Oct 27 00:00:16 debian-testing kernel: [139095.152218] audit: type=1400 audit(1509076816.705:1330): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="/usr/bin/totem" pid=29508 comm="apparmor_parser"

[... unrelated NVIDIA display-probe lines from gdm-x-session omitted ...]

Oct 27 00:00:22 debian-testing kernel: [139101.193078] audit: type=1400 audit(1509076822.746:1331): apparmor="DENIED" operation="open"
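When debugging a profile like this, the usual first step is to filter the AppArmor DENIED records out of the log to see which operations the enforced profile is blocking (the aa-complain and aa-enforce utilities from the apparmor-utils package are what toggle the modes mentioned above). A minimal sketch, using a canned sample line so it is self-contained; on the real system you would grep /var/log/syslog or the kernel journal instead:

```shell
#!/bin/sh
# Extract the DENIED operation from an AppArmor audit record. The
# sample line below is canned for the sketch; replace 'echo "$sample"'
# with e.g. 'grep audit /var/log/syslog' on the affected machine.
sample='Oct 27 00:00:22 debian-testing kernel: audit: type=1400 apparmor="DENIED" operation="open" profile="/usr/bin/totem"'

echo "$sample" | grep -o 'apparmor="DENIED" operation="[a-z]*"'
# prints: apparmor="DENIED" operation="open"
```

The denied operation and path in the real record are what would need a new rule in /etc/apparmor.d/usr.bin.totem to make enforce mode usable.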
Bug#878827: Auto-login no longer works as of 3.26.0-1 [Regression]
Package: gdm3
Version: 3.26.1-3
Severity: normal

Dear Maintainer,

* What led up to the situation?

I noticed that auto-login stopped working in GDM after the package was updated from 3.25.90.1-2 to 3.26.0-1. I did not make any configuration changes between versions.

* What exactly did you do (or not do) that was effective (or ineffective)?

I use ZFS for my root filesystem. To test my theory that the GDM update broke auto-login, I cloned a snapshot with gdm3 3.25.90.1-2 and confirmed that auto-login worked. I then held gdm3 and upgraded all remaining packages to ensure the system was otherwise up to date. After a reboot, auto-login continued to function. I then updated gdm3 and its libraries to the latest version (3.26.1-3), and auto-login stopped working. To determine when auto-login ceased functioning, I cloned a later snapshot with 3.26.0-1. This version also fails to auto-login. Thus, it appears that the regression I am seeing was introduced in 3.26.0-1. The 3.26.0-1 release also contains a seemingly relevant change: "Fix for unauthenticated unlock when autologin is enabled (CVE-2017-12164)".

* What was the outcome of this action?

As of gdm3 3.26.0-1, when I restart my machine I am presented with a GDM login screen, as if auto-login were disabled.

* What outcome did you expect instead?

I expected to be automatically logged into GNOME with my selected user account; in other words, I expected the behavior to be the same as in prior versions.
-- System Information:
Debian Release: buster/sid
  APT prefers testing
  APT policy: (900, 'testing'), (800, 'unstable')
Architecture: amd64 (x86_64)
Foreign Architectures: i386

Kernel: Linux 4.13.0-1-amd64 (SMP w/4 CPU cores)
Locale: LANG=en_US.UTF-8, LC_CTYPE=en_US.UTF-8 (charmap=UTF-8), LANGUAGE=en_US.UTF-8 (charmap=UTF-8)
Shell: /bin/sh linked to /bin/dash
Init: systemd (via /run/systemd/system)

Versions of packages gdm3 depends on:
ii  accountsservice  0.6.45-1
ii  adduser  3.116
ii  dconf-cli  0.26.0-2+b1
ii  dconf-gsettings-backend  0.26.0-2+b1
ii  debconf  1.5.63
ii  gir1.2-gdm-1.0  3.26.1-3
ii  gnome-session [x-session-manager]  3.24.1-2
ii  gnome-session-bin  3.24.1-2
ii  gnome-settings-daemon  3.24.3-1
ii  gnome-shell  3.22.3-3
ii  gnome-terminal [x-terminal-emulator]  3.26.1-1
ii  gsettings-desktop-schemas  3.24.1-1
ii  libaccountsservice0  0.6.45-1
ii  libaudit1  1:2.7.8-1
ii  libc6  2.24-17
ii  libcanberra-gtk3-0  0.30-3
ii  libcanberra0  0.30-3
ii  libgdk-pixbuf2.0-0  2.36.11-1
ii  libgdm1  3.26.1-3
ii  libglib2.0-0  2.54.1-1
ii  libglib2.0-bin  2.54.1-1
ii  libgtk-3-0  3.22.24-1
ii  libkeyutils1  1.5.9-9
ii  libpam-modules  1.1.8-3.6
ii  libpam-runtime  1.1.8-3.6
ii  libpam-systemd  234-3
ii  libpam0g  1.1.8-3.6
ii  librsvg2-common  2.40.18-1
ii  libselinux1  2.7-2
ii  libsystemd0  234-3
ii  libwrap0  7.6.q-26
ii  libx11-6  2:1.6.4-3
ii  libxau6  1:1.0.8-1+b2
ii  libxcb1  1.12-1
ii  libxdmcp6  1:1.1.2-3
ii  lsb-base  9.20170808
ii  mutter [x-window-manager]  3.22.4-2
ii  policykit-1  0.105-18
ii  ucf  3.0036
ii  x11-common  1:7.7+19
ii  x11-xserver-utils  7.7+7+b1

Versions of packages gdm3 recommends:
ii  at-spi2-core  2.26.0-2
ii  desktop-base  9.0.5
ii  x11-xkb-utils  7.7+3+b1
ii  xserver-xephyr  2:1.19.3-2
ii  xserver-xorg  1:7.7+19
ii  zenity  3.24.0-1

Versions of packages gdm3 suggests:
ii  gnome-orca  3.26.0-1
ii  libpam-gnome-keyring  3.20.1-1

-- Configuration Files:
/etc/gdm3/daemon.conf changed:
[daemon]
AutomaticLoginEnable=True
AutomaticLogin=jason
[security]
[xdmcp]
[chooser]
[debug]
Bug#871619: zfs-dkms: Please package ZFS 0.7.0
For those too impatient to wait for an official Debian build, I managed to compile kmod packages from the ZoL source. See here for details: https://github.com/zfsonlinux/zfs/issues/6606

On Wed, Aug 9, 2017 at 8:42 PM, Jason Cohen wrote:
> Package: zfs-dkms
> Version: 0.6.5.9-5
> Severity: wishlist
>
> Dear Maintainer,
>
> Please consider packaging ZFS 0.7.1. The new 0.7.0 release, which came out on
> July 26, 2017, includes a number of valuable new features, including better
> memory management (the ARC uses scatter lists rather than virtual memory,
> minimizing fragmentation, plus Compressed ARC), improved performance
> (vectorized RAID-Z and fletcher4 math, faster resilvering, improved metadata
> performance), resumable and compressed 'zfs send' (invaluable for very large
> sends!), and improved statistics (SMART data, IO latency, average request
> size).
>
> Thanks for packaging ZFS for Debian,
>
> Jason
>
> -- System Information:
> Debian Release: 9.1
>   APT prefers stable
>   APT policy: (500, 'stable')
> Architecture: amd64 (x86_64)
> Foreign Architectures: i386
>
> Kernel: Linux 4.9.0-3-amd64 (SMP w/4 CPU cores)
> Locale: LANG=en_US.utf8, LC_CTYPE=en_US.utf8 (charmap=UTF-8), LANGUAGE=en_US.utf8 (charmap=UTF-8)
> Shell: /bin/sh linked to /bin/dash
> Init: systemd (via /run/systemd/system)
>
> Versions of packages zfs-dkms depends on:
> ii  debconf [debconf-2.0]  1.5.61
> ii  dkms  2.3-2
> ii  lsb-release  9.20161125
> ii  spl-dkms  0.6.5.9-1
>
> Versions of packages zfs-dkms recommends:
> ii  zfs-zed  0.6.5.9-5
> ii  zfsutils-linux  0.6.5.9-5
>
> zfs-dkms suggests no packages.
>
> -- debconf information:
> * zfs-dkms/note-incompatible-licenses:
>   zfs-dkms/stop-build-for-unknown-kernel: true
>   zfs-dkms/stop-build-for-32bit-kernel: true
Bug#874182: Regression breaks hourly, daily, weekly, and monthly snapshots
Package: zfs-auto-snapshot
Version: 1.2.2-1
Severity: important

Dear Maintainer,

The update to zfs-auto-snapshot 1.2.2-1 has caused a regression preventing hourly, daily, weekly, and monthly auto snapshots from running. Frequent snapshots still work. Reverting the scripts to those used by the prior version, 1.2.1-1, fixed the issue.

New script, which does not work:

#!/bin/sh
# Only call zfs-auto-snapshot if it's available
exec which zfs-auto-snapshot > /dev/null && \
    zfs-auto-snapshot --quiet --syslog --label=daily --keep=31 //

Old script, which works:

#!/bin/sh
exec zfs-auto-snapshot --quiet --syslog --label=daily --keep=31 //

* What led up to the situation?

I installed zfs-auto-snapshot on my system running Debian Buster and enabled frequent, hourly, and daily snapshots. I noticed that only frequent snapshots were running. In contrast, with the same settings, my Debian Stretch system had functional frequent, hourly, and daily snapshots. Syslog only showed zfs-auto-snapshot running for frequent snapshots; there was no indication that the system attempted to run hourly or daily snapshots, as instructed.

* What exactly did you do (or not do) that was effective (or ineffective)?

I was able to resolve the issue by replacing the zfs-auto-snapshot scripts in /etc/cron.daily and /etc/cron.hourly with the scripts from 1.2.1-1.

* What was the outcome of this action?

Hourly and daily snapshots now work as expected.

-- System Information:
Debian Release: buster/sid
  APT prefers testing
  APT policy: (900, 'testing'), (800, 'unstable')
Architecture: amd64 (x86_64)
Foreign Architectures: i386

Kernel: Linux 4.12.0-1-amd64 (SMP w/4 CPU cores)
Locale: LANG=en_US.UTF-8, LC_CTYPE=en_US.UTF-8 (charmap=UTF-8), LANGUAGE=en_US.UTF-8 (charmap=UTF-8)
Shell: /bin/sh linked to /bin/dash
Init: systemd (via /run/systemd/system)

Versions of packages zfs-auto-snapshot depends on:
ii  cron  3.0pl1-128+b1
ii  zfsutils-linux  0.6.5.11-1

zfs-auto-snapshot recommends no packages.
zfs-auto-snapshot suggests no packages.

-- Configuration Files:
/etc/cron.daily/zfs-auto-snapshot changed:
exec zfs-auto-snapshot --quiet --syslog --label=daily --keep=31 //

/etc/cron.hourly/zfs-auto-snapshot changed:
exec zfs-auto-snapshot --quiet --syslog --label=hourly --keep=24 //
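The likely culprit in the 1.2.2-1 wrapper is the combination of 'exec' with the availability check: 'exec' replaces the shell process with 'which', so control never returns to run the command after '&&'. The wrapper exits successfully without ever taking a snapshot, which also explains why cron reported nothing. A minimal demonstration of the difference, using harmless echo commands in place of the real zfs-auto-snapshot call (which needs a ZFS pool):

```shell
#!/bin/sh
# 'exec' replaces the current shell with 'which'; the AND-list after
# '&&' is discarded along with the shell, so nothing after it runs.
broken=$(sh -c 'exec which sh > /dev/null && echo "snapshot would run"')

# Dropping 'exec' from the availability check lets '&&' proceed normally.
fixed=$(sh -c 'which sh > /dev/null && echo "snapshot would run"')

echo "broken wrapper output: [$broken]"
echo "fixed wrapper output:  [$fixed]"
```

The first line prints an empty result and the second prints the message, matching the observed behavior: the new cron scripts silently do nothing.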
Bug#863389: Fwd: Bug#863389: linux-image-4.9.0-0.bpo.3-amd64: After update to linux-image-4.9.0.0-0.bpo.3, no route to host via IPV4
Hi Ben,

Thanks for the reply. You're absolutely right about the ordering of the iface statements. I modified my interfaces file so that the bonding parameters followed "iface bond0 inet static", and now it loads the bonding module and adds a default gateway for IPv4. The configuration below works properly.

I'm pretty sure I know what happened. I've been using bonding with IPv4 only for quite a while; I used the Debian howto to set it up. I recently added a Hurricane Electric 6in4 tunnel to my pfSense router and therefore added the "iface bond0 inet6 static" section. Restarting the bond with "ifdown bond0 && ifup bond0" worked, perhaps because the bonding module was already loaded. I wasn't able to find much documentation about how to set up a bond with both IPv4 and IPv6: the bonding guide is IPv4-only, and the IPv6 guide doesn't discuss bonding.

However, I still think there's something wrong with my configuration. If I attempt to use "iface bond0 inet6 dhcp", I get an IP from the DHCPv6 server, but no default gateway is created, so attempting to access the internet results in "no route to host". I know Router Advertisements and DHCPv6 are working, as other Windows and Linux clients work (tested with both SLAAC and DHCPv6 with RA). Do you know why I'm not getting a default route when using DHCPv6?

auto lo
iface lo inet loopback

iface eth0 inet manual
iface eth1 inet manual

auto bond0
iface bond0 inet dhcp
    slaves eth0 eth1
    bond_mode 802.3ad
    bond_miimon 100
    bond_downdelay 200
    bond_updelay 200
    mtu 9000

iface bond0 inet6 static
    address 2001:470:8:1141::2
    netmask 64
    gateway 2001:470:8:1141::1

auto eth2
iface eth2 inet static
    address 10.0.0.1
    netmask 255.255.255.0
    mtu 9000

Thanks,
Jason

On Fri, May 26, 2017 at 7:30 PM, Ben Hutchings wrote:
> Control: tag -1 moreinfo
>
> I don't see any changes in the kernel bonding driver, or any relevant
> changes to kernel networking.
>
> On Fri, 2017-05-26 at 00:31 -0400, Jason Cohen wrote:
> > ** Network interface configuration:
> [...]
> > auto bond0
> > iface bond0 inet static
> >     address 192.168.1.200
> >     netmask 255.255.255.0
> >     network 192.168.1.0
> >     gateway 192.168.1.1
> > iface bond0 inet6 static
> >     address 2001:470:8:1141::2
> >     netmask 64
> >     gateway 2001:470:8:1141::1
> >     slaves eth0 eth1
> >     bond_mode 802.3ad
> >     bond_miimon 100
> >     bond_downdelay 200
> >     bond_updelay 200
> >     mtu 9000
> [...]
>
> I'm pretty sure this configuration never worked, and that you just
> found that out by rebooting after the upgrade.
>
> The problem is that ifupdown treats each 'iface' statement as
> introducing a separate interface configuration, so the above is parsed
> as:
>
> iface bond0 inet static
>     address 192.168.1.200
>     netmask 255.255.255.0
>     network 192.168.1.0
>     gateway 192.168.1.1
>
> iface bond0 inet6 static
>     address 2001:470:8:1141::2
>     netmask 64
>     gateway 2001:470:8:1141::1
>     slaves eth0 eth1
>     bond_mode 802.3ad
>     bond_miimon 100
>     bond_downdelay 200
>     bond_updelay 200
>     mtu 9000
>
> I think it will first try to apply the first interface configuration.
> There are no bonding parameters, so the code to create a bonding
> interface (and load the bonding module) doesn't run. As there is no
> such interface, the IPv4 configuration can't be applied either.
>
> I don't know whether ifupdown tries to process the second interface
> configuration after this failure, but I would guess not.
>
> If you move all the bonding and mtu parameters into the first interface
> configuration, does the bonding interface start working again?
>
> Ben.
>
> --
> Ben Hutchings
> The generation of random numbers is too important to be left to chance.
>                                                      - Robert Coveyou
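The stanza-grouping behavior described above can be seen mechanically. A rough sketch (simplified; real ifupdown parsing is more involved) that groups options under the preceding 'iface' line, showing the bonding options landing only in the inet6 stanza:

```shell
#!/bin/sh
# Group each option under the 'iface' line that precedes it, as
# ifupdown does. With 'slaves' and 'bond_mode' placed after the inet6
# header, the inet stanza ends up with no bonding parameters at all.
cat <<'EOF' | awk '/^iface /{print ""; print $0; next} NF{print "  option:", $1}'
iface bond0 inet static
address 192.168.1.200
gateway 192.168.1.1
iface bond0 inet6 static
address 2001:470:8:1141::2
slaves eth0 eth1
bond_mode 802.3ad
EOF
```

In the output, 'slaves' and 'bond_mode' appear under the inet6 stanza only, which is why the inet configuration never triggers creation of the bonding interface.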
Bug#863389: Missing default Route
It appears that the "no route to host" issue was caused by a missing default route; "route add default gw 192.168.1.1 dev bond0" fixed it. The question is why my routing table was missing such a default route. My /etc/network/interfaces file specifically references the gateway, as described at https://wiki.debian.org/NetworkConfiguration#Configuring_the_interface_manually. While I know the symptom and the workaround, I'm not sure what is causing it; my interfaces settings should automatically add a default route. I've added the above command to /etc/rc.local for the time being.

/etc/network/interfaces:

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

iface eth0 inet manual
iface eth1 inet manual

# This is my LACP bond interface with manual IPv4 and IPv6 addresses
auto bond0
iface bond0 inet static
    address 192.168.1.200
    netmask 255.255.255.0
    network 192.168.1.0
    gateway 192.168.1.1
iface bond0 inet6 static
    address 2001:470:8:1141::2
    netmask 64
    gateway 2001:470:8:1141::1
    slaves eth0 eth1
    bond_mode 802.3ad
    bond_miimon 100
    bond_downdelay 200
    bond_updelay 200
    mtu 9000

# Point-to-point 10GbE link for backups
auto eth2
iface eth2 inet static
    address 10.0.0.1
    netmask 255.255.255.0
    mtu 9000
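One way to make the rc.local workaround safer is to add the route only when it is actually missing, so it doesn't fail or duplicate on boots where ifupdown did add it. A sketch using canned routing-table text so it is self-contained (on the real host you would substitute the output of `ip -4 route`; the gateway and interface values are the ones from this report):

```shell
#!/bin/sh
# Canned text standing in for `ip -4 route` output on the affected
# host: the link-scope route is present, but no default route.
routes='192.168.1.0/24 dev bond0 proto kernel scope link src 192.168.1.200'

if echo "$routes" | grep -q '^default '; then
    echo "default route present, nothing to do"
else
    # On the real system this branch would run:
    #   ip route add default via 192.168.1.1 dev bond0
    echo "default route missing; adding via 192.168.1.1"
fi
# prints: default route missing; adding via 192.168.1.1
```

This is only a band-aid around the workaround; the underlying question of why the 'gateway' option is being ignored still stands.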
Bug#852799: mcelog: Mismatch of structure `mce`
On Mon, 6 Feb 2017 15:09:08 +0500 Andrey Rahmatullin wrote:
> On Mon, Feb 06, 2017 at 10:50:18AM +0100, Paul Menzel wrote:
> > Will you package v148, and will it get into Debian 9 (Stretch)?
> No, stretch is frozen for issues below important severity.
>
> --
> WBR, wRAR

The upstream bug report suggests that this bug renders mcelog non-functional. Wouldn't that make this bug at least important ("a bug which has a major effect on the usability of a package"), if not grave ("makes the package in question unusable or mostly so")? I'm seeing the same issue on Jessie with the jessie-backports kernel (4.9.0-0.bpo.2-amd64 #1 SMP Debian 4.9.18-1~bpo8+1 (2017-04-10) x86_64 GNU/Linux).