> The real question is, why is "[ $(date +\%w) -eq 0 ]" in there, when cron can do day-of-week like:
>
> 24 0 8-14 * 0 root [ -x /usr/lib/zfs-linux/scrub ] && /usr/lib/zfs-linux/scrub
This is because if you specify both the "day of month" and the "day of week" fields, cron treats them as ORed: that entry would fire on days 8-14 of every month AND on every Sunday. The date test restricts the 8-14 range to Sundays, i.e. the second Sunday of the month.
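The guard's logic can be sketched as runnable shell (this is an illustration of the test, not the shipped cron job):

```shell
# date +%w prints the day of week, 0 (Sunday) through 6 (Saturday).
# The cron entry matches days 8-14 of the month; this test then limits
# the actual run to Sundays, i.e. the second Sunday of each month.
if [ "$(date +%w)" -eq 0 ]; then
    echo "Sunday: the scrub would run today"
fi
```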
Note that fail2ban is in universe, not main. This was surprising to me,
and something I only realized because of this bug. I too think of
fail2ban as a core security component. I wish Ubuntu would promote it to
main, but that's a different conversation.
Traditionally, being in universe has meant that the package is community-maintained and does not receive official security support from Canonical.
I tested (rebuilt in a PPA) the version from:
https://launchpadlibrarian.net/731722634/fail2ban_1.0.2-3_1.0.2-3ubuntu1.24.04.1.diff.gz
It works for me. I can't mark this verification-done, as I didn't use
the actual version from -proposed (since it isn't available there yet).
@ghadi-rahme:
The version in the changelog is wrong. You have "1.0.2-ubuntu1", which
should presumably be "1.0.2-3ubuntu1". You are missing the "3" after the
dash.
Also, configure-setup-to-install-fail2ban.compat.patch does not apply
cleanly. Your version has spaces throughout the whole patch (bo
Upon further investigation, I see that the systemd networkd settings
have similar documentation only listing true and unset. But the systemd
NEWS file explicitly talks about disabling and the settings are parsed
in networkd using config_parse_tristate, so I think networkd properly
handles =0 on the
This change does NOT fix the issue from the [Impact] statement. The
[Impact] talks about disabling offload, but the test case talks about
enabling offload. The patch only implements enabling offload, not
disabling it.
** Changed in: netplan
Status: Fix Committed => Confirmed
** Tags removed: verification-needed-focal
** Tags added: verification-done-focal
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1940916
Title:
Incorrectly excludes tmpfs filesystems
To manage notif
** Bug watch added: Debian Bug tracker #1004709
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1004709
** Also affects: ieee-data (Debian) via
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1004709
Importance: Unknown
Status: Unknown
On the stock version, tmpfs filesystems do not show up, even if I
specify -X:
$ /usr/lib/nagios/plugins/check_disk -w 10 -c 10
DISK OK - free space: /dev 5944 MB (100% inode=99%); / 2357 MB (28% inode=75%); /srv 17618 MB (94% inode=99%); /boot/efi 498 MB (98% inode=-);| /dev=0MB;5934;5934;0;5944
** Patch added: "An updated version of the patch with my alternative solution"
https://bugs.launchpad.net/ubuntu/+source/monitoring-plugins/+bug/1958481/+attachment/736/+files/exclude-tmpfs-squashfs-tracefs.patch
Public bug reported:
check_disk ignores tmpfs filesystems due to an Ubuntu patch
(debian/patches/exclude-tmpfs-squashfs-tracefs.patch) added for LP
#1827159. This is a bad idea.
On my servers, I have a tmpfs mounted at /tmp. Last night, /tmp filled
up on one of them, resulting in significant breakage.
I was able to verify this is fixed in iputils-ping 20210202-1. That is,
I saw this same problem, grabbed those sources from Debian, built them,
and tested again. Accordingly, this should already be fixed in Ubuntu
impish.
> I will also write back in a few days time with feedback from a user,
> who is testing this fixed package in production.
That user is me. I've been running 1:9.16.1-0ubuntu2.7 on an ISP production recursive server "since Fri 2021-02-19 17:44:17 CST; 5 days ago" (per systemd). The system remains stable.
listsnaps is an alias of listsnapshots, but you're right that it's on
the pool.
Can you file this upstream:
https://github.com/openzfs/zfs/issues/new/choose
If you want, you could take a stab at submitting a pull request. It's a
pretty simple sounding change. The repo is here:
https://github.com/
** Tags removed: verification-needed
** Tags added: verification-done
I tested this on Focal. I installed librelp0 and restarted rsyslog. Prior
to the change, sockets were stacking up in CLOSE-WAIT (both from normal
use and from the netcat test). After the change, sockets are being
closed correctly.
** Tags removed: verification-needed-focal
** Tags added: verification-done-focal
** Changed in: bind9 (Ubuntu)
Status: Confirmed => New
The test package fixes the issue for me.
The limit in the code does seem to be 64 MiB. I'm not sure why this
isn't working. I am not even close to an expert on this part of OpenZFS,
so all I can suggest is to file a bug report upstream:
https://github.com/openzfs/zfs/issues/new
I hit this bug. The analysis here appears correct to me. PIDFILE is a
static string (via a preprocessor define). The suggested fix of calling
str_dup() sounds correct.
Adding this to the top of a stunnel config file is a work-around:
pid = /var/run/stunnel4.pid
device_removal only works if you can import the pool normally. That is
what you should have used after you accidentally added the second disk
as another top-level vdev. Whatever you have done in the interim,
though, has resulted in the second device showing as FAULTED. Unless you
can fix that, device_removal will not work.
Why is the second disk missing? If you accidentally added it and ended
up with a striped pool, as long as both disks are connected, you can
import the pool normally. Then use the new device_removal feature to
remove the new disk from the pool.
If you've done something crazy like pulled the disk an
You could shrink the DDT by making a copy of the files in place (with
dedup off) and deleting the old file. That only requires enough extra
space for a single file at a time. This assumes no snapshots.
If you need to preserve snapshots, another option would be to send|recv
a dataset at a time. If
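The copy-in-place approach could be sketched roughly like this (a hypothetical helper, untested; it assumes dedup is already off on the dataset, e.g. "zfs set dedup=off tank/data", and that no snapshots still reference the old blocks):

```shell
# Rewrite each regular file so its blocks are written fresh; deleting
# the old copy releases the deduped blocks, shrinking the DDT one file
# at a time. Only needs free space for a single file at once.
rewrite_in_place() {
    find "$1" -type f ! -name '*.tmp' | while read -r f; do
        cp -p "$f" "$f.tmp" && mv "$f.tmp" "$f"
    done
}
```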
Did you destroy and recreate the pool after disabling dedup? Otherwise
you still have the same dedup table and haven’t really accomplished
much.
The "natural start" succeeded on all 4 of my systems. The start times
were 01:41, 10:50, 18:11, and 21:43.
** Tags removed: verification-needed verification-needed-focal
** Tags added: verification-done verification-done-focal
I repeated my same test procedure. Everything worked as expected.
It might be mad about the extra space after the equals. Note that it is complaining about the empty string. If it is splitting on spaces, that would explain it.
Yeah, I can confirm that's broken too. Here is the fix:
https://git.kernel.org/pub/scm/utils/mdadm/mdadm.git/commit/?id=6636788aaf4ec0cacaefb6e77592e4a68e70a957
It was trivial, so I sent in the patches. I didn't change `...` to
$(...) as I don't care to argue with them about that. We'll see what
upstream says.
I installed the update on 4 basically identical systems (note to self:
hostnames starting with g, k, r, w):
I enabled -proposed and installed the package:
sudo vi /etc/apt/sources.list.d/ubuntu-proposed.list
sudo apt update
sudo apt install mdadm=4.1-5ubuntu1.1
I tested the scrub on one system
[The following is probably outside the scope of this SRU, but since this
will be the first time that people see this logging, maybe you do want
to improve it now.]
The existing log statements are:
logger -p daemon.info mdcheck start checking $dev
logger -p daemon.info mdcheck continue checking $dev
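One hedged possibility for polishing these (hypothetical, not the shipped script) would be to give the messages a syslog tag and pass them as a single quoted argument, so they can be filtered with e.g. "journalctl -t mdcheck":

```shell
# Hypothetical refinement: tag the messages and quote them as one
# argument so downstream tools can match on the tag.
dev=/dev/md0
logger -t mdcheck -p daemon.info "start checking $dev"
logger -t mdcheck -p daemon.info "continue checking $dev"
```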
I have tested the fix on Focal and confirmed it works. Here is a link to the
diff in our PPA:
https://launchpadlibrarian.net/498490932/mdadm_4.1-5ubuntu1_4.1-5ubuntu1.1~wiktel1.20.04.1.diff.gz
Unfortunately, we are past the DebianImportFreeze for groovy. Can you
apply the one-line bug fix to Groovy so that it can then SRU into Focal?
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=960132#15
That sounds like a missing dependency on python3-distutils.
But unless you're running a custom kernel, Ubuntu is shipping the ZFS module
now:
https://bugs.launchpad.net/ubuntu/+source/linux-raspi/+bug/1884110
Likewise, it's been stable 24 hours here.
First I reverted isc-dhcp-server back to the original focal version, since I
had an updated version from the PPA:
$ sudo apt install isc-dhcp-server=4.4.1-2.1ubuntu5 isc-dhcp-common=4.4.1-2.1ubuntu5
Then I installed the update packages:
$ sudo apt update
$ sudo apt install libdns-export1109/focal
Andrew, 1:9.11.16+dfsg-3~build1 is wrong. The correct version is
1:9.11.16+dfsg-3~ubuntu1 (~ubuntu1 instead of ~build1).
Excellent. I'm available to test the -proposed update for focal whenever
it is ready.
Jorge, I agree with Gianfranco Costamagna that a rebuild of isc-dhcp is
NOT required. Why do you think it is?
Presumably BIND also uses these libraries? If so, it seems like the Test
Case should involve making sure BIND still seems to work, and that BIND
should be mentioned in the Regression Potential section.
No crashes to report.
Jorge, it sounds like ISC might think there is a more fundamental issue here:
https://gitlab.isc.org/isc-projects/dhcp/-/issues/121#note_152804
** Bug watch added: gitlab.isc.org/isc-projects/dhcp/-/issues #121
https://gitlab.isc.org/isc-projects/dhcp/-/issues/121
Jorge, I have been running for 25 hours on the patched version with no
crashes on either server.
I ran:
sudo apt install \
isc-dhcp-server=4.4.1-2.1ubuntu6~ppa1 \
libdns-export1109=1:9.11.16+dfsg-3~ppa1 \
libirs-export161=1:9.11.16+dfsg-3~ppa1 \
libisc-export1105=1:9.11.16+dfsg-3~ppa1 && \
sudo systemctl restart isc-dhcp-server
The restart at the end was just for extra
** Bug watch added: gitlab.isc.org/isc-projects/dhcp/issues #128
https://gitlab.isc.org/isc-projects/dhcp/issues/128
** Also affects: dhcp via
https://gitlab.isc.org/isc-projects/dhcp/issues/128
Importance: Unknown
Status: Unknown
I was able to reproduce this with 4.4.2 plus the Ubuntu packaging. I did
not try with stock 4.4.2 from source.
I've posted this upstream (as a draft PR, pending testing) at:
https://github.com/openzfs/zfs/pull/10662
Here is a completely untested patch that takes a different approach to
the same issue. If this works, it seems more suitable for upstreaming,
as the existing list_zvols seems to be the place where properties are
checked. Can either of you test this? If this looks good, I'll submit it
upstream.
Public bug reported:
rsyslogd: error during parsing file /etc/rsyslog.d/FILENAME.conf, on or
before line 22: imrelp: librelp does not support input parameter
'tls.tlscfgcmd'; it probably is too old (1.5.0 or higher should be
fine); ignoring setting now. [v8.2001.0 try https://www.rsyslog.com/e/220
See also this upstream PR: https://github.com/openzfs/zfs/pull/9414
and the one before it: https://github.com/openzfs/zfs/pull/8667
I have submitted this upstream:
https://github.com/openzfs/zfs/pull/10388
Public bug reported:
grub-initrd-fallback.service should have:
[Unit]
RequiresMountsFor=/boot/grub
If /boot/grub is on a separate filesystem, this can run before that
filesystem is mounted and cause problems.
** Affects: grub2 (Ubuntu)
Importance: Undecided
Status: New
seth-arnold, the ZFS default is acltype=off, which means that ACLs are
disabled. (I don't think the NFSv4 ACL support in ZFS is wired up on
Linux.) It's not clear to me why this is breaking with ACLs off.
There is another AES-GCM performance acceleration commit for systems
without MOVBE.
--
Richard
I have confirmed that the fix in -proposed fixes the issue for me.
Can you share a bit more detail about your setup? What
does your partition table look like, what does the MD config look like,
what do you have in /etc/fstab for swap, etc.? I'm running into weird
issues with this configuration, separate from this bug.
@didrocks: I'll try to get thi
I think it used to be the case that zfsutils-linux depended on zfs-dkms
which was then provided by the kernel packages. That seems like a way to
solve this. Given that dkms is for dynamic kernel modules, it was always
a bit weird to see the kernel providing that. It should probably be that
zfsutils
I didn't get a chance to test the patch. I'm running into unrelated
issues.
John Gray: Everything else aside, you should mirror your swap instead of
striping it (which I think is what you're doing). With your current
setup, if a disk dies, your system will crash.
This is a tricky one because all of the dependencies make sense in
isolation. Even if we remove the dependency added by that upstream
OpenZFS commit, given that modern systems use zfs-mount-generator,
systemd-random-seed.service is going to Require= and After=
var-lib.mount because of its Requires
brian-willoughby (and pranav.bhattarai):
The original report text confirms that "The exit code is 0, so update-
grub does not fail as a result." That matches my understanding (as
someone who has done a lot of ZFS installs maintaining the upstream
Root-on-ZFS HOWTO) that this is purely cosmetic.
I
The AES-GCM performance improvements patch has been merged to master. This also
included the changes to make encryption=on mean aes-256-gcm:
https://github.com/zfsonlinux/zfs/commit/31b160f0a6c673c8f926233af2ed6d5354808393
** Changed in: zfs-linux (Ubuntu)
Status: New => Incomplete
What was the expected result? Are you expecting to be able to just
install ZFS in a container (but not use it)? Or are you expecting it to
actually work? The user space tools can’t do much of anything without
talking to the kernel.
** Bug watch added: Github Issue Tracker for ZFS #9443
https://github.com/zfsonlinux/zfs/issues/9443
** Also affects: zfs via
https://github.com/zfsonlinux/zfs/issues/9443
Importance: Unknown
Status: Unknown
There does seem to be a real bug here. The problem is that we don’t know
if it is on the ZoL side or the FreeBSD side. The immediate failure is
that “zfs recv” on the FreeBSD side is failing to receive the stream. So
that is the best place to start figuring out why. If it turns out that
ZoL is gene
The FreeBSD bug report:
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=243730
Like I said, boiling this down to a test case would likely help a lot.
Refusing to do so and blaming the people giving you free software and
free support isn’t helpful.
** Bug watch added: bugs.freebsd.org/bugzilla/
** Changed in: zfs-linux (Ubuntu)
Status: New => Incomplete
In terms of a compact reproducer, does this work:
# Create a temp pool with large_dnode enabled:
truncate -s 1G lp1854982.img
sudo zpool create -d -o feature@large_dnode=enabled lp1854982 $(pwd)/lp1854982.img
# Create a dataset with dnodesize=auto
sudo zfs create -o dnodesize=auto lp1854982/ldn
So, one of two things is true:
A) ZFS on Linux is generating the stream incorrectly.
B) FreeBSD is receiving the stream incorrectly.
I don't have a good answer as to how we might differentiate those two.
Filing a bug report with FreeBSD might be a good next step. But like I
said, a compact reproducer would help.
The last we heard on this, FreeBSD was apparently not receiving the send
stream, even though it supports large_dnode:
https://zfsonlinux.topicbox.com/groups/zfs-discuss/T187d60c7257e2eb6-M14bb2d52d4d5c230320a4f56/feature-incompatibility-between-ubuntu-19-10-and-freebsd-12-0
That's really bizarre.
I think there are multiple issues here. If it's just multipath, that
issue should be resolved by adding After=multipathd.service to
zfs-import-{cache,scan}.service.
For other issues, I wonder if this is cache file related. I'd suggest
checking that the cache file exists (I expect it would), and t
@gustypants: Sorry, the other one is scan, not pool. Are you using a
multipath setup? Does the pool import fine if you do it manually once
booted?
zfs-linux (0.6.5.6-2) unstable; urgency=medium
...
* Scrub all healthy pools monthly from Richard Laager
So Debian stretch, but not Ubuntu 16.04.
Deleting the file should be safe, as dpkg should retain that. It sounds
like you never deleted it, as you didn’t have it before this upgrade. So
it
This was added a LONG time ago. The interesting question here is: if you
previously deleted it, why did it come back? Had you deleted it though?
It sounds like you weren’t aware of this file.
You might want to edit it in place, even just to comment out the job.
That would force dpkg to give you a
Your original scrub took just under 4.5 hours. Have you let the second
scrub run anywhere near that long? If not, start there.
The new scrub code uses a two-phase approach. First it works through
metadata determining what (on-disk) blocks to scrub. Second, it does the
actual scrub. This allows ZFS
We discussed this at the January 7th OpenZFS Leadership meeting. The
notes and video recording are now available.
The meeting notes are in the running document here (see page 2 right now, or
search for this Launchpad bug number):
https://docs.google.com/document/d/1w2jv2XVYFmBVvG1EGf-9A5HBVsjAYoL
> It is not appropriate to require the user to type a password on every
> boot by default; this must be opt-in.
Agreed.
The installer should prompt (with a checkbox) for whether the user wants
encryption. It should default to off. If the user selects the checkbox,
prompt them for a passphrase. Se
New debdiff attached.
** Patch added: "icingaweb2_2.4.1-1ubuntu0.1.debdiff"
https://bugs.launchpad.net/ubuntu/+source/icingaweb2/+bug/1769890/+attachment/5318697/+files/icingaweb2_2.4.1-1ubuntu0.1.debdiff
** Description changed:
[Impact]
icingaweb2 does not work on PHP 7.2 or higher, e.g
Progress.
** Changed in: icingaweb2 (Ubuntu Bionic)
Status: New => In Progress
** Changed in: icingaweb2 (Ubuntu Bionic)
Assignee: (unassigned) => Richard Laager (rlaager)
** Description changed:
[Impact]
icingaweb2 does not work on PHP 7.2 or higher, e.g. as shipped in
The proposed fix seems to be incomplete?
I'm still getting this:
Fatal error: Declaration of Icinga\Web\Form\Element\Note::isValid($value) must
be compatible with Zend_Form_Element::isValid($value, $context = NULL) in
/usr/share/php/Icinga/Web/Form/Element/Note.php on line 0
I'm unsubscribing u
Attached is a debdiff that backports the fix from the Debian package.
** Description changed:
[Impact]
icingaweb2 does not work on PHP 7.2 or higher, e.g. as shipped in Ubuntu
18.04.
[Test Case]
Steps to reproduce:
# apt install mariadb
# mysql_secure_installation
# apt insta
** Changed in: icingaweb2 (Ubuntu)
Status: Confirmed => Fix Released
** Description changed:
- Release: 18.04 - Bionic
+ [Impact]
+ icingaweb2 does not work on PHP 7.2 or higher, e.g. as shipped in Ubuntu
18.04.
- # apt-cache policy icingaweb2
- icingaweb2:
- Installed: 2.4.1-1
+ [Te
I've given this a lot of thought. For what it's worth, if it were my
decision, I would first put your time into making a small change to the
installer to get the "encryption on" case perfect, rather than the
proposal in this bug.
The installer currently has:
O Erase disk and install Ubuntu
War
Try adding "After=multipathd.service" to zfs-import-cache.service and
zfs-import-pool.service. If that fixes it, then we should probably add
that upstream.
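One low-risk way to try that without editing the packaged unit file is a drop-in override (a sketch, untested here; the path follows systemd's standard override convention):

```shell
# Add the ordering via a drop-in instead of modifying the shipped unit.
sudo mkdir -p /etc/systemd/system/zfs-import-cache.service.d
printf '[Unit]\nAfter=multipathd.service\n' | sudo tee \
    /etc/systemd/system/zfs-import-cache.service.d/after-multipath.conf
sudo systemctl daemon-reload
```

Repeat for the other zfs-import unit, then reboot to test.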
I put these questions to Tom Caputi, who wrote the ZFS encryption. The
quoted text below is what I asked him, and the unquoted text is his
response:
> 1. Does ZFS rewrite the wrapped/encrypted master key in place? If
>not, the old master key could be retrieved off disk, decrypted
>with the
I have come up with a potential security flaw with this design:
The user installs Ubuntu with this fixed passphrase. This is used to
derive the "user key", which is used to encrypt the "master key", which
is used to encrypt their data. The encrypted version of the master key
is obviously written t
Here are some quick performance comparisons:
https://github.com/zfsonlinux/zfs/pull/9749#issuecomment-569132997
In summary, "the GCM run is approximately 1.15 times faster than the CCM
run. Please also note that this PR doesn't improve AES-CCM performance,
so if this gets merged, the speed differe
This is an interesting approach. I figured the installer should prompt
for encryption, and it probably still should, but if the performance
impact is minimal, this does have the nice property of allowing for
enabling encryption post-install.
It might be worthwhile (after merging the SIMD fixes) to
Should it set KEYMAP=y too, like cryptsetup does?
I've created a PR upstream and done some light testing:
https://github.com/zfsonlinux/zfs/pull/9723
Are you able to confirm that this fixes the issue wherever you were
seeing it?
I received the email of your latest comment, but oddly I’m not seeing it
here.
Before you go to all the work to rebuild the system, I think you should
do some testing to determine exactly what thing is breaking the send
stream compatibility. From your comment about your laptop, it sounds
like you
I'm not sure if userobj_accounting and/or project_quota have
implications for send stream compatibility, but my hunch is that they do
not. large_dnode is documented as being an issue, but since your
receiver supports that, that's not it.
I'm not sure what the issue is, nor what a good next step wo
This is probably an issue of incompatible pool features. Check what you
have active on the Ubuntu side:
zpool get all | grep feature | grep active
Then compare that to the chart here:
http://open-zfs.org/wiki/Feature_Flags
There is an as-yet-unimplemented proposal upstream to create a features
If the pool has an _active_ (and not "read-only compatible") feature
that GRUB does not understand, then GRUB will (correctly) refuse to load
the pool. Accordingly, you will be unable to boot.
Some features go active immediately, and others need you to enable some
filesystem-level feature or take
** Changed in: zfs-linux (Ubuntu)
Status: New => Incomplete
Which specific filesystems are failing to mount?
Typically, this situation occurs because something is misconfigured, so
the mount fails, so files end up inside what should otherwise be empty
mountpoint directories. Then, even once the original problem is fixed,
the non-empty directories prevent ZFS from mounting the filesystems.
> I think "zfs mount -a" should NOT try to mount datasets with
> mountpoint "/"
There is no need for this to be (confusingly, IMHO) special-cased in
zfs mount.
You should set canmount=noauto on your root filesystems (the ones with
mountpoint=/). The initramfs handles mounting the selected root
The fix here seems fine, given that you're going for minimal impact in
an SRU. I agree that the character restrictions are such that the pool
names shouldn't actually need to be escaped. That's not to say that I
would remove the _proper_ quoting of variables that currently exists
upstream, as it's
> "com.sun:auto-snapshot=false" do we need to add that or does our zfs
not support it?
You do not need that. That is used by some snapshot tools, but Ubuntu is
doing its own zsys thing.
** Also affects: ubiquity (Ubuntu)
Importance: Undecided
Status: New