@stgraber: If this is something you can reproduce (e.g. in a VM) using zfs-dkms
rather than the pre-compiled zfs.ko from linux-image, can you please test from
this PPA:
https://launchpad.net/~rlaager/+archive/ubuntu/zfs
The package there has the patch from upstream.
If you do test from my PPA,
@svde-tech: Can you please test from this PPA:
https://launchpad.net/~rlaager/+archive/ubuntu/zfs
Here's what I think should happen. Right now, you're seeing /dev/sdX
names. If you reboot with this package, you'll still see /dev/sdX names.
Reboot again and edit your GRUB command line (at boot,
It turns out that this is still necessary. zfsutils-linux installs
/usr/share/man/man8/zed.8.gz, which is also installed by zfs-zed. That
needs to be fixed at the same time the Replaces is versioned. If I'm
understanding correctly, Policy says a versioned Breaks should be added
too.
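For concreteness, a sketch of what the versioned stanza in debian/control might look like (the version shown is an illustrative placeholder, not the actual cutoff to use):

```
Package: zfs-zed
Replaces: zfsutils-linux (<< 0.6.5.7-1)
Breaks: zfsutils-linux (<< 0.6.5.7-1)
```

With both fields versioned against the same version, dpkg takes over the overlapping zed.8.gz file during upgrades from the older zfsutils-linux, and the Breaks satisfies the Policy requirement.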
I've made
@svde-tech: Can you please test from this PPA:
https://launchpad.net/~rlaager/+archive/ubuntu/zfs
The init script is slightly different from my last iteration. I removed
one unrelated change. I also applied your fix, but stopped suppressing
stdout (since that's probably unnecessary). Besides
If you get into the broken state, what happens if you run:
zpool list -H -o health "$ZFS_RPOOL"
zpool export "$ZFS_RPOOL"
Try that literally first. If $ZFS_RPOOL is unset, then replace it by
hand with the name of your pool.
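If you want to script that fallback, a minimal POSIX-shell sketch (the name "rpool" below is only a placeholder for your pool's actual name):

```shell
# Use ZFS_RPOOL if it is set and non-empty; otherwise fall back to a
# hand-typed pool name ("rpool" here is just a placeholder).
ZFS_RPOOL="${ZFS_RPOOL:-rpool}"
echo "$ZFS_RPOOL"
# Then run, literally:
#   zpool list -H -o health "$ZFS_RPOOL"
#   zpool export "$ZFS_RPOOL"
```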
** Changed in: zfs-linux (Ubuntu)
Status: Invalid => Incomplete
When it breaks, are any datasets mounted? Run `cat /proc/mounts`.
Is a delay of 1 second sufficient (in your limited testing)?
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1577057
Title:
zfs
@pitti: zfs-import-cache.service doesn't "load the ZFS cache". It
imports zpools which are listed in the /etc/zfs/zpool.cache file. It is
conditioned (ConditionPathExists) on the existence of
/etc/zfs/zpool.cache. It seems to me that upstream ZoL is tending to
deprecate the zpool.cache file.
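To illustrate the conditioning, here is a sketch of the relevant stanzas of zfs-import-cache.service (paraphrased, not the verbatim shipped unit):

```
[Unit]
Description=Import ZFS pools by cache file
ConditionPathExists=/etc/zfs/zpool.cache

[Service]
Type=oneshot
ExecStart=/sbin/zpool import -c /etc/zfs/zpool.cache -aN
```

zfs-import-scan.service carries the negated condition (ConditionPathExists=!/etc/zfs/zpool.cache), so at most one of the two actually imports pools at boot.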
In
I'm away from my computer at the moment, or I'd test more myself. Did
you initially have the pool imported using the by-id names? Is the
problem that the initrd needs a zpool.cache file?
--
Your testing confirms this is not a regression (or at least not one
caused by my changes to the initramfs script), as I suspected. The
script is still doing a plain zpool import, just as it did before.
--
@smoser: If this is a fresh install with no zpools, you shouldn't have a
zpool.cache file, and so your chain should have zfs-import-scan.service,
not zfs-import-cache.service. You'd still have the same problem, I
expect, but that detail is a bit worrisome.
--
[quoted blocks trimmed and reordered for ease of reply]
> I'm curious, how are these actually being used? They can't be relevant
> for the root file system as that already needs to be set up in the
> initrd.
Correct. The root pool (with all of its filesystems) is handled in the initrd.
These
> Ah, I see. So do file systems from ZFS have a separate equivalent to
> /etc/fstab?
Yes, each filesystem has a "mountpoint" property. This can be set directly,
and is also inherited automatically. For example, I have rpool's mountpoint set
to / and then rpool/home automatically inherits /home
Attached is a backport of the fix from upstream. With the patch, if I
have a pool which was last imported using /dev/disk/by-id names, zpool
import brings it in using those /dev/disk/by-id names again (rather
than falling back to /dev/sdX names).
Note that this fix has not yet been merged upstream.
Also, note that the patch did not apply *perfectly*
/dev/disk/by-id isn't the only answer. Other people use other things, at
least in some cases. And I don't think that using /dev/disk/by-id as a
hard-coded default is acceptable. I'm pretty sure it's possible (though
rare) to have drives that show up in /dev, but not /dev/disk/by-id.
Richard Yao
Indeed, you need my changes to debian/zfsutils-linux.install as well.
--
https://bugs.launchpad.net/bugs/1532198
Title:
[MIR] zfs-linux
@slashd, did Debian drop the .py extensions per Policy? Did they write
man pages?
--
https://bugs.launchpad.net/bugs/1574342
Title:
Ship arcstat.py and arc_summary.py with zfsutils-linux
Also note that this is the exact PATH from /etc/crontab, which is thus
also the PATH under which /etc/cron.{hourly,daily,weekly,monthly} run.
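Concretely, the fix is to declare that PATH at the top of the cron fragment; a sketch of the cron.d file (the job line itself is elided, since only the PATH declaration matters here):

```
# In the /etc/cron.d file for zfsutils-linux: mirror /etc/crontab's PATH
# so that /sbin, where zpool lives, is searched.
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
```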
--
https://bugs.launchpad.net/bugs/1548009
chiluk is looking to work on this for upstream. I might jump in too.
https://github.com/zfsonlinux/zfs/issues/4680
** Bug watch added: Github Issue Tracker for ZFS #4680
https://github.com/zfsonlinux/zfs/issues/4680
--
This works for me, using the first test case. If this feels good enough,
feel free to change the tag yourself. Otherwise, I'll do so after
verifying the unmodified cron configuration works on the 14th.
--
** Description changed:
+ [Impact]
+
+ Xenial shipped with a cron job to automatically scrub ZFS pools, as
+ desired by many users and as implemented by mdadm for traditional Linux
+ software RAID. Unfortunately, this cron job does not work, because it needs
+ a PATH line for /sbin, where the
This works for me "in the wild" (i.e. waiting until today, which was the
second Sunday of the month). I had two servers, one with
0.6.5.6-0ubuntu11 and one with 0.6.5.6-0ubuntu12.
** Tags added: verification-done
--
cyphermox: Any progress? It's been three months since your last comment
where it was close to ready.
--
https://bugs.launchpad.net/bugs/1527727
Title:
grub-probe for zfs assumes all
Public bug reported:
cryptsetup does not currently support ZFS.
For example, this happens on 16.04:
$ sudo update-initramfs -c -k all
update-initramfs: Generating /boot/initrd.img-4.4.0-34-generic
cryptsetup: WARNING: could not determine root device from /etc/fstab
update-initramfs: Generating
Any chance of an ecryptfs update in time for Yakkety?
--
https://bugs.launchpad.net/bugs/1574174
Title:
ecryptfs-setup-private fails with ZFS
This is a problem in Xenial.
--
https://bugs.launchpad.net/bugs/1268466
Title:
virt-manager does not include the python-spice-client-gtk dependency
for Spice
Well, it's actually slightly different on Xenial. I guess
gir1.2-spice-client-gtk-3.0 is required, which is currently a
Recommends. So maybe that's okay then.
--
** Bug watch added: Debian Bug tracker #830824
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=830824
** Also affects: zfs-linux (Debian) via
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=830824
Importance: Unknown
Status: Unknown
--
The same applies to libuutil1linux, libzfs2linux, and libzpool2linux.
They are all required by the zfs and zpool commands.
--
https://bugs.launchpad.net/bugs/1596835
It turns out this isn't actually working. The problem is the PATH in
cron, which does not include /sbin, where the zpool command lives. I've
attached a debdiff to fix this. I've tested to confirm that the PATH
change fixes it, but on Sunday, I'll know absolutely 100% by verifying
that it works "in
** Changed in: zfs-linux (Ubuntu)
Status: Fix Released => Confirmed
--
https://bugs.launchpad.net/bugs/1548009
Title:
[FFe] ZFS pools should be automatically scrubbed
*** This bug is a duplicate of bug 1548009 ***
https://bugs.launchpad.net/bugs/1548009
** This bug has been marked a duplicate of bug 1548009
ZFS pools should be automatically scrubbed
--
I can confirm that the .debdiff I attached in comment #11 fixes the
problem. My system ran the scrub today.
--
https://bugs.launchpad.net/bugs/1548009
Title:
ZFS pools should be
Public bug reported:
Note: This is different from Launchpad bug #1557151. This is another,
similar, bug.
Bug description from Matt Ahrens at OpenZFS:
"If a ZFS object contains a hole at level one, and then a data block is
created at level 0 underneath that l1 block, l0 holes will be created.
** Description changed:
Note: This is different from Launchpad bug #1557151. This is another,
similar, bug.
- This is a very unfortunate bug because the fix only helps you moving
- forward.
-
Bug description from Matt Ahrens at OpenZFS:
"If a ZFS object contains a hole at level one,
** Patch added: "zfs-fix-filenames.debdiff"
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1574342/+attachment/4709820/+files/zfs-fix-filenames.debdiff
--
I've attached a .debdiff to fix the filenames to comply with Policy
10.4, as @adconrad and I mentioned.
--
https://bugs.launchpad.net/bugs/1574342
Title:
Ship arcstat.py and
Sure, but ZoL is not bound by Ubuntu Policy. Ubuntu is. That said, I'll
suggest they be fixed upstream too.
--
https://bugs.launchpad.net/bugs/1574342
Title:
Ship arcstat.py and
Attached is the debian directory from another approach to updating the
package. This is based off the package in Yakkety, with the imapproxy
SVN changes broken out into individual patches in debian/patches. (I
used git-svn + git format-patch for this, and stripped the git
markings).
--
Attached is the debdiff from another approach to updating the package.
This is based off the package in Yakkety, with the imapproxy SVN changes
broken out into individual patches in debian/patches. (I used git-svn +
git format-patch for this, and stripped the git markings).
** Patch added:
Public bug reported:
The version of imapproxy packaged, 1.2.7, is the last released version.
Unfortunately, this version is from 2010. There have been several good
changes to imapproxy, but no new release has been cut. Many of these
changes have security implications.
Here's a list of selected
** Bug watch added: Debian Bug tracker #834591
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=834591
** Also affects: up-imapproxy (Debian) via
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=834591
Importance: Unknown
Status: Unknown
--
** Changed in: pidgin (Ubuntu)
Status: Fix Committed => Fix Released
--
https://bugs.launchpad.net/bugs/75850
Title:
Pidgin should support OS keyrings
Can you share the output of `zfs list` and `mount` (when everything is
mounted properly)?
--
https://bugs.launchpad.net/bugs/1614859
Title:
zfsutils-linux fails to configure "directory
It's not mounted correctly, though. Note that, in the mount output,
there is no "storage/media on /storage/media".
Do this:
sudo zfs umount storage/media/anime
sudo zfs umount storage/media/comedy
sudo zfs umount storage/media/comics
sudo zfs umount storage/media/documentaries
sudo zfs umount
I've marked it as Invalid, as there's nothing to be done at this point.
But, there may still be some bug here. After all, how did this happen in
the first place? That may have been user error, or it may not have. But,
at this point, it likely doesn't matter. If we see more, similar
reports, or if
Colin, with grub2 in yakkety being patched, we should be ready for
zfsutils-linux in yakkety to drop /lib/udev/rules.d/69-vdev.rules and
"Conflicts: grub2 << 2.02~beta2-36ubuntu5".
--
[My previous comment was incorrect. I had the wrong file in the wrong
package.]
Colin, with grub2 in yakkety being patched, we should be ready for
zfs-initramfs in yakkety to drop /lib/udev/rules.d/60-zpool.rules and
"Conflicts: grub2 << 2.02~beta2-36ubuntu5".
--
First, let me say I'm not an Ubuntu developer.
Second, is this still a problem for you?
On a system running Precise, this seems to work for me:
Linux yak 3.2.0-107-generic #148-Ubuntu SMP Mon Jul 18 20:22:08 UTC 2016 x86_64
x86_64 x86_64 GNU/Linux
** Patch added: "A fix for xenial"
https://bugs.launchpad.net/ubuntu/+source/ifupdown/+bug/1618726/+attachment/4731337/+files/ifupdown-fix-1618726-xenial.debdiff
** Description changed:
- This is a trivially reproducible crash in ifup/ifdown.
+ This is a trivially reproducible crash in
** Patch added: "A fix for yakkety"
https://bugs.launchpad.net/ubuntu/+source/ifupdown/+bug/1618726/+attachment/4731336/+files/ifupdown-fix-1618726-yakkety.debdiff
--
Public bug reported:
This is a trivially reproducible crash in ifup/ifdown.
Steps to reproduce:
1) echo no-scripts foo bar >> /etc/network/interfaces
2) ifup baz
Expected results:
Unknown interface baz
Actual results:
Segmentation fault (core dumped)
It's irrelevant whether the second
The package from proposed works. I tested version 0.8.10ubuntu1.1. The
diff looks correct.
** Tags removed: verification-needed
** Tags added: verification-done
--
I did not forward it. I sometimes do, sometimes don't, depending on a
lot of factors. In this case, even though I assumed it would affect
Debian, I was hesitant to claim the existence of a high priority bug if
I hadn't personally verified it.
--
I think that's unrelated, but I'm not sure. I filed Debian bug #838001
for this, with my patch, and linked it here.
** Bug watch added: Debian Bug tracker #838001
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=838001
** Also affects: cryptsetup (Debian) via
The fix was merged upstream here:
https://github.com/zfsonlinux/zfs/commit/792517389fad5c495a2738b61c2e9c65dedaaa9a
--
https://bugs.launchpad.net/bugs/1607920
Title:
zfs services fail on
The previous patch was here:
https://launchpadlibrarian.net/275709492/zfs-fix-filenames.debdiff
It probably still applies cleanly (aside for the changelog).
--
I think I got the numbers right, but in any case, this is the gist of the error:
PANIC: blkptr at 88040b993640 DVA 1 has invalid OFFSET 36830022909049856
What was the last version of ZFS-on-Linux (that you were using) that
works?
Was that the only version of ZFS you had ever used?
Can you
If you have a separate boot drive, you should be able to install
zfsutils-linux. Worst case, disconnect the ZFS drives.
If necessary, disable the zfs-import-scan and zfs-import-cache services.
You should reach a point where you can boot, and zfsutils-linux is
installed, and the ZFS drives are
Does ZFS actually have any translations? There are no .po files in the
source and no .mo files in the binary package.
--
https://bugs.launchpad.net/bugs/1607920
Title:
zfs services fail
This is installing the files using the old names, and symlinking using
the new ones. Is that intentional? I expected it would install using the
new names and symlinks for the old ones.
** Tags removed: verification-done
** Tags added: verification-needed
--
This isn't a huge problem. The decision to not depend on spl was
intentional, to avoid dragging spl into main.
There's been some discussion on pkg-zfsonlinux-devel about how to handle
hostid in Debian, which will likely be inherited by Ubuntu. If Debian's
solution lands in spl instead of
@pitti, the ID_FS_TYPE is zfs_member, not zfs. The service is, as you
listed, zed.service. Your rule, modified accordingly, would then be:
ACTION!="remove", SUBSYSTEM=="block", ENV{ID_FS_TYPE}=="zfs_member",
ENV{SYSTEMD_WANTS}+="zed.service"
However, if zed.service is going to exit if there is no pool imported,
That fixes the mounting problem. You've WONTFIXed the security options
(and I understand your reasoning there).
One last question: Since this requires the pool to be empty at `lxd
init` time, is that going to cause a problem for reinstalls? That is, if
I have a working system, then I reinstall it
The patch was already accepted upstream and a new release cut.
--
https://bugs.launchpad.net/bugs/1624394
Title:
mimedefang: md-mx-ctrl reread does not work
** Description changed:
(I'm putting on the SRU template, but regardless of whether this is SRU-
eligible, fixing this in the development version is obviously the first
step.)
[Impact]
- This is a regression from Trusty.
+ This is a regression.
While the main purpose of this is to
You are not booting off ZFS? You no longer have a copy of the
zpool.cache from when the problem occurred?
--
https://bugs.launchpad.net/bugs/1624844
Title:
Ubuntu 16.04 breaks boot with
If this is a non-root pool which is not available at boot, then you should do
this:
sudo zpool set cachefile=none POOL
That will inhibit the creation of a zpool.cache file for that pool.
The other option, if this is always plugged in, would be to configure it
to unlock automatically on boot.
Public bug reported:
(I'm putting on the SRU template, but regardless of whether this is SRU-
eligible, fixing this in the development version is obviously the first
step.)
[Impact]
This is a regression from Trusty.
While the main purpose of this is to "reread the filter rules", the
biggest use
I verified that linux-image-4.4.0-9136-generic in Yakkety still has
0.6.5.6.
** Changed in: linux (Ubuntu)
Status: Incomplete => Confirmed
--
https://bugs.launchpad.net/bugs/1588632
Yakkety has 0.6.5.7. If you're asking about Xenial, you should read up
on Ubuntu's SRU process.
** Changed in: zfs-linux (Ubuntu)
Status: New => Fix Released
--
I just talked to Brian Behlendorf (since I'm sitting next to him at the
OpenZFS Developer Summit). He doesn't have strong feelings either way,
but is concerned about breaking people who are already relying on those
names.
My thought is we should just fix it in Debian/Ubuntu regardless of
whether
I tested 0.6.5.6-0ubuntu13 on a Xenial system with no python2 installed;
only python3. I did this by first upgrading the ZFS packages and then
purging the python2 packages. All three scripts work as expected. I did
a basic run of each with no arguments.
For the record, in case anyone has this
FWIW, these are fixed in Debian. So hopefully we'll see this fix in
Ubuntu 17.04.
--
https://bugs.launchpad.net/bugs/1596835
Title:
libnvpair.so.1 should be in /lib/, not /usr/lib/
This is clearly a bug, but 15.10 is EOL now. In 16.04+, the module is
pre-built, so there is no more DKMS building.
** Changed in: zfs-linux (Ubuntu)
Status: Confirmed => Invalid
--
*** This bug is a duplicate of bug 1574069 ***
https://bugs.launchpad.net/bugs/1574069
** This bug has been marked a duplicate of bug 1574069
zfs dkms kernel module failed to build
--
That should be "-o readonly=on".
--
https://bugs.launchpad.net/bugs/1628553
Title:
Ubuntu 16.04.1 Install zfsutils-linux Panic Error, endless loop
Try loading the ZFS module with zfs_recover=1, and then try a read-only
import.
rmmod zfs
modprobe zfs zfs_recover=1
zpool import -o readonly=on -f -R /mnt POOL
If it imports, at least then you can see how your data looks.
--
The ZFS module is loaded automatically by zfs-import-scan/zfs-import-
cache.
--
https://bugs.launchpad.net/bugs/1624540
Title:
please have lxd recommend zfs
Once this is in Yakkety, I think we should SRU it to Xenial. The sooner
we change this, the less likelihood we have people relying on the old
names (with .py) of these utilities.
@kirkland, it'd be really nice to address this before ZFS lands in the
default Server Seed for 16.04.
--
For the record, this is also posted upstream:
https://github.com/zfsonlinux/zfs/issues/5173
** Bug watch added: Github Issue Tracker for ZFS #5173
https://github.com/zfsonlinux/zfs/issues/5173
--
It is important that zed run in all cases where a pool (with real disks)
exists. This will only get more important over time, as the fault
management code is improved. (Intel is actively working on this.)
It seems reasonable to not start zed in a container, though.
For the second piece, only
Public bug reported:
Currently, lxd is not explicitly creating the intermediate "containers"
and "images" datasets. It is allowing them to be created implicitly by
calling "zfs create -p".
I would like to see lxd always (regardless of whether lxd is creating
the pool) create these datasets
I see that you're only setting compression=on as part of zpool create,
so this only applies when lxd creates the pool. I think that's a
reasonable decision (figuring that if the admin created the pool,
they've already made a policy decision on compression).
--
I don't see how turning on a new feature is a bug.
** Changed in: e2fsprogs (Ubuntu)
Status: New => Invalid
--
https://bugs.launchpad.net/bugs/1601997
Title:
Ubuntu 16.10
This is fixed in 16.10. Is there a plan to backport this? I'm guessing
not, because of the risk of regressions. If there are no plans to SRU,
then this bug should probably be closed.
--
As we discussed, this should just be a matter of changing the package
dependency. The actual scripts support Python 3.
This assumes that the python3 package is using the alternatives system
such that /usr/bin/python will still exist on a python3-only system.
--
Public bug reported:
`lxd init` does not currently set any options for ZFS. It really should
set compression=lz4.
LZ4 compression is so fast that it is almost always a win. The general
upstream ZFS guidance is that unless you are sure most/all of your data
is uncompressible, you should always
Cool. Are you disabling atime too, or do you not want to?
--
https://bugs.launchpad.net/bugs/1629118
Title:
lxd init should set compression=lz4 on ZFS
I'm not sure how the workload is a factor. Is there ever a case where
lxd needs the atime value for these files?
--
https://bugs.launchpad.net/bugs/1629118
Title:
lxd init should set
Right, because these are *containers*, so it's a filesystem tree, not a
single device. I'm so used to full VMs.
So this all sounds great and fixed. Sorry for the noise!
--
What in particular is problematic about the solution I have proposed?
The code needs testing, and it should be converted to a loop, and any
other datasets need to be added to the list. But what is wrong in
concept?
A sufficiently educated admin could create everything perfectly for lxd,
but why
> The fact that LXD doesn't put /var/lib/lxd itself in the ZFS pool is
> intentional.
I wasn't suggesting that in the general case. Creating a dataset at
/var/lib/lxd has more to do with the root-on-ZFS setup, and is
independent of where to place the lxd storage pool(s). I can see how
that's
@tomposmiko, the official way is to 1) file a bug report, 2) get that
fixed in the current development branch, and then 3) follow the SRU
(stable release update) process to get it backported into the LTS
release.
The hole birth thing is a mess, and I've got bug #1600060 open for that
in Ubuntu.
And also:
https://github.com/zfsonlinux/zfs/issues/4996
fix pending here:
https://github.com/zfsonlinux/zfs/pull/5061
Debian is shipping a patch to add an ignore_hole_birth tunable (and defaulting
to on). We should get that in Ubuntu, except that we also want to rename it as
per here:
Once the resilver completed, I was able to grub-install successfully.
Looking at the grub-install debug output, it seems that GRUB looks
primarily at the first device. I have a hunch that my patch would fix
the case where the first device is not the one being resilvered. But we
still need to fix
** Attachment added: "The debug output from grub-install after `zpool offline
grape1 sdg1`."
https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1639209/+attachment/4772858/+files/grub-install2.txt
--
** Attachment added: "The debug output from grub-install."
https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1639209/+attachment/4772857/+files/grub-install.txt
--
The pools are imported by either zfs-import-scan.service or zfs-import-
cache.service. (Which service runs depends on whether
/etc/zfs/zpool.cache exists.) They both call `zpool import -a` plus some
other arguments. In other words, `zpool import -a` is being run
unconditionally, whether pools
In case it's related, can you confirm the kernel version you are using
now, and the kernel version from before the upgrade?
--
https://bugs.launchpad.net/bugs/1635115
Title:
grub fails
If it's orphaned in Debian, the obvious answer is that I should adopt it
there. I'll do that.
--
https://bugs.launchpad.net/bugs/1614062
Title:
imapproxy out of date