As far as I know, it's not a necessary condition. It's the first time
I've seen it in a zfs_member udev rule.
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1530953
Title:
Support GRUB's native root=ZFS=
Why do you have ENV{ID_FS_USAGE}=="raid"?
--
https://bugs.launchpad.net/bugs/1530953
Title:
Support GRUB's native root=ZFS=
@colin-king: It looks like we were commenting at the same time.
Including *both* rules is not harmful in any *technical* way. The only
concern I have is that then you're "supporting" other partition types,
which may "obligate" you to do that "forever" (quotes because none of
these things are
The changelog says this:
Make zed work out-of-the box (LP: #1542276)
That's the wrong bug number.
** Changed in: zfs-linux (Ubuntu)
Status: In Progress => Fix Released
--
This is related:
https://github.com/zfsonlinux/zfs/pull/4343
If that was merged, then GRUB could be patched to use -L and/or -p.
--
https://bugs.launchpad.net/bugs/1527727
** Patch added: "zfs-1527727.debdiff"
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1527727/+attachment/4581527/+files/zfs-1527727.debdiff
--
The upstream change was merged. I propose the following:
1) Update zfs-linux in Xenial with the patch:
https://github.com/zfsonlinux/zfs/commit/d2f3e292dccab23e47ade3c67677a10f353b9e85
2) Patch grub2 in Xenial to setenv("ZPOOL_VDEV_NAME_PATH", "YES")
3) Remove the udev rules from zfs-initramfs
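Step 2 can be previewed from an interactive shell (a hypothetical illustration; the actual change would be a setenv() call inside GRUB's code):

```shell
# Hypothetical illustration of step 2's effect; the real fix is a
# setenv() call inside grub-probe. Exporting the variable before
# running zpool by hand produces the same behavior.
export ZPOOL_VDEV_NAME_PATH=YES
# With this set, zpool(8) prints full /dev/disk/by-id/... paths
# instead of shortened vdev names, which grub-probe can resolve.
echo "ZPOOL_VDEV_NAME_PATH=$ZPOOL_VDEV_NAME_PATH"
```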
Public bug reported:
ZFS-on-Linux has its own I/O scheduler, so it sets the "noop" elevator
on whole disks used in a pool.
https://github.com/zfsonlinux/zfs/issues/90
It does not set the scheduler for a disk if a partition is used in a
pool out of respect for the possibility that there are
** Patch added: "zfs-scheduler.debdiff"
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1550301/+attachment/4581751/+files/zfs-scheduler.debdiff
** Description changed:
ZFS-on-Linux has its own I/O scheduler, so it sets the "noop" elevator
- on whole disks used in a pool. See:
+
The disk load added by scrubbing is almost entirely reading. There is a
tiny amount of metadata writes to track the scrub's progress (so it
can resume after a reboot). The only time a scrub would write
significant data is if it found a bad block (checksum error) and needed
to rewrite a good
I can still reproduce this with the Wily Live CD, but I'm not going to
be able to test with those packages, as those are pre-built kernels, not
zfs-dkms.
** Changed in: zfs-linux (Ubuntu)
Status: In Progress => Incomplete
** Changed in: zfs-linux (Ubuntu)
Status: Incomplete =>
A false negative, where an objectively corrupt block is treated as
valid, is not ideal, but not harmful. The scrub would fail to correct
the error, but it wouldn't make it worse. It would be detected as bad on
the next read (scrub or otherwise).
There's also a case of a bad block being
Can you also ship an /etc/udev/rules.d/90-zfs-vdev.rules with these
contents?
KERNEL=="sd*[0-9]", IMPORT{parent}=="ID_*", ENV{ID_FS_TYPE}=="zfs_member", SYMLINK+="$env{ID_BUS}-$env{ID_SERIAL}-part%n"
This makes GRUB work if the user uses /dev/disk/by-id/x names, which
upstream ZFS-on-Linux
Actually, I think the numbering for that udev file should NOT be in the
90s. I think the 90s are for the system administrator. So we probably
want 60-zfs-vdev.rules if it's shipped by the package.
--
I'm attaching a debdiff which outlines the changes necessary to ship a
rules.d file. It uses proper numbering in the name, is installed in the
right place, and has the right rule. I've tested the package with this
change applied. It also has a comment explaining what it does and why
it's
I have never seen this work, but I can't say what the earliest kernel we
tried was.
--
https://bugs.launchpad.net/bugs/1542826
Title:
NFS Client Ignores TCP Resets
** Description changed:
Steps to reproduce:
1) Mount NFS share from HA cluster with TCP.
2) Failover the HA cluster. (The NFS server's IP address moves from one
- machine to the other.)
- 3) Access the mounted NFS share.
+ machine to the other.)
+ 3) Access the mounted NFS share from the
** Attachment added: "dovecot-test.upstream-kernel.pcap"
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1542826/+attachment/4571304/+files/dovecot-test.upstream-kernel.pcap
--
I posted this upstream:
http://www.spinics.net/lists/linux-nfs/msg56520.html
--
https://bugs.launchpad.net/bugs/1542826
Title:
NFS Client Ignores TCP Resets
Related to this, zfs-zed should email root by default, and I'm thinking
it does not do so currently.
--
https://bugs.launchpad.net/bugs/1547549
Title:
zfs utils do not recommend or
I had been working on addressing this. I finally made time to finish my
changes, which have been rebased on top of your (colin-king's) PPA. With
these changes, zed works out-of-the-box, has all non-debugging scripts
enabled, and emails root by default (just like mdadm has for years).
The upstream
Public bug reported:
mdadm automatically checks MD arrays. ZFS should automatically scrub
pools.
I've attached a debdiff which accomplishes this.
The meat of it is the scrub script I've been using (and recommending in
my HOWTO) for years, which scrubs all *healthy* pools. If a pool is not
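A minimal sketch of that policy (the pool names and the inline zpool stub are made up so the sketch is self-contained; the real script operates on the system's actual pools):

```shell
# Sketch of the scrub policy described above: scrub only *healthy*
# pools. The zpool stub below (with made-up pools "tank" and
# "backup") just makes the sketch runnable anywhere; on a real
# system the real zpool(8) is used instead.
zpool() {
    case "$*" in
        "list -H -o name")          printf 'tank\nbackup\n' ;;
        "list -H -o health tank")   echo "ONLINE" ;;
        "list -H -o health backup") echo "DEGRADED" ;;
        scrub*)                     echo "scrubbing $2" ;;
    esac
}

# Scrub every pool whose health is ONLINE; skip degraded or
# faulted pools so a scrub does not add load to an ailing pool.
for pool in $(zpool list -H -o name); do
    health=$(zpool list -H -o health "$pool")
    if [ "$health" = "ONLINE" ]; then
        zpool scrub "$pool"
    fi
done
```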
Is this going to make the Xenial release?
--
https://bugs.launchpad.net/bugs/1527727
Title:
grub-probe for zfs assumes all devices prefix with /dev, ignoring
/dev/disk/...
Is this going to make the Xenial release? What needs to be done still?
--
https://bugs.launchpad.net/bugs/1548009
Title:
ZFS pools should be automatically scrubbed
This bug was introduced in dbconfig-common 2.0.3, which is in Xenial.
The fix is already in Debian as 2.0.4 (along with a couple other fixes).
What is necessary to get dbconfig-common 2.0.4 into Xenial?
--
Public bug reported:
Package: dbconfig-common
Version: 2.0.3
internal/mysql has this at line 404:
if [ "$dbc_dbserver" != "" ] && "$dbc_dbserver" != localhost ; then
It needs brackets like this instead:
if [ "$dbc_dbserver" != "" ] && [ "$dbc_dbserver" != localhost ] ; then
Otherwise, I get
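The failure is easy to demonstrate in a standalone shell snippet (server name made up):

```shell
dbc_dbserver="db.example.com"

# Broken form: without the second pair of brackets, the shell tries
# to *execute* "db.example.com" as a command, which fails with
# "command not found", so the branch is never taken.
if [ "$dbc_dbserver" != "" ] && "$dbc_dbserver" != localhost ; then
    echo "broken form: reached"
fi 2>/dev/null

# Fixed form: both comparisons are proper [ ] tests.
if [ "$dbc_dbserver" != "" ] && [ "$dbc_dbserver" != localhost ] ; then
    echo "fixed form: remote server configured"
fi
```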
Public bug reported:
https://github.com/zfsonlinux/zfs/commit/c352ec27d5c5ecea8f6af066258dfd106085eaac
"In certain circumstances, "zfs send -i" (incremental send) can produce
a stream which will result in incorrect sparse file contents on the
target.
The problem manifests as regions of the
Thanks for letting me know about the requestsync tool. I will be sure to
use that next time.
--
https://bugs.launchpad.net/bugs/1557153
Title:
dbconfig-common Syntax Error
** Description changed:
mdadm automatically checks MD arrays. ZFS should automatically scrub
- pools.
+ pools too, to detect and (when possible) correct on-disk corruption.
- I've attached a debdiff which accomplishes this.
+ I've attached a debdiff which accomplishes this. It builds and
** Description changed:
mdadm automatically checks MD arrays. ZFS should automatically scrub
- pools too, to detect and (when possible) correct on-disk corruption.
+ pools too. Scrubbing a pool allows ZFS to detect and (when the pool has
+ redundancy) correct on-disk corruption.
I've
This is the relevant commit which *fixes* the bug:
https://github.com/torvalds/linux/commit/a8c4a2522a0808c5c2143612909717d1115c40cf
The fact that this occurs with UDP+IPv4 traffic lines up with my NFS
usage, as we use UDP for NFS.
--
** Attachment added: "Refreshed & re-tested debdiff against 0.6.5.6."
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1550301/+attachment/4634753/+files/zfs-scheduler.debdiff.2
** Description changed:
ZFS-on-Linux has its own I/O scheduler, so it sets the "noop" elevator
on
We are getting very close to the final freeze. What's the plan to deal
with spl? Is it going in, or will the /etc/hostid code be moved to ZFS,
or something else?
--
*** This bug is a duplicate of bug 1527727 ***
https://bugs.launchpad.net/bugs/1527727
** Changed in: zfs-linux (Ubuntu)
Status: Confirmed => Fix Released
** This bug has been marked a duplicate of bug 1527727
grub-probe for zfs assumes all devices prefix with /dev, ignoring
ZoL 0.6.5.6 has landed, so this should be good to go. Note that it only
affects zfs-initramfs. It doesn't affect any other use of ZFS.
--
https://bugs.launchpad.net/bugs/1550301
I tested that build. It works. Thanks!
** Changed in: linux (Ubuntu Xenial)
Status: In Progress => Fix Committed
--
https://bugs.launchpad.net/bugs/1558025
Public bug reported:
We're setting up a new Dovecot server on Xenial. When we start the
migration of emails, either the IMAP traffic or the NFS traffic
immediately reproduces the crash below.
I bisected using the mainline kernels. Everything up to and including
v4.5-rc7-wily fails, but v4.5-wily
This requires the -L flag to zpool status. It sounds like ZoL 0.6.5.6
will land in Xenial, so then this is good to go at that time too.
--
https://bugs.launchpad.net/bugs/1550301
The right fix is here:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1527727
If you bring 0.6.5.6 and patch Ubuntu's GRUB to set that environment
variable, then the udev rules can go away entirely.
Otherwise, as to the debdiff in #38, it is fine, but the "raid"
condition is
** Changed in: linux (Ubuntu)
Status: New => Confirmed
--
https://bugs.launchpad.net/bugs/1566074
Title:
splat.ko is packaged
The spl package creates /etc/hostid in order to guarantee a stable
hostid. Retaining that may be important, because if the hostid changes,
pools will not import without being forced (which is not something that
should be encouraged). Moving the contents of spl.postinst into
zfsutils-linux.postinst
This is effectively a request to ship dstat 0.7.3. Universe currently
has dstat 0.7.2.
** Also affects: dstat (Ubuntu)
Importance: Undecided
Status: New
** Changed in: dstat (Ubuntu)
Status: New => Confirmed
** Changed in: zfs-linux (Ubuntu)
Status: Confirmed => Invalid
There is a reason it's there. ZFS on Solaris has NFS integration. I
haven't used it on Linux, but it's supposed to exist here as well.
(Samba integration on Linux might still be an unmerged patch.)
--
I'm not an Ubuntu developer, so I may be overstepping here. I'm being
bold and marking this invalid. This is a "Suggests", which according to
policy says, "This is used to declare that one package may be more
useful with one or more others. Using this field tells the packaging
system and the user
I proposed the fix for upstream as a BZR branch:
https://code.launchpad.net/~rlaager/ecryptfs/fix-lp-1574174/+merge/292844
--
https://bugs.launchpad.net/bugs/1574174
** Also affects: ecryptfs
Importance: Undecided
Status: New
--
https://bugs.launchpad.net/bugs/1574174
Title:
ecryptfs-setup-private fails with ZFS
** Changed in: gnucash (Ubuntu)
Status: New => Invalid
--
https://bugs.launchpad.net/bugs/286206
Title:
Scheduled Transaction Calendar Is One Occurrence Short
AFAIK, Sun Java is no longer in Ubuntu. Even if it is, it's probably a
newer version than 6. I'm sure this is broken, but nobody is going to
fix it. I just don't care any more.
** Changed in: sun-java6 (Ubuntu)
Status: Confirmed => Invalid
--
** Package changed: ubuntu => debian-installer (Ubuntu)
** Summary changed:
- Installer manual partitioner doesn't properly clear labels
+ debian-installer partitioner doesn't properly clear labels
--
This has been fixed for some time. It works for me using GNOME Classic
in Ubuntu Vivid.
** Changed in: gnome-panel (Ubuntu)
Status: Triaged => Fix Released
--
Marking Invalid since this is ancient, the software has been replaced,
and I don't care any more.
** Changed in: nautilus-cd-burner (Ubuntu)
Status: Triaged => Invalid
--
Upstream says this was fixed in 2.7.0. Every supported version of Ubuntu
has at least that version. I haven't used keychain in a long time.
** No longer affects: baltix
** Changed in: keychain (Ubuntu)
Status: Incomplete => Fix Released
--
** Changed in: libjpeg6b (Ubuntu)
Status: Confirmed => Fix Released
--
https://bugs.launchpad.net/bugs/381140
Title:
jpegtran is NOT lossless with metadata (Exif data, etc.) by
This is ancient. Upstream released a fix a long time ago. I no longer
know how to reproduce this.
** Changed in: policykit (Ubuntu)
Status: Triaged => Fix Released
--
I don't use GnuCash any more. There's been no traction on this in
forever, and the upstream suggestion of "put the information in the
description" column is probably fine anyway.
** Changed in: gnucash (Ubuntu)
Status: Confirmed => Invalid
--
** Changed in: help2man (Ubuntu)
Status: Triaged => Invalid
** Changed in: help2man
Status: New => Invalid
--
https://bugs.launchpad.net/bugs/44880
Title:
test man
This is undoubtedly still broken, but I don't use this anymore. But this
is probably not worth keeping open, because PPTP is not secure by modern
standards.
** Changed in: network-manager-pptp (Ubuntu)
Status: Triaged => Invalid
--
The upstream fix has been committed for 0.6.5.7:
https://github.com/zfsonlinux/zfs/commit/325414e483a7858c1d10fb30cefe5749207098f4
At this point, Ubuntu could accept my debdiff in comment #11, or wait
for 0.6.5.7.
--
I re-tested. This is still an issue in 16.04.
** Description changed:
- During a Jaunty install, I ran into a bug with the manual partitioner.
Here are the steps to reproduce:
- 1. Create a partition. Leave the type as ext3. Set the label, for example:
/srv
+ 1. Create a partition. Leave
** Changed in: linux (Ubuntu)
Status: Incomplete => Invalid
--
https://bugs.launchpad.net/bugs/1404409
Title:
[regression] Intel 10Gb NIC Crashes
Since you've already filed one bug report upstream, would you be
interested in filing this one upstream? I can certainly copy-and-paste
it upstream, but it seems like it'd be better to have it come from you.
I don't know anything about LXD. (I'm just trying to help out with ZoL
bug reports.)
--
For the record, I haven't used ipkungfu in years (since ufw was
introduced). I've removed my direct subscription to this bug, as I no
longer care.
--
https://bugs.launchpad.net/bugs/53328
All supported versions of Ubuntu have > 1.13-4.
** Changed in: libfinance-quote-perl (Ubuntu)
Status: Confirmed => Fix Released
--
https://bugs.launchpad.net/bugs/272777
I don't care about upstart (nor prelink) any more.
** Changed in: prelink (Ubuntu)
Status: New => Invalid
--
https://bugs.launchpad.net/bugs/977467
Title:
prelink breaks upstart
I might be overstepping here, but I'm marking this as Fix Released. I
can confirm, as with comment #16, that this is no longer an issue. I'm
on Vivid.
** Changed in: sudo (Ubuntu)
Status: Confirmed => Fix Released
--
You make an excellent point about single-point-of-failure. It wouldn't
necessarily have to be a new daemon. ZFS already has a daemon, so this
could just be additional functionality.
I've pointed Brian Behlendorf (the upstream ZFS-on-Linux lead developer)
to this bug report. It'd be nice to get
I think it is safe to "assume" python for the purposes of "is this
pulling in more code?". It's not safe to "assume" python for the
purposes of the actual dependencies in debian/control. That is, python
will have to be listed as a dependency, but I don't think adding this
dependency to
What did you have rootdelay set to? I just tested again now with
"rootdelay=1" as well as the useless "rootdelay=0" and invalid
"rootdelay=".
Try adding "set -x" to the top of /usr/share/initramfs-tools/scripts/zfs,
rebuilding the init script, and rebooting with
rootdelay set. Grab a picture of
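For reference, this is the kind of output "set -x" produces (a trivial standalone demo, not the actual initramfs script):

```shell
# "set -x" makes the shell print each command (prefixed with "+")
# before running it; that trace is what you'd capture in a photo of
# the failing boot console. The rootdelay value here is arbitrary.
sh -c 'set -x; rootdelay=1; echo "waiting ${rootdelay}s"' 2>&1
```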
In the last comment, I meant "rebuilding the initrd", of course, not
"rebuilding the init script".
--
https://bugs.launchpad.net/bugs/1577057
Title:
zfs initrd script fails when
The relevant issue upstream seems to be:
https://github.com/zfsonlinux/zfs/issues/4178
There's no answer there yet, but I wanted to get that link posted here.
--
Public bug reported:
The obvious approach for using ZFS and ecryptfs together involves creating a
dataset like this:
zfs create -o mountpoint=/home/.ecryptfs/USER rpool/home/USER
As a result, /proc/mounts looks like this:
rpool/home/USER /home/.ecryptfs/USER zfs rw,xattr 0 0
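Splitting that /proc/mounts line into fields shows why tools that expect a /dev path in the first field get confused (standalone sketch using the sample line above):

```shell
# The first /proc/mounts field for a ZFS mount is a dataset name,
# not an absolute /dev path. Sample line copied from above.
line='rpool/home/USER /home/.ecryptfs/USER zfs rw,xattr 0 0'
set -- $line
echo "device=$1 mountpoint=$2 fstype=$3"
case "$1" in
    /*) echo "device field looks like a path" ;;
    *)  echo "device field is a ZFS dataset name" ;;
esac
```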
This is fixed in zfs-linux in yakkety by way of having the 0.6.5.7
release.
** Changed in: zfs-linux (Ubuntu Yakkety)
Status: Confirmed => Fix Released
--
I assume you are talking about the -1 not matching the -X of the debian
package? If so, those aren't necessarily intended to match. The -1 comes
from the META file in the upstream tarball. I suppose we could overwrite
that with, for example, -0ubuntu9 in debian/rules. It doesn't seem very
I confirmed the package in yakkety works (on Xenial). That is, I can
successfully run update-grub without having the special-case
"/dev/disk"-style symlink in /dev.
Is the next step to upload this to xenial-proposed?
--
I apparently can't set this to Won't Fix.
I don't think there are any changes necessary. Backing pools with files
is mainly for testing. For real use, devices are best.
** Changed in: zfs-linux (Ubuntu)
Status: Confirmed => Invalid
** Changed in: zfs-linux (Ubuntu)
Status: Invalid
This is a consequence of not using a cache file by default.
Try this instead:
zpool create -o cachefile=/etc/zfs/zpool.cache ...
** Changed in: zfs-linux (Ubuntu)
Status: New => Confirmed
--
What does this say:
systemctl status -l zfs-import-cache.service
** Summary changed:
- Pools get lots on reboot
+ File-based pools do not import on boot
--
Does this import the pool:
sudo zpool import -c /etc/zfs/zpool.cache -aN
If so, is "/web/lxd" and/or "/web" a separate filesystem from "/"?
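One way to check whether a path lives on a separate filesystem is to compare st_dev numbers; a sketch using /proc as the example mount (substitute /web or /web/lxd on the affected system):

```shell
# stat -c '%d' prints the device number of the filesystem holding a
# path; differing numbers mean separate filesystems. /proc is used
# here only because it is a separate filesystem on any Linux box.
root_dev=$(stat -c '%d' /)
proc_dev=$(stat -c '%d' /proc)
if [ "$root_dev" != "$proc_dev" ]; then
    echo "/proc is a separate filesystem from /"
else
    echo "/proc is on the root filesystem"
fi
```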
--
https://bugs.launchpad.net/bugs/1581904
I tested 0.6.5.6-0ubuntu9. Everything looks good to me.
** Tags added: verification-done
--
https://bugs.launchpad.net/bugs/1579082
Title:
minor changes to satisfy MIR request
I'd really love to see the R_ROOT thing removed as well. It serves no
useful purpose, complicates people actually using the sudoers file, and
pollutes the global namespace. Plus, I'm not sure what happens if
someone defines R_ROOT twice (which isn't inconceivable).
Basically, I'm saying I want
Try this instead of "After=local-fs.target":
mkdir -p /etc/systemd/system/zfs-import-cache.service.d
cat >> /etc/systemd/system/zfs-import-cache.service.d/web.conf << EOF
[Unit]
Requires=web.mount
After=web.mount
EOF
--
Status: Incomplete => Confirmed
** Changed in: zfs-linux (Ubuntu)
Assignee: (unassigned) => Richard Laager (rlaager)
--
https://bugs.launchpad.net/bugs/1577057
The module is explicitly modprobe'd by the same service which imports
the pool(s).
If you have an /etc/zfs/zpool.cache file, that service is:
zfs-import-cache.service
If you do not have a zpool.cache, then this is used:
zfs-import-scan.service
What does this command have to say?
systemctl status
That Replaces line dates back to the ZoL PPA. I could be missing
something, but I don't see why it was even necessary originally. Here's
the commit where it was added:
https://github.com/zfsonlinux/pkg-zfs/commit/9da00045fb5be4c61bcd29daf0a59daa32c9a43d
--
<< 0.6.3 seems wrong. As I noted in the MIR bug report, the current
versions of the packages still conflict because they both install
zed.8.gz. That needs to be fixed at the same time and the version needs
to be the version with that fix. I have this in the rlaager/zfs PPA.
--
Does this need a versioned Breaks as well? If not, I'm curious why (for
my own learning for the future).
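For what it's worth, the usual pattern when files move between packages is to pair Replaces with a matching versioned Breaks; a hypothetical debian/control fragment (package name and version made up for illustration):

```
Package: zfsutils-linux
Replaces: spl (<< 0.6.5.6-0ubuntu1)
Breaks: spl (<< 0.6.5.6-0ubuntu1)
```

Without the Breaks, an old spl could remain installed alongside the new package even though its files were taken over.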
--
https://bugs.launchpad.net/bugs/1579082
Title:
minor changes to satisfy MIR
Did you have a zpool.cache or modified ZFS script from testing on that
other bug?
The production script unconditionally exports the pool after a read-only
import. There shouldn't be any way to end up with a pool still imported
unless that export fails for some reason, which seems unlikely.
**
It works for me even with rootdelay=200. If you want to hit me up on
#zfsonlinux (on FreeNode), I'll try to help debug this.
--
https://bugs.launchpad.net/bugs/1577057
Public bug reported:
When trying to use PHPMyAdmin without the mbstring extension installed, I get a
fatal error on the main page:
"The mbstring extension is missing. Please check your PHP configuration."
This is coming from line 95 in /usr/share/phpmyadmin/libraries/common.inc.php:
/**
*
The attached debdiff solves this problem.
** Patch added: "phpmyadmin-mbstring-depends.debdiff"
https://bugs.launchpad.net/ubuntu/+source/phpmyadmin/+bug/1577482/+attachment/4653769/+files/phpmyadmin-mbstring-depends.debdiff
--
** Changed in: zfs-linux (Ubuntu)
Status: New => Incomplete
--
https://bugs.launchpad.net/bugs/1577971
Title:
zfs module is not loading after a restart
** Changed in: zfs-linux (Ubuntu)
Status: Confirmed => In Progress
--
https://bugs.launchpad.net/bugs/1577057
Title:
zfs initrd script fails when rootdelay boot option is set
** Also affects: linux (Ubuntu)
Importance: Undecided
Status: New
** Changed in: linux (Ubuntu)
Status: New => Confirmed
** Changed in: zfs-linux (Ubuntu)
Status: New => Confirmed
--
This is fixed upstream:
https://github.com/zfsonlinux/zfs/commit/874bd959f4f15b3d4b007160ee7ad3f4111dd341
--
https://bugs.launchpad.net/bugs/1567558
Title:
ZFS is confused by user
Is this good to go, at least for yakkety? This is necessary for
zfs-linux to be promoted to main.
--
https://bugs.launchpad.net/bugs/1569294
Title:
[MIR] spl-linux