[Kernel-packages] [Bug 2061079] Re: GTK-ngl (new default backend) rendering issues with the nvidia 470 driver

2024-04-16 Thread Didier Roche-Tolomelli
Confirming that it’s fixed on the same machine with 550.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to nvidia-graphics-drivers-470 in Ubuntu.
https://bugs.launchpad.net/bugs/2061079

Title:
  GTK-ngl (new default backend) rendering issues with the nvidia 470
  driver

Status in GTK+:
  New
Status in gtk4 package in Ubuntu:
  In Progress
Status in nvidia-graphics-drivers-470 package in Ubuntu:
  Confirmed
Status in nvidia-graphics-drivers-535 package in Ubuntu:
  Invalid
Status in nvidia-graphics-drivers-545 package in Ubuntu:
  Invalid

Bug description:
  With the nvidia driver, all GTK4 applications have label rendering issues.

  They are not refreshed until the cursor passes over them, leaving blank
  windows. The corners are white and not themed. Switching from one app
  screen to another reproduces the issue.

  gnome-control-center or files, for instance, are blank by default.

  As suggested by seb128, exporting GSK_RENDERER=gl fixes the issue.
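
  For reference, the workaround can be tried per launch before making it
  permanent (a minimal sketch; nautilus is just an example GTK4 application,
  not taken from this report):

  # Force the older GL renderer for a single launch
  GSK_RENDERER=gl nautilus

  # Or set it session-wide (hypothetical location, adjust to your setup)
  echo 'export GSK_RENDERER=gl' >> ~/.profile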

  Related upstream bugs and discussions are:
  - https://blog.gtk.org/2024/01/28/new-renderers-for-gtk/
  - https://gitlab.gnome.org/GNOME/gtk/-/issues/6574
  - https://gitlab.gnome.org/GNOME/gtk/-/issues/6411
  - https://gitlab.gnome.org/GNOME/gtk/-/issues/6542

  
  --

  
  $ glxinfo
  name of display: :1
  display: :1  screen: 0
  direct rendering: Yes
  server glx vendor string: NVIDIA Corporation
  server glx version string: 1.4
  server glx extensions:
  GLX_ARB_context_flush_control, GLX_ARB_create_context, 
  GLX_ARB_create_context_no_error, GLX_ARB_create_context_profile, 
  GLX_ARB_create_context_robustness, GLX_ARB_fbconfig_float, 
  GLX_ARB_multisample, GLX_EXT_buffer_age, 
  GLX_EXT_create_context_es2_profile, GLX_EXT_create_context_es_profile, 
  GLX_EXT_framebuffer_sRGB, GLX_EXT_import_context, GLX_EXT_libglvnd, 
  GLX_EXT_stereo_tree, GLX_EXT_swap_control, GLX_EXT_swap_control_tear, 
  GLX_EXT_texture_from_pixmap, GLX_EXT_visual_info, GLX_EXT_visual_rating, 
  GLX_NV_copy_image, GLX_NV_delay_before_swap, GLX_NV_float_buffer, 
  GLX_NV_multigpu_context, GLX_NV_robustness_video_memory_purge, 
  GLX_SGIX_fbconfig, GLX_SGIX_pbuffer, GLX_SGI_swap_control, 
  GLX_SGI_video_sync
  client glx vendor string: NVIDIA Corporation
  client glx version string: 1.4
  client glx extensions:
  GLX_ARB_context_flush_control, GLX_ARB_create_context, 
  GLX_ARB_create_context_no_error, GLX_ARB_create_context_profile, 
  GLX_ARB_create_context_robustness, GLX_ARB_fbconfig_float, 
  GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_buffer_age, 
  GLX_EXT_create_context_es2_profile, GLX_EXT_create_context_es_profile, 
  GLX_EXT_fbconfig_packed_float, GLX_EXT_framebuffer_sRGB, 
  GLX_EXT_import_context, GLX_EXT_stereo_tree, GLX_EXT_swap_control, 
  GLX_EXT_swap_control_tear, GLX_EXT_texture_from_pixmap, 
  GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_NV_copy_buffer, 
  GLX_NV_copy_image, GLX_NV_delay_before_swap, GLX_NV_float_buffer, 
  GLX_NV_multigpu_context, GLX_NV_multisample_coverage, 
  GLX_NV_robustness_video_memory_purge, GLX_NV_swap_group, 
  GLX_SGIX_fbconfig, GLX_SGIX_pbuffer, GLX_SGI_swap_control, 
  GLX_SGI_video_sync
  GLX version: 1.4
  GLX extensions:
  GLX_ARB_context_flush_control, GLX_ARB_create_context, 
  GLX_ARB_create_context_no_error, GLX_ARB_create_context_profile, 
  GLX_ARB_create_context_robustness, GLX_ARB_fbconfig_float, 
  GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_buffer_age, 
  GLX_EXT_create_context_es2_profile, GLX_EXT_create_context_es_profile, 
  GLX_EXT_framebuffer_sRGB, GLX_EXT_import_context, GLX_EXT_stereo_tree, 
  GLX_EXT_swap_control, GLX_EXT_swap_control_tear, 
  GLX_EXT_texture_from_pixmap, GLX_EXT_visual_info, GLX_EXT_visual_rating, 
  GLX_NV_copy_image, GLX_NV_delay_before_swap, GLX_NV_float_buffer, 
  GLX_NV_multigpu_context, GLX_NV_robustness_video_memory_purge, 
  GLX_SGIX_fbconfig, GLX_SGIX_pbuffer, GLX_SGI_swap_control, 
  GLX_SGI_video_sync
  Memory info (GL_NVX_gpu_memory_info):
  Dedicated video memory: 4096 MB
  Total available memory: 4096 MB
  Currently available dedicated video memory: 3041 MB
  OpenGL vendor string: NVIDIA Corporation
  OpenGL renderer string: NVIDIA GeForce GTX 1050/PCIe/SSE2
  OpenGL core profile version string: 4.6.0 NVIDIA 470.239.06
  OpenGL core profile shading language version string: 4.60 NVIDIA
  OpenGL core profile context flags: (none)
  OpenGL core profile profile mask: core profile
  OpenGL core profile extensions:
  GL_AMD_multi_draw_indirect, GL_AMD_seamless_cubemap_per_texture, 
  GL_AMD_vertex_shader_layer, GL_AMD_vertex_shader_viewport_index, 
  GL_ARB_ES2_compatibility, GL_ARB_ES3_1_compatibility, 
  GL_ARB_ES3_2_compatibility, GL_ARB_ES3_compatibility, 
  GL_ARB_arrays_of_arrays, 

[Kernel-packages] [Bug 1867007] Re: zfs-initramfs fails with multiple rpool on separate disks

2022-06-01 Thread Didier Roche
** Changed in: grub2 (Ubuntu)
 Assignee: Didier Roche (didrocks) => (unassigned)

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1867007

Title:
  zfs-initramfs fails with multiple rpool on separate disks

Status in grub2 package in Ubuntu:
  Triaged
Status in systemd package in Ubuntu:
  Invalid
Status in zfs-linux package in Ubuntu:
  Fix Released

Bug description:
  == Test Case ==
  1. On a multi disks setup, install Ubuntu with ZFS on disk 1
  2. Reboot and make sure everything works as expected
  3. Do a second installation and install Ubuntu with ZFS on disk 2
  4. Reboot

  * Expected Result *
  GRUB should display all the available installations and let the user
  select which one to boot

  * Actual result *
  - Only one installation is listed
  - The initramfs crashes because there are several pools with the same name
  but different IDs, and it imports pools by name
  - Same problem in the systemd generator, which will try to import all the
  rpools.

  == Original Description ==

  I had an old Ubuntu installation that used a ZFS root, using the
  layout described in the ZFS on Linux docs. Consequently, the pool name
  for my Ubuntu installation was "rpool". I'm currently encountering an
  issue with that pool that only allows me to mount it read-only. So,
  I'd like to replicate the datasets from there to a new device.

  On the new device, I've set up a ZFS system using the Ubuntu 20.04
  daily installer (March 9, 2020). This setup creates a new pool named
  "rpool". So, with both devices inserted, I have two distinct pools
  each named "rpool", one of which will kernel panic if I try to mount
  it read-write.

  ZFS is fine with having multiple pools with the same name. In these
  cases, you use `zpool import` with the pool's GUID and give it a
  distinct pool name on import. However, the grub config for booting
  from ZFS doesn't appear to handle multiple pools with the same "rpool"
  name very well. Rather than using the pool's GUID, it uses the name,
  and as such, it's unable to boot properly when another pool with the
  name "rpool" is attached to the system.

  I think it'd be better if the config were written in such a way that
  `update-grub` generated boot config bound to whatever pool it found at
  the time of its invocation, and not start searching through all pools
  dynamically upon boot. Just to be clear, I have an Ubuntu 20.04 system
  with a ZFS root that boots just fine. But, the moment I attach the old
  pool, also named "rpool", I'm no longer able to boot up my system even
  though I haven't removed the good pool and I haven't re-run `update-
  grub`. Instead of booting, I'm thrown into the grub command line.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1867007/+subscriptions




[Kernel-packages] [Bug 1875767] Re: When operating install/removal with apt, zed floods log and apparently crashes snapshoting

2022-06-01 Thread Didier Roche
** Changed in: zsys (Ubuntu)
 Assignee: Didier Roche (didrocks) => (unassigned)

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1875767

Title:
  When operating install/removal with apt, zed floods log and apparently
  crashes snapshoting

Status in zfs-linux package in Ubuntu:
  Won't Fix
Status in zsys package in Ubuntu:
  Incomplete

Bug description:
  Hello!

  When I ran an install, it behaved like this:

  ERROR rpc error: code = DeadlineExceeded desc = context deadline exceeded 
  ... etc apt messages ...
  A processar 'triggers' para libc-bin (2.31-0ubuntu9) ...
  ERROR rpc error: code = Unavailable desc = transport is closing 

  The log gets flooded with the following message:

  abr 28 20:41:48 manauara zed[512257]: eid=10429 class=history_event 
pool_guid=0x7E8B0F177C4DD12C
  abr 28 20:41:49 manauara zed[508106]: Missed 1 events
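
  A generic way to watch this flood live (a diagnostic sketch, not from the
  original report):

  journalctl -f -t zed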

  And the machine load stays high for an incredibly long time. The
  workaround is:

  systemctl restart zsysd
  systemctl restart zed

  The system also gets a bit slow and the fans spin up for a while (because
  of the load).

  This is a fresh install of ubuntu 20.04 with ZFS on SATA SSD.

  ProblemType: Bug
  DistroRelease: Ubuntu 20.04
  Package: zfsutils-linux 0.8.3-1ubuntu12
  ProcVersionSignature: Ubuntu 5.4.0-28.32-generic 5.4.30
  Uname: Linux 5.4.0-28-generic x86_64
  NonfreeKernelModules: nvidia_modeset nvidia zfs zunicode zavl icp zcommon 
znvpair
  ApportVersion: 2.20.11-0ubuntu27
  Architecture: amd64
  CasperMD5CheckResult: skip
  Date: Tue Apr 28 20:49:14 2020
  InstallationDate: Installed on 2020-04-27 (1 days ago)
  InstallationMedia: Ubuntu 20.04 LTS "Focal Fossa" - Release amd64 (20200423)
  ProcEnviron:
   LANGUAGE=pt_BR:pt:en
   TERM=xterm-256color
   PATH=(custom, no user)
   LANG=pt_BR.UTF-8
   SHELL=/bin/bash
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1875767/+subscriptions




[Kernel-packages] [Bug 1946808] Re: zsys fail during reboot

2021-10-13 Thread Didier Roche
The issue is in zfs-linux, where the merge from Debian
(http://launchpadlibrarian.net/535966758/zfs-linux_2.0.2-1ubuntu5_2.0.3-8ubuntu1.diff.gz)
once again reverted some of the fixes and rolled back the patch to an
earlier version. The fix had already been reverted erroneously in hirsute
during the Debian merge, and we reintroduced it in
https://launchpad.net/ubuntu/+source/zfs-linux/2.0.2-1ubuntu3.

Colin, do you mind having a look and reintroducing the patch as a 0-day
SRU (the first time we introduced it was in
https://launchpad.net/ubuntu/+source/zfs-linux/0.8.4-1ubuntu14)?

Can you also check that you haven’t reverted other parts of the patch by
mistake, and fix this one?
As this has now happened in two consecutive releases where the Debian merge
doesn’t start from the latest version in Ubuntu but reintroduces an older
version of the patch, can you have a look at the local setup issue you may
have when doing the merges?

** Package changed: zsys (Ubuntu) => zfs-linux (Ubuntu)

** Changed in: zfs-linux (Ubuntu)
 Assignee: (unassigned) => Colin Ian King (colin-king)

** Summary changed:

- zsys fail during reboot
+ zsys fail reverting to a previous snapshot on reboot

** Summary changed:

- zsys fail reverting to a previous snapshot on reboot
+ zfs fails reverting to a previous snapshot on reboot when selected on grub

** Changed in: zfs-linux (Ubuntu)
   Importance: Undecided => High

** Changed in: zfs-linux (Ubuntu)
   Status: New => Incomplete

** Changed in: zfs-linux (Ubuntu)
   Status: Incomplete => Triaged

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1946808

Title:
  zfs fails reverting to a previous snapshot on reboot when selected on
  grub

Status in zfs-linux package in Ubuntu:
  Triaged

Bug description:
  After creating a snapshot with: zsysctl save 211012-linux13-19 -s
  the reboot fails as shown in the first screenshot; the other screenshot
  shows the result of the snapshot.

  ProblemType: Bug
  DistroRelease: Ubuntu 21.10
  Package: zsys 0.5.8
  ProcVersionSignature: Ubuntu 5.13.0-19.19-generic 5.13.14
  Uname: Linux 5.13.0-19-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
  ApportVersion: 2.20.11-0ubuntu70
  Architecture: amd64
  CasperMD5CheckResult: pass
  CurrentDesktop: XFCE
  Date: Tue Oct 12 19:11:43 2021
  InstallationDate: Installed on 2021-10-12 (0 days ago)
  InstallationMedia: Xubuntu 21.10 "Impish Indri" - Release amd64 (20211012)
  Mounts: Error: [Errno 40] Too many levels of symbolic links: '/proc/mounts'
  ProcKernelCmdLine: BOOT_IMAGE=/BOOT/ubuntu_zgtuq6@/vmlinuz-5.13.0-19-generic 
root=ZFS=rpool/ROOT/ubuntu_zgtuq6 ro quiet splash
  RelatedPackageVersions:
   zfs-initramfs  2.0.6-1ubuntu2
   zfsutils-linux 2.0.6-1ubuntu2
  SourcePackage: zsys
  SystemdFailedUnits:
   
  UpgradeStatus: No upgrade log present (probably fresh install)
  ZFSImportedPools:
   NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
   bpool  768M  79.2M   689M        -         -    0%  10%  1.00x  ONLINE  -
   rpool   14G  3.33G  10.7G        -         -    1%  23%  1.00x  ONLINE  -
  ZFSListcache-bpool:
   bpool/boot   off on  on  off on  off on  
off -   none-   -   -   -   -   -   -   
-
   bpool/BOOT   noneoff on  on  off on  off on  
off -   none-   -   -   -   -   -   -   
-
   bpool/BOOT/ubuntu_zgtuq6 /boot   on  on  on  off on  
off on  off -   none-   -   -   -   -   
-   -   -
  ZSYSJournal:
   -- Journal begins at Tue 2021-10-12 18:10:37 AST, ends at Tue 2021-10-12 
19:11:52 AST. --
   -- No entries --
  modified.conffile..etc.apt.apt.conf.d.90_zsys_system_autosnapshot: [deleted]

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1946808/+subscriptions




[Kernel-packages] [Bug 1894329] Re: ZFS revert from grub menu not working.

2020-11-27 Thread Didier Roche
** Description changed:

+ [Impact]
+ 
+  * Users can’t revert to previous snapshots when enabling the hardware
+  enablement (HWE) stack kernel on focal or when using any more recent release.
+  * The option is still available in grub and will leave you with a broken,
+  partially cloned system.
+ 
+ [Test Case]
+ 
+  * Boot on a system, using ZFS and ZSys.
+  * In grub, select "History" entry
+  * Select one of the "Revert" options: the system should boot after being
+  reverted to the older version.
+ 
+ 
+ [Where problems could occur]
+  * The code is in the initramfs, where the generated id suffix for all our
+  ZFS datasets was empty due to newer coreutils/kernels.
+  * We replace dd with another, more robust and simpler way of generating
+  this ID.
+ 
+ 
+ -
+ 
  @coreutils maintainers, any idea why dd is being flagged as having an
  executable stack?
  
  
  
  When I try to revert to a previous state from the grub menu, the boot
  fails. The system drops me to a repair mode.
  
  zfs-mount-generator fails with the message:
  couldn't ensure boot: Mounted clone bootFS dataset created by initramfs 
doesn't have a valid _suffix (at least .*_): \"rpool/ROOT/ubuntu_\"".
  
  After a reboot I have an extra clone called "rpool/ROOT/ubuntu_", indeed 
without a suffix.
  After a little investigation I found the problem in
  /usr/share/initramfs-tools/scripts/zfs, at the end, in the function:
  uid()
  {
     dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2>/dev/null | tr -dc 'a-z0-9' | cut -c-6
  }
  The dd command fails during boot with the message "process 'dd' started
  with executable stack".
  After this an empty uid is returned, which explains the dataset without a
  proper suffix.
  Replacing the function with:
  uid()
  {
     grep -a -m10 -E "\*" /dev/urandom 2>/dev/null | tr -dc 'a-z0-9' | cut -c-6
  }
  fixes the problem.
  
  Ubuntu version is:
  Description:Ubuntu Groovy Gorilla (development branch)
  Release:20.10
  
  zfs-initramfs version is:
  0.8.4-1ubuntu11
  
  With regards,
  
  Usarin Heininga
  
  ProblemType: Bug
  DistroRelease: Ubuntu 20.10
  Package: zfs-initramfs 0.8.4-1ubuntu11
  ProcVersionSignature: Ubuntu 5.8.0-18.19-generic 5.8.4
  Uname: Linux 5.8.0-18-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
  ApportVersion: 2.20.11-0ubuntu45
  Architecture: amd64
  CasperMD5CheckResult: skip
  CurrentDesktop: KDE
  Date: Fri Sep  4 20:23:44 2020
  InstallationDate: Installed on 2020-09-02 (2 days ago)
  InstallationMedia: Ubuntu 20.10 "Groovy Gorilla" - Alpha amd64 (20200831)
  ProcEnviron:
   LANGUAGE=
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=nl_NL.UTF-8
   SHELL=/bin/bash
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1894329

Title:
  ZFS revert from grub menu not working.

Status in coreutils package in Ubuntu:
  Incomplete
Status in zfs-linux package in Ubuntu:
  Fix Released
Status in coreutils source package in Focal:
  Incomplete
Status in zfs-linux source package in Focal:
  Triaged
Status in coreutils source package in Groovy:
  Incomplete
Status in zfs-linux source package in Groovy:
  Triaged

Bug description:
  [Impact]

   * Users can’t revert to previous snapshots when enabling the hardware
   enablement (HWE) stack kernel on focal or when using any more recent release.
   * The option is still available in grub and will leave you with a broken,
   partially cloned system.

  [Test Case]

   * Boot on a system, using ZFS and ZSys.
   * In grub, select "History" entry
   * Select one of the "Revert" options: the system should boot after being
   reverted to the older version.

  
  [Where problems could occur]
   * The code is in the initramfs, where the generated id suffix for all our
   ZFS datasets was empty due to newer coreutils/kernels.
   * We replace dd with another, more robust and simpler way of generating
   this ID.

  
  -

  @coreutils maintainers, any idea why dd is being flagged as having an
  executable stack?

  

  When I try to revert to a previous state from the grub menu, the boot
  fails. The system drops me to a repair mode.

  zfs-mount-generator fails with the message:
  couldn't ensure boot: Mounted clone bootFS dataset created by initramfs 
doesn't have a valid _suffix (at least .*_): \"rpool/ROOT/ubuntu_\"".

  After a reboot I have an extra clone called "rpool/ROOT/ubuntu_", indeed 
without a suffix.
  After a little investigation I found the problem in
  /usr/share/initramfs-tools/scripts/zfs, at the end, in the function:
  uid()
  {
     dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2>/dev/null | tr -dc 'a-z0-9' | cut -c-6
  }
  The dd command fails during boot with the message "process 'dd' started
  with executable stack".
  After this an empty uid is returned which explains the 

[Kernel-packages] [Bug 1894329] Re: ZFS revert from grub menu not working.

2020-11-16 Thread Didier Roche
We will backport your patch to previous releases soon.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1894329

Title:
  ZFS revert from grub menu not working.

Status in coreutils package in Ubuntu:
  Incomplete
Status in zfs-linux package in Ubuntu:
  Fix Released
Status in coreutils source package in Focal:
  Incomplete
Status in zfs-linux source package in Focal:
  Triaged
Status in coreutils source package in Groovy:
  Incomplete
Status in zfs-linux source package in Groovy:
  Triaged

Bug description:
  @coreutils maintainers, any idea why dd is being flagged as having an
  executable stack?

  

  When I try to revert to a previous state from the grub menu, the boot
  fails. The system drops me to a repair mode.

  zfs-mount-generator fails with the message:
  couldn't ensure boot: Mounted clone bootFS dataset created by initramfs 
doesn't have a valid _suffix (at least .*_): \"rpool/ROOT/ubuntu_\"".

  After a reboot I have an extra clone called "rpool/ROOT/ubuntu_", indeed 
without a suffix.
  After a little investigation I found the problem in
  /usr/share/initramfs-tools/scripts/zfs, at the end, in the function:
  uid()
  {
     dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2>/dev/null | tr -dc 'a-z0-9' | cut -c-6
  }
  The dd command fails during boot with the message "process 'dd' started
  with executable stack".
  After this an empty uid is returned, which explains the dataset without a
  proper suffix.
  Replacing the function  with:
  uid()
  {
     grep -a -m10 -E "\*" /dev/urandom 2>/dev/null | tr -dc 'a-z0-9' | cut -c-6
  }

  fixes the problem.

  Ubuntu version is:
  Description:Ubuntu Groovy Gorilla (development branch)
  Release:20.10

  zfs-initramfs version is:
  0.8.4-1ubuntu11

  With regards,

  Usarin Heininga

  ProblemType: Bug
  DistroRelease: Ubuntu 20.10
  Package: zfs-initramfs 0.8.4-1ubuntu11
  ProcVersionSignature: Ubuntu 5.8.0-18.19-generic 5.8.4
  Uname: Linux 5.8.0-18-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
  ApportVersion: 2.20.11-0ubuntu45
  Architecture: amd64
  CasperMD5CheckResult: skip
  CurrentDesktop: KDE
  Date: Fri Sep  4 20:23:44 2020
  InstallationDate: Installed on 2020-09-02 (2 days ago)
  InstallationMedia: Ubuntu 20.10 "Groovy Gorilla" - Alpha amd64 (20200831)
  ProcEnviron:
   LANGUAGE=
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=nl_NL.UTF-8
   SHELL=/bin/bash
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/coreutils/+bug/1894329/+subscriptions



[Kernel-packages] [Bug 1894329] Re: ZFS revert from grub menu not working.

2020-11-16 Thread Didier Roche
Thanks for the confirmation :)

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1894329

Title:
  ZFS revert from grub menu not working.

Status in coreutils package in Ubuntu:
  Incomplete
Status in zfs-linux package in Ubuntu:
  Fix Released
Status in coreutils source package in Focal:
  Incomplete
Status in zfs-linux source package in Focal:
  Triaged
Status in coreutils source package in Groovy:
  Incomplete
Status in zfs-linux source package in Groovy:
  Triaged

Bug description:
  @coreutils maintainers, any idea why dd is being flagged as having an
  executable stack?

  

  When I try to revert to a previous state from the grub menu, the boot
  fails. The system drops me to a repair mode.

  zfs-mount-generator fails with the message:
  couldn't ensure boot: Mounted clone bootFS dataset created by initramfs 
doesn't have a valid _suffix (at least .*_): \"rpool/ROOT/ubuntu_\"".

  After a reboot I have an extra clone called "rpool/ROOT/ubuntu_", indeed 
without a suffix.
  After a little investigation I found the problem in
  /usr/share/initramfs-tools/scripts/zfs, at the end, in the function:
  uid()
  {
     dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2>/dev/null | tr -dc 'a-z0-9' | cut -c-6
  }
  The dd command fails during boot with the message "process 'dd' started
  with executable stack".
  After this an empty uid is returned, which explains the dataset without a
  proper suffix.
  Replacing the function  with:
  uid()
  {
     grep -a -m10 -E "\*" /dev/urandom 2>/dev/null | tr -dc 'a-z0-9' | cut -c-6
  }

  fixes the problem.

  Ubuntu version is:
  Description:Ubuntu Groovy Gorilla (development branch)
  Release:20.10

  zfs-initramfs version is:
  0.8.4-1ubuntu11

  With regards,

  Usarin Heininga

  ProblemType: Bug
  DistroRelease: Ubuntu 20.10
  Package: zfs-initramfs 0.8.4-1ubuntu11
  ProcVersionSignature: Ubuntu 5.8.0-18.19-generic 5.8.4
  Uname: Linux 5.8.0-18-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
  ApportVersion: 2.20.11-0ubuntu45
  Architecture: amd64
  CasperMD5CheckResult: skip
  CurrentDesktop: KDE
  Date: Fri Sep  4 20:23:44 2020
  InstallationDate: Installed on 2020-09-02 (2 days ago)
  InstallationMedia: Ubuntu 20.10 "Groovy Gorilla" - Alpha amd64 (20200831)
  ProcEnviron:
   LANGUAGE=
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=nl_NL.UTF-8
   SHELL=/bin/bash
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/coreutils/+bug/1894329/+subscriptions



[Kernel-packages] [Bug 1875767] Re: When operating install/removal with apt, zed floods log and apparently crashes snapshoting

2020-09-02 Thread Didier Roche
Hey! Is this reproducible today? We made some performance improvements
on zsys since then.

Please also use the apport hook to help with debugging:
apport-collect -p zsys 1875767

** Changed in: zsys (Ubuntu)
   Status: Triaged => Incomplete

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1875767

Title:
  When operating install/removal with apt, zed floods log and apparently
  crashes snapshoting

Status in zfs-linux package in Ubuntu:
  New
Status in zsys package in Ubuntu:
  Incomplete

Bug description:
  Hello!

  When I ran an install, it behaved like this:

  ERROR rpc error: code = DeadlineExceeded desc = context deadline exceeded 
  ... etc apt messages ...
  A processar 'triggers' para libc-bin (2.31-0ubuntu9) ...
  ERROR rpc error: code = Unavailable desc = transport is closing 

  The log gets flooded with the following message:

  abr 28 20:41:48 manauara zed[512257]: eid=10429 class=history_event 
pool_guid=0x7E8B0F177C4DD12C
  abr 28 20:41:49 manauara zed[508106]: Missed 1 events

  And the machine load stays high for an incredibly long time. The
  workaround is:

  systemctl restart zsysd
  systemctl restart zed

  The system also gets a bit slow and the fans spin up for a while (because
  of the load).

  This is a fresh install of ubuntu 20.04 with ZFS on SATA SSD.

  ProblemType: Bug
  DistroRelease: Ubuntu 20.04
  Package: zfsutils-linux 0.8.3-1ubuntu12
  ProcVersionSignature: Ubuntu 5.4.0-28.32-generic 5.4.30
  Uname: Linux 5.4.0-28-generic x86_64
  NonfreeKernelModules: nvidia_modeset nvidia zfs zunicode zavl icp zcommon 
znvpair
  ApportVersion: 2.20.11-0ubuntu27
  Architecture: amd64
  CasperMD5CheckResult: skip
  Date: Tue Apr 28 20:49:14 2020
  InstallationDate: Installed on 2020-04-27 (1 days ago)
  InstallationMedia: Ubuntu 20.04 LTS "Focal Fossa" - Release amd64 (20200423)
  ProcEnviron:
   LANGUAGE=pt_BR:pt:en
   TERM=xterm-256color
   PATH=(custom, no user)
   LANG=pt_BR.UTF-8
   SHELL=/bin/bash
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1875767/+subscriptions



[Kernel-packages] [Bug 1891867] Re: zfs not correctly imported at boot

2020-08-18 Thread Didier Roche
Please run apport-collect to attach logs so that we can debug your setup.
@baling: why subscribe zsys to this bug? There is no mention of zsys being
used here; it seems to be a manual zfs setup.

** Package changed: zsys (Ubuntu) => zfs-linux (Ubuntu)

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1891867

Title:
  zfs not correctly imported at boot

Status in zfs-linux package in Ubuntu:
  New

Bug description:
  On a fresh and up-to-date Ubuntu 20.04 amd64 installation I configured
  two encrypted partitions on the same hdd. On these I created a
  striped zpool. After login I can import and mount the pool without
  problems, but the at-boot import fails after the first partition becomes
  available and is never tried again.
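
  For reference, the manual import described above can be done by scanning
  the decrypted mapper devices (a sketch; "tank" is a placeholder pool name,
  not taken from this report):

  # List pools visible on the opened crypt devices
  sudo zpool import -d /dev/mapper

  # Import the pool found there
  sudo zpool import -d /dev/mapper tank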

  zpool version:
  zfs-0.8.3-1ubuntu12.2
  zfs-kmod-0.8.3-1ubuntu12.2
  uname -a:
  Linux hostname 5.4.0-40-generic #44-Ubuntu SMP Tue Jun 23 00:01:04 UTC 2020 
x86_64 x86_64 x86_64 GNU/Linux
  systemd --version
  systemd 245 (245.4-4ubuntu3.2)
  +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP 
+GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 
default-hierarchy=hybrid

  Relevant logs:
  Aug 17 07:12:25 hostname kernel: sd 1:0:0:0: [sdb] 3907029168 512-byte 
logical blocks: (2.00 TB/1.82 TiB)
  Aug 17 07:12:25 hostname kernel: sd 1:0:0:0: [sdb] Write Protect is off
  Aug 17 07:12:25 hostname kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00
  Aug 17 07:12:25 hostname kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read 
cache: enabled, doesn't support DPO or FUA
  Aug 17 07:12:25 hostname kernel:  sdb: sdb1 sdb2
  Aug 17 07:12:25 hostname kernel: sd 1:0:0:0: [sdb] Attached SCSI disk
  Aug 17 07:12:25 hostname systemd[1]: zfs-import-cache.service: Found ordering 
cycle on cryptsetup.target/start
  Aug 17 07:12:25 hostname systemd[1]: zfs-import-cache.service: Found 
dependency on systemd-cryptsetup@vol\x2dswap_crypt.service/start
  Aug 17 07:12:25 hostname systemd[1]: zfs-import-cache.service: Found 
dependency on systemd-random-seed.service/start
  Aug 17 07:12:25 hostname systemd[1]: zfs-import-cache.service: Found 
dependency on zfs-mount.service/start
  Aug 17 07:12:25 hostname systemd[1]: zfs-import-cache.service: Found 
dependency on zfs-import.target/start
  Aug 17 07:12:25 hostname systemd[1]: zfs-import-cache.service: Found 
dependency on zfs-import-cache.service/start
  Aug 17 07:12:25 hostname systemd[1]: zfs-import-cache.service: Job 
cryptsetup.target/start deleted to break ordering cycle starting with 
zfs-import-cache.service/start
  Aug 17 07:12:25 hostname systemd[1]: cryptsetup.target: Found dependency on 
zfs-mount.service/start
  Aug 17 07:12:25 hostname systemd[1]: cryptsetup.target: Found dependency on 
zfs-import.target/start
  Aug 17 07:12:25 hostname systemd[1]: cryptsetup.target: Found dependency on 
zfs-import-cache.service/start
  Aug 17 07:12:25 hostname systemd[1]: cryptsetup.target: Found dependency on 
zfs-mount.service/start
  Aug 17 07:12:25 hostname systemd[1]: cryptsetup.target: Found dependency on 
zfs-import.target/start
  Aug 17 07:12:25 hostname systemd[1]: cryptsetup.target: Found dependency on 
zfs-import-cache.service/start
  Aug 17 07:12:26 hostname systemd[1]: cryptsetup.target: Found dependency on 
zfs-mount.service/start
  Aug 17 07:12:26 hostname systemd[1]: cryptsetup.target: Found dependency on 
zfs-import.target/start
  Aug 17 07:12:26 hostname systemd[1]: cryptsetup.target: Found dependency on 
zfs-import-cache.service/start
  Aug 17 07:12:26 hostname systemd[1]: Starting Cryptography Setup for 
sdb1_crypt...
  Aug 17 07:12:26 hostname systemd[1]: Starting Cryptography Setup for 
sdb2_crypt...
  Aug 17 07:12:26 hostname systemd[1]: cryptsetup.target: Found dependency on 
zfs-mount.service/start
  Aug 17 07:12:26 hostname systemd[1]: cryptsetup.target: Found dependency on 
zfs-import.target/start
  Aug 17 07:12:26 hostname systemd[1]: cryptsetup.target: Found dependency on 
zfs-import-cache.service/start
  Aug 17 07:12:32 hostname systemd[1]: Finished Cryptography Setup for 
sdb2_crypt.
  Aug 17 07:12:32 hostname systemd[1]: Reached target Block Device Preparation 
for /dev/mapper/sdb2_crypt.
  Aug 17 07:12:32 hostname zpool[1887]: cannot import 'sdb': no such pool or 
dataset
  Aug 17 07:12:32 hostname zpool[1887]: Destroy and re-create the pool 
from
  Aug 17 07:12:32 hostname zpool[1887]: a backup source.
  Aug 17 07:12:32 hostname systemd[1]: zfs-import-cache.service: Main process 
exited, code=exited, status=1/FAILURE
  Aug 17 07:12:32 hostname systemd[1]: zfs-import-cache.service: Failed with 
result 'exit-code'.
  Aug 17 07:12:34 hostname systemd[1]: Finished Cryptography Setup for 
sdb1_crypt.
  Aug 17 07:12:34 hostname systemd[1]: Reached target Block Device Preparation 
for /dev/mapper/sdb1_crypt.

To manage notifications about this bug go 

[Kernel-packages] [Bug 1867007] Re: zfs-initramfs fails with multiple rpool on separate disks

2020-06-18 Thread Didier Roche
I will have a look (I don’t remember if the grub task is due to the
grub.cfg generation or to the grub code itself), but TBH, this is low
priority on my list (downgrading the bug task priorities accordingly, as
this is a multi-system corner case).

** Changed in: systemd (Ubuntu)
   Importance: Medium => Low

** Changed in: grub2 (Ubuntu)
   Importance: Medium => Low

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1867007

Title:
  zfs-initramfs fails with multiple rpool on separate disks

Status in grub2 package in Ubuntu:
  Triaged
Status in systemd package in Ubuntu:
  Triaged
Status in zfs-linux package in Ubuntu:
  Fix Released

Bug description:
  == Test Case ==
  1. On a multi disks setup, install Ubuntu with ZFS on disk 1
  2. Reboot and make sure everything works as expected
  3. Do a second installation and install Ubuntu with ZFS on disk 2
  4. Reboot

  * Expected Result *
  GRUB should display all the available installations and let the user
  select which one to boot

  * Actual result *
  - Only one installation is listed
  - The initramfs crashes because there are several pools with the same name
  but different IDs, and it imports pools by name
  - Same problem in the systemd generator, which will try to import all the
  rpools.

  == Original Description ==

  I had an old Ubuntu installation that used a ZFS root, using the
  layout described in the ZFS on Linux docs. Consequently, the pool name
  for my Ubuntu installation was "rpool". I'm currently encountering an
  issue with that pool that only allows me to mount it read-only. So,
  I'd like to replicate the datasets from there to a new device.

  On the new device, I've set up a ZFS system using the Ubuntu 20.04
  daily installer (March 9, 2020). This setup creates a new pool named
  "rpool". So, with both devices inserted, I have two distinct pools
  each named "rpool", one of which will kernel panic if I try to mount
  it read-write.

  ZFS is fine with having multiple pools with the same name. In these
  cases, you use `zpool import` with the pool's GUID and give it a
  distinct pool name on import. However, the grub config for booting
  from ZFS doesn't appear to handle multiple pools with the same "rpool"
  name very well. Rather than using the pool's GUID, it uses the name,
  and as such, it's unable to boot properly when another pool with the
  name "rpool" is attached to the system.

  I think it'd be better if the config were written in such a way that
  `update-grub` generated boot config bound to whatever pool it found at
  the time of its invocation, and not start searching through all pools
  dynamically upon boot. Just to be clear, I have an Ubuntu 20.04 system
  with a ZFS root that boots just fine. But, the moment I attach the old
  pool, also named "rpool", I'm no longer able to boot up my system even
  though I haven't removed the good pool and I haven't re-run `update-
  grub`. Instead of booting, I'm thrown into the grub command line.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1867007/+subscriptions



[Kernel-packages] [Bug 1875577] Re: Encrypted swap won't load on 20.04 with zfs root

2020-06-17 Thread Didier Roche
The patch doesn’t fix all instances of the bug (see upstream report
linked above). I think we should clarify that before backporting it.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1875577

Title:
  Encrypted swap won't load on 20.04 with zfs root

Status in zfs-linux package in Ubuntu:
  Fix Released

Bug description:
  root@eu1:/var/log# lsb_release -a
  No LSB modules are available.
  Distributor ID:   Ubuntu
  Description:  Ubuntu 20.04 LTS
  Release:  20.04
  Codename: focal

  root@eu1:/var/log# apt-cache policy cryptsetup
  cryptsetup:
Installed: (none)
Candidate: 2:2.2.2-3ubuntu2
Version table:
   2:2.2.2-3ubuntu2 500
  500 http://archive.ubuntu.com/ubuntu focal/main amd64 Packages

  OTHER BACKGROUND INFO:
  ==

  1. machine has 2 drives. each drive is partitioned into 2 partitions,
  zfs and swap

  
  2. Ubuntu 20.04 installed on ZFS root using debootstrap 
(debootstrap_1.0.118ubuntu1_all)

  
  3. The ZFS root pool is a 2 partition mirror (the first partition of each 
disk)

  
  4. /etc/crypttab is set up as follows:

  swap  /dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-0_S3W6NX0M802914-part2  /dev/urandom  swap,cipher=aes-xts-plain64,size=256
  swap  /dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-0_S3W6NX0M802933-part2  /dev/urandom  swap,cipher=aes-xts-plain64,size=256


  
  WHAT I EXPECTED
  ===

  I expected machine would reboot and have encrypted swap that used two
  devices under /dev/mapper


  WHAT HAPPENED INSTEAD
  =

  
  On reboot, swap setup fails with the following messages in /var/log/syslog:

  Apr 28 17:13:01 eu1 kernel: [5.360793] systemd[1]: cryptsetup.target: 
Found ordering cycle on systemd-cryptsetup@swap.service/start
  Apr 28 17:13:01 eu1 kernel: [5.360795] systemd[1]: cryptsetup.target: 
Found dependency on systemd-random-seed.service/start
  Apr 28 17:13:01 eu1 kernel: [5.360796] systemd[1]: cryptsetup.target: 
Found dependency on zfs-mount.service/start
  Apr 28 17:13:01 eu1 kernel: [5.360797] systemd[1]: cryptsetup.target: 
Found dependency on zfs-load-module.service/start
  Apr 28 17:13:01 eu1 kernel: [5.360798] systemd[1]: cryptsetup.target: 
Found dependency on cryptsetup.target/start
  Apr 28 17:13:01 eu1 kernel: [5.360799] systemd[1]: cryptsetup.target: Job 
systemd-cryptsetup@swap.service/start deleted to break ordering cycle starting 
with cryptsetup.target/start
  . . . . . .
  Apr 28 17:13:01 eu1 kernel: [5.361082] systemd[1]: Unnecessary job for 
/dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-0_S3W6NX0M802914-part2 was removed

  
  Also, /dev/mapper does not contain any swap devices:

  root@eu1:/var/log# ls -l /dev/mapper
  total 0
  crw--- 1 root root 10, 236 Apr 28 17:13 control
  root@eu1:/var/log#

  
  And top shows no swap:

  MiB Swap:  0.0 total,  0.0 free,  0.0 used.  63153.6 avail Mem
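
  A couple of generic commands to confirm whether the crypttab-generated swap
  came up (a diagnostic sketch, not taken from this report):

  # Active swap devices; should list the /dev/mapper swap device(s) when working
  swapon --show

  # State of the generated cryptsetup unit named in the log above
  systemctl status 'systemd-cryptsetup@swap.service'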

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1875577/+subscriptions



[Kernel-packages] [Bug 1882975] Re: crypttab not found error causes boot failure with changes in zfs-initramfs_0.8.4-1ubuntu5

2020-06-11 Thread Didier Roche
Thanks for the bug report and sorry for this, you are right. Uploaded in
-proposed


** Changed in: zfs-linux (Ubuntu)
   Status: New => Fix Committed

** Changed in: zfs-linux (Ubuntu)
   Importance: Undecided => Critical

** Changed in: zfs-linux (Ubuntu)
 Assignee: (unassigned) => Didier Roche (didrocks)

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1882975

Title:
  crypttab not found error causes boot failure with changes in zfs-
  initramfs_0.8.4-1ubuntu5

Status in zfs-linux package in Ubuntu:
  Fix Committed

Bug description:
  boot ends before rpool loads with a failure to find the crypttab file,
  which doesn't exist.

  Maybe this has a dependency upon a package that makes that?

  ProblemType: Bug
  DistroRelease: Ubuntu 20.10
  Package: zfs-initramfs 0.8.4-1ubuntu5
  ProcVersionSignature: Ubuntu 5.4.0-34.38-generic 5.4.41
  Uname: Linux 5.4.0-34-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
  ApportVersion: 2.20.11-0ubuntu38
  Architecture: amd64
  CasperMD5CheckResult: skip
  Date: Wed Jun 10 11:42:55 2020
  InstallationDate: Installed on 2019-10-19 (235 days ago)
  InstallationMedia: Ubuntu 19.10 "Eoan Ermine" - Release amd64 (20191017)
  ProcEnviron:
   TERM=xterm-256color
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=en_US.UTF-8
   SHELL=/bin/bash
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1882975/+subscriptions



[Kernel-packages] [Bug 1881541] Re: Prevent segfault immediately after install when zfs kernel module isn't loaded

2020-06-04 Thread Didier Roche
Sorry Colin, this was ZSys and I targeted the wrong component when
filing batch bugs for the ZSys 0.5 upload.

Fixed in https://launchpad.net/ubuntu/+source/zsys/0.5.0.

** Package changed: zfs-linux (Ubuntu) => zsys (Ubuntu)

** Changed in: zsys (Ubuntu)
   Status: Incomplete => Fix Released

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1881541

Title:
  Prevent segfault immediately after install when zfs kernel module
  isn't loaded

Status in zsys package in Ubuntu:
  Fix Released

Bug description:
  Installing zsys on a non-ZFS system without the kernel module loaded
  led to a segfault.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zsys/+bug/1881541/+subscriptions



[Kernel-packages] [Bug 1881541] [NEW] Prevent segfault immediately after install when zfs kernel module isn't loaded

2020-06-01 Thread Didier Roche
Public bug reported:

Installing zsys on a non-ZFS system without the kernel module loaded led
to a segfault.

** Affects: zfs-linux (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1881541

Title:
  Prevent segfault immediately after install when zfs kernel module
  isn't loaded

Status in zfs-linux package in Ubuntu:
  New

Bug description:
  Installing zsys on a non-ZFS system without the kernel module loaded
  led to a segfault.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1881541/+subscriptions



[Kernel-packages] [Bug 1875577] Re: Encrypted swap won't load on 20.04 with zfs root

2020-05-05 Thread Didier Roche
Great to hear John! Thanks for confirming and thanks to Richard for the
patch.

I’m happy to SRU it to focal once it’s proposed upstream. (Keep me
posted Richard, you can drop a link here and I will monitor)

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1875577

Title:
  Encrypted swap won't load on 20.04 with zfs root

Status in zfs-linux package in Ubuntu:
  Confirmed

Bug description:
  root@eu1:/var/log# lsb_release -a
  No LSB modules are available.
  Distributor ID:   Ubuntu
  Description:  Ubuntu 20.04 LTS
  Release:  20.04
  Codename: focal

  root@eu1:/var/log# apt-cache policy cryptsetup
  cryptsetup:
Installed: (none)
Candidate: 2:2.2.2-3ubuntu2
Version table:
   2:2.2.2-3ubuntu2 500
  500 http://archive.ubuntu.com/ubuntu focal/main amd64 Packages

  OTHER BACKGROUND INFO:
  ==

  1. machine has 2 drives. each drive is partitioned into 2 partitions,
  zfs and swap

  
  2. Ubuntu 20.04 installed on ZFS root using debootstrap 
(debootstrap_1.0.118ubuntu1_all)

  
  3. The ZFS root pool is a 2 partition mirror (the first partition of each 
disk)

  
  4. /etc/crypttab is set up as follows:

  swap  /dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-0_S3W6NX0M802914-part2  /dev/urandom  swap,cipher=aes-xts-plain64,size=256
  swap  /dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-0_S3W6NX0M802933-part2  /dev/urandom  swap,cipher=aes-xts-plain64,size=256


  
  WHAT I EXPECTED
  ===

  I expected machine would reboot and have encrypted swap that used two
  devices under /dev/mapper


  WHAT HAPPENED INSTEAD
  =

  
  On reboot, swap setup fails with the following messages in /var/log/syslog:

  Apr 28 17:13:01 eu1 kernel: [5.360793] systemd[1]: cryptsetup.target: 
Found ordering cycle on systemd-cryptsetup@swap.service/start
  Apr 28 17:13:01 eu1 kernel: [5.360795] systemd[1]: cryptsetup.target: 
Found dependency on systemd-random-seed.service/start
  Apr 28 17:13:01 eu1 kernel: [5.360796] systemd[1]: cryptsetup.target: 
Found dependency on zfs-mount.service/start
  Apr 28 17:13:01 eu1 kernel: [5.360797] systemd[1]: cryptsetup.target: 
Found dependency on zfs-load-module.service/start
  Apr 28 17:13:01 eu1 kernel: [5.360798] systemd[1]: cryptsetup.target: 
Found dependency on cryptsetup.target/start
  Apr 28 17:13:01 eu1 kernel: [5.360799] systemd[1]: cryptsetup.target: Job 
systemd-cryptsetup@swap.service/start deleted to break ordering cycle starting 
with cryptsetup.target/start
  . . . . . .
  Apr 28 17:13:01 eu1 kernel: [5.361082] systemd[1]: Unnecessary job for 
/dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-0_S3W6NX0M802914-part2 was removed

  
  Also, /dev/mapper does not contain any swap devices:

  root@eu1:/var/log# ls -l /dev/mapper
  total 0
  crw--- 1 root root 10, 236 Apr 28 17:13 control
  root@eu1:/var/log#

  
  And top shows no swap:

  MiB Swap:  0.0 total,  0.0 free,  0.0 used.  63153.6 avail Mem

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1875577/+subscriptions



[Kernel-packages] [Bug 1875577] Re: Encrypted swap won't load on 20.04 with zfs root

2020-05-05 Thread Didier Roche
On an installed packaged system, the files are in different directories
(and don’t have the .in extension as they have been built with the
prefix replacement). Their names and locations are:

/lib/systemd/system/zfs-mount.service
/lib/systemd/system-generators/zfs-mount-generator

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1875577

Title:
  Encrypted swap won't load on 20.04 with zfs root

Status in zfs-linux package in Ubuntu:
  Confirmed

Bug description:
  root@eu1:/var/log# lsb_release -a
  No LSB modules are available.
  Distributor ID:   Ubuntu
  Description:  Ubuntu 20.04 LTS
  Release:  20.04
  Codename: focal

  root@eu1:/var/log# apt-cache policy cryptsetup
  cryptsetup:
Installed: (none)
Candidate: 2:2.2.2-3ubuntu2
Version table:
   2:2.2.2-3ubuntu2 500
  500 http://archive.ubuntu.com/ubuntu focal/main amd64 Packages

  OTHER BACKGROUND INFO:
  ==

  1. machine has 2 drives. each drive is partitioned into 2 partitions,
  zfs and swap

  
  2. Ubuntu 20.04 installed on ZFS root using debootstrap 
(debootstrap_1.0.118ubuntu1_all)

  
  3. The ZFS root pool is a 2 partition mirror (the first partition of each 
disk)

  
  4. /etc/crypttab is set up as follows:

  swap  /dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-0_S3W6NX0M802914-part2  /dev/urandom  swap,cipher=aes-xts-plain64,size=256
  swap  /dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-0_S3W6NX0M802933-part2  /dev/urandom  swap,cipher=aes-xts-plain64,size=256


  
  WHAT I EXPECTED
  ===

  I expected machine would reboot and have encrypted swap that used two
  devices under /dev/mapper


  WHAT HAPPENED INSTEAD
  =

  
  On reboot, swap setup fails with the following messages in /var/log/syslog:

  Apr 28 17:13:01 eu1 kernel: [5.360793] systemd[1]: cryptsetup.target: 
Found ordering cycle on systemd-cryptsetup@swap.service/start
  Apr 28 17:13:01 eu1 kernel: [5.360795] systemd[1]: cryptsetup.target: 
Found dependency on systemd-random-seed.service/start
  Apr 28 17:13:01 eu1 kernel: [5.360796] systemd[1]: cryptsetup.target: 
Found dependency on zfs-mount.service/start
  Apr 28 17:13:01 eu1 kernel: [5.360797] systemd[1]: cryptsetup.target: 
Found dependency on zfs-load-module.service/start
  Apr 28 17:13:01 eu1 kernel: [5.360798] systemd[1]: cryptsetup.target: 
Found dependency on cryptsetup.target/start
  Apr 28 17:13:01 eu1 kernel: [5.360799] systemd[1]: cryptsetup.target: Job 
systemd-cryptsetup@swap.service/start deleted to break ordering cycle starting 
with cryptsetup.target/start
  . . . . . .
  Apr 28 17:13:01 eu1 kernel: [5.361082] systemd[1]: Unnecessary job for 
/dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-0_S3W6NX0M802914-part2 was removed

  
  Also, /dev/mapper does not contain any swap devices:

  root@eu1:/var/log# ls -l /dev/mapper
  total 0
  crw--- 1 root root 10, 236 Apr 28 17:13 control
  root@eu1:/var/log#

  
  And top shows no swap:

  MiB Swap:  0.0 total,  0.0 free,  0.0 used.  63153.6 avail Mem

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1875577/+subscriptions



[Kernel-packages] [Bug 1875577] Re: Encrypted swap won't load on 20.04 with zfs root

2020-05-05 Thread Didier Roche
Your patch makes sense, Richard, and I think it will be a good upstream
candidate. Of all the approaches you proposed, this is my preferred one
because it is the most flexible IMHO.

Tell me when you get a chance to test it, and maybe John, can you confirm
this fixes it for you?

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1875577

Title:
  Encrypted swap won't load on 20.04 with zfs root

Status in zfs-linux package in Ubuntu:
  Confirmed

Bug description:
  root@eu1:/var/log# lsb_release -a
  No LSB modules are available.
  Distributor ID:   Ubuntu
  Description:  Ubuntu 20.04 LTS
  Release:  20.04
  Codename: focal

  root@eu1:/var/log# apt-cache policy cryptsetup
  cryptsetup:
Installed: (none)
Candidate: 2:2.2.2-3ubuntu2
Version table:
   2:2.2.2-3ubuntu2 500
  500 http://archive.ubuntu.com/ubuntu focal/main amd64 Packages

  OTHER BACKGROUND INFO:
  ==

  1. machine has 2 drives. each drive is partitioned into 2 partitions,
  zfs and swap

  
  2. Ubuntu 20.04 installed on ZFS root using debootstrap 
(debootstrap_1.0.118ubuntu1_all)

  
  3. The ZFS root pool is a 2 partition mirror (the first partition of each 
disk)

  
  4. /etc/crypttab is set up as follows:

  swap  /dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-0_S3W6NX0M802914-part2  /dev/urandom  swap,cipher=aes-xts-plain64,size=256
  swap  /dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-0_S3W6NX0M802933-part2  /dev/urandom  swap,cipher=aes-xts-plain64,size=256


  
  WHAT I EXPECTED
  ===

  I expected machine would reboot and have encrypted swap that used two
  devices under /dev/mapper


  WHAT HAPPENED INSTEAD
  =

  
  On reboot, swap setup fails with the following messages in /var/log/syslog:

  Apr 28 17:13:01 eu1 kernel: [5.360793] systemd[1]: cryptsetup.target: 
Found ordering cycle on systemd-cryptsetup@swap.service/start
  Apr 28 17:13:01 eu1 kernel: [5.360795] systemd[1]: cryptsetup.target: 
Found dependency on systemd-random-seed.service/start
  Apr 28 17:13:01 eu1 kernel: [5.360796] systemd[1]: cryptsetup.target: 
Found dependency on zfs-mount.service/start
  Apr 28 17:13:01 eu1 kernel: [5.360797] systemd[1]: cryptsetup.target: 
Found dependency on zfs-load-module.service/start
  Apr 28 17:13:01 eu1 kernel: [5.360798] systemd[1]: cryptsetup.target: 
Found dependency on cryptsetup.target/start
  Apr 28 17:13:01 eu1 kernel: [5.360799] systemd[1]: cryptsetup.target: Job 
systemd-cryptsetup@swap.service/start deleted to break ordering cycle starting 
with cryptsetup.target/start
  . . . . . .
  Apr 28 17:13:01 eu1 kernel: [5.361082] systemd[1]: Unnecessary job for 
/dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-0_S3W6NX0M802914-part2 was removed

  
  Also, /dev/mapper does not contain any swap devices:

  root@eu1:/var/log# ls -l /dev/mapper
  total 0
  crw--- 1 root root 10, 236 Apr 28 17:13 control
  root@eu1:/var/log#

  
  And top shows no swap:

  MiB Swap:  0.0 total,  0.0 free,  0.0 used.  63153.6 avail Mem

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1875577/+subscriptions



[Kernel-packages] [Bug 1876052] [NEW] Nvidia driver, default configuration, "Use dedicated card option" for app triggers software acceleration

2020-04-30 Thread Didier Roche
Public bug reported:

Fresh install of 20.04 LTS with nvidia binary driver from our archive.
(dual Intel/Nvidia setup)

No setting change has been done. The card supports "On demand".

Tested with Firefox (about:support) and Chrome (config:cpu). Both show the
same result:
- default launch -> Intel driver, OK
GL_VENDOR   Intel
GL_RENDERER Mesa Intel(R) UHD Graphics 630 (CFL GT2)
GL_VERSION  4.6 (Core Profile) Mesa 20.0.4
- select use dedicated card -> Software acceleration! KO
GL_VENDOR   Google Inc.
GL_RENDERER Google SwiftShader
GL_VERSION  OpenGL ES 3.0 SwiftShader 4.1.0.7
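
The same check can be done from a terminal with the NVIDIA PRIME render
offload variables (a sketch; assuming these are the variables the "use
dedicated card" path sets, which is not confirmed in this report):

# Default launch
glxinfo | grep "OpenGL renderer"

# Offloaded launch; should report the NVIDIA GPU rather than SwiftShader
__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia glxinfo | grep "OpenGL renderer"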


Selecting the option for performance is therefore worse, with our default
configuration, than not selecting it.


If you open nvidia-settings, you have only one tab available (which shows
Performance mode), which is misleading because this is not the mode you are
actually in. Note that you are not in On Demand mode either, as selecting it
+ reboot restores the expected behavior (multiple tabs in nvidia-settings).

For completeness, here are the other settings:

* On Demand (manually selected): OK
The right-click menu shows the Use dedicated card option: OK
- default launch -> Intel driver, OK
- select use dedicated card -> Nvidia, OK

* Power saving mode (manually selected): OK
- default launch -> Intel driver, OK

* Performance mode (manually selected, meaning choosing another option to
change the default and then selecting it back): KO
The right-click menu shows the Use dedicated card option! KO
- default launch -> Nvidia, OK
- select use dedicated card -> Nvidia, OK, but this option shouldn’t be present.
Reported this one as bug #1876049

2 additional things:
- It would be great for the default to be either Performance mode (the real one) 
or On Demand for supported cards (nvidia-settings shows the option only if it is 
supported AFAIK), so it would be good to default dynamically to this one. Filed 
as bug #1876051.
- It would be great to have a way to pin an application to "Use dedicated 
card". Filed as bug #1876050.

ProblemType: Bug
DistroRelease: Ubuntu 20.04
Package: gnome-shell 3.36.1-5ubuntu1
ProcVersionSignature: Ubuntu 5.4.0-28.32-generic 5.4.30
Uname: Linux 5.4.0-28-generic x86_64
NonfreeKernelModules: nvidia_modeset nvidia zfs zunicode zavl icp zcommon 
znvpair
ApportVersion: 2.20.11-0ubuntu27
Architecture: amd64
CasperMD5CheckResult: skip
CurrentDesktop: ubuntu:GNOME
Date: Thu Apr 30 09:22:04 2020
DisplayManager: gdm3
InstallationDate: Installed on 2020-04-24 (5 days ago)
InstallationMedia: Ubuntu 20.04 LTS "Focal Fossa" - Release amd64 (20200423)
ProcEnviron:
 TERM=xterm-256color
 PATH=(custom, no user)
 XDG_RUNTIME_DIR=
 LANG=fr_FR.UTF-8
 SHELL=/bin/bash
RelatedPackageVersions: mutter-common 3.36.1-3ubuntu3
SourcePackage: gnome-shell
UpgradeStatus: No upgrade log present (probably fresh install)

** Affects: gnome-shell (Ubuntu)
 Importance: Undecided
 Status: New

** Affects: nvidia-settings (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: amd64 apport-bug focal nvidia-dedicatedcard-option

** Also affects: nvidia-settings (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to nvidia-settings in Ubuntu.
https://bugs.launchpad.net/bugs/1876052

Title:
  Nvidia driver, default configuration, "Use dedicated card option" for
  app triggers software acceleration

Status in gnome-shell package in Ubuntu:
  New
Status in nvidia-settings package in Ubuntu:
  New

Bug description:
  Fresh install of 20.04 LTS with nvidia binary driver from our archive.
  (dual Intel/Nvidia setup)

  No setting change has been done. The card supports "On demand".

  Tested with Firefox (about:support) and Chrome (config:cpu). Both show the 
same result:
  - default launch -> Intel driver, OK
  GL_VENDOR   Intel
  GL_RENDERER Mesa Intel(R) UHD Graphics 630 (CFL GT2)
  GL_VERSION  4.6 (Core Profile) Mesa 20.0.4
  - select use dedicated card -> Software acceleration! KO
  GL_VENDOR   Google Inc.
  GL_RENDERER Google SwiftShader
  GL_VERSION  OpenGL ES 3.0 SwiftShader 4.1.0.7

  
  Selecting the option for performance is thus worse than not selecting it 
with our default configuration.

  
  If you open nvidia-settings, you have only one tab available (which is 
showing Performance mode), which is misleading because this is not the mode you 
are in. Note that you are not in On Demand mode either, as selecting it + 
rebooting restores the expected behavior (multiple tabs in nvidia-settings).

  For completeness, here are the other settings:

  * On Demand (manually selected): OK
  Right click menu option shows the Use dedicated card option: OK
  - default launch -> Intel driver, OK
  - select use dedicated card -> Nvidia, OK

  * Power saving mode (manually selected): OK
  - default launch -> Intel driver, OK

  * Performance mode (manually selected, meaning choose another option to 

[Kernel-packages] [Bug 1876051] [NEW] Default acceleration mode option is none of the 3 nvidia settings option

2020-04-30 Thread Didier Roche
Public bug reported:

As stated on bug #1876052, the default acceleration mode option is none
of the 3 nvidia-settings options.

It’s displayed as "Performance mode" when you launch it for the first time, 
however:
- default launch is Intel (so no performance mode)
- there is a "Use dedicated card" option (which shouldn't be displayed in 
"Performance mode", and this one triggers software acceleration)
- nvidia settings only displays that tab; selecting another mode, then 
selecting this one back after a reboot will display all the other tab options, so 
nvidia settings knows that the default setting is different from Performance 
mode.

It seems nvidia settings only shows the On demand option for cards that 
support it. I therefore suggest that our default selection represent a better 
option for our users:
- If the card supports On demand acceleration -> select that by default
- If the card doesn’t support On demand acceleration -> select Performance mode 
by default
- Remove the "weird" status it's currently in by default.
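
For reference, this selection roughly corresponds to the prime-select profiles
shipped with nvidia-prime; a minimal sketch of checking and switching the
profile from a terminal (profile names as of 20.04's nvidia-prime, "on-demand"
only being accepted when the driver/card supports it):

$ prime-select query           # shows the current profile: nvidia, intel or on-demand
$ sudo prime-select on-demand  # if supported by the card/driver
$ sudo prime-select nvidia     # the real Performance mode
(a reboot, or at least a new session, is needed for the change to apply)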

ProblemType: Bug
DistroRelease: Ubuntu 20.04
Package: nvidia-settings 440.64-0ubuntu1
ProcVersionSignature: Ubuntu 5.4.0-28.32-generic 5.4.30
Uname: Linux 5.4.0-28-generic x86_64
NonfreeKernelModules: nvidia_modeset nvidia zfs zunicode zavl icp zcommon 
znvpair
ApportVersion: 2.20.11-0ubuntu27
Architecture: amd64
CasperMD5CheckResult: skip
CurrentDesktop: ubuntu:GNOME
Date: Thu Apr 30 09:43:31 2020
InstallationDate: Installed on 2020-04-24 (5 days ago)
InstallationMedia: Ubuntu 20.04 LTS "Focal Fossa" - Release amd64 (20200423)
ProcEnviron:
 TERM=xterm-256color
 PATH=(custom, no user)
 XDG_RUNTIME_DIR=
 LANG=fr_FR.UTF-8
 SHELL=/bin/bash
SourcePackage: nvidia-settings
UpgradeStatus: No upgrade log present (probably fresh install)

** Affects: nvidia-settings (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: amd64 apport-bug focal nvidia-dedicatedcard-option

** Description changed:

- As stated on bug #…, the default acceleration mode option is none of the
- 3 nvidia settings option.
+ As stated on bug #1876052, the default acceleration mode option is none
+ of the 3 nvidia settings option.
  
  It’s displayed as "Performance mode" when you launch it for the first time, 
however:
  - default launch is Intel (so no performance mode)
  - there is a "Use dedicated card" option (which shouldn't be displayed in 
"Performance mode" and this one is triggering software acceleration)
  - nvidia settings is only displaying that tab, and selecting another mode, 
then selecting it back this one after reboot will display all other tab 
options, so nvidia settings knows that the default setting is different from 
Performance mode.
  
  It seems nvidia settings is only showing the On demand option for cards that 
support it. I suggest thus that our default selection represents a better 
option for our users:
  - If the card supports On demand acceleration -> select that by default
  - If the card doesn’t support On demand acceleration -> select Performance 
mode by default
  - Remove the current "weird" status it's currently on by default.
  
  ProblemType: Bug
  DistroRelease: Ubuntu 20.04
  Package: nvidia-settings 440.64-0ubuntu1
  ProcVersionSignature: Ubuntu 5.4.0-28.32-generic 5.4.30
  Uname: Linux 5.4.0-28-generic x86_64
  NonfreeKernelModules: nvidia_modeset nvidia zfs zunicode zavl icp zcommon 
znvpair
  ApportVersion: 2.20.11-0ubuntu27
  Architecture: amd64
  CasperMD5CheckResult: skip
  CurrentDesktop: ubuntu:GNOME
  Date: Thu Apr 30 09:43:31 2020
  InstallationDate: Installed on 2020-04-24 (5 days ago)
  InstallationMedia: Ubuntu 20.04 LTS "Focal Fossa" - Release amd64 (20200423)
  ProcEnviron:
-  TERM=xterm-256color
-  PATH=(custom, no user)
-  XDG_RUNTIME_DIR=
-  LANG=fr_FR.UTF-8
-  SHELL=/bin/bash
+  TERM=xterm-256color
+  PATH=(custom, no user)
+  XDG_RUNTIME_DIR=
+  LANG=fr_FR.UTF-8
+  SHELL=/bin/bash
  SourcePackage: nvidia-settings
  UpgradeStatus: No upgrade log present (probably fresh install)

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to nvidia-settings in Ubuntu.
https://bugs.launchpad.net/bugs/1876051

Title:
  Default acceleration mode option is none of the 3 nvidia settings
  option

Status in nvidia-settings package in Ubuntu:
  New

Bug description:
  As stated on bug #1876052, the default acceleration mode option is
  none of the 3 nvidia-settings options.

  It’s displayed as "Performance mode" when you launch it for the first time, 
however:
  - default launch is Intel (so no performance mode)
  - there is a "Use dedicated card" option (which shouldn't be displayed in 
"Performance mode", and this one triggers software acceleration)
  - nvidia settings only displays that tab; selecting another mode, 
then selecting this one back after a reboot will display all the other tab 
options, so nvidia settings knows that the default setting is different from 
Performance 

[Kernel-packages] [Bug 1849522] Re: imported non-rpool/bpool zpools are not being reimported after reboot

2020-04-02 Thread Didier Roche
See my previous comment: this is only related to zfs-linux with the
version I mentioned. Also, we didn’t make any change to grub for ZFS
since February 26, and if you have an empty grub.cfg, this may be due to
other bugs, like multiple rpool/bpool, which isn’t what this one was
about. Ensure that your bpool was imported before generating the grub
menu and is in the cache. This may be why your grub config is empty.

Just to scope this one:
- have a bootable system (preferably installed with the beta image, to avoid 
getting stuck in a previous bug)
- create or import a pool
- reboot -> the pool should still be there
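
A minimal reproduction sketch of the scope above (device path and pool name are
placeholders, adjust to your disks):

$ sudo zpool create tank /dev/sdX   # or 'sudo zpool import tank' for an existing pool
$ zpool list tank                   # the pool is visible
$ sudo reboot
$ zpool list tank                   # after reboot, the pool should still be listed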

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1849522

Title:
  imported non-rpool/bpool zpools are not being reimported after reboot

Status in zfs-linux package in Ubuntu:
  Fix Released

Bug description:
  Installed ubuntu 19.10 onto a zfs bpool/rpool.

  Installed zsys.

  Did a "zpool import" of my existing zfs pools.

  Rebooted.

  The previously imported zpools are not imported at boot!

  I am currently using this hacky workaround:

  https://gist.github.com/satmandu/4da5e900c2c80c93da38c76537291507

  
  I would expect that local zpools I have manually imported would re-import 
when the system is rebooted.

  ProblemType: Bug
  DistroRelease: Ubuntu 19.10
  Package: zsys 0.2.2
  ProcVersionSignature: Ubuntu 5.3.0-19.20-generic 5.3.1
  Uname: Linux 5.3.0-19-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
  ApportVersion: 2.20.11-0ubuntu8
  Architecture: amd64
  Date: Wed Oct 23 11:40:36 2019
  InstallationDate: Installed on 2019-10-19 (4 days ago)
  InstallationMedia: Ubuntu 19.10 "Eoan Ermine" - Release amd64 (20191017)
  ProcEnviron:
   TERM=xterm-256color
   PATH=(custom, no user)
   LANG=en_US.UTF-8
   SHELL=/bin/bash
  SourcePackage: zsys
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1849522/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1867007] Re: zfs-initramfs fails with multiple rpool on separate disks

2020-04-02 Thread Didier Roche
This is probably because your bpool is not in the zfs cache file.

Either reinstall from the beta image which has a fix in the installer, or:
- clean up any files and directories (after unmounting /boot/grub and 
/boot/efi) under /boot (not /boot itself)
- zpool import bpool
- zpool set cachefile= bpool
- sudo mount -a (to remount /boot/grub and /boot/efi)
- update-grub

-> you shouldn’t have any issue on reboot anymore, and this will be equivalent
to a new install from the beta image.
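
Putting the steps above together, roughly (double-check your mount points
before removing anything; bpool is assumed to be the boot pool created by the
installer):

$ sudo umount /boot/efi /boot/grub   # then clean up leftover files under /boot (not /boot itself)
$ sudo zpool import bpool
$ sudo zpool set cachefile= bpool    # empty value = default /etc/zfs/zpool.cache
$ sudo mount -a                      # remounts /boot/grub and /boot/efi from fstab
$ sudo update-grub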

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1867007

Title:
  zfs-initramfs fails with multiple rpool on separate disks

Status in grub2 package in Ubuntu:
  Triaged
Status in systemd package in Ubuntu:
  Triaged
Status in zfs-linux package in Ubuntu:
  Fix Released

Bug description:
  == Test Case ==
  1. On a multi-disk setup, install Ubuntu with ZFS on disk 1
  2. Reboot and make sure everything works as expected
  3. Do a second installation and install Ubuntu with ZFS on disk 2
  4. Reboot

  * Expected Result *
  GRUB should display all the machines available and let the user select which 
installation to boot

  * Actual result *
  - Only one machine is listed
  - initramfs crashes because there are several pools with the same name but 
different IDs, and it imports the pools by name
  - Same problem in the systemd generator which will try to import all the 
rpools.

  == Original Description ==

  I had an old Ubuntu installation that used a ZFS root, using the
  layout described in the ZFS on Linux docs. Consequently, the pool name
  for my Ubuntu installation was "rpool". I'm currently encountering an
  issue with that pool that only allows me to mount it read-only. So,
  I'd like to replicate the datasets from there to a new device.

  On the new device, I've set up a ZFS system using the Ubuntu 20.04
  daily installer (March 9, 2020). This setup creates a new pool named
  "rpool". So, with both devices inserted, I have two distinct pools
  each named "rpool", one of which will kernel panic if I try to mount
  it read-write.

  ZFS is fine with having multiple pools with the same name. In these
  cases, you use `zpool import` with the pool's GUID and give it a
  distinct pool name on import. However, the grub config for booting
  from ZFS doesn't appear to handle multiple pools with the same rpool
  name very well. Rather than using the pool's GUID, it uses the name,
  and as such, it's unable to boot properly when another pool with the
  name "rpool" is attached to the system.

  I think it'd be better if the config were written in such a way that
  `update-grub` generated boot config bound to whatever pool it found at
  the time of its invocation, and not start searching through all pools
  dynamically upon boot. Just to be clear, I have an Ubuntu 20.04 system
  with a ZFS root that boots just fine. But, the moment I attach the old
  pool, also named "rpool", I'm no longer able to boot up my system even
  though I haven't removed the good pool and I haven't re-run `update-
  grub`. Instead of booting, I'm thrown into the grub command line.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1867007/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1849522] Re: imported non-rpool/bpool zpools are not being reimported after reboot

2020-04-01 Thread Didier Roche
Thanks for your bug report! This is now fixed in zfs-linux
0.8.3-1ubuntu10 in focal.

** Package changed: zsys (Ubuntu) => zfs-linux (Ubuntu)

** Changed in: zfs-linux (Ubuntu)
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1849522

Title:
  imported non-rpool/bpool zpools are not being reimported after reboot

Status in zfs-linux package in Ubuntu:
  Fix Released

Bug description:
  Installed ubuntu 19.10 onto a zfs bpool/rpool.

  Installed zsys.

  Did a "zpool import" of my existing zfs pools.

  Rebooted.

  The previously imported zpools are not imported at boot!

  I am currently using this hacky workaround:

  https://gist.github.com/satmandu/4da5e900c2c80c93da38c76537291507

  
  I would expect that local zpools I have manually imported would re-import 
when the system is rebooted.

  ProblemType: Bug
  DistroRelease: Ubuntu 19.10
  Package: zsys 0.2.2
  ProcVersionSignature: Ubuntu 5.3.0-19.20-generic 5.3.1
  Uname: Linux 5.3.0-19-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
  ApportVersion: 2.20.11-0ubuntu8
  Architecture: amd64
  Date: Wed Oct 23 11:40:36 2019
  InstallationDate: Installed on 2019-10-19 (4 days ago)
  InstallationMedia: Ubuntu 19.10 "Eoan Ermine" - Release amd64 (20191017)
  ProcEnviron:
   TERM=xterm-256color
   PATH=(custom, no user)
   LANG=en_US.UTF-8
   SHELL=/bin/bash
  SourcePackage: zsys
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1849522/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1867007] Re: zfs-initramfs fails with multiple rpool on separate disks

2020-03-31 Thread Didier Roche
Hey Balint. I just added the task after the ZFS upload (the upload was
yesterday and I added the task this morning), so indeed, there is some
work needed, part of it being in systemd.

Basically, systemd isn’t capable of mounting datasets when pool names are 
duplicated on a machine.
zfs-mount-generator generates .mount units with the pool name. systemd needs to, 
for all pools matching the desired name, either:
- prefer the pool id matching zpool.cache
- check every pool for the dataset and import the first matching one (same 
dataset path)
- or the .mount unit should be able to import by ID, and zfs-mount-generator 
upstream should generate a pool id somewhere in the unit file.
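
For reference, disambiguating duplicated pool names by GUID is already possible
from the command line; a sketch with a hypothetical GUID:

$ zpool import                                      # lists importable pools with their numeric GUIDs
$ sudo zpool import 1234567890123456789 rpool-old   # hypothetical GUID; imports that pool under a new name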

** Changed in: systemd (Ubuntu)
   Status: Incomplete => Confirmed

** Changed in: systemd (Ubuntu)
   Status: Confirmed => Triaged

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1867007

Title:
  zfs-initramfs fails with multiple rpool on separate disks

Status in grub2 package in Ubuntu:
  Triaged
Status in systemd package in Ubuntu:
  Triaged
Status in zfs-linux package in Ubuntu:
  Fix Released

Bug description:
  == Test Case ==
  1. On a multi-disk setup, install Ubuntu with ZFS on disk 1
  2. Reboot and make sure everything works as expected
  3. Do a second installation and install Ubuntu with ZFS on disk 2
  4. Reboot

  * Expected Result *
  GRUB should display all the machines available and let the user select which 
installation to boot

  * Actual result *
  - Only one machine is listed
  - initramfs crashes because there are several pools with the same name but 
different IDs, and it imports the pools by name
  - Same problem in the systemd generator which will try to import all the 
rpools.

  == Original Description ==

  I had an old Ubuntu installation that used a ZFS root, using the
  layout described in the ZFS on Linux docs. Consequently, the pool name
  for my Ubuntu installation was "rpool". I'm currently encountering an
  issue with that pool that only allows me to mount it read-only. So,
  I'd like to replicate the datasets from there to a new device.

  On the new device, I've set up a ZFS system using the Ubuntu 20.04
  daily installer (March 9, 2020). This setup creates a new pool named
  "rpool". So, with both devices inserted, I have two distinct pools
  each named "rpool", one of which will kernel panic if I try to mount
  it read-write.

  ZFS is fine with having multiple pools with the same name. In these
  cases, you use `zpool import` with the pool's GUID and give it a
  distinct pool name on import. However, the grub config for booting
  from ZFS doesn't appear to handle multiple pools with the same rpool
  name very well. Rather than using the pool's GUID, it uses the name,
  and as such, it's unable to boot properly when another pool with the
  name "rpool" is attached to the system.

  I think it'd be better if the config were written in such a way that
  `update-grub` generated boot config bound to whatever pool it found at
  the time of its invocation, and not start searching through all pools
  dynamically upon boot. Just to be clear, I have an Ubuntu 20.04 system
  with a ZFS root that boots just fine. But, the moment I attach the old
  pool, also named "rpool", I'm no longer able to boot up my system even
  though I haven't removed the good pool and I haven't re-run `update-
  grub`. Instead of booting, I'm thrown into the grub command line.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1867007/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1867007] Re: zfs-initramfs fails with multiple rpool on separate disks

2020-03-31 Thread Didier Roche
** Also affects: systemd (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1867007

Title:
  zfs-initramfs fails with multiple rpool on separate disks

Status in grub2 package in Ubuntu:
  Triaged
Status in systemd package in Ubuntu:
  New
Status in zfs-linux package in Ubuntu:
  Triaged

Bug description:
  == Test Case ==
  1. On a multi-disk setup, install Ubuntu with ZFS on disk 1
  2. Reboot and make sure everything works as expected
  3. Do a second installation and install Ubuntu with ZFS on disk 2
  4. Reboot

  * Expected Result *
  GRUB should display all the machines available and let the user select which 
installation to boot

  * Actual result *
  - Only one machine is listed
  - initramfs crashes because there are several pools with the same name but 
different IDs, and it imports the pools by name
  - Same problem in the systemd generator which will try to import all the 
rpools.

  == Original Description ==

  I had an old Ubuntu installation that used a ZFS root, using the
  layout described in the ZFS on Linux docs. Consequently, the pool name
  for my Ubuntu installation was "rpool". I'm currently encountering an
  issue with that pool that only allows me to mount it read-only. So,
  I'd like to replicate the datasets from there to a new device.

  On the new device, I've set up a ZFS system using the Ubuntu 20.04
  daily installer (March 9, 2020). This setup creates a new pool named
  "rpool". So, with both devices inserted, I have two distinct pools
  each named "rpool", one of which will kernel panic if I try to mount
  it read-write.

  ZFS is fine with having multiple pools with the same name. In these
  cases, you use `zpool import` with the pool's GUID and give it a
  distinct pool name on import. However, the grub config for booting
  from ZFS doesn't appear to handle multiple pools with the same rpool
  name very well. Rather than using the pool's GUID, it uses the name,
  and as such, it's unable to boot properly when another pool with the
  name "rpool" is attached to the system.

  I think it'd be better if the config were written in such a way that
  `update-grub` generated boot config bound to whatever pool it found at
  the time of its invocation, and not start searching through all pools
  dynamically upon boot. Just to be clear, I have an Ubuntu 20.04 system
  with a ZFS root that boots just fine. But, the moment I attach the old
  pool, also named "rpool", I'm no longer able to boot up my system even
  though I haven't removed the good pool and I haven't re-run `update-
  grub`. Instead of booting, I'm thrown into the grub command line.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1867007/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1850130] Re: zpools fail to import after reboot on fresh install of eoan

2020-03-27 Thread Didier Roche
** Also affects: grub2 (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1850130

Title:
  zpools fail to import after reboot on fresh install of eoan

Status in grub2 package in Ubuntu:
  New
Status in zfs-linux package in Ubuntu:
  Confirmed
Status in grub2 source package in Focal:
  New
Status in zfs-linux source package in Focal:
  Confirmed

Bug description:
  Fresh installation of stock Ubuntu 19.10 Eoan with experimental root on ZFS.
  System has existing zpools with data.

  Installation is uneventful. First boot with no problems. Updates
  applied. No other changes from fresh installation. Reboot.

  External pool 'tank' imports with no errors. Reboot.

  External pool has failed to import on boot. In contrast bpool and
  rpool are ok. Manually re-import 'tank' with no issues. I can see both
  'tank' and its path in /dev/disk/by-id/ in /etc/zfs/zpool.cache.
  Reboot.

  'tank' has failed to import on boot. It is also missing from
  /etc/zfs/zpool.cache. Is it possible that the cache is being re-
  generated on reboot, and the newly imported pools are getting erased
  from it? I can re-import the pools again manually with no issues, but
  they don't persist between re-boots.

  Installing normally on ext4 this is not an issue and data pools import
  automatically on boot with no further effort.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1850130/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1862776] Re: [MIR] alsa-ucm-conf & alsa-topology-conf (b-d of alsa-lib)

2020-03-11 Thread Didier Roche
$ ./change-override -c main -S alsa-ucm-conf
Override component to main
alsa-ucm-conf 1.2.2-1 in focal: universe/misc -> main
alsa-ucm-conf 1.2.2-1 in focal amd64: universe/libs/optional/100% -> main
alsa-ucm-conf 1.2.2-1 in focal arm64: universe/libs/optional/100% -> main
alsa-ucm-conf 1.2.2-1 in focal armhf: universe/libs/optional/100% -> main
alsa-ucm-conf 1.2.2-1 in focal i386: universe/libs/optional/100% -> main
alsa-ucm-conf 1.2.2-1 in focal ppc64el: universe/libs/optional/100% -> main
alsa-ucm-conf 1.2.2-1 in focal s390x: universe/libs/optional/100% -> main
Override [y|N]? y
7 publications overridden.
$ ./change-override -c main -S alsa-topology-conf
Override component to main
alsa-topology-conf 1.2.2-1 in focal: universe/misc -> main
alsa-topology-conf 1.2.2-1 in focal amd64: universe/libs/optional/100% -> main
alsa-topology-conf 1.2.2-1 in focal arm64: universe/libs/optional/100% -> main
alsa-topology-conf 1.2.2-1 in focal armhf: universe/libs/optional/100% -> main
alsa-topology-conf 1.2.2-1 in focal i386: universe/libs/optional/100% -> main
alsa-topology-conf 1.2.2-1 in focal ppc64el: universe/libs/optional/100% -> main
alsa-topology-conf 1.2.2-1 in focal s390x: universe/libs/optional/100% -> main
Override [y|N]? y
7 publications overridden.


** Changed in: alsa-topology-conf (Ubuntu)
   Status: New => Fix Released

** Changed in: alsa-ucm-conf (Ubuntu)
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to alsa-topology-conf in Ubuntu.
https://bugs.launchpad.net/bugs/1862776

Title:
  [MIR] alsa-ucm-conf & alsa-topology-conf (b-d of alsa-lib)

Status in alsa-topology-conf package in Ubuntu:
  Fix Released
Status in alsa-ucm-conf package in Ubuntu:
  Fix Released

Bug description:
  * alsa-ucm-conf

  = Availability =
  Built for all supported architectures as it's an arch all binary.
  In sync with Debian.
  https://launchpad.net/ubuntu/+source/alsa-ucm-conf/1.2.1.2-2

  = Rationale =
  It's providing data useful to alsa to know how to handle hardware

  = Security =
  No known CVEs.

  https://security-tracker.debian.org/tracker/source-package/alsa-ucm-conf
  https://launchpad.net/ubuntu/+source/alsa-ucm-conf/+cve

  = Quality assurance =
  - Kernel Packages is subscribed to the ubuntu source
  - no tests, the package provides data file only

  https://bugs.launchpad.net/ubuntu/+source/alsa-ucm-conf
  https://bugs.debian.org/cgi-bin/pkgreport.cgi?src=alsa-ucm-conf

  = Dependencies =
  No universe binary dependencies, the package has no depends

  = Standards compliance =
  standard dh12 packaging

  = Maintenance =
  Maintained with alsa upstream and in Debian

  * alsa-topology-conf

  = Availability =
  Built for all supported architectures as it's an arch all binary.
  In sync with Debian.
  https://launchpad.net/ubuntu/+source/alsa-topology-conf/1.2.1-2

  = Rationale =
  It's providing data useful to alsa to know how to handle hardware

  = Security =
  No known CVEs.

  https://security-tracker.debian.org/tracker/source-package/alsa-topology-conf
  https://launchpad.net/ubuntu/+source/alsa-topology-conf/+cve

  = Quality assurance =
  - Kernel Packages is subscribed to the ubuntu source
  - no tests, the package provides data file only

  https://bugs.launchpad.net/ubuntu/+source/alsa-topology-conf
  https://bugs.debian.org/cgi-bin/pkgreport.cgi?src=alsa-topology-conf

  = Dependencies =
  No universe binary dependencies, the package has no depends

  = Standards compliance =
  standard dh12 packaging

  = Maintenance =
  Maintained with alsa upstream and in Debian

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/alsa-topology-conf/+bug/1862776/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1862776] Re: [MIR] alsa-ucm-conf & alsa-topology-conf (b-d of alsa-lib)

2020-03-11 Thread Didier Roche
Ack on both. Simple configuration files, simple packaging and build
system. All good +1

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to alsa-topology-conf in Ubuntu.
https://bugs.launchpad.net/bugs/1862776

Title:
  [MIR] alsa-ucm-conf & alsa-topology-conf (b-d of alsa-lib)

Status in alsa-topology-conf package in Ubuntu:
  Fix Released
Status in alsa-ucm-conf package in Ubuntu:
  Fix Released

Bug description:
  * alsa-ucm-conf

  = Availability =
  Built for all supported architectures as it's an arch all binary.
  In sync with Debian.
  https://launchpad.net/ubuntu/+source/alsa-ucm-conf/1.2.1.2-2

  = Rationale =
  It's providing data useful to alsa to know how to handle hardware

  = Security =
  No known CVEs.

  https://security-tracker.debian.org/tracker/source-package/alsa-ucm-conf
  https://launchpad.net/ubuntu/+source/alsa-ucm-conf/+cve

  = Quality assurance =
  - Kernel Packages is subscribed to the ubuntu source
  - no tests, the package provides data file only

  https://bugs.launchpad.net/ubuntu/+source/alsa-ucm-conf
  https://bugs.debian.org/cgi-bin/pkgreport.cgi?src=alsa-ucm-conf

  = Dependencies =
  No universe binary dependencies, the package has no depends

  = Standards compliance =
  standard dh12 packaging

  = Maintenance =
  Maintained with alsa upstream and in Debian

  * alsa-topology-conf

  = Availability =
  Built for all supported architectures as it's an arch all binary.
  In sync with Debian.
  https://launchpad.net/ubuntu/+source/alsa-topology-conf/1.2.1-2

  = Rationale =
  It's providing data useful to alsa to know how to handle hardware

  = Security =
  No known CVEs.

  https://security-tracker.debian.org/tracker/source-package/alsa-topology-conf
  https://launchpad.net/ubuntu/+source/alsa-topology-conf/+cve

  = Quality assurance =
  - Kernel Packages is subscribed to the ubuntu source
  - no tests, the package provides data file only

  https://bugs.launchpad.net/ubuntu/+source/alsa-topology-conf
  https://bugs.debian.org/cgi-bin/pkgreport.cgi?src=alsa-topology-conf

  = Dependencies =
  No universe binary dependencies, the package has no depends

  = Standards compliance =
  standard dh12 packaging

  = Maintenance =
  Maintained with alsa upstream and in Debian

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/alsa-topology-conf/+bug/1862776/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1857398] Re: ubiquity should support encryption by default with zfsroot, with users able to opt in to running change-key after install

2020-01-06 Thread Didier Roche
One last thing: I think we should test this on rotational disks and
assess the performance impact before pushing it as a default. This will
give us a good baseline to decide if this should be pushed or if we need
to add even more warnings on the ZFS install option.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1857398

Title:
  ubiquity should support encryption by default with zfsroot, with users
  able to opt in to running change-key after install

Status in ubiquity package in Ubuntu:
  New
Status in zfs-linux package in Ubuntu:
  New

Bug description:
  zfs supports built-in encryption support, but the decision of whether
  a pool is encrypted or not must be made at pool creation time; it is
  possible to add encrypted datasets on top of an unencrypted pool but
  it is not possible to do an online change of a dataset (or a whole
  pool) to toggle encryption.

  We should therefore always install with encryption enabled on zfs
  systems, with a non-secret key by default, and allow the user to use
  'zfs change-key -o keylocation=prompt' after install to take ownership
  of the encryption and upgrade the security.

  This is also the simplest way to allow users to avoid having to choose
  between the security of full-disk encryption, and the advanced
  filesystem features of zfs since it requires no additional UX work in
  ubiquity.

  We should make sure that
  https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1857040 is fixed
  first in the kernel so that enabling zfs encryption does not impose an
  unreasonable performance penalty.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ubiquity/+bug/1857398/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1857398] Re: ubiquity should support encryption by default with zfsroot, with users able to opt in to running change-key after install

2020-01-06 Thread Didier Roche
Thanks Richard for digging in, for the performance comparison and for the 
valuable upstream feedback and pointers.
Good catch about retrieving the master key written in old blocks with the 
previous (fixed) passphrase even if it is changed later on. It seems that trimming 
could help. Do you think that we should build on work going in that direction 
(overwriting old keys) and keep the current approach?

On a more general side, the approach seems to be forward-compatible with
per user dataset encryption (zfs change-key ), which creates a
new encryption root.
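
For context, the opt-in step from the bug description would look roughly like
this after install (pool name rpool assumed; depending on the initial key
format, -o keyformat=passphrase may also be needed):

$ sudo zfs change-key -o keylocation=prompt rpool
(enter and confirm the new passphrase; it will then be requested at boot)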

Steve:
* the only comment I have on the ubiquity part of the equation is based on 
Richard's feedback. Otherwise, looks good to me. I think we should wait on the 
above feedback before taking a final decision on the approach though.
* the zfs-linux initramfs POC looks good (not tested though, currently 
travelling, but I didn't spot any issues). It should be easily pluggable later 
on once the user sets it to "prompt" with their own passphrase and uses the 
plymouth prompt codepath. (Not tested yet either.)
Just a nitpick: Colin asked for our patches in zfs-linux to be numbered (hence 
the 4XXX- namespace), like the Debian ones. It seems that this is not reliably 
the case since the 0.8.2 merge with Debian, so it needs double-checking (the 
ordering also seems messy after this merge right now).

Anyway, we need to wait on the kernel patches first.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1857398

Title:
  ubiquity should support encryption by default with zfsroot, with users
  able to opt in to running change-key after install

Status in ubiquity package in Ubuntu:
  New
Status in zfs-linux package in Ubuntu:
  New

Bug description:
  zfs supports built-in encryption support, but the decision of whether
  a pool is encrypted or not must be made at pool creation time; it is
  possible to add encrypted datasets on top of an unencrypted pool but
  it is not possible to do an online change of a dataset (or a whole
  pool) to toggle encryption.

  We should therefore always install with encryption enabled on zfs
  systems, with a non-secret key by default, and allow the user to use
  'zfs change-key -o keylocation=prompt' after install to take ownership
  of the encryption and upgrade the security.

  This is also the simplest way to allow users to avoid having to choose
  between the security of full-disk encryption, and the advanced
  filesystem features of zfs since it requires no additional UX work in
  ubiquity.

  We should make sure that
  https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1857040 is fixed
  first in the kernel so that enabling zfs encryption does not impose an
  unreasonable performance penalty.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ubiquity/+bug/1857398/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1847711] Re: Move zsys tags from org.zsys to com.ubuntu.zsys

2019-10-11 Thread Didier Roche
** Also affects: grub2 (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: grubzfs-testsuite (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: grub2 (Ubuntu)
   Status: New => Fix Committed

** Changed in: grubzfs-testsuite (Ubuntu)
   Status: New => Fix Committed

** Changed in: ubiquity (Ubuntu)
   Status: New => Fix Committed

** Changed in: zfs-linux (Ubuntu)
   Status: New => Fix Committed

** Changed in: zsys (Ubuntu)
   Status: New => Fix Committed

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1847711

Title:
  Move zsys tags from org.zsys to com.ubuntu.zsys

Status in grub2 package in Ubuntu:
  Fix Committed
Status in grubzfs-testsuite package in Ubuntu:
  Fix Committed
Status in ubiquity package in Ubuntu:
  Fix Committed
Status in zfs-linux package in Ubuntu:
  Fix Committed
Status in zsys package in Ubuntu:
  Fix Committed

Bug description:
  As we are not going to own org.zsys in the end, move our identifier
  tags to com.ubuntu.zsys.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1847711/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1847711] [NEW] Move zsys tags from org.zsys to com.ubuntu.zsys

2019-10-11 Thread Didier Roche
Public bug reported:

As we are not going to own org.zsys in the end, move our identifier tags
to com.ubuntu.zsys.

** Affects: ubiquity (Ubuntu)
 Importance: Undecided
 Status: New

** Affects: zfs-linux (Ubuntu)
 Importance: Undecided
 Status: New

** Affects: zsys (Ubuntu)
 Importance: Undecided
 Status: New

** Also affects: ubiquity (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: zfs-linux (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1847711

Title:
  Move zsys tags from org.zsys to com.ubuntu.zsys

Status in ubiquity package in Ubuntu:
  New
Status in zfs-linux package in Ubuntu:
  New
Status in zsys package in Ubuntu:
  New

Bug description:
  As we are not going to own org.zsys in the end, move our identifier
  tags to com.ubuntu.zsys.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ubiquity/+bug/1847711/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1847389] Re: Confusing zpool status in Ubuntu 19.10 installed onZFS

2019-10-09 Thread Didier Roche
We wrote on another bug report, BertN45, to not upgrade your bpool. Only
power users will use the zpool status command, and we expect them to
know the implications.

I think I'll retarget this bug to preventing bpool upgrades.

** Summary changed:

- Confusing zpool status in Ubuntu 19.10 installed onZFS
+ Prevent bpool (or pools with /BOOT/) to be upgraded

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1847389

Title:
  Prevent bpool (or pools with /BOOT/) to be upgraded

Status in zfs-linux package in Ubuntu:
  New

Bug description:
  The bpool status is confusing. Should I upgrade the pool, or is it on
  purpose that the bpool is like this? I do not like to see this warning
  after installing the system on ZFS from scratch.

  See screenshot

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1847389/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1845606] [NEW] Race between empty cache file and fstab containing /boot/grub

2019-09-27 Thread Didier Roche
Public bug reported:

There is a race between an empty cache file for the mount generator and an fstab 
which contains /boot/grub.
With zfs on root, the generator is the only solution to avoid races. However, in 
0.8 it misses cache invalidation (when rolling back or booting on other 
datasets).

This is a workaround for the first boot with empty files, to ensure we
initialize it properly.
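
For illustration only (not necessarily the shipped workaround), an empty cache
file can be repopulated by resetting the cachefile property on the imported
pools:

$ sudo zpool set cachefile=/etc/zfs/zpool.cache rpool
$ sudo zpool set cachefile=/etc/zfs/zpool.cache bpool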

** Affects: zfs-linux (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1845606

Title:
  Race between empty cache file and fstab containing /boot/grub

Status in zfs-linux package in Ubuntu:
  New

Bug description:
  There is a race between an empty cache file for the mount generator and an 
fstab which contains /boot/grub.
  With zfs on root, the generator is the only solution to avoid races. However, 
in 0.8 it misses cache invalidation (when rolling back or booting on other 
datasets).

  This is a workaround for the first boot with empty files to ensure we
  initialize it properly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1845606/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1838278] Re: zfs-initramfs wont mount rpool

2019-09-25 Thread Didier Roche
Marking the zfs-linux task as won't fix after looking more deeply at the 
causes/consequences of forcing -f on every boot:
- zfs 0.8, as mentioned previously, tags the pool with the system it was 
associated with and refuses to import a previously unexported pool, as it can 
still be attached to another system (possibly running).
- there is a kernel option zfs.force=on (or _, '') which can be set to on/yes/1 
to force the import in the initramfs.

This is seen upstream as a way to force the import on broken systems, where the 
pool has been imported but not exported before reboot.
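
For illustration only (the exact option spelling should be checked against the
shipped initramfs script), such a flag would go on the kernel command line,
e.g. through GRUB:

# in /etc/default/grub, using the assumed zfs_force spelling:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash zfs_force=yes"
$ sudo update-grub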

Note that this broken case only impacts the 2 following scenarios:
- you install a new system (so system id != final id) and then reboot to your 
newly installed system. This is the curtin (and ubiquity) case. I think it's 
fine to require them to properly export the pools before rebooting (which will 
cause a sync).
- you have 2 systems installed in parallel on the same pool and, while 
switching between the 2 systems, the export didn't work on shutdown. It remains 
to be seen how frequent this is, and having zfs marked as experimental for this 
cycle sounds like a good fit to gather that data.

Marking the zfs task as won't fix for now.

** Changed in: zfs-linux (Ubuntu)
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1838278

Title:
  zfs-initramfs wont mount rpool

Status in curtin package in Ubuntu:
  In Progress
Status in zfs-linux package in Ubuntu:
  Won't Fix

Bug description:
  1. Eoan

  2. http://archive.ubuntu.com/ubuntu eoan/main amd64 zfs-initramfs
  amd64 0.8.1-1ubuntu7 [23.1 kB]

  3. ZFS rootfs rpool is mounted at boot

  4. Booting an image with a rootfs rpool:

  [0.00] Linux version 5.2.0-8-generic (buildd@lgw01-amd64-015) (gcc 
version 9.1.0 (Ubuntu 9.1.0-6ubuntu2)) #9-Ubuntu SMP Mon Jul 8 13:07:27 UTC 
2019 (Ubuntu 5.2.0-8.9-generic 5.2.0)
  [0.00] Command line: 
BOOT_IMAGE=/ROOT/zfsroot@/boot/vmlinuz-5.2.0-8-generic 
root=ZFS=rpool/ROOT/zfsroot ro console=ttyS0

  
  Command: /sbin/zpool import -N   'rpool'
  Message: cannot import 'rpool': pool was previously in use from another 
system.
  Last accessed by ubuntu (hostid=d24775ba) at Mon Jul 29 05:21:19 2019
  The pool can be imported, use 'zpool import -f' to import the pool.
  Error: 1

  Failed to import pool 'rpool'.
  Manually import the pool and exit.

  
  Note, this works fine under Disco, 

  http://archive.ubuntu.com/ubuntu disco/main amd64 zfs-initramfs amd64
  0.7.12-1ubuntu5 [22.2 kB]

  [4.773077] spl: loading out-of-tree module taints kernel.  
  [4.777256] SPL: Loaded module v0.7.12-1ubuntu3
  [4.779433] znvpair: module license 'CDDL' taints kernel.
  [4.780333] Disabling lock debugging due to kernel taint
  [5.713830] ZFS: Loaded module v0.7.12-1ubuntu5, ZFS pool version 5000, 
ZFS filesystem version 5
  Begin: Sleeping for ... done.
  Begin: Importing ZFS root pool 'rpool' ... Begin: Importing pool 'rpool' 
using defaults ... done.
  Begin: Mounting 'rpool/ROOT/zfsroot' on '/root//' ... done.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/curtin/+bug/1838278/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1845298] Re: Potential race with systemd if /var/lib is an independent persistent unit

2019-09-25 Thread Didier Roche
** Also affects: zfs
   Importance: Undecided
   Status: New

** Description changed:

-  If /var/lib is a dataset not under /ROOT/, as proposed
-  in the ubuntu root on zfs upstream guide
-  (https://github.com/zfsonlinux/zfs/wiki/Ubuntu-18.04-Root-on-ZFS), we end up
-  with a race where some services, like systemd-random-seed are writing under
-  /var/lib, while zfs-mount is called. zfs mount will then potentially fail
-  because of /var/lib isn't empty, and so, can't be mounted.
-  Order those 2 units for now (more may be needed) as we can't declare
-  virtually a provide mount point to match
-  "RequiresMountsFor=/var/lib/systemd/random-seed" from
-  systemd-random-seed.service.
-  The optional generator for zfs 0.8 fixes it, but it's not enabled by default
-  nor necessarily required.
-  Example:
-  - rpool/ROOT/ubuntu (mountpoint = /)
-  - rpool/var/ (mountpoint = /var)
-  - rpool/var/lib  (mountpoint = /var/lib)
+ If /var/lib is a dataset not under /ROOT/, as proposed
+  in the ubuntu root on zfs upstream guide
+  (https://github.com/zfsonlinux/zfs/wiki/Ubuntu-18.04-Root-on-ZFS), we end up
+  with a race where some services, like systemd-random-seed are writing under
+  /var/lib, while zfs-mount is called. zfs mount will then potentially fail
+  because of /var/lib isn't empty, and so, can't be mounted.
+  Order those 2 units for now (more may be needed) as we can't declare
+  virtually a provide mount point to match
+  "RequiresMountsFor=/var/lib/systemd/random-seed" from
+  systemd-random-seed.service.
+  The optional generator for zfs 0.8 fixes it, but it's not enabled by default
+  nor necessarily required.
+  Example:
+  - rpool/ROOT/ubuntu (mountpoint = /)
+  - rpool/var/ (mountpoint = /var)
+  - rpool/var/lib  (mountpoint = /var/lib)
  
  Both zfs-mount.service and systemd-random-seed.service are starting
  After=systemd-remount-fs.service. zfs-mount.service should be done
  before local-fs.target while systemd-random-seed.service should finish
  before sysinit.target (which is a later target).
  
  Ideally, we would have a way for zfs mount -a unit to declare all paths
  or move systemd-random-seed after local-fs.target.
+ 
+ Upstream PR: https://github.com/zfsonlinux/zfs/pull/9360

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1845298

Title:
  Potential race with systemd if /var/lib is an independent persistent
  unit

Status in Native ZFS for Linux:
  New
Status in zfs-linux package in Ubuntu:
  New

Bug description:
  If /var/lib is a dataset not under /ROOT/, as proposed
   in the ubuntu root on zfs upstream guide
   (https://github.com/zfsonlinux/zfs/wiki/Ubuntu-18.04-Root-on-ZFS), we end up
   with a race where some services, like systemd-random-seed are writing under
   /var/lib, while zfs-mount is called. zfs mount will then potentially fail
   because of /var/lib isn't empty, and so, can't be mounted.
   Order those 2 units for now (more may be needed) as we can't declare
   virtually a provide mount point to match
   "RequiresMountsFor=/var/lib/systemd/random-seed" from
   systemd-random-seed.service.
   The optional generator for zfs 0.8 fixes it, but it's not enabled by default
   nor necessarily required.
   Example:
   - rpool/ROOT/ubuntu (mountpoint = /)
   - rpool/var/ (mountpoint = /var)
   - rpool/var/lib  (mountpoint = /var/lib)

  Both zfs-mount.service and systemd-random-seed.service are starting
  After=systemd-remount-fs.service. zfs-mount.service should be done
  before local-fs.target while systemd-random-seed.service should finish
  before sysinit.target (which is a later target).

  Ideally, we would have a way for zfs mount -a unit to declare all
  paths or move systemd-random-seed after local-fs.target.

  Upstream PR: https://github.com/zfsonlinux/zfs/pull/9360

To manage notifications about this bug go to:
https://bugs.launchpad.net/zfs/+bug/1845298/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1845298] [NEW] Potential race with systemd if /var/lib is an independent persistent unit

2019-09-25 Thread Didier Roche
Public bug reported:

 If /var/lib is a dataset not under /ROOT/, as proposed
 in the ubuntu root on zfs upstream guide
 (https://github.com/zfsonlinux/zfs/wiki/Ubuntu-18.04-Root-on-ZFS), we end up
 with a race where some services, like systemd-random-seed are writing under
 /var/lib, while zfs-mount is called. zfs mount will then potentially fail
 because /var/lib isn't empty, and so can't be mounted.
 Order those 2 units for now (more may be needed), as we can't virtually
 declare a provided mount point to match
 "RequiresMountsFor=/var/lib/systemd/random-seed" from
 systemd-random-seed.service.
 The optional generator for zfs 0.8 fixes it, but it's not enabled by default
 nor necessarily required.
 Example:
 - rpool/ROOT/ubuntu (mountpoint = /)
 - rpool/var/ (mountpoint = /var)
 - rpool/var/lib  (mountpoint = /var/lib)

Both zfs-mount.service and systemd-random-seed.service are starting
After=systemd-remount-fs.service. zfs-mount.service should be done
before local-fs.target while systemd-random-seed.service should finish
before sysinit.target (which is a later target).

Ideally, we would have a way for zfs mount -a unit to declare all paths
or move systemd-random-seed after local-fs.target.
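
A minimal sketch of ordering the two units locally with a drop-in (an
illustrative workaround, not the upstream fix):

$ sudo mkdir -p /etc/systemd/system/systemd-random-seed.service.d
$ printf '[Unit]\nAfter=zfs-mount.service\n' | \
    sudo tee /etc/systemd/system/systemd-random-seed.service.d/zfs-order.conf
$ sudo systemctl daemon-reload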

** Affects: zfs-linux (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1845298

Title:
  Potential race with systemd if /var/lib is an independent persistent
  unit

Status in zfs-linux package in Ubuntu:
  New

Bug description:
   If /var/lib is a dataset not under /ROOT/, as proposed
   in the ubuntu root on zfs upstream guide
   (https://github.com/zfsonlinux/zfs/wiki/Ubuntu-18.04-Root-on-ZFS), we end up
   with a race where some services, like systemd-random-seed are writing under
   /var/lib, while zfs-mount is called. zfs mount will then potentially fail
   because of /var/lib isn't empty, and so, can't be mounted.
   Order those 2 units for now (more may be needed) as we can't declare
   virtually a provide mount point to match
   "RequiresMountsFor=/var/lib/systemd/random-seed" from
   systemd-random-seed.service.
   The optional generator for zfs 0.8 fixes it, but it's not enabled by default
   nor necessarily required.
   Example:
   - rpool/ROOT/ubuntu (mountpoint = /)
   - rpool/var/ (mountpoint = /var)
   - rpool/var/lib  (mountpoint = /var/lib)

  Both zfs-mount.service and systemd-random-seed.service are starting
  After=systemd-remount-fs.service. zfs-mount.service should be done
  before local-fs.target while systemd-random-seed.service should finish
  before sysinit.target (which is a later target).

  Ideally, we would have a way for zfs mount -a unit to declare all
  paths or move systemd-random-seed after local-fs.target.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1845298/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1843222] Re: Old Linux version booted after upgrade to Ubuntu 19.10

2019-09-23 Thread Didier Roche
Can you try to reproduce an upgrade without your 40_custom file?

I don't think your manual pool upgrade has any link to this.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1843222

Title:
  Old Linux version booted after upgrade to Ubuntu 19.10

Status in zfs-linux package in Ubuntu:
  New

Bug description:
  I have a triple boot with the following systems:
  - Xubuntu 19.10 from zfs 0.8.1
  - Ubuntu Mate 18.04.3 from zfs 0.7.12
  - Ubuntu 19.10 from ext4

  Upgrading the first system Xubuntu to 19.10 worked fine and I was very happy 
with the almost perfect result and the nice grub-menu.
  Upgrading to Ubuntu 19.10 created the following problems:
  - That system booted after the upgrade to Ubuntu 19.10 in Linux 5.0 with zfs 
0.7.12
  - All grub entries with the ZFS systems disappeared and the whole nice 
grub-menu was gone.

  Running update-grub and grub-install, I did see the Linux 5.2 version
  appear, but it still booted 5.0.

  There were some error messages about mounting/importing during the zfs
  part of the upgrade, but they were the same as the ones during the
  Xubuntu upgrade and that upgrade worked perfectly.

  ProblemType: Bug
  DistroRelease: Ubuntu 19.10
  Package: zfsutils-linux 0.8.1-1ubuntu11
  ProcVersionSignature: Ubuntu 5.0.0-27.28-generic 5.0.21
  Uname: Linux 5.0.0-27-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
  ApportVersion: 2.20.11-0ubuntu7
  Architecture: amd64
  CurrentDesktop: ubuntu:GNOME
  Date: Mon Sep  9 01:37:21 2019
  InstallationDate: Installed on 2019-03-10 (183 days ago)
  InstallationMedia: Ubuntu 18.04.2 LTS "Bionic Beaver" - Release amd64 
(20190210)
  SourcePackage: zfs-linux
  UpgradeStatus: Upgraded to eoan on 2019-09-09 (0 days ago)
  modified.conffile..etc.sudoers.d.zfs: [inaccessible: [Errno 13] Permission 
denied: '/etc/sudoers.d/zfs']

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1843222/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1838278] Re: zfs-initramfs wont mount rpool

2019-09-17 Thread Didier Roche
The issue seems to be related to a change in the ZFS 0.8 initramfs script.

The initramfs script for ZFS does a normal ZFS import.
ZFS import now requires a pool to be exported before it can be imported again
on a different system. This is a safety feature to ensure the same pool isn't
imported on two different systems at the same time.

I guess what happens is that the way you are installing the pool doesn't
export it before the reboot; then you reboot into the newly installed system
(which has a different hostid), and so the zfs import fails in the initramfs.

We were wondering if we should use import -f in the initramfs to force the
import; that's a question for Colin K., I think.
At least, we should figure out what didn't export the pool properly.
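
A hedged illustration of the two behaviours discussed above (the pool name
comes from the report, everything else is an assumption):

# exporting the pool before rebooting into the new system avoids the check
zpool export rpool

# without the export, the plain import in the initramfs fails and needs a force
zpool import -N rpool      # cannot import 'rpool': pool was previously in use ...
zpool import -f -N rpool   # forces the import despite the foreign hostid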

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1838278

Title:
  zfs-initramfs wont mount rpool

Status in zfs-linux package in Ubuntu:
  New

Bug description:
  1. Eoan

  2. http://archive.ubuntu.com/ubuntu eoan/main amd64 zfs-initramfs
  amd64 0.8.1-1ubuntu7 [23.1 kB]

  3. ZFS rootfs rpool is mounted at boot

  4. Booting an image with a rootfs rpool:

  [0.00] Linux version 5.2.0-8-generic (buildd@lgw01-amd64-015) (gcc 
version 9.1.0 (Ubuntu 9.1.0-6ubuntu2)) #9-Ubuntu SMP Mon Jul 8 13:07:27 UTC 
2019 (Ubuntu 5.2.0-8.9-generic 5.2.0)
  [0.00] Command line: 
BOOT_IMAGE=/ROOT/zfsroot@/boot/vmlinuz-5.2.0-8-generic 
root=ZFS=rpool/ROOT/zfsroot ro console=ttyS0

  
  Command: /sbin/zpool import -N   'rpool'
  Message: cannot import 'rpool': pool was previously in use from another 
system.
  Last accessed by ubuntu (hostid=d24775ba) at Mon Jul 29 05:21:19 2019
  The pool can be imported, use 'zpool import -f' to import the pool.
  Error: 1

  Failed to import pool 'rpool'.
  Manually import the pool and exit.

  
  Note, this works fine under Disco, 

  http://archive.ubuntu.com/ubuntu disco/main amd64 zfs-initramfs amd64
  0.7.12-1ubuntu5 [22.2 kB]

  [4.773077] spl: loading out-of-tree module taints kernel.  
  [4.777256] SPL: Loaded module v0.7.12-1ubuntu3
  [4.779433] znvpair: module license 'CDDL' taints kernel.
  [4.780333] Disabling lock debugging due to kernel taint
  [5.713830] ZFS: Loaded module v0.7.12-1ubuntu5, ZFS pool version 5000, 
ZFS filesystem version 5
  Begin: Sleeping for ... done.
  Begin: Importing ZFS root pool 'rpool' ... Begin: Importing pool 'rpool' 
using defaults ... done.
  Begin: Mounting 'rpool/ROOT/zfsroot' on '/root//' ... done.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1838278/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1843222] Re: Old Linux version booted after upgrade to Ubuntu 19.10

2019-09-10 Thread Didier Roche
The upgrade failed; in your logs, you have:
"/etc/kernel/postinst.d/initramfs-tools:
update-initramfs: Generating /boot/initrd.img-5.2.0-15-generic
I: The initramfs will attempt to resume from /dev/sda2
I: (UUID=bf02ddd4-8d65-40d6-ab24-4fc8a5673dc6)
I: Set the RESUME variable to override this.
/etc/kernel/postinst.d/zz-update-grub:
Sourcing file `/etc/default/grub'
Sourcing file `/etc/default/grub.d/init-select.cfg'
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-5.2.0-15-generic
Found initrd image: /boot/initrd.img-5.2.0-15-generic
Found linux image: /boot/vmlinuz-5.0.0-27-generic
Found initrd image: /boot/initrd.img-5.0.0-27-generic
Found linux image: /boot/vmlinuz-5.0.0-25-generic
Found initrd image: /boot/initrd.img-5.0.0-25-generic
cannot open 'This': no such pool
run-parts: /etc/kernel/postinst.d/zz-update-grub exited with return code 1
dpkg: error processing package linux-image-5.2.0-15-generic (--configure):
 installed linux-image-5.2.0-15-generic package post-installation script 
subprocess returned error exit status 1
Processing triggers for dbus (1.12.14-1ubuntu2) ...
Processing triggers for initramfs-tools (0.133ubuntu10) ...
update-initramfs: Generating /boot/initrd.img-5.2.0-15-generic
I: The initramfs will attempt to resume from /dev/sda2
I: (UUID=bf02ddd4-8d65-40d6-ab24-4fc8a5673dc6)
I: Set the RESUME variable to override this.
Processing triggers for libgdk-pixbuf2.0-0:amd64 (2.39.2-3) ...
Processing triggers for rygel (0.38.1-2ubuntu2) ...
Errors were encountered while processing:
 zfsutils-linux
 zfs-initramfs
 friendly-recovery
 zfs-zed
 grub-pc
 linux-image-5.2.0-15-generic
Log ended: 2019-09-09  01:10:12"

See the "cannot open 'This': no such pool" while updating grub. This is
why you have kernel 5.0 after it and don't have latest zfs after
upgrade.

You mentionned multiple times a "custom" grub script, does it have
"This" somewhere? We don't have any "This" in the distribution script.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1843222

Title:
  Old Linux version booted after upgrade to Ubuntu 19.10

Status in zfs-linux package in Ubuntu:
  New

Bug description:
  I have a triple boot with the following systems:
  - Xubuntu 19.10 from zfs 0.8.1
  - Ubuntu Mate 18.04.3 from zfs 0.7.12
  - Ubuntu 19.10 from ext4

  Upgrading the first system Xubuntu to 19.10 worked fine and I was very happy 
with the almost perfect result and the nice grub-menu.
  Upgrading to Ubuntu 19.10 created the following problems:
  - That system booted after the upgrade to Ubuntu 19.10 in Linux 5.0 with zfs 
0.7.12
  - All grub entries with the ZFS systems disappeared and the whole nice 
grub-menu was gone.

  Running update-grub and grub install; I did see the Linux 5.2 version
  appear, but it still booted from 5.0.

  There were some error messages about mounting/importing during the zfs
  part of the upgrade, but they were the same as the ones during the
  Xubuntu upgrade and that upgrade worked perfectly.

  ProblemType: Bug
  DistroRelease: Ubuntu 19.10
  Package: zfsutils-linux 0.8.1-1ubuntu11
  ProcVersionSignature: Ubuntu 5.0.0-27.28-generic 5.0.21
  Uname: Linux 5.0.0-27-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
  ApportVersion: 2.20.11-0ubuntu7
  Architecture: amd64
  CurrentDesktop: ubuntu:GNOME
  Date: Mon Sep  9 01:37:21 2019
  InstallationDate: Installed on 2019-03-10 (183 days ago)
  InstallationMedia: Ubuntu 18.04.2 LTS "Bionic Beaver" - Release amd64 
(20190210)
  SourcePackage: zfs-linux
  UpgradeStatus: Upgraded to eoan on 2019-09-09 (0 days ago)
  modified.conffile..etc.sudoers.d.zfs: [inaccessible: [Errno 13] Permission 
denied: '/etc/sudoers.d/zfs']

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1843222/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1843222] Re: Old Linux version booted after upgrade to Ubuntu 19.10

2019-09-09 Thread Didier Roche
Thanks for testing the upgrade to 19.10. The only reason you would boot
with an older kernel and an older zfs is that your upgrade from 19.04 to
19.10 failed. Can you share the upgrade logs, i.e. the content of
/var/log/upgrade and /var/log/dist-upgrade?

The weird part is that do-release-upgrade would have warned you about a
failed upgrade.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1843222

Title:
  Old Linux version booted after upgrade to Ubuntu 19.10

Status in zfs-linux package in Ubuntu:
  New

Bug description:
  I have a triple boot with the following systems:
  - Xubuntu 19.10 from zfs 0.8.1
  - Ubuntu Mate 18.04.3 from zfs 0.7.12
  - Ubuntu 19.10 from ext4

  Upgrading the first system Xubuntu to 19.10 worked fine and I was very happy 
with the almost perfect result and the nice grub-menu.
  Upgrading to Ubuntu 19.10 created the following problems:
  - That system booted after the upgrade to Ubuntu 19.10 in Linux 5.0 with zfs 
0.7.12
  - All grub entries with the ZFS systems disappeared and the whole nice 
grub-menu was gone.

  Running update-grub and grub install; I did see the Linux 5.2 version
  appear, but it still booted from 5.0.

  There were some error messages about mounting/importing during the zfs
  part of the upgrade, but they were the same as the ones during the
  Xubuntu upgrade and that upgrade worked perfectly.

  ProblemType: Bug
  DistroRelease: Ubuntu 19.10
  Package: zfsutils-linux 0.8.1-1ubuntu11
  ProcVersionSignature: Ubuntu 5.0.0-27.28-generic 5.0.21
  Uname: Linux 5.0.0-27-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
  ApportVersion: 2.20.11-0ubuntu7
  Architecture: amd64
  CurrentDesktop: ubuntu:GNOME
  Date: Mon Sep  9 01:37:21 2019
  InstallationDate: Installed on 2019-03-10 (183 days ago)
  InstallationMedia: Ubuntu 18.04.2 LTS "Bionic Beaver" - Release amd64 
(20190210)
  SourcePackage: zfs-linux
  UpgradeStatus: Upgraded to eoan on 2019-09-09 (0 days ago)
  modified.conffile..etc.sudoers.d.zfs: [inaccessible: [Errno 13] Permission 
denied: '/etc/sudoers.d/zfs']

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1843222/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1837717] Re: [0.8 regression] zfs mount -a dataset mount ordering issues

2019-07-24 Thread Didier Roche
** Bug watch added: Github Issue Tracker for ZFS #8833
   https://github.com/zfsonlinux/zfs/issues/8833

** Also affects: zfs via
   https://github.com/zfsonlinux/zfs/issues/8833
   Importance: Unknown
   Status: Unknown

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1837717

Title:
  [0.8 regression] zfs mount -a dataset mount ordering issues

Status in Native ZFS for Linux:
  Unknown
Status in zfs-linux package in Ubuntu:
  New

Bug description:
  Update: I was able to reproduce it with a simpler schema (/ isn't always
mounted before /var). This is meant to mimic the official ZoL guide for ZFS on
root: https://github.com/zfsonlinux/zfs/wiki/Ubuntu-18.04-Root-on-ZFS
  $ zpool create -o ashift=12 -O atime=off -O canmount=off -O 
normalization=formD -O mountpoint=/ -R /mnt rpool /dev/vda2
  $ zfs create rpool/ROOT -o canmount=off -o mountpoint=none
  $ zfs create rpool/ROOT/ubuntu_123456  -o mountpoint=/
  $ zfs create rpool/ROOT/ubuntu_123456/var

  $ zpool create -o ashift=12 -O atime=off -O canmount=off -O 
normalization=formD -O mountpoint=/ -R /mnt rpool /dev/vda2
  $ zfs create rpool/ROOT -o canmount=off -o mountpoint=none
  $ zfs create rpool/ROOT/ubuntu_123456  -o mountpoint=/
  $ zfs create rpool/ROOT/ubuntu_123456/var
  $ zfs create rpool/ROOT/ubuntu_123456/var/lib -o canmount=off
  $ zfs create rpool/var -o canmount=off
  $ zfs create rpool/var/lib
  $ zfs create rpool/var/games
  $ zfs create rpool/ROOT/ubuntu_123456/var/lib/apt

  # Zfs mount is what we expect (5 datasets mounted):
  $ zfs mount
  rpool/ROOT/ubuntu_123456  /mnt
  rpool/ROOT/ubuntu_123456/var  /mnt/var
  rpool/var/games /mnt/var/games
  rpool/var/lib   /mnt/var/lib
  rpool/ROOT/ubuntu_123456/var/lib/apt  /mnt/var/lib/apt

  $ zfs umount -a

  # Everything unmounted as expected:
  $ find /mnt/
  /mnt/

  # However, zfs mount -a doesn't mount everything in the correct order 
reliably:
  $ zfs mount -a
  cannot mount '/mnt': directory is not empty

  # In that case, rpool/ROOT/ubuntu_123456 wasn't mounted:
  $ zfs mount
  rpool/ROOT/ubuntu_123456/var  /mnt/var
  rpool/var/lib   /mnt/var/lib
  rpool/var/games /mnt/var/games
  rpool/ROOT/ubuntu_123456/var/lib/apt  /mnt/var/lib/apt

  $ find /mnt/
  /mnt/
  /mnt/var
  /mnt/var/lib
  /mnt/var/lib/apt
  /mnt/var/games

  $ zfs umount -a
  $ find /mnt/
  /mnt/

  # Everything was umounted, let's try to remount all again:
  $ zfs mount -a
  cannot mount '/mnt/var/lib': failed to create mountpoint
  $ zfs mount
  rpool/ROOT/ubuntu_123456  /mnt
  rpool/ROOT/ubuntu_123456/var  /mnt/var
  rpool/var/games /mnt/var/games
  rpool/ROOT/ubuntu_123456/var/lib/apt  /mnt/var/lib/apt

  # This time, rpool/ROOT/ubuntu_123456 was mounted, but not
  rpool/var/lib (before rpool/ROOT/ubuntu_123456/var/lib/apt)

  Note: the same ordering issue can happen on zfs umount -a.

  Also tested with zfs 0.7: ran zfs mount -a && zfs umount -a in a loop (see
  the sketch below), no issue: all datasets are mounted in the correct order
  reliably.
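
  A sketch of the loop mentioned above for exercising the mount ordering (the
  iteration count is arbitrary):

  $ for i in $(seq 1 20); do zfs mount -a && zfs umount -a || break; done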

  Note that it seems to be slightly related to the version of zfs the pool was
created with:
  - Try zfs mount -a on zfs 0.7 with a pool/datasets created under zfs 0.8: the
ordering issues still happen.
  - However, the contrary isn't true: try zfs mount -a on zfs 0.8 with a
pool/datasets created under zfs 0.7: there can be some ordering issues.

  There is nothing specific in the journal log:
  juil. 24 10:59:27 ubuntu kernel: ZFS: Loaded module v0.8.1-1ubuntu5, ZFS pool 
version 5000, ZFS filesystem version 5
  juil. 24 10:59:39 ubuntu systemd[1]: mnt-var-lib.mount: Succeeded.
  juil. 24 10:59:39 ubuntu systemd[1116]: mnt-var-lib.mount: Succeeded.
  juil. 24 10:59:39 ubuntu systemd[1]: mnt-var-games.mount: Succeeded.
  juil. 24 10:59:39 ubuntu systemd[1116]: mnt-var-games.mount: Succeeded.
  juil. 24 10:59:42 ubuntu systemd[1116]: mnt-var-lib-apt.mount: Succeeded.
  juil. 24 10:59:42 ubuntu systemd[1]: mnt-var-lib-apt.mount: Succeeded.
  juil. 24 10:59:42 ubuntu systemd[1116]: mnt-var-games.mount: Succeeded.
  juil. 24 10:59:42 ubuntu systemd[1]: mnt-var-games.mount: Succeeded.
  juil. 24 10:59:42 ubuntu systemd[1116]: mnt-var.mount: Succeeded.
  juil. 24 10:59:42 ubuntu systemd[1]: mnt-var.mount: Succeeded.
  juil. 24 10:59:42 ubuntu systemd[1116]: mnt.mount: Succeeded.
  juil. 24 10:59:42 ubuntu systemd[1]: mnt.mount: Succeeded.
  juil. 24 10:59:45 ubuntu systemd[1116]: mnt-var-lib.mount: Succeeded.
  juil. 24 10:59:45 ubuntu systemd[1]: mnt-var-lib.mount: Succeeded.
  juil. 24 10:59:46 ubuntu systemd[1116]: mnt-var-games.mount: Succeeded.
  juil. 24 10:59:46 ubuntu systemd[1]: mnt-var-games.mount: Succeeded.
  juil. 24 11:01:06 ubuntu systemd[1116]: mnt-var-lib-apt.mount: Succeeded.
  juil. 24 11:01:06 ubuntu systemd[1]: mnt-var-lib-apt.mount: Succeeded.
  juil. 24 11:01:06 

[Kernel-packages] [Bug 1837717] Re: [0.8 regression] zfs mount -a dataset mount ordering issues

2019-07-24 Thread Didier Roche
** Description changed:

- # zpool create -o ashift=12 -O atime=off -O canmount=off -O 
normalization=formD -O mountpoint=/ -R /mnt rpool /dev/vda2
- # zfs create rpool/ROOT -o canmount=off -o mountpoint=none
- # zfs create rpool/ROOT/ubuntu_123456  -o mountpoint=/
- # zfs create rpool/ROOT/ubuntu_123456/var
- # zfs create rpool/ROOT/ubuntu_123456/var/lib -o canmount=off
- # zfs create rpool/var -o canmount=off
- # zfs create rpool/var/lib
- # zfs create rpool/var/games
- # zfs create rpool/ROOT/ubuntu_123456/var/lib/apt
+ Update: I was able to reproduce it with a simpler schema (/ isn't always 
mounted before /var). This is to mimick the official zol guide with zfs on 
root: https://github.com/zfsonlinux/zfs/wiki/Ubuntu-18.04-Root-on-ZFS
+  zpool create -o ashift=12 -O atime=off -O canmount=off -O 
normalization=formD -O mountpoint=/ -R /mnt rpool /dev/vda2
+ zfs create rpool/ROOT -o canmount=off -o mountpoint=none
+ zfs create rpool/ROOT/ubuntu_123456  -o mountpoint=/
+ zfs create rpool/ROOT/ubuntu_123456/var
  
- Zfs mount is what we expect (5 datasets mounted):
- # zfs mount
+ 
+ zpool create -o ashift=12 -O atime=off -O canmount=off -O normalization=formD 
-O mountpoint=/ -R /mnt rpool /dev/vda2
+ zfs create rpool/ROOT -o canmount=off -o mountpoint=none
+ zfs create rpool/ROOT/ubuntu_123456  -o mountpoint=/
+ zfs create rpool/ROOT/ubuntu_123456/var
+ zfs create rpool/ROOT/ubuntu_123456/var/lib -o canmount=off
+ zfs create rpool/var -o canmount=off
+ zfs create rpool/var/lib
+ zfs create rpool/var/games
+ zfs create rpool/ROOT/ubuntu_123456/var/lib/apt
+ 
+ # Zfs mount is what we expect (5 datasets mounted):
+ zfs mount
  rpool/ROOT/ubuntu_123456  /mnt
  rpool/ROOT/ubuntu_123456/var  /mnt/var
  rpool/var/games /mnt/var/games
  rpool/var/lib   /mnt/var/lib
  rpool/ROOT/ubuntu_123456/var/lib/apt  /mnt/var/lib/apt
- # zfs umount -a
  
- Everything unmounted as expected:
- # find /mnt/
+ zfs umount -a
+ 
+ # Everything unmounted as expected:
+ find /mnt/
  /mnt/
  
- However, zfs mount -a doesn't mount everything in the correct order reliably:
- # zfs mount -a
+ # However, zfs mount -a doesn't mount everything in the correct order 
reliably:
+ zfs mount -a
  cannot mount '/mnt': directory is not empty
- -> In that case, rpool/ROOT/ubuntu_123456 wasn't mounted:
- # zfs mount
+ 
+ # In that case, rpool/ROOT/ubuntu_123456 wasn't mounted:
+ zfs mount
  rpool/ROOT/ubuntu_123456/var  /mnt/var
  rpool/var/lib   /mnt/var/lib
  rpool/var/games /mnt/var/games
  rpool/ROOT/ubuntu_123456/var/lib/apt  /mnt/var/lib/apt
- # find /mnt/
+ 
+ find /mnt/
  /mnt/
  /mnt/var
  /mnt/var/lib
  /mnt/var/lib/apt
  /mnt/var/games
- # zfs umount -a
- # find /mnt/
+ 
+ zfs umount -a
+ find /mnt/
  /mnt/
- -> Everything was umounted, let's try to remount all again:
  
- # zfs mount -a
+ # Everything was umounted, let's try to remount all again:
+ zfs mount -a
  cannot mount '/mnt/var/lib': failed to create mountpoint
- # zfs mount
+ zfs mount
  rpool/ROOT/ubuntu_123456  /mnt
  rpool/ROOT/ubuntu_123456/var  /mnt/var
  rpool/var/games /mnt/var/games
  rpool/ROOT/ubuntu_123456/var/lib/apt  /mnt/var/lib/apt
  
- -> This time, rpool/ROOT/ubuntu_123456 was mounted, but not
- rpool/var/lib (before rpool/ROOT/ubuntu_123456/var/lib/apt)
+ #This time, rpool/ROOT/ubuntu_123456 was mounted, but not rpool/var/lib
+ (before rpool/ROOT/ubuntu_123456/var/lib/apt)
  
  Note: the same ordering issue can happen on zfs umount -a.
  
  Tested as well with zfs 0.7: tried to zfs mount -a && zfs umount -a in
  loop, no issue: all datasets are mounted in the correct order reliably.
  
  Note that it seems to be slightly related to the version of zfs we created a 
pool with:
  - Try zfs mount -a on zfs 0.7 with a pool/datasets created under zfs 0.8: the 
ordering issues still happens.
  - However, the contrary isn't try: try zfs mount -a on zfs 0.8 with a 
pool/datasets created under zfs 0.7: there can be some ordering issues.
  
  There is nothing specific in the journal log:
  juil. 24 10:59:27 ubuntu kernel: ZFS: Loaded module v0.8.1-1ubuntu5, ZFS pool 
version 5000, ZFS filesystem version 5
  juil. 24 10:59:39 ubuntu systemd[1]: mnt-var-lib.mount: Succeeded.
  juil. 24 10:59:39 ubuntu systemd[1116]: mnt-var-lib.mount: Succeeded.
  juil. 24 10:59:39 ubuntu systemd[1]: mnt-var-games.mount: Succeeded.
  juil. 24 10:59:39 ubuntu systemd[1116]: mnt-var-games.mount: Succeeded.
  juil. 24 10:59:42 ubuntu systemd[1116]: mnt-var-lib-apt.mount: Succeeded.
  juil. 24 10:59:42 ubuntu systemd[1]: mnt-var-lib-apt.mount: Succeeded.
  juil. 24 10:59:42 ubuntu systemd[1116]: mnt-var-games.mount: Succeeded.
  juil. 24 10:59:42 ubuntu systemd[1]: mnt-var-games.mount: Succeeded.
  juil. 24 10:59:42 ubuntu systemd[1116]: mnt-var.mount: Succeeded.
  juil. 24 10:59:42 ubuntu systemd[1]: mnt-var.mount: Succeeded.
  juil. 24 10:59:42 ubuntu systemd[1116]: mnt.mount: Succeeded.
  

[Kernel-packages] [Bug 1837717] Re: [0.8 regression] zfs mount -a dataset mount ordering issues

2019-07-24 Thread Didier Roche
** Description changed:

  # zpool create -o ashift=12 -O atime=off -O canmount=off -O 
normalization=formD -O mountpoint=/ -R /mnt rpool /dev/vda2
  # zfs create rpool/ROOT -o canmount=off -o mountpoint=none
  # zfs create rpool/ROOT/ubuntu_123456  -o mountpoint=/
  # zfs create rpool/ROOT/ubuntu_123456/var
  # zfs create rpool/ROOT/ubuntu_123456/var/lib -o canmount=off
  # zfs create rpool/var -o canmount=off
  # zfs create rpool/var/lib
  # zfs create rpool/var/games
  # zfs create rpool/ROOT/ubuntu_123456/var/lib/apt
  
  Zfs mount is what we expect (5 datasets mounted):
  # zfs mount
  rpool/ROOT/ubuntu_123456  /mnt
  rpool/ROOT/ubuntu_123456/var  /mnt/var
  rpool/var/games /mnt/var/games
  rpool/var/lib   /mnt/var/lib
  rpool/ROOT/ubuntu_123456/var/lib/apt  /mnt/var/lib/apt
  # zfs umount -a
  
  Everything unmounted as expected:
  # find /mnt/
  /mnt/
  
  However, zfs mount -a doesn't mount everything in the correct order reliably:
  # zfs mount -a
  cannot mount '/mnt': directory is not empty
  -> In that case, rpool/ROOT/ubuntu_123456 wasn't mounted:
  # zfs mount
  rpool/ROOT/ubuntu_123456/var  /mnt/var
  rpool/var/lib   /mnt/var/lib
  rpool/var/games /mnt/var/games
  rpool/ROOT/ubuntu_123456/var/lib/apt  /mnt/var/lib/apt
  # find /mnt/
  /mnt/
  /mnt/var
  /mnt/var/lib
  /mnt/var/lib/apt
  /mnt/var/games
  # zfs umount -a
  # find /mnt/
  /mnt/
  -> Everything was umounted, let's try to remount all again:
  
  # zfs mount -a
  cannot mount '/mnt/var/lib': failed to create mountpoint
  # zfs mount
  rpool/ROOT/ubuntu_123456  /mnt
  rpool/ROOT/ubuntu_123456/var  /mnt/var
  rpool/var/games /mnt/var/games
  rpool/ROOT/ubuntu_123456/var/lib/apt  /mnt/var/lib/apt
  
  -> This time, rpool/ROOT/ubuntu_123456 was mounted, but not
  rpool/var/lib (before rpool/ROOT/ubuntu_123456/var/lib/apt)
  
  Note: the same ordering issue can happen on zfs umount -a.
  
  Tested as well with zfs 0.7: tried to zfs mount -a && zfs umount -a in
  loop, no issue: all datasets are mounted in the correct order reliably.
  
  Note that it seems to be slightly related to the version of zfs we created a 
pool with:
  - Try zfs mount -a on zfs 0.7 with a pool/datasets created under zfs 0.8: the 
ordering issues still happens.
  - However, the contrary isn't try: try zfs mount -a on zfs 0.8 with a 
pool/datasets created under zfs 0.7: there can be some ordering issues.
  
  There is nothing specific in the journal log:
  juil. 24 10:59:27 ubuntu kernel: ZFS: Loaded module v0.8.1-1ubuntu5, ZFS pool 
version 5000, ZFS filesystem version 5
  juil. 24 10:59:39 ubuntu systemd[1]: mnt-var-lib.mount: Succeeded.
  juil. 24 10:59:39 ubuntu systemd[1116]: mnt-var-lib.mount: Succeeded.
  juil. 24 10:59:39 ubuntu systemd[1]: mnt-var-games.mount: Succeeded.
  juil. 24 10:59:39 ubuntu systemd[1116]: mnt-var-games.mount: Succeeded.
  juil. 24 10:59:42 ubuntu systemd[1116]: mnt-var-lib-apt.mount: Succeeded.
  juil. 24 10:59:42 ubuntu systemd[1]: mnt-var-lib-apt.mount: Succeeded.
  juil. 24 10:59:42 ubuntu systemd[1116]: mnt-var-games.mount: Succeeded.
  juil. 24 10:59:42 ubuntu systemd[1]: mnt-var-games.mount: Succeeded.
  juil. 24 10:59:42 ubuntu systemd[1116]: mnt-var.mount: Succeeded.
  juil. 24 10:59:42 ubuntu systemd[1]: mnt-var.mount: Succeeded.
  juil. 24 10:59:42 ubuntu systemd[1116]: mnt.mount: Succeeded.
  juil. 24 10:59:42 ubuntu systemd[1]: mnt.mount: Succeeded.
  juil. 24 10:59:45 ubuntu systemd[1116]: mnt-var-lib.mount: Succeeded.
  juil. 24 10:59:45 ubuntu systemd[1]: mnt-var-lib.mount: Succeeded.
  juil. 24 10:59:46 ubuntu systemd[1116]: mnt-var-games.mount: Succeeded.
  juil. 24 10:59:46 ubuntu systemd[1]: mnt-var-games.mount: Succeeded.
  juil. 24 11:01:06 ubuntu systemd[1116]: mnt-var-lib-apt.mount: Succeeded.
  juil. 24 11:01:06 ubuntu systemd[1]: mnt-var-lib-apt.mount: Succeeded.
  juil. 24 11:01:06 ubuntu systemd[1]: mnt-var-games.mount: Succeeded.
  juil. 24 11:01:06 ubuntu systemd[1116]: mnt-var-games.mount: Succeeded.
  juil. 24 11:01:06 ubuntu systemd[1]: mnt.mount: Succeeded.
  juil. 24 11:01:06 ubuntu systemd[1116]: mnt.mount: Succeeded.
  juil. 24 11:01:08 ubuntu systemd[1]: mnt-var.mount: Succeeded.
  juil. 24 11:01:08 ubuntu systemd[1116]: mnt-var.mount: Succeeded.
+ 
+ Note that pools created with 0.8-3 from debian has the same issues.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1837717

Title:
  [0.8 regression] zfs mount -a dataset mount ordering issues

Status in zfs-linux package in Ubuntu:
  New

Bug description:
  Update: I was able to reproduce it with a simpler schema (/ isn't always
mounted before /var). This is meant to mimic the official ZoL guide for ZFS on
root: https://github.com/zfsonlinux/zfs/wiki/Ubuntu-18.04-Root-on-ZFS
  $ zpool create -o ashift=12 -O atime=off -O canmount=off -O 

[Kernel-packages] [Bug 1837717] Re: [0.8 regression] zfs mount -a dataset mount ordering issues

2019-07-24 Thread Didier Roche
** Description changed:

- 
  # zpool create -o ashift=12 -O atime=off -O canmount=off -O 
normalization=formD -O mountpoint=/ -R /mnt rpool /dev/vda2
  # zfs create rpool/ROOT -o canmount=off -o mountpoint=none
  # zfs create rpool/ROOT/ubuntu_123456  -o mountpoint=/
  # zfs create rpool/ROOT/ubuntu_123456/var
  # zfs create rpool/ROOT/ubuntu_123456/var/lib -o canmount=off
  # zfs create rpool/var -o canmount=off
  # zfs create rpool/var/lib
  # zfs create rpool/var/games
  # zfs create rpool/ROOT/ubuntu_123456/var/lib/apt
  
  Zfs mount is what we expect (5 datasets mounted):
  # zfs mount
  rpool/ROOT/ubuntu_123456  /mnt
  rpool/ROOT/ubuntu_123456/var  /mnt/var
  rpool/var/games /mnt/var/games
  rpool/var/lib   /mnt/var/lib
  rpool/ROOT/ubuntu_123456/var/lib/apt  /mnt/var/lib/apt
  # zfs umount -a
  
  Everything unmounted as expected:
  # find /mnt/
  /mnt/
  
  However, zfs mount -a doesn't mount everything in the correct order reliably:
  # zfs mount -a
  cannot mount '/mnt': directory is not empty
  -> In that case, rpool/ROOT/ubuntu_123456 wasn't mounted:
  # zfs mount
  rpool/ROOT/ubuntu_123456/var  /mnt/var
  rpool/var/lib   /mnt/var/lib
  rpool/var/games /mnt/var/games
  rpool/ROOT/ubuntu_123456/var/lib/apt  /mnt/var/lib/apt
  # find /mnt/
  /mnt/
  /mnt/var
  /mnt/var/lib
  /mnt/var/lib/apt
  /mnt/var/games
  # zfs umount -a
  # find /mnt/
  /mnt/
  -> Everything was umounted, let's try to remount all again:
  
  # zfs mount -a
  cannot mount '/mnt/var/lib': failed to create mountpoint
  # zfs mount
  rpool/ROOT/ubuntu_123456  /mnt
  rpool/ROOT/ubuntu_123456/var  /mnt/var
  rpool/var/games /mnt/var/games
  rpool/ROOT/ubuntu_123456/var/lib/apt  /mnt/var/lib/apt
  
  -> This time, rpool/ROOT/ubuntu_123456 was mounted, but not
  rpool/var/lib (before rpool/ROOT/ubuntu_123456/var/lib/apt)
  
+ Note: the same ordering issue can happen on zfs umount -a.
+ 
  Tested as well with zfs 0.7: tried to zfs mount -a && zfs umount -a in
  loop, no issue: all datasets are mounted in the correct order reliably.
  
- Note that it seems to be related to the version of zfs we created a pool with:
- - Try zfs mount -a on zfs 0.7 with a pool/datasets created under zfs 0.8: the 
ordering issues occur
- - Try zfs mount -a on zfs 0.8 with a pool/datasets created under zfs 0.7: no 
ordering issue.
+ Note that it seems to be slightly related to the version of zfs we created a 
pool with:
+ - Try zfs mount -a on zfs 0.7 with a pool/datasets created under zfs 0.8: the 
ordering issues still happens.
+ - However, the contrary isn't try: try zfs mount -a on zfs 0.8 with a 
pool/datasets created under zfs 0.7: there can be some ordering issues.
+ 
+ There is nothing specific in the journal log:
+ juil. 24 10:59:27 ubuntu kernel: ZFS: Loaded module v0.8.1-1ubuntu5, ZFS pool 
version 5000, ZFS filesystem version 5
+ juil. 24 10:59:39 ubuntu systemd[1]: mnt-var-lib.mount: Succeeded.
+ juil. 24 10:59:39 ubuntu systemd[1116]: mnt-var-lib.mount: Succeeded.
+ juil. 24 10:59:39 ubuntu systemd[1]: mnt-var-games.mount: Succeeded.
+ juil. 24 10:59:39 ubuntu systemd[1116]: mnt-var-games.mount: Succeeded.
+ juil. 24 10:59:42 ubuntu systemd[1116]: mnt-var-lib-apt.mount: Succeeded.
+ juil. 24 10:59:42 ubuntu systemd[1]: mnt-var-lib-apt.mount: Succeeded.
+ juil. 24 10:59:42 ubuntu systemd[1116]: mnt-var-games.mount: Succeeded.
+ juil. 24 10:59:42 ubuntu systemd[1]: mnt-var-games.mount: Succeeded.
+ juil. 24 10:59:42 ubuntu systemd[1116]: mnt-var.mount: Succeeded.
+ juil. 24 10:59:42 ubuntu systemd[1]: mnt-var.mount: Succeeded.
+ juil. 24 10:59:42 ubuntu systemd[1116]: mnt.mount: Succeeded.
+ juil. 24 10:59:42 ubuntu systemd[1]: mnt.mount: Succeeded.
+ juil. 24 10:59:45 ubuntu systemd[1116]: mnt-var-lib.mount: Succeeded.
+ juil. 24 10:59:45 ubuntu systemd[1]: mnt-var-lib.mount: Succeeded.
+ juil. 24 10:59:46 ubuntu systemd[1116]: mnt-var-games.mount: Succeeded.
+ juil. 24 10:59:46 ubuntu systemd[1]: mnt-var-games.mount: Succeeded.
+ juil. 24 11:01:06 ubuntu systemd[1116]: mnt-var-lib-apt.mount: Succeeded.
+ juil. 24 11:01:06 ubuntu systemd[1]: mnt-var-lib-apt.mount: Succeeded.
+ juil. 24 11:01:06 ubuntu systemd[1]: mnt-var-games.mount: Succeeded.
+ juil. 24 11:01:06 ubuntu systemd[1116]: mnt-var-games.mount: Succeeded.
+ juil. 24 11:01:06 ubuntu systemd[1]: mnt.mount: Succeeded.
+ juil. 24 11:01:06 ubuntu systemd[1116]: mnt.mount: Succeeded.
+ juil. 24 11:01:08 ubuntu systemd[1]: mnt-var.mount: Succeeded.
+ juil. 24 11:01:08 ubuntu systemd[1116]: mnt-var.mount: Succeeded.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1837717

Title:
  [0.8 regression] zfs mount -a dataset mount ordering issues

Status in zfs-linux package in Ubuntu:
  New

Bug description:
  # zpool create -o ashift=12 -O atime=off -O canmount=off -O 

[Kernel-packages] [Bug 1837717] [NEW] [0.8 regression] zfs mount -a dataset mount ordering issues

2019-07-24 Thread Didier Roche
Public bug reported:


# zpool create -o ashift=12 -O atime=off -O canmount=off -O normalization=formD 
-O mountpoint=/ -R /mnt rpool /dev/vda2
# zfs create rpool/ROOT -o canmount=off -o mountpoint=none
# zfs create rpool/ROOT/ubuntu_123456  -o mountpoint=/
# zfs create rpool/ROOT/ubuntu_123456/var
# zfs create rpool/ROOT/ubuntu_123456/var/lib -o canmount=off
# zfs create rpool/var -o canmount=off
# zfs create rpool/var/lib
# zfs create rpool/var/games
# zfs create rpool/ROOT/ubuntu_123456/var/lib/apt

Zfs mount is what we expect (5 datasets mounted):
# zfs mount
rpool/ROOT/ubuntu_123456  /mnt
rpool/ROOT/ubuntu_123456/var  /mnt/var
rpool/var/games /mnt/var/games
rpool/var/lib   /mnt/var/lib
rpool/ROOT/ubuntu_123456/var/lib/apt  /mnt/var/lib/apt
# zfs umount -a

Everything unmounted as expected:
# find /mnt/
/mnt/

However, zfs mount -a doesn't mount everything in the correct order reliably:
# zfs mount -a
cannot mount '/mnt': directory is not empty
-> In that case, rpool/ROOT/ubuntu_123456 wasn't mounted:
# zfs mount
rpool/ROOT/ubuntu_123456/var  /mnt/var
rpool/var/lib   /mnt/var/lib
rpool/var/games /mnt/var/games
rpool/ROOT/ubuntu_123456/var/lib/apt  /mnt/var/lib/apt
# find /mnt/
/mnt/
/mnt/var
/mnt/var/lib
/mnt/var/lib/apt
/mnt/var/games
# zfs umount -a
# find /mnt/
/mnt/
-> Everything was umounted, let's try to remount all again:

# zfs mount -a
cannot mount '/mnt/var/lib': failed to create mountpoint
# zfs mount
rpool/ROOT/ubuntu_123456  /mnt
rpool/ROOT/ubuntu_123456/var  /mnt/var
rpool/var/games /mnt/var/games
rpool/ROOT/ubuntu_123456/var/lib/apt  /mnt/var/lib/apt

-> This time, rpool/ROOT/ubuntu_123456 was mounted, but not
rpool/var/lib (before rpool/ROOT/ubuntu_123456/var/lib/apt)

Tested as well with zfs 0.7: tried to zfs mount -a && zfs umount -a in
loop, no issue: all datasets are mounted in the correct order reliably.

Note that it seems to be related to the version of zfs we created a pool with:
- Try zfs mount -a on zfs 0.7 with a pool/datasets created under zfs 0.8: the 
ordering issues occur
- Try zfs mount -a on zfs 0.8 with a pool/datasets created under zfs 0.7: no 
ordering issue.

** Affects: zfs-linux (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1837717

Title:
  [0.8 regression] zfs mount -a dataset mount ordering issues

Status in zfs-linux package in Ubuntu:
  New

Bug description:

  # zpool create -o ashift=12 -O atime=off -O canmount=off -O 
normalization=formD -O mountpoint=/ -R /mnt rpool /dev/vda2
  # zfs create rpool/ROOT -o canmount=off -o mountpoint=none
  # zfs create rpool/ROOT/ubuntu_123456  -o mountpoint=/
  # zfs create rpool/ROOT/ubuntu_123456/var
  # zfs create rpool/ROOT/ubuntu_123456/var/lib -o canmount=off
  # zfs create rpool/var -o canmount=off
  # zfs create rpool/var/lib
  # zfs create rpool/var/games
  # zfs create rpool/ROOT/ubuntu_123456/var/lib/apt

  Zfs mount is what we expect (5 datasets mounted):
  # zfs mount
  rpool/ROOT/ubuntu_123456  /mnt
  rpool/ROOT/ubuntu_123456/var  /mnt/var
  rpool/var/games /mnt/var/games
  rpool/var/lib   /mnt/var/lib
  rpool/ROOT/ubuntu_123456/var/lib/apt  /mnt/var/lib/apt
  # zfs umount -a

  Everything unmounted as expected:
  # find /mnt/
  /mnt/

  However, zfs mount -a doesn't mount everything in the correct order reliably:
  # zfs mount -a
  cannot mount '/mnt': directory is not empty
  -> In that case, rpool/ROOT/ubuntu_123456 wasn't mounted:
  # zfs mount
  rpool/ROOT/ubuntu_123456/var  /mnt/var
  rpool/var/lib   /mnt/var/lib
  rpool/var/games /mnt/var/games
  rpool/ROOT/ubuntu_123456/var/lib/apt  /mnt/var/lib/apt
  # find /mnt/
  /mnt/
  /mnt/var
  /mnt/var/lib
  /mnt/var/lib/apt
  /mnt/var/games
  # zfs umount -a
  # find /mnt/
  /mnt/
  -> Everything was umounted, let's try to remount all again:

  # zfs mount -a
  cannot mount '/mnt/var/lib': failed to create mountpoint
  # zfs mount
  rpool/ROOT/ubuntu_123456  /mnt
  rpool/ROOT/ubuntu_123456/var  /mnt/var
  rpool/var/games /mnt/var/games
  rpool/ROOT/ubuntu_123456/var/lib/apt  /mnt/var/lib/apt

  -> This time, rpool/ROOT/ubuntu_123456 was mounted, but not
  rpool/var/lib (before rpool/ROOT/ubuntu_123456/var/lib/apt)

  Tested as well with zfs 0.7: tried to zfs mount -a && zfs umount -a in
  loop, no issue: all datasets are mounted in the correct order
  reliably.

  Note that it seems to be related to the version of zfs we created a pool with:
  - Try zfs mount -a on zfs 0.7 with a pool/datasets created under zfs 0.8: the 
ordering issues occur
  - Try zfs mount -a on zfs 0.8 with a pool/datasets created under zfs 0.7: no 
ordering issue.

To manage notifications about this bug go to:

[Kernel-packages] [Bug 1285172] [NEW] Display brightness on internal monitor is set to minimum and can't be changed after plugin out the external monitor.

2014-02-26 Thread Didier Roche
Public bug reported:

I can reproduce it approx on 2 of 3 trials.

1. Turn the brightness up (not at minimum) on the internal monitor
2. Have an external monitor plugged into the VGA port
3. Unplug the external monitor
- the brightness of the internal monitor is set to minimum
- the Fn keys to change the brightness don't work anymore (you are stuck at
minimum brightness)

workaround:
- replug the external monitor and unplug it again
- then, most of the time, you can change the internal monitor brightness
through the Fn keys (a sysfs sketch follows below as an alternative).
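
A possible manual workaround while the Fn keys are stuck is to write the
backlight level directly through sysfs (the intel_backlight device name and
the value 500 are assumptions; the valid range is given by max_brightness):

$ cat /sys/class/backlight/intel_backlight/max_brightness
$ echo 500 | sudo tee /sys/class/backlight/intel_backlight/brightness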

ProblemType: Bug
DistroRelease: Ubuntu 14.04
Package: linux-image-3.13.0-12-generic 3.13.0-12.32
ProcVersionSignature: Ubuntu 3.13.0-12.32-generic 3.13.4
Uname: Linux 3.13.0-12-generic x86_64
ApportVersion: 2.13.2-0ubuntu5
Architecture: amd64
AudioDevicesInUse:
 USERPID ACCESS COMMAND
 /dev/snd/controlC0:  didrocks   2685 F pulseaudio
CurrentDesktop: Unity
Date: Wed Feb 26 15:11:38 2014
EcryptfsInUse: Yes
HibernationDevice: RESUME=UUID=721253a1-6e55-4181-bc64-506c5987a191
InstallationDate: Installed on 2012-05-28 (639 days ago)
InstallationMedia: Ubuntu 12.04 LTS Precise Pangolin - Release amd64 
(20120425)
MachineType: LENOVO 4287CTO
ProcFB: 0 inteldrmfb
ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-3.13.0-12-generic 
root=UUID=a9f4b475-e4ce-45ed-aa33-9b92e52c49b0 ro quiet splash vt.handoff=7
RelatedPackageVersions:
 linux-restricted-modules-3.13.0-12-generic N/A
 linux-backports-modules-3.13.0-12-generic  N/A
 linux-firmware 1.125
SourcePackage: linux
UpgradeStatus: No upgrade log present (probably fresh install)
WifiSyslog:
 
dmi.bios.date: 02/14/2012
dmi.bios.vendor: LENOVO
dmi.bios.version: 8DET58WW (1.28 )
dmi.board.asset.tag: Not Available
dmi.board.name: 4287CTO
dmi.board.vendor: LENOVO
dmi.board.version: Not Available
dmi.chassis.asset.tag: No Asset Information
dmi.chassis.type: 10
dmi.chassis.vendor: LENOVO
dmi.chassis.version: Not Available
dmi.modalias: 
dmi:bvnLENOVO:bvr8DET58WW(1.28):bd02/14/2012:svnLENOVO:pn4287CTO:pvrThinkPadX220:rvnLENOVO:rn4287CTO:rvrNotAvailable:cvnLENOVO:ct10:cvrNotAvailable:
dmi.product.name: 4287CTO
dmi.product.version: ThinkPad X220
dmi.sys.vendor: LENOVO

** Affects: linux (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: amd64 apport-bug trusty

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1285172

Title:
  Display brightness on internal monitor is set to minimum and can't be
  changed after plugin out the external monitor.

Status in “linux” package in Ubuntu:
  New

Bug description:
  I can reproduce it approx on 2 of 3 trials.

  1. Get the brightness up (not minimum) of the internal monitor
  2. Have an external monitor plugged in in the VGA cable
  3. unplug the external monitor
  - the brightness of the internal monitor is set to minimum
  - the fn keys to change the brightness don't work anymore (you are stuck to 
minimum brightness)

  workaround:
  - replug the external monitor and unplug it again
  - then, you can most of the time, change the internal monitor brightness 
through the Fn keys.

  ProblemType: Bug
  DistroRelease: Ubuntu 14.04
  Package: linux-image-3.13.0-12-generic 3.13.0-12.32
  ProcVersionSignature: Ubuntu 3.13.0-12.32-generic 3.13.4
  Uname: Linux 3.13.0-12-generic x86_64
  ApportVersion: 2.13.2-0ubuntu5
  Architecture: amd64
  AudioDevicesInUse:
   USERPID ACCESS COMMAND
   /dev/snd/controlC0:  didrocks   2685 F pulseaudio
  CurrentDesktop: Unity
  Date: Wed Feb 26 15:11:38 2014
  EcryptfsInUse: Yes
  HibernationDevice: RESUME=UUID=721253a1-6e55-4181-bc64-506c5987a191
  InstallationDate: Installed on 2012-05-28 (639 days ago)
  InstallationMedia: Ubuntu 12.04 LTS Precise Pangolin - Release amd64 
(20120425)
  MachineType: LENOVO 4287CTO
  ProcFB: 0 inteldrmfb
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-3.13.0-12-generic 
root=UUID=a9f4b475-e4ce-45ed-aa33-9b92e52c49b0 ro quiet splash vt.handoff=7
  RelatedPackageVersions:
   linux-restricted-modules-3.13.0-12-generic N/A
   linux-backports-modules-3.13.0-12-generic  N/A
   linux-firmware 1.125
  SourcePackage: linux
  UpgradeStatus: No upgrade log present (probably fresh install)
  WifiSyslog:
   
  dmi.bios.date: 02/14/2012
  dmi.bios.vendor: LENOVO
  dmi.bios.version: 8DET58WW (1.28 )
  dmi.board.asset.tag: Not Available
  dmi.board.name: 4287CTO
  dmi.board.vendor: LENOVO
  dmi.board.version: Not Available
  dmi.chassis.asset.tag: No Asset Information
  dmi.chassis.type: 10
  dmi.chassis.vendor: LENOVO
  dmi.chassis.version: Not Available
  dmi.modalias: 
dmi:bvnLENOVO:bvr8DET58WW(1.28):bd02/14/2012:svnLENOVO:pn4287CTO:pvrThinkPadX220:rvnLENOVO:rn4287CTO:rvrNotAvailable:cvnLENOVO:ct10:cvrNotAvailable:
  dmi.product.name: 4287CTO
  dmi.product.version: ThinkPad X220
  dmi.sys.vendor: LENOVO

To manage notifications about this bug go to:

[Kernel-packages] [Bug 1215456] [NEW] autopilot intel machine hw

2013-08-22 Thread Didier Roche
Public bug reported:

autopilot intel saucy machine hw for the kernel guys

ProblemType: Bug
DistroRelease: Ubuntu 13.10
Package: linux-image-3.11.0-3-generic 3.11.0-3.7
ProcVersionSignature: Ubuntu 3.11.0-3.7-generic 3.11.0-rc6
Uname: Linux 3.11.0-3-generic i686
AlsaVersion: Advanced Linux Sound Architecture Driver Version k3.11.0-3-generic.
AplayDevices: Error: [Errno 2] No such file or directory: 'aplay'
ApportVersion: 2.12.1-0ubuntu2
Architecture: i386
ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord'
AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/by-path', 
'/dev/snd/controlC0', '/dev/snd/hwC0D1', '/dev/snd/hwC0D3', 
'/dev/snd/pcmC0D0c', '/dev/snd/pcmC0D0p', '/dev/snd/pcmC0D1p', 
'/dev/snd/pcmC0D2c', '/dev/snd/pcmC0D3p', '/dev/snd/seq', '/dev/snd/timer'] 
failed with exit code 1:
CRDA: Error: [Errno 2] No such file or directory: 'iw'
Card0.Amixer.info: Error: [Errno 2] No such file or directory: 'amixer'
Card0.Amixer.values: Error: [Errno 2] No such file or directory: 'amixer'
Date: Thu Aug 22 14:00:56 2013
HibernationDevice: RESUME=UUID=2debaec2-044d-40c2-b9c6-b752e4cfcaeb
IwConfig: Error: [Errno 2] No such file or directory: 'iwconfig'
MachineType: MSI MS-7676
MarkForUpload: True
ProcFB: 0 inteldrmfb
ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-3.11.0-3-generic 
root=UUID=799cc1fa-a6a5-4b80-82bc-ded29c93ac77 ro quiet swapaccount=1
RelatedPackageVersions:
 linux-restricted-modules-3.11.0-3-generic N/A
 linux-backports-modules-3.11.0-3-generic  N/A
 linux-firmware1.113
RfKill: Error: [Errno 2] No such file or directory: 'rfkill'
SourcePackage: linux
UpgradeStatus: Upgraded to saucy on 2013-06-04 (79 days ago)
dmi.bios.date: 05/04/2011
dmi.bios.vendor: American Megatrends Inc.
dmi.bios.version: V10.2
dmi.board.asset.tag: To be filled by O.E.M.
dmi.board.name: Z68MA-ED55 (MS-7676)
dmi.board.vendor: MSI
dmi.board.version: 2.0
dmi.chassis.asset.tag: To Be Filled By O.E.M.
dmi.chassis.type: 3
dmi.chassis.vendor: MSI
dmi.chassis.version: 2.0
dmi.modalias: 
dmi:bvnAmericanMegatrendsInc.:bvrV10.2:bd05/04/2011:svnMSI:pnMS-7676:pvr2.0:rvnMSI:rnZ68MA-ED55(MS-7676):rvr2.0:cvnMSI:ct3:cvr2.0:
dmi.product.name: MS-7676
dmi.product.version: 2.0
dmi.sys.vendor: MSI

** Affects: linux (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: apport-bug i386 saucy

** Summary changed:

- autopilot ati machine hw
+ autopilot intel machine hw

** Description changed:

- autopilot ati saucy machine hw for the kernel guys
+ autopilot intel saucy machine hw for the kernel guys
  
  ProblemType: Bug
  DistroRelease: Ubuntu 13.10
  Package: linux-image-3.11.0-3-generic 3.11.0-3.7
  ProcVersionSignature: Ubuntu 3.11.0-3.7-generic 3.11.0-rc6
  Uname: Linux 3.11.0-3-generic i686
  AlsaVersion: Advanced Linux Sound Architecture Driver Version 
k3.11.0-3-generic.
  AplayDevices: Error: [Errno 2] No such file or directory: 'aplay'
  ApportVersion: 2.12.1-0ubuntu2
  Architecture: i386
  ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord'
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/by-path', 
'/dev/snd/controlC0', '/dev/snd/hwC0D1', '/dev/snd/hwC0D3', 
'/dev/snd/pcmC0D0c', '/dev/snd/pcmC0D0p', '/dev/snd/pcmC0D1p', 
'/dev/snd/pcmC0D2c', '/dev/snd/pcmC0D3p', '/dev/snd/seq', '/dev/snd/timer'] 
failed with exit code 1:
  CRDA: Error: [Errno 2] No such file or directory: 'iw'
  Card0.Amixer.info: Error: [Errno 2] No such file or directory: 'amixer'
  Card0.Amixer.values: Error: [Errno 2] No such file or directory: 'amixer'
  Date: Thu Aug 22 14:00:56 2013
  HibernationDevice: RESUME=UUID=2debaec2-044d-40c2-b9c6-b752e4cfcaeb
  IwConfig: Error: [Errno 2] No such file or directory: 'iwconfig'
  MachineType: MSI MS-7676
  MarkForUpload: True
  ProcFB: 0 inteldrmfb
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-3.11.0-3-generic 
root=UUID=799cc1fa-a6a5-4b80-82bc-ded29c93ac77 ro quiet swapaccount=1
  RelatedPackageVersions:
-  linux-restricted-modules-3.11.0-3-generic N/A
-  linux-backports-modules-3.11.0-3-generic  N/A
-  linux-firmware1.113
+  linux-restricted-modules-3.11.0-3-generic N/A
+  linux-backports-modules-3.11.0-3-generic  N/A
+  linux-firmware1.113
  RfKill: Error: [Errno 2] No such file or directory: 'rfkill'
  SourcePackage: linux
  UpgradeStatus: Upgraded to saucy on 2013-06-04 (79 days ago)
  dmi.bios.date: 05/04/2011
  dmi.bios.vendor: American Megatrends Inc.
  dmi.bios.version: V10.2
  dmi.board.asset.tag: To be filled by O.E.M.
  dmi.board.name: Z68MA-ED55 (MS-7676)
  dmi.board.vendor: MSI
  dmi.board.version: 2.0
  dmi.chassis.asset.tag: To Be Filled By O.E.M.
  dmi.chassis.type: 3
  dmi.chassis.vendor: MSI
  dmi.chassis.version: 2.0
  dmi.modalias: 
dmi:bvnAmericanMegatrendsInc.:bvrV10.2:bd05/04/2011:svnMSI:pnMS-7676:pvr2.0:rvnMSI:rnZ68MA-ED55(MS-7676):rvr2.0:cvnMSI:ct3:cvr2.0:
  dmi.product.name: MS-7676
  dmi.product.version: 2.0
  dmi.sys.vendor: MSI

-- 
You