[Kernel-packages] [Bug 1842751] Re: [disco] [eoan] After unmount, cannot open /dev/vdb: Device or resource busy

2019-10-30 Thread Alex Kavanagh
> Hello - Does the recent switch from New -> Triaged for charm-cinder
> and charm-nova-compute mean that someone was able to determine that the
> charms are to blame and perhaps not the kernel?

Sadly not; I'm just knocking them off New; the alternative is Incomplete,
which means they'd time out after 60 days.  As the bug is Confirmed
against the kernel, it may well turn out to be Invalid for both charms.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1842751

Title:
  [disco] [eoan] After unmount, cannot open /dev/vdb: Device or resource
  busy

Status in OpenStack cinder charm:
  Triaged
Status in OpenStack nova-compute charm:
  Triaged
Status in linux package in Ubuntu:
  Confirmed

Bug description:
  On Disco and Eoan, a device remains busy after unmounting an
  ephemeral disk and cannot be formatted again until the machine is
  rebooted.

  This is blocking all OpenStack Charms that interact with block
  devices (Nova Compute, Ceph, Swift, Cinder) on the Disco and Eoan
  series.  As we are nearing LTS-1, this will become urgent pretty
  quickly.

  Reproducer, on an OpenStack cloud:
  juju deploy cs:ubuntu ubuntu-bionic --series bionic
  juju deploy cs:ubuntu ubuntu-disco --series disco
  juju deploy cs:ubuntu ubuntu-eoan --series eoan

  
   Succeeds on Bionic:

  ubuntu@juju-8d01b7-foo-14:~$ uname -a
  Linux juju-8d01b7-foo-14 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

  ubuntu@juju-8d01b7-foo-14:~$ lsb_release -c
  Codename:   bionic

  ubuntu@juju-8d01b7-foo-14:~$ lsblk
  NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
  vda     252:0    0   20G  0 disk
  ├─vda1  252:1    0 19.9G  0 part /
  ├─vda14 252:14   0    4M  0 part
  └─vda15 252:15   0  106M  0 part /boot/efi
  vdb     252:16   0   40G  0 disk /mnt

  ubuntu@juju-8d01b7-foo-14:~$ df -h
  Filesystem      Size  Used Avail Use% Mounted on
  udev            985M     0  985M   0% /dev
  tmpfs           200M  712K  199M   1% /run
  /dev/vda1        20G  1.7G   18G   9% /
  tmpfs           997M     0  997M   0% /dev/shm
  tmpfs           5.0M     0  5.0M   0% /run/lock
  tmpfs           997M     0  997M   0% /sys/fs/cgroup
  /dev/vda15      105M  3.6M  101M   4% /boot/efi
  /dev/vdb         40G   49M   38G   1% /mnt
  tmpfs           100K     0  100K   0% /var/lib/lxd/shmounts
  tmpfs           100K     0  100K   0% /var/lib/lxd/devlxd
  tmpfs           200M     0  200M   0% /run/user/1000

  ubuntu@juju-8d01b7-foo-14:~$ sudo umount /dev/vdb

  ubuntu@juju-8d01b7-foo-14:~$ sudo mkfs.xfs -f -i size=1024 /dev/vdb
  meta-data=/dev/vdb               isize=1024   agcount=4, agsize=2621440 blks
           =                       sectsz=512   attr=2, projid32bit=1
           =                       crc=1        finobt=1, sparse=0, rmapbt=0, reflink=0
  data     =                       bsize=4096   blocks=10485760, imaxpct=25
           =                       sunit=0      swidth=0 blks
  naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
  log      =internal log           bsize=4096   blocks=5120, version=2
           =                       sectsz=512   sunit=0 blks, lazy-count=1
  realtime =none                   extsz=4096   blocks=0, rtextents=0


   Fails on Disco:

  ubuntu@juju-8d01b7-foo-12:~$ uname -a
  Linux juju-8d01b7-foo-12 5.0.0-27-generic #28-Ubuntu SMP Tue Aug 20 19:53:07 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
  ubuntu@juju-8d01b7-foo-12:~$ lsb_release -c
  Codename:   disco

  ubuntu@juju-8d01b7-foo-12:~$ sudo umount /mnt

  ubuntu@juju-8d01b7-foo-12:~$ lsblk
  NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
  loop0     7:0    0 88.7M  1 loop /snap/core/7396
  loop1     7:1    0 54.5M  1 loop /snap/lxd/11727
  vda     252:0    0   20G  0 disk
  ├─vda1  252:1    0 19.9G  0 part /
  ├─vda14 252:14   0    4M  0 part
  └─vda15 252:15   0  106M  0 part /boot/efi
  vdb     252:16   0   40G  0 disk

  ubuntu@juju-8d01b7-foo-12:~$ sudo mkfs.xfs -f -i size=1024 /dev/vdb
  mkfs.xfs: cannot open /dev/vdb: Device or resource busy


   Fails on Eoan:

  ubuntu@juju-8d01b7-foo-13:~$ sudo umount /mnt

  ubuntu@juju-8d01b7-foo-13:~$ lsblk
  NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
  loop0     7:0    0 88.7M  1 loop /snap/core/7396
  loop1     7:1    0 54.5M  1 loop /snap/lxd/11727
  vda     252:0    0   20G  0 disk
  ├─vda1  252:1    0 19.9G  0 part /
  ├─vda14 252:14   0    4M  0 part
  └─vda15 252:15   0  106M  0 part /boot/efi
  vdb     252:16   0   40G  0 disk

  ubuntu@juju-8d01b7-foo-13:~$ sudo mkfs.xfs -f -i size=1024 /dev/vdb
  mkfs.xfs: cannot open /dev/vdb: Device or resource busy
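
  (A diagnostic sketch, not part of the original report: on an affected
  unit, commands along these lines show what still holds /dev/vdb open
  after the unmount.)

  sudo fuser -vm /dev/vdb        # userspace processes holding the device
  sudo lsof /dev/vdb             # the same, via lsof
  grep vdb /proc/mounts          # confirm the mount is really gone
  ls /sys/block/vdb/holders/     # kernel-side holders (dm, md, bcache, ...)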

  
  ..

  
  Original bug description:

  On disco-stein, hook failed: "config-changed" with mkfs.xfs: cannot
  open /dev/vdb: Device or resource busy

  https://openstack-ci-reports.ubuntu.com/artifacts/test_charm_pipeline_func_full/openstack/charm-cinder/678676/3/3803/index.html

  https://openstack-ci-reports.ubuntu.com/artifacts/test_charm_pipeline_func_full/openstack/charm-cinder/678676/3/3803/test_charm_func_full_7062/juju-status-zaza-5b39f0208674.txt

  https://openstack-ci-reports.ubuntu.com/artifacts/test_charm_pipeline_func_full/openstack

[Kernel-packages] [Bug 1842751] Re: [disco] [eoan] After unmount, cannot open /dev/vdb: Device or resource busy

2019-10-30 Thread Alex Kavanagh
** Changed in: charm-cinder
   Status: New => Triaged

** Changed in: charm-nova-compute
   Status: New => Triaged


[Kernel-packages] [Bug 1801349] Re: zpool create -f lxd /dev/vdb fails on cosmic (18.10) -- func27

2018-11-13 Thread Alex Kavanagh
Thanks Colin for tracking this down; very pleased it's not a kernel bug.
I'll now take this up with the snappy team.  Thanks again for doing the
detective work; much appreciated!

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1801349

Title:
  zpool create -f lxd /dev/vdb fails on cosmic (18.10) -- func27

Status in OpenStack LXD Charm:
  New
Status in linux package in Ubuntu:
  In Progress

Bug description:
  Test: tests/gate-basic-cosmic-rocky

  As part of its configuration, the lxd charm creates a pool device
  depending on the config.  The test config is:

  lxd_config = {
      'block-devices': '/dev/vdb',
      'ephemeral-unmount': '/mnt',
      'storage-type': 'zfs',
      'overwrite': True,
  }
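
  (As an illustration only, not from the original bug: the same options
  could be set on a deployed application named "lxd" roughly like so.)

  juju config lxd block-devices=/dev/vdb ephemeral-unmount=/mnt \
      storage-type=zfs overwrite=true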

  The config drive is normally mounted on /mnt, and the lxd charm
  umounts it as part of the start up.  The /etc/fstab on the unit is:

  # cat /etc/fstab
  LABEL=cloudimg-rootfs   /           ext4    defaults    0 0
  LABEL=UEFI              /boot/efi   vfat    defaults    0 0
  /dev/vdb    /mnt    auto    defaults,nofail,x-systemd.requires=cloud-init.service,comment=cloudconfig  0 2
  /dev/vdc    none    swap    sw,comment=cloudconfig    0 0

  
  However, even after umounting /mnt (backed by /dev/vdb), the zpool
  create command still fails:

  # zpool create -f lxd /dev/vdb
  /dev/vdb is in use and contains a unknown filesystem.

  
  If /etc/fstab is edited so that /dev/vdb is *never* mounted, and the
  unit is then rebooted, the zpool create command succeeds (a sketch of
  that edit follows the zpool output below):

  # zpool list
  NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
  lxd   14.9G   106K  14.9G         -     0%     0%  1.00x  ONLINE  -

  # zpool status lxd
    pool: lxd
   state: ONLINE
    scan: none requested
  config:

          NAME        STATE     READ WRITE CKSUM
          lxd         ONLINE       0     0     0
            vdb       ONLINE       0     0     0

  errors: No known data errors
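
  (A sketch of the fstab edit described above; the exact entry format
  varies by cloud image, so the pattern is illustrative only.)

  # comment out the /dev/vdb entry so it is never mounted, then reboot
  sudo sed -i 's|^/dev/vdb|#/dev/vdb|' /etc/fstab
  sudo reboot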

  Something odd is going on with cosmic (18.10) and the combination of
  lxd, zfs, and the kernel.

  lxd version: 3.6
  zfsutils-linux/cosmic,now 0.7.9-3ubuntu6
  Linux: 4.18.0-10-generic

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-lxd/+bug/1801349/+subscriptions



[Kernel-packages] [Bug 1801349] Re: zpool create -f lxd /dev/vdb fails on cosmic (18.10) -- func27

2018-11-12 Thread Alex Kavanagh
Ryan, my bad -- I should have updated the bug; I've provided Colin with
a serverstack bastion and some scripts to do testing, and Colin has been
doing that.  Apologies that the bug didn't reflect this new information.



[Kernel-packages] [Bug 1801349] Re: zpool create -f lxd /dev/vdb fails on cosmic (18.10) -- func27

2018-11-08 Thread Alex Kavanagh
Colin, sorry, no; this is the bug that 'found' the issue. It's basically
saying that we aren't enabling a particular gate test (i.e. it's staying
dev-) due to this bug.



[Kernel-packages] [Bug 1801349] Re: zpool create -f lxd /dev/vdb fails on cosmic (18.10) -- func27

2018-11-07 Thread Alex Kavanagh
** Changed in: charm-lxd
 Assignee: (unassigned) => Colin Ian King (colin-king)


-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp