[Kernel-packages] [Bug 1880943] Re: [focal] disk I/O performance regression

2020-05-27 Thread Ryan Beisner
** Tags added: uosci

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1880943

Title:
  [focal] disk I/O performance regression

Status in linux package in Ubuntu:
  Confirmed

Bug description:
  Freshly deployed, identical machines with a flat disk layout as configured
  by MAAS; the CPU frequency governor was changed from ondemand to
  performance.
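
  For reference, a minimal sketch of one common way to apply that governor
  change, assuming the standard sysfs cpufreq interface (the exact method
  used here is not recorded in the bug):

    # switch every CPU from ondemand to performance (needs root)
    for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
        echo performance | sudo tee "$g" >/dev/null
    done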

  Bionic (4.15.0-101-generic) fio ext4 on spinning rust:
  Run status group 0 (all jobs):
    WRITE: bw=1441MiB/s (1511MB/s), 1441MiB/s-1441MiB/s (1511MB/s-1511MB/s), io=1267GiB (1360GB), run=91-91msec

  Disk stats (read/write):
    sda: ios=2/332092, merge=0/567, ticks=196/123698912, in_queue=123760992, util=95.91%

  Bionic (4.15.0-101-generic) fio ext4 on nvme:
  Run status group 0 (all jobs):
    WRITE: bw=2040MiB/s (2139MB/s), 2040MiB/s-2040MiB/s (2139MB/s-2139MB/s), io=1793GiB (1925GB), run=91-91msec

  Disk stats (read/write):
    nvme0n1: ios=0/2617321, merge=0/465, ticks=0/233900784, in_queue=232549460, util=78.97%

  Focal (5.4.0-31-generic) fio ext4 on spinning rust:
  Run status group 0 (all jobs):
    WRITE: bw=108MiB/s (113MB/s), 108MiB/s-108MiB/s (113MB/s-113MB/s), io=100GiB (107GB), run=947255-947255msec

  Disk stats (read/write):
    sda: ios=65/430942, merge=0/980, ticks=1655/5837146, in_queue=4898628, util=48.75%

  Focal (5.4.0-31-generic) fio ext4 on nvme:
  Run status group 0 (all jobs):
    WRITE: bw=361MiB/s (378MB/s), 361MiB/s-361MiB/s (378MB/s-378MB/s), io=320GiB (344GB), run=907842-907842msec

  Disk stats (read/write):
    nvme0n1: ios=0/2847497, merge=0/382, ticks=0/236641266, in_queue=230690420, util=78.95%

  Freshly deployed, identical machines with bcache as configured by MAAS; the
  CPU frequency governor was changed from ondemand to performance.

  Bionic (4.15.0-101-generic):
  Run status group 0 (all jobs):
    WRITE: bw=2080MiB/s (2181MB/s), 2080MiB/s-2080MiB/s (2181MB/s-2181MB/s), io=1828GiB (1963GB), run=900052-900052msec

  Disk stats (read/write):
  bcache3: ios=0/53036, merge=0/0, ticks=0/15519188, in_queue=15522076, util=91.81%, aggrios=0/212383, aggrmerge=0/402, aggrticks=0/59247094, aggrin_queue=59256646, aggrutil=91.82%
    nvme0n1: ios=0/7169, merge=0/397, ticks=0/0, in_queue=0, util=0.00%
    sda: ios=0/417598, merge=0/407, ticks=0/118494188, in_queue=118513292, util=91.82%

  
  Bionic (5.3.0-53-generic, HWE kernel):
  Run status group 0 (all jobs):
    WRITE: bw=2725MiB/s (2858MB/s), 2725MiB/s-2725MiB/s (2858MB/s-2858MB/s), io=2395GiB (2572GB), run=91-91msec

  Disk stats (read/write):
  bcache3: ios=96/3955, merge=0/0, ticks=4/895876, in_queue=895880, util=2.63%, aggrios=48/222087, aggrmerge=0/391, aggrticks=3/2730760, aggrin_queue=2272248, aggrutil=90.56%
    nvme0n1: ios=96/2755, merge=0/373, ticks=6/78, in_queue=8, util=1.12%
    sda: ios=0/441420, merge=0/409, ticks=0/5461443, in_queue=4544488, util=90.56%

  Focal (5.4.0-31-generic):
  Run status group 0 (all jobs):
    WRITE: bw=117MiB/s (123MB/s), 117MiB/s-117MiB/s (123MB/s-123MB/s), io=110GiB (118GB), run=959924-959924msec

  Disk stats (read/write):
  bcache3: ios=0/4061, merge=0/0, ticks=0/1571168, in_queue=1571168, util=1.40%, aggrios=0/226807, aggrmerge=0/183, aggrticks=0/2816798, aggrin_queue=2331594, aggrutil=52.79%
    nvme0n1: ios=0/1474, merge=0/46, ticks=0/50, in_queue=0, util=0.53%
    sda: ios=0/452140, merge=0/321, ticks=0/5633547, in_queue=4663188, util=52.79%

  
  ; fio-seq-write.job for fiotest

  [global]
  name=fio-seq-write
  filename=fio-seq-write
  rw=write
  bs=256K
  direct=0
  numjobs=1
  time_based=1
  runtime=900

  [file1]
  size=10G
  ioengine=libaio
  iodepth=16
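
  The job file above can be run directly with fio; a minimal sketch, assuming
  the fio package is installed and the working directory sits on the
  filesystem under test (mount point hypothetical):

    cd /mnt/test            # hypothetical mount point on the device under test
    fio fio-seq-write.job   # runs the [file1] job defined above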

  ProblemType: Bug
  DistroRelease: Ubuntu 20.04
  Package: linux-image-5.4.0-31-generic 5.4.0-31.35
  ProcVersionSignature: Ubuntu 5.4.0-31.35-generic 5.4.34
  Uname: Linux 5.4.0-31-generic x86_64
  AlsaDevices:
   total 0
   crw-rw---- 1 root audio 116,  1 May 27 11:52 seq
   crw-rw---- 1 root audio 116, 33 May 27 11:52 timer
  AplayDevices: Error: [Errno 2] No such file or directory: 'aplay'
  ApportVersion: 2.20.11-0ubuntu27
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord'
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', '/dev/snd/timer'] failed with exit code 1:
  CasperMD5CheckResult: skip
  Date: Wed May 27 12:44:10 2020
  IwConfig: Error: [Errno 2] No such file or directory: 'iwconfig'
  Lsusb:
   Bus 002 Device 002: ID 8087:8002 Intel Corp. 
   Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
   Bus 001 Device 003: ID 413c:a001 Dell Computer Corp. Hub
   Bus 001 Device 002: ID 8087:800a Intel Corp. 
   Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
  Lsusb-t:
   /:  Bus 02.Port 1: Dev 1, Class=root_hub, Driver=ehci-pci/2p, 480M
   |__ Port 1: Dev 2, If 0, Class=Hub, Driver=hub/8p, 480M
   /:  Bus 01.Port 1: Dev 1, Class=root_hub, 

[Kernel-packages] [Bug 1842751] Re: [disco] [eoan] After unmount, cannot open /dev/vdb: Device or resource busy

2019-09-05 Thread Ryan Beisner
We’ve just dug into this aspect of both Disco and Eoan.  Unfortunately,
I don’t know if this ever succeeded on these two releases.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1842751

Title:
  [disco] [eoan] After unmount, cannot open /dev/vdb: Device or resource
  busy

Status in OpenStack cinder charm:
  New
Status in OpenStack nova-compute charm:
  New
Status in linux package in Ubuntu:
  Confirmed

Bug description:
  On Disco and Eoan, the device remains busy after unmounting an ephemeral
  disk; it cannot be reformatted until the machine is rebooted.

  This is blocking all OpenStack Charms that interact with block devices
  (Nova Compute, Ceph, Swift, Cinder) on the Disco and Eoan series.  As we
  are nearing LTS-1, this will become urgent pretty quickly.

  Reproducer, on an OpenStack cloud:
  juju deploy cs:ubuntu ubuntu-bionic --series bionic
  juju deploy cs:ubuntu ubuntu-disco --series disco
  juju deploy cs:ubuntu ubuntu-eoan --series eoan

  
   Succeeds on Bionic:

  ubuntu@juju-8d01b7-foo-14:~$ uname -a
  Linux juju-8d01b7-foo-14 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

  ubuntu@juju-8d01b7-foo-14:~$ lsb_release -c
  Codename:   bionic

  ubuntu@juju-8d01b7-foo-14:~$ lsblk
  NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
  vda     252:0    0   20G  0 disk
  ├─vda1  252:1    0 19.9G  0 part /
  ├─vda14 252:14   0    4M  0 part
  └─vda15 252:15   0  106M  0 part /boot/efi
  vdb     252:16   0   40G  0 disk /mnt

  ubuntu@juju-8d01b7-foo-14:~$ df -h
  Filesystem      Size  Used Avail Use% Mounted on
  udev            985M     0  985M   0% /dev
  tmpfs           200M  712K  199M   1% /run
  /dev/vda1        20G  1.7G   18G   9% /
  tmpfs           997M     0  997M   0% /dev/shm
  tmpfs           5.0M     0  5.0M   0% /run/lock
  tmpfs           997M     0  997M   0% /sys/fs/cgroup
  /dev/vda15      105M  3.6M  101M   4% /boot/efi
  /dev/vdb         40G   49M   38G   1% /mnt
  tmpfs           100K     0  100K   0% /var/lib/lxd/shmounts
  tmpfs           100K     0  100K   0% /var/lib/lxd/devlxd
  tmpfs           200M     0  200M   0% /run/user/1000

  ubuntu@juju-8d01b7-foo-14:~$ sudo umount /dev/vdb

  ubuntu@juju-8d01b7-foo-14:~$ sudo mkfs.xfs -f -i size=1024 /dev/vdb
  meta-data=/dev/vdb             isize=1024   agcount=4, agsize=2621440 blks
           =                     sectsz=512   attr=2, projid32bit=1
           =                     crc=1        finobt=1, sparse=0, rmapbt=0, reflink=0
  data     =                     bsize=4096   blocks=10485760, imaxpct=25
           =                     sunit=0      swidth=0 blks
  naming   =version 2            bsize=4096   ascii-ci=0 ftype=1
  log      =internal log         bsize=4096   blocks=5120, version=2
           =                     sectsz=512   sunit=0 blks, lazy-count=1
  realtime =none                 extsz=4096   blocks=0, rtextents=0


   Fails on Disco:

  ubuntu@juju-8d01b7-foo-12:~$ uname -a
  Linux juju-8d01b7-foo-12 5.0.0-27-generic #28-Ubuntu SMP Tue Aug 20 19:53:07 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
  ubuntu@juju-8d01b7-foo-12:~$ lsb_release -c
  Codename:   disco

  ubuntu@juju-8d01b7-foo-12:~$ sudo umount /mnt

  ubuntu@juju-8d01b7-foo-12:~$ lsblk
  NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
  loop0     7:0    0 88.7M  1 loop /snap/core/7396
  loop1     7:1    0 54.5M  1 loop /snap/lxd/11727
  vda     252:0    0   20G  0 disk
  ├─vda1  252:1    0 19.9G  0 part /
  ├─vda14 252:14   0    4M  0 part
  └─vda15 252:15   0  106M  0 part /boot/efi
  vdb     252:16   0   40G  0 disk

  ubuntu@juju-8d01b7-foo-12:~$ sudo mkfs.xfs -f -i size=1024 /dev/vdb
  mkfs.xfs: cannot open /dev/vdb: Device or resource busy


   Fails on Eoan:

  ubuntu@juju-8d01b7-foo-13:~$ sudo umount /mnt

  ubuntu@juju-8d01b7-foo-13:~$ lsblk
  NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
  loop0     7:0    0 88.7M  1 loop /snap/core/7396
  loop1     7:1    0 54.5M  1 loop /snap/lxd/11727
  vda     252:0    0   20G  0 disk
  ├─vda1  252:1    0 19.9G  0 part /
  ├─vda14 252:14   0    4M  0 part
  └─vda15 252:15   0  106M  0 part /boot/efi
  vdb     252:16   0   40G  0 disk

  ubuntu@juju-8d01b7-foo-13:~$ sudo mkfs.xfs -f -i size=1024 /dev/vdb
  mkfs.xfs: cannot open /dev/vdb: Device or resource busy
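
  When a device stays busy like this after umount, the usual next step is to
  find what still holds it; a diagnostic sketch using standard tools (none of
  this output was captured in the bug):

    sudo fuser -vm /dev/vdb        # processes still using the device or mount
    sudo lsof /dev/vdb             # open handles against the device node
    ls /sys/block/vdb/holders/     # kernel-side holders (dm, loop, bcache, ...)
    grep vdb /proc/mounts          # confirm it is really unmounted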

  
  ..

  
  Original bug description:

  On disco-stein, hook failed: "config-changed" with mkfs.xfs: cannot
  open /dev/vdb: Device or resource busy

  https://openstack-ci-reports.ubuntu.com/artifacts/test_charm_pipeline_func_full/openstack/charm-cinder/678676/3/3803/index.html

  https://openstack-ci-reports.ubuntu.com/artifacts/test_charm_pipeline_func_full/openstack/charm-cinder/678676/3/3803/test_charm_func_full_7062/juju-status-zaza-5b39f0208674.txt

  https://openstack-ci-reports.ubuntu.com/artifacts/test_charm_pipeline_func_full/openstack

[Kernel-packages] [Bug 1842751] ProcInterrupts.txt

2019-09-05 Thread Ryan Beisner
apport information

** Attachment added: "ProcInterrupts.txt"
   
https://bugs.launchpad.net/bugs/1842751/+attachment/5287038/+files/ProcInterrupts.txt

[Kernel-packages] [Bug 1842751] ProcModules.txt

2019-09-05 Thread Ryan Beisner
apport information

** Attachment added: "ProcModules.txt"
   
https://bugs.launchpad.net/bugs/1842751/+attachment/5287039/+files/ProcModules.txt

[Kernel-packages] [Bug 1842751] WifiSyslog.txt

2019-09-05 Thread Ryan Beisner
apport information

** Attachment added: "WifiSyslog.txt"
   
https://bugs.launchpad.net/bugs/1842751/+attachment/5287041/+files/WifiSyslog.txt

** Changed in: linux (Ubuntu)
   Status: Incomplete => New

[Kernel-packages] [Bug 1842751] UdevDb.txt

2019-09-05 Thread Ryan Beisner
apport information

** Attachment added: "UdevDb.txt"
   https://bugs.launchpad.net/bugs/1842751/+attachment/5287040/+files/UdevDb.txt

[Kernel-packages] [Bug 1842751] ProcInterrupts.txt

2019-09-05 Thread Ryan Beisner
apport information

** Attachment added: "ProcInterrupts.txt"
   
https://bugs.launchpad.net/bugs/1842751/+attachment/5287030/+files/ProcInterrupts.txt

[Kernel-packages] [Bug 1842751] ProcCpuinfoMinimal.txt

2019-09-05 Thread Ryan Beisner
apport information

** Attachment added: "ProcCpuinfoMinimal.txt"
   
https://bugs.launchpad.net/bugs/1842751/+attachment/5287029/+files/ProcCpuinfoMinimal.txt

[Kernel-packages] [Bug 1842751] CurrentDmesg.txt

2019-09-05 Thread Ryan Beisner
apport information

** Attachment added: "CurrentDmesg.txt"
   
https://bugs.launchpad.net/bugs/1842751/+attachment/5287034/+files/CurrentDmesg.txt

[Kernel-packages] [Bug 1842751] WifiSyslog.txt

2019-09-05 Thread Ryan Beisner
apport information

** Attachment added: "WifiSyslog.txt"
   
https://bugs.launchpad.net/bugs/1842751/+attachment/5287033/+files/WifiSyslog.txt

[Kernel-packages] [Bug 1842751] ProcModules.txt

2019-09-05 Thread Ryan Beisner
apport information

** Attachment added: "ProcModules.txt"
   
https://bugs.launchpad.net/bugs/1842751/+attachment/5287031/+files/ProcModules.txt

[Kernel-packages] [Bug 1842751] Lspci.txt

2019-09-05 Thread Ryan Beisner
apport information

** Attachment added: "Lspci.txt"
   https://bugs.launchpad.net/bugs/1842751/+attachment/5287035/+files/Lspci.txt

[Kernel-packages] [Bug 1842751] ProcCpuinfoMinimal.txt

2019-09-05 Thread Ryan Beisner
apport information

** Attachment added: "ProcCpuinfoMinimal.txt"
   
https://bugs.launchpad.net/bugs/1842751/+attachment/5287037/+files/ProcCpuinfoMinimal.txt

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1842751

Title:
  [disco] [eoan] After unmount, cannot open /dev/vdb: Device or resource
  busy

Status in OpenStack cinder charm:
  New
Status in OpenStack nova-compute charm:
  New
Status in linux package in Ubuntu:
  New

Bug description:
  Disco and Eoan device is busy after unmounting an ephemeral disk,
  cannot format the device until rebooting.

  This is blocking all of OpenStack Charms which interact with block
  devices (Nova Compute, Ceph, Swift, Cinder), on the Disco and Eoan
  series.  As we are nearing LTS-1, this will become urgent pretty
  quickly.

  Reproducer, on an OpenStack cloud:
  juju deploy cs:ubuntu ubuntu-bionic --series bionic
  juju deploy cs:ubuntu ubuntu-disco --series disco
  juju deploy cs:ubuntu ubuntu-eoan --series eoan

  
   Succeeds on Bionic:

  ubuntu@juju-8d01b7-foo-14:~$ uname -a
  Linux juju-8d01b7-foo-14 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

  ubuntu@juju-8d01b7-foo-14:~$ lsb_release -c
  Codename:   bionic

  ubuntu@juju-8d01b7-foo-14:~$ lsblk
  NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
  vda     252:0    0   20G  0 disk 
  ├─vda1  252:1    0 19.9G  0 part /
  ├─vda14 252:14   0    4M  0 part 
  └─vda15 252:15   0  106M  0 part /boot/efi
  vdb     252:16   0   40G  0 disk /mnt

  ubuntu@juju-8d01b7-foo-14:~$ df -h
  Filesystem      Size  Used Avail Use% Mounted on
  udev            985M     0  985M   0% /dev
  tmpfs           200M  712K  199M   1% /run
  /dev/vda1        20G  1.7G   18G   9% /
  tmpfs           997M     0  997M   0% /dev/shm
  tmpfs           5.0M     0  5.0M   0% /run/lock
  tmpfs           997M     0  997M   0% /sys/fs/cgroup
  /dev/vda15      105M  3.6M  101M   4% /boot/efi
  /dev/vdb         40G   49M   38G   1% /mnt
  tmpfs           100K     0  100K   0% /var/lib/lxd/shmounts
  tmpfs           100K     0  100K   0% /var/lib/lxd/devlxd
  tmpfs           200M     0  200M   0% /run/user/1000

  ubuntu@juju-8d01b7-foo-14:~$ sudo umount /dev/vdb

  ubuntu@juju-8d01b7-foo-14:~$ sudo mkfs.xfs -f -i size=1024 /dev/vdb
  meta-data=/dev/vdb               isize=1024   agcount=4, agsize=2621440 blks
           =                       sectsz=512   attr=2, projid32bit=1
           =                       crc=1        finobt=1, sparse=0, rmapbt=0, reflink=0
  data     =                       bsize=4096   blocks=10485760, imaxpct=25
           =                       sunit=0      swidth=0 blks
  naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
  log      =internal log           bsize=4096   blocks=5120, version=2
           =                       sectsz=512   sunit=0 blks, lazy-count=1
  realtime =none                   extsz=4096   blocks=0, rtextents=0


   Fails on Disco:

  ubuntu@juju-8d01b7-foo-12:~$ uname -a
  Linux juju-8d01b7-foo-12 5.0.0-27-generic #28-Ubuntu SMP Tue Aug 20 19:53:07 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
  ubuntu@juju-8d01b7-foo-12:~$ lsb_release -c
  Codename:   disco

  ubuntu@juju-8d01b7-foo-12:~$ sudo umount /mnt

  ubuntu@juju-8d01b7-foo-12:~$ lsblk
  NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
  loop0     7:0    0 88.7M  1 loop /snap/core/7396
  loop1     7:1    0 54.5M  1 loop /snap/lxd/11727
  vda     252:0    0   20G  0 disk 
  ├─vda1  252:1    0 19.9G  0 part /
  ├─vda14 252:14   0    4M  0 part 
  └─vda15 252:15   0  106M  0 part /boot/efi
  vdb     252:16   0   40G  0 disk 

  ubuntu@juju-8d01b7-foo-12:~$ sudo mkfs.xfs -f -i size=1024 /dev/vdb
  mkfs.xfs: cannot open /dev/vdb: Device or resource busy


   Fails on Eoan:

  ubuntu@juju-8d01b7-foo-13:~$ sudo umount /mnt

  ubuntu@juju-8d01b7-foo-13:~$ lsblk
  NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
  loop0     7:0    0 88.7M  1 loop /snap/core/7396
  loop1     7:1    0 54.5M  1 loop /snap/lxd/11727
  vda     252:0    0   20G  0 disk 
  ├─vda1  252:1    0 19.9G  0 part /
  ├─vda14 252:14   0    4M  0 part 
  └─vda15 252:15   0  106M  0 part /boot/efi
  vdb     252:16   0   40G  0 disk 

  ubuntu@juju-8d01b7-foo-13:~$ sudo mkfs.xfs -f -i size=1024 /dev/vdb
  mkfs.xfs: cannot open /dev/vdb: Device or resource busy
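
  Worth noting for debugging: mkfs.xfs opens its target exclusively, so
  "Device or resource busy" generally means some other context still holds
  the device claimed. Given the snap-hosted LXD loop devices in the lsblk
  output above, one unconfirmed hypothesis is a /mnt mount surviving in
  another mount namespace; a sketch for checking that (the PID is a
  placeholder):

  $ sudo grep -l vdb /proc/[0-9]*/mountinfo   # namespaces still mounting vdb
  $ sudo nsenter -t <PID> -m umount /mnt      # unmount inside that namespace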

  
  ..

  
  Original bug description:

  On disco-stein, hook failed: "config-changed" with mkfs.xfs: cannot
  open /dev/vdb: Device or resource busy

  https://openstack-ci-reports.ubuntu.com/artifacts/test_charm_pipeline_func_full/openstack/charm-cinder/678676/3/3803/index.html

  https://openstack-ci-reports.ubuntu.com/artifacts/test_charm_pipeline_func_full/openstack/charm-cinder/678676/3/3803/test_charm_func_full_7062/juju-status-zaza-5b39f0208674.txt

  https://openstack-ci-reports.ubuntu.com/artifacts/test_charm_pipeline_func_full/openstack/charm-cinder/678676/3/3803/test_charm_func_full_7062/juju-crashdump-7d3902a2-4fdf-435a-82c4-5d2ad9af4cb5.tar.xz

  2019-09-03 21:33:27 DEBUG config-changed Unpacking apparmor-utils (2.13.2-9ubuntu6.1) ...
  2019-09-03 21:33:27 DEBUG config-changed Setting up python3-libapparmor (2.13.2-9ubuntu6.1) ...
  2019-09-03 21:33:27 DEBUG config-changed Setting up python3-apparmor (2.13.2-9ubuntu6.1) ...

[Kernel-packages] [Bug 1842751] Re: [disco] [eoan] After unmount, cannot open /dev/vdb: Device or resource busy

2019-09-05 Thread Ryan Beisner
And Bionic for grins:

** Description changed:


[Kernel-packages] [Bug 1842751] ProcCpuinfo.txt

2019-09-05 Thread Ryan Beisner
apport information

** Attachment added: "ProcCpuinfo.txt"
   https://bugs.launchpad.net/bugs/1842751/+attachment/5287036/+files/ProcCpuinfo.txt


[Kernel-packages] [Bug 1842751] UdevDb.txt

2019-09-05 Thread Ryan Beisner
apport information

** Attachment added: "UdevDb.txt"
   https://bugs.launchpad.net/bugs/1842751/+attachment/5287032/+files/UdevDb.txt


[Kernel-packages] [Bug 1842751] ProcCpuinfo.txt

2019-09-05 Thread Ryan Beisner
apport information

** Attachment added: "ProcCpuinfo.txt"
   https://bugs.launchpad.net/bugs/1842751/+attachment/5287028/+files/ProcCpuinfo.txt


[Kernel-packages] [Bug 1842751] Re: [disco] [eoan] After unmount, cannot open /dev/vdb: Device or resource busy

2019-09-05 Thread Ryan Beisner
^ Disco apport above, Eoan to follow:

** Tags added: eoan

** Description changed:


[Kernel-packages] [Bug 1842751] Lspci.txt

2019-09-05 Thread Ryan Beisner
apport information

** Attachment added: "Lspci.txt"
   https://bugs.launchpad.net/bugs/1842751/+attachment/5287027/+files/Lspci.txt


[Kernel-packages] [Bug 1842751] CurrentDmesg.txt

2019-09-05 Thread Ryan Beisner
apport information

** Attachment added: "CurrentDmesg.txt"
   https://bugs.launchpad.net/bugs/1842751/+attachment/5287026/+files/CurrentDmesg.txt


[Kernel-packages] [Bug 1842751] WifiSyslog.txt

2019-09-05 Thread Ryan Beisner
apport information

** Attachment added: "WifiSyslog.txt"
   https://bugs.launchpad.net/bugs/1842751/+attachment/5287025/+files/WifiSyslog.txt


[Kernel-packages] [Bug 1842751] ProcCpuinfoMinimal.txt

2019-09-05 Thread Ryan Beisner
apport information

** Attachment added: "ProcCpuinfoMinimal.txt"
   https://bugs.launchpad.net/bugs/1842751/+attachment/5287021/+files/ProcCpuinfoMinimal.txt


[Kernel-packages] [Bug 1842751] UdevDb.txt

2019-09-05 Thread Ryan Beisner
apport information

** Attachment added: "UdevDb.txt"
   https://bugs.launchpad.net/bugs/1842751/+attachment/5287024/+files/UdevDb.txt


[Kernel-packages] [Bug 1842751] ProcCpuinfo.txt

2019-09-05 Thread Ryan Beisner
apport information

** Attachment added: "ProcCpuinfo.txt"
   https://bugs.launchpad.net/bugs/1842751/+attachment/5287020/+files/ProcCpuinfo.txt


[Kernel-packages] [Bug 1842751] ProcModules.txt

2019-09-05 Thread Ryan Beisner
apport information

** Attachment added: "ProcModules.txt"
   https://bugs.launchpad.net/bugs/1842751/+attachment/5287023/+files/ProcModules.txt

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1842751

Title:
  [disco] [eoan] After unmount, cannot open /dev/vdb: Device or resource
  busy

Status in OpenStack cinder charm:
  New
Status in OpenStack nova-compute charm:
  New
Status in linux package in Ubuntu:
  New

Bug description:
  Disco and Eoan device is busy after unmounting an ephemeral disk,
  cannot format the device until rebooting.

  This is blocking all of OpenStack Charms which interact with block
  devices (Nova Compute, Ceph, Swift, Cinder), on the Disco and Eoan
  series.  As we are nearing LTS-1, this will become urgent pretty
  quickly.

  Reproducer, on an OpenStack cloud:
  juju deploy cs:ubuntu ubuntu-bionic --series bionic
  juju deploy cs:ubuntu ubuntu-disco --series disco
  juju deploy cs:ubuntu ubuntu-eoan --series eoan

  
   Succeeds on Bionic:

  ubuntu@juju-8d01b7-foo-14:~$ uname -a
  Linux juju-8d01b7-foo-14 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 
UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

  ubuntu@juju-8d01b7-foo-14:~$ lsb_release -c
  Codename:   bionic

  ubuntu@juju-8d01b7-foo-14:~$ lsblk
  NAMEMAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
  vda 252:00   20G  0 disk 
  ├─vda1  252:10 19.9G  0 part /
  ├─vda14 252:14   04M  0 part 
  └─vda15 252:15   0  106M  0 part /boot/efi
  vdb 252:16   0   40G  0 disk /mnt

  ubuntu@juju-8d01b7-foo-14:~$ df -h
  Filesystem  Size  Used Avail Use% Mounted on
  udev985M 0  985M   0% /dev
  tmpfs   200M  712K  199M   1% /run
  /dev/vda120G  1.7G   18G   9% /
  tmpfs   997M 0  997M   0% /dev/shm
  tmpfs   5.0M 0  5.0M   0% /run/lock
  tmpfs   997M 0  997M   0% /sys/fs/cgroup
  /dev/vda15  105M  3.6M  101M   4% /boot/efi
  /dev/vdb 40G   49M   38G   1% /mnt
  tmpfs   100K 0  100K   0% /var/lib/lxd/shmounts
  tmpfs   100K 0  100K   0% /var/lib/lxd/devlxd
  tmpfs   200M 0  200M   0% /run/user/1000

  ubuntu@juju-8d01b7-foo-14:~$ sudo umount /dev/vdb

  ubuntu@juju-8d01b7-foo-14:~$ sudo mkfs.xfs -f -i size=1024 /dev/vdb
  meta-data=/dev/vdb   isize=1024   agcount=4, agsize=2621440 blks
   =   sectsz=512   attr=2, projid32bit=1
   =   crc=1finobt=1, sparse=0, rmapbt=0, 
reflink=0
  data =   bsize=4096   blocks=10485760, imaxpct=25
   =   sunit=0  swidth=0 blks
  naming   =version 2  bsize=4096   ascii-ci=0 ftype=1
  log  =internal log   bsize=4096   blocks=5120, version=2
   =   sectsz=512   sunit=0 blks, lazy-count=1
  realtime =none   extsz=4096   blocks=0, rtextents=0


   Fails on Disco:

  ubuntu@juju-8d01b7-foo-12:~$ uname -a
  Linux juju-8d01b7-foo-12 5.0.0-27-generic #28-Ubuntu SMP Tue Aug 20 19:53:07 
UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
  ubuntu@juju-8d01b7-foo-12:~$ lsb_release -c
  Codename:   disco

  ubuntu@juju-8d01b7-foo-12:~$ sudo umount /mnt

  ubuntu@juju-8d01b7-foo-12:~$ lsblk
  NAMEMAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
  loop0 7:00 88.7M  1 loop /snap/core/7396
  loop1 7:10 54.5M  1 loop /snap/lxd/11727
  vda 252:00   20G  0 disk 
  ├─vda1  252:10 19.9G  0 part /
  ├─vda14 252:14   04M  0 part 
  └─vda15 252:15   0  106M  0 part /boot/efi
  vdb 252:16   0   40G  0 disk 

  ubuntu@juju-8d01b7-foo-12:~$ sudo mkfs.xfs -f -i size=1024 /dev/vdb
  mkfs.xfs: cannot open /dev/vdb: Device or resource busy


   Fails on Eoan:

  ubuntu@juju-8d01b7-foo-13:~$ sudo umount /mnt

  ubuntu@juju-8d01b7-foo-13:~$ lsblk
  NAMEMAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
  loop0 7:00 88.7M  1 loop /snap/core/7396
  loop1 7:10 54.5M  1 loop /snap/lxd/11727
  vda 252:00   20G  0 disk 
  ├─vda1  252:10 19.9G  0 part /
  ├─vda14 252:14   04M  0 part 
  └─vda15 252:15   0  106M  0 part /boot/efi
  vdb 252:16   0   40G  0 disk 

  ubuntu@juju-8d01b7-foo-13:~$ sudo mkfs.xfs -f -i size=1024 /dev/vdb
  mkfs.xfs: cannot open /dev/vdb: Device or resource busy
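
  If something still claims the device after the umount, the holders can be
  listed before resorting to a reboot (a hedged diagnostic sketch; these
  commands are not from the original report):

  sudo fuser -vm /dev/vdb        # userspace processes keeping it open, if any
  ls /sys/block/vdb/holders      # kernel-level holders (device-mapper etc.)
  systemctl status mnt.mount     # the systemd mount unit generated for /mnt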

  
  ..

  
  Original bug description:

  On disco-stein, hook failed: "config-changed" with mkfs.xfs: cannot
  open /dev/vdb: Device or resource busy

  https://openstack-ci-
  reports.ubuntu.com/artifacts/test_charm_pipeline_func_full/openstack
  /charm-cinder/678676/3/3803/index.html

  https://openstack-ci-
  reports.ubuntu.com/artifacts/test_charm_pipeline_func_full/openstack
  /charm-cinder/678676/3/3803/test_charm_func_full_7062/juju-status-
  zaza-5b39f0208674.txt

  https://openstack-ci-
  

[Kernel-packages] [Bug 1842751] ProcInterrupts.txt

2019-09-05 Thread Ryan Beisner
apport information

** Attachment added: "ProcInterrupts.txt"
   
https://bugs.launchpad.net/bugs/1842751/+attachment/5287022/+files/ProcInterrupts.txt

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1842751

Title:
  [disco] [eoan] After unmount, cannot open /dev/vdb: Device or resource
  busy

Status in OpenStack cinder charm:
  New
Status in OpenStack nova-compute charm:
  New
Status in linux package in Ubuntu:
  New


[Kernel-packages] [Bug 1842751] Lspci.txt

2019-09-05 Thread Ryan Beisner
apport information

** Attachment added: "Lspci.txt"
   https://bugs.launchpad.net/bugs/1842751/+attachment/5287019/+files/Lspci.txt

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1842751

Title:
  [disco] [eoan] After unmount, cannot open /dev/vdb: Device or resource
  busy

Status in OpenStack cinder charm:
  New
Status in OpenStack nova-compute charm:
  New
Status in linux package in Ubuntu:
  New


[Kernel-packages] [Bug 1842751] Re: [disco] [eoan] After unmount, cannot open /dev/vdb: Device or resource busy

2019-09-05 Thread Ryan Beisner
apport information

** Tags added: apport-collected disco ec2-images

** Description changed:

  Disco and Eoan device is busy after unmounting an ephemeral disk, cannot
  format the device until rebooting.
  
  This is blocking all of OpenStack Charms which interact with block
  devices (Nova Compute, Ceph, Swift, Cinder), on the Disco and Eoan
  series.  As we are nearing LTS-1, this will become urgent pretty
  quickly.
  
  Reproducer, on an OpenStack cloud:
  juju deploy cs:ubuntu ubuntu-bionic --series bionic
  juju deploy cs:ubuntu ubuntu-disco --series disco
  juju deploy cs:ubuntu ubuntu-eoan --series eoan
  
  
   Succeeds on Bionic:
  
  ubuntu@juju-8d01b7-foo-14:~$ uname -a
  Linux juju-8d01b7-foo-14 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 
UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
  
  ubuntu@juju-8d01b7-foo-14:~$ lsb_release -c
  Codename:   bionic
  
  ubuntu@juju-8d01b7-foo-14:~$ lsblk
  NAMEMAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
  vda 252:00   20G  0 disk 
  ├─vda1  252:10 19.9G  0 part /
  ├─vda14 252:14   04M  0 part 
  └─vda15 252:15   0  106M  0 part /boot/efi
  vdb 252:16   0   40G  0 disk /mnt
  
  ubuntu@juju-8d01b7-foo-14:~$ df -h
  Filesystem  Size  Used Avail Use% Mounted on
  udev985M 0  985M   0% /dev
  tmpfs   200M  712K  199M   1% /run
  /dev/vda120G  1.7G   18G   9% /
  tmpfs   997M 0  997M   0% /dev/shm
  tmpfs   5.0M 0  5.0M   0% /run/lock
  tmpfs   997M 0  997M   0% /sys/fs/cgroup
  /dev/vda15  105M  3.6M  101M   4% /boot/efi
  /dev/vdb 40G   49M   38G   1% /mnt
  tmpfs   100K 0  100K   0% /var/lib/lxd/shmounts
  tmpfs   100K 0  100K   0% /var/lib/lxd/devlxd
  tmpfs   200M 0  200M   0% /run/user/1000
  
  ubuntu@juju-8d01b7-foo-14:~$ sudo umount /dev/vdb
  
  ubuntu@juju-8d01b7-foo-14:~$ sudo mkfs.xfs -f -i size=1024 /dev/vdb
  meta-data=/dev/vdb   isize=1024   agcount=4, agsize=2621440 blks
   =   sectsz=512   attr=2, projid32bit=1
   =   crc=1finobt=1, sparse=0, rmapbt=0, 
reflink=0
  data =   bsize=4096   blocks=10485760, imaxpct=25
   =   sunit=0  swidth=0 blks
  naming   =version 2  bsize=4096   ascii-ci=0 ftype=1
  log  =internal log   bsize=4096   blocks=5120, version=2
   =   sectsz=512   sunit=0 blks, lazy-count=1
  realtime =none   extsz=4096   blocks=0, rtextents=0
  
  
   Fails on Disco:
  
  ubuntu@juju-8d01b7-foo-12:~$ uname -a
  Linux juju-8d01b7-foo-12 5.0.0-27-generic #28-Ubuntu SMP Tue Aug 20 19:53:07 
UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
  ubuntu@juju-8d01b7-foo-12:~$ lsb_release -c
  Codename:   disco
  
  ubuntu@juju-8d01b7-foo-12:~$ sudo umount /mnt
  
  ubuntu@juju-8d01b7-foo-12:~$ lsblk
  NAMEMAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
  loop0 7:00 88.7M  1 loop /snap/core/7396
  loop1 7:10 54.5M  1 loop /snap/lxd/11727
  vda 252:00   20G  0 disk 
  ├─vda1  252:10 19.9G  0 part /
  ├─vda14 252:14   04M  0 part 
  └─vda15 252:15   0  106M  0 part /boot/efi
  vdb 252:16   0   40G  0 disk 
  
  ubuntu@juju-8d01b7-foo-12:~$ sudo mkfs.xfs -f -i size=1024 /dev/vdb
  mkfs.xfs: cannot open /dev/vdb: Device or resource busy
  
  
   Fails on Eoan:
  
  ubuntu@juju-8d01b7-foo-13:~$ sudo umount /mnt
  
  ubuntu@juju-8d01b7-foo-13:~$ lsblk
  NAMEMAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
  loop0 7:00 88.7M  1 loop /snap/core/7396
  loop1 7:10 54.5M  1 loop /snap/lxd/11727
  vda 252:00   20G  0 disk 
  ├─vda1  252:10 19.9G  0 part /
  ├─vda14 252:14   04M  0 part 
  └─vda15 252:15   0  106M  0 part /boot/efi
  vdb 252:16   0   40G  0 disk 
  
  ubuntu@juju-8d01b7-foo-13:~$ sudo mkfs.xfs -f -i size=1024 /dev/vdb
  mkfs.xfs: cannot open /dev/vdb: Device or resource busy
  
  
  ..
  
  
  Original bug description:
  
  On disco-stein, hook failed: "config-changed" with mkfs.xfs: cannot open
  /dev/vdb: Device or resource busy
  
  https://openstack-ci-
  reports.ubuntu.com/artifacts/test_charm_pipeline_func_full/openstack
  /charm-cinder/678676/3/3803/index.html
  
  https://openstack-ci-
  reports.ubuntu.com/artifacts/test_charm_pipeline_func_full/openstack
  /charm-cinder/678676/3/3803/test_charm_func_full_7062/juju-status-zaza-
  5b39f0208674.txt
  
  https://openstack-ci-
  reports.ubuntu.com/artifacts/test_charm_pipeline_func_full/openstack
  /charm-cinder/678676/3/3803/test_charm_func_full_7062/juju-crashdump-
  7d3902a2-4fdf-435a-82c4-5d2ad9af4cb5.tar.xz
  
  2019-09-03 21:33:27 DEBUG config-changed Unpacking apparmor-utils 
(2.13.2-9ubuntu6.1) ...
  2019-09-03 21:33:27 DEBUG config-changed Setting up python3-libapparmor 
(2.13.2-9ubuntu6.1) ...
  2019-09-03 21:33:27 DEBUG config-changed Setting up python3-apparmor 

[Kernel-packages] [Bug 1842751] Re: [disco] [eoan] After unmount, cannot open /dev/vdb: Device or resource busy

2019-09-05 Thread Ryan Beisner
** Also affects: linux (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1842751

Title:
  [disco] [eoan] After unmount, cannot open /dev/vdb: Device or resource
  busy

Status in OpenStack cinder charm:
  New
Status in OpenStack nova-compute charm:
  New
Status in linux package in Ubuntu:
  Incomplete


[Kernel-packages] [Bug 1801349] Re: zpool create -f lxd /dev/vdb fails on cosmic (18.10) -- func27

2018-11-12 Thread Ryan Beisner
Hi Colin - We believe we've provided that in comment #3 above.  It is a
fresh Cosmic instance, followed by reproducer commands.  Please let us
know if this does not allow you to reproduce.  Thank you.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1801349

Title:
  zpool create -f lxd /dev/vdb fails on cosmic (18.10) -- func27

Status in OpenStack LXD Charm:
  New
Status in linux package in Ubuntu:
  In Progress

Bug description:
  Test: tests/gate-basic-cosmic-rocky

  As part of the config, the lxd charm creates a pool device depending
  on the config.  The test config is:

  lxd_config = {
  'block-devices': '/dev/vdb',
  'ephemeral-unmount': '/mnt',
  'storage-type': 'zfs',
  'overwrite': True
  }

  The config drive is normally mounted on /mnt, and the lxd charm
  umounts it as part of the start up.  The /etc/fstab on the unit is:

  # cat /etc/fstab 
  LABEL=cloudimg-rootfs   /ext4   defaults0 0
  LABEL=UEFI  /boot/efi   vfatdefaults0 0
  /dev/vdb/mntauto
defaults,nofail,x-systemd.requires=cloud-init.service,comment=cloudconfig   
0   2
  /dev/vdcnoneswapsw,comment=cloudconfig  0   0

  
  However, even after umount-ing the /mnt off of /dev/vdb, the zpool create 
command still fails:

  # zpool create -f lxd /dev/vdb
  /dev/vdb is in use and contains a unknown filesystem.

  
  If the /etc/fstab is edited so that /dev/vdb is *never* mounted and then 
rebooted, then the zpool create command succeeds:

  # zpool list
  NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAGCAP  DEDUP  HEALTH  ALTROOT
  lxd   14.9G   106K  14.9G - 0% 0%  1.00x  ONLINE  -

  # zpool status lxd
pool: lxd
   state: ONLINE
scan: none requested
  config:

  NAMESTATE READ WRITE CKSUM
  lxd ONLINE   0 0 0
vdb   ONLINE   0 0 0

  errors: No known data errors

  Something odd is going on with cosmic (18.10) and the combination of
  lxd, zfs and the kernel

  lxd version: 3.6
  zfsutils-linux/cosmic,now 0.7.9-3ubuntu6
  Linux: 4.18.0-10-generic

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-lxd/+bug/1801349/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1801349] Re: zpool create -f lxd /dev/vdb fails on cosmic (18.10) -- func27

2018-11-07 Thread Ryan Beisner
** Also affects: zfs-linux (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1801349

Title:
  zpool create -f lxd /dev/vdb fails on cosmic (18.10) -- func27

Status in OpenStack LXD Charm:
  Incomplete
Status in zfs-linux package in Ubuntu:
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-lxd/+bug/1801349/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1720378] Re: Two processes can bind to the same port

2017-09-29 Thread Ryan Beisner
** Tags added: uosci

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1720378

Title:
  Two processes can bind to the same port

Status in linux package in Ubuntu:
  Confirmed

Bug description:
  On both xenial and zesty apache and haproxy seem to be able to bind to
  the same port:

  # netstat -peanut | grep 8776
  tcp0  0 0.0.0.0:87760.0.0.0:*   LISTEN
  0  76856   26190/haproxy   
  tcp6   0  0 :::8776 :::*LISTEN
  0  76749   26254/apache2   
  tcp6   0  0 :::8776 :::*LISTEN
  0  76857   26190/haproxy   

  I thought this should not be possible?
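
  A plausible explanation is SO_REUSEPORT (available since Linux 3.9), which
  lets several sockets bind the same address/port; haproxy sets it on its
  listeners to allow hitless reloads, and apache can do likewise. That is an
  assumption, not something confirmed in this report, but it can be checked
  by comparing the listeners' extended socket info:

  sudo ss -ltnpe '( sport = :8776 )'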

  ProblemType: Bug
  DistroRelease: Ubuntu 16.04
  Package: linux-image-4.4.0-96-generic 4.4.0-96.119
  ProcVersionSignature: Ubuntu 4.4.0-96.119-generic 4.4.83
  Uname: Linux 4.4.0-96-generic x86_64
  AlsaDevices:
   total 0
   crw-rw 1 root audio 116,  1 Sep 29 11:46 seq
   crw-rw 1 root audio 116, 33 Sep 29 11:46 timer
  AplayDevices: Error: [Errno 2] No such file or directory: 'aplay'
  ApportVersion: 2.20.1-0ubuntu2.10
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord'
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  CRDA: N/A
  Date: Fri Sep 29 14:15:26 2017
  Ec2AMI: ami-0193
  Ec2AMIManifest: FIXME
  Ec2AvailabilityZone: nova
  Ec2InstanceType: m1.blue
  Ec2Kernel: unavailable
  Ec2Ramdisk: unavailable
  IwConfig: Error: [Errno 2] No such file or directory: 'iwconfig'
  Lsusb:
   Bus 001 Device 002: ID 0627:0001 Adomax Technology Co., Ltd 
   Bus 001 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
  MachineType: OpenStack Foundation OpenStack Nova
  PciMultimedia:
   
  ProcEnviron:
   TERM=screen
   PATH=(custom, no user)
   LANG=en_US.UTF-8
   SHELL=/bin/bash
  ProcFB:
   
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-4.4.0-96-generic 
root=LABEL=cloudimg-rootfs ro console=tty1 console=ttyS0
  RelatedPackageVersions:
   linux-restricted-modules-4.4.0-96-generic N/A
   linux-backports-modules-4.4.0-96-generic  N/A
   linux-firmwareN/A
  RfKill: Error: [Errno 2] No such file or directory: 'rfkill'
  SourcePackage: linux
  UpgradeStatus: No upgrade log present (probably fresh install)
  dmi.bios.date: 04/01/2014
  dmi.bios.vendor: SeaBIOS
  dmi.bios.version: 1.10.1-1ubuntu1~cloud0
  dmi.chassis.type: 1
  dmi.chassis.vendor: QEMU
  dmi.chassis.version: pc-i440fx-zesty
  dmi.modalias: 
dmi:bvnSeaBIOS:bvr1.10.1-1ubuntu1~cloud0:bd04/01/2014:svnOpenStackFoundation:pnOpenStackNova:pvr15.0.2:cvnQEMU:ct1:cvrpc-i440fx-zesty:
  dmi.product.name: OpenStack Nova
  dmi.product.version: 15.0.2
  dmi.sys.vendor: OpenStack Foundation

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1720378/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1709784] Re: KVM on 16.04.3 throws an error

2017-09-12 Thread Ryan Beisner
Confirmed workaround for OpenStack Ocata on Xenial:

### Nova compute host [hwe-edge kernel via MAAS]
ubuntu@node-mawhile:~$ uname -a
Linux node-mawhile 4.11.0-14-generic #20~16.04.1-Ubuntu SMP Wed Aug 9 09:06:18 
UTC 2017 ppc64le ppc64le ppc64le GNU/Linux

### Nova guest [stock xenial ppc64el cloud image]
[0.00] Linux version 4.4.0-65-generic (buildd@bos01-ppc64el-028) (gcc version 5.4.0 20160609 (Ubuntu/IBM 5.4.0-6ubuntu1~16.04.4) ) #86-Ubuntu SMP Thu Feb 23 17:48:50 UTC 2017 (Ubuntu 4.4.0-65.86-generic 4.4.49)
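
Since the confirmed workaround is the hwe-edge kernel, the matching install
step on a Xenial compute host would be as follows (a sketch; the meta-package
name is the standard HWE-edge one and is not taken from this thread):

sudo apt-get install -y linux-generic-hwe-16.04-edge
sudo reboot
uname -a   # expect a 4.11-series kernel, as in the output above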

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1709784

Title:
  KVM on 16.04.3 throws an error

Status in The Ubuntu-power-systems project:
  Fix Committed
Status in linux package in Ubuntu:
  Fix Committed
Status in qemu package in Ubuntu:
  Won't Fix
Status in linux source package in Xenial:
  Fix Committed
Status in linux source package in Zesty:
  Invalid


[Kernel-packages] [Bug 1709784] Re: KVM on 16.04.3 throws an error

2017-09-12 Thread Ryan Beisner
Affects OpenStack on ppc64el.  Marking other bug as duplicate (it has 
logs/attachments fyi). 
 https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1716469

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1709784

Title:
  KVM on 16.04.3 throws an error

Status in The Ubuntu-power-systems project:
  Fix Committed
Status in linux package in Ubuntu:
  Fix Committed
Status in qemu package in Ubuntu:
  Won't Fix
Status in linux source package in Xenial:
  Fix Committed
Status in linux source package in Zesty:
  Invalid

Bug description:
  Problem Description
  
  KVM on Ubuntu 16.04.3 throws an error when used
   
  ---uname output---
  Linux bastion-1 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 19:37:08 UTC 2017 
ppc64le ppc64le ppc64le GNU/Linux
   
  Machine Type =  8348-21C Habanero 
   
  ---Steps to Reproduce---
   Install 16.04.3

  install KVM like:

  apt-get install libvirt-bin qemu qemu-slof qemu-system qemu-utils

  then exit and log back in so virsh will work without sudo

  then run my spawn script

  $ cat spawn.sh
  #!/bin/bash

  img=$1
  qemu-system-ppc64 \
  -machine pseries,accel=kvm,usb=off -cpu host -m 512 \
  -display none -nographic \
  -net nic -net user \
  -drive "file=$img"

  with a freshly downloaded ubuntu cloud image

  sudo ./spawn.sh xenial-server-cloudimg-ppc64el-disk1.img

  And I get nothing on the output.

  and errors in dmesg

  
  ubuntu@bastion-1:~$ [  340.180295] Facility 'TM' unavailable, exception at 
0xd000148b7f10, MSR=90009033
  [  340.180399] Oops: Unexpected facility unavailable exception, sig: 6 [#1]
  [  340.180513] SMP NR_CPUS=2048 NUMA PowerNV
  [  340.180547] Modules linked in: xt_CHECKSUM iptable_mangle ipt_MASQUERADE 
nf_nat_masquerade_ipv4 iptable_nat nf_nat_ipv4 nf_nat nf_conntrack_ipv4 
nf_defrag_ipv4 xt_conntrack nf_conntrack ipt_REJECT nf_reject_ipv4 xt_tcpudp 
bridge stp llc ebtable_filter ebtables ip6table_filter ip6_tables 
iptable_filter ip_tables x_tables kvm_hv kvm binfmt_misc joydev input_leds 
mac_hid opal_prd ofpart cmdlinepart powernv_flash ipmi_powernv ipmi_msghandler 
mtd at24 uio_pdrv_genirq uio ibmpowernv powernv_rng vmx_crypto ib_iser rdma_cm 
iw_cm ib_cm ib_sa ib_mad ib_core ib_addr iscsi_tcp libiscsi_tcp libiscsi 
scsi_transport_iscsi autofs4 btrfs raid10 raid456 async_raid6_recov 
async_memcpy async_pq async_xor async_tx xor raid6_pq raid1 raid0 multipath 
linear mlx4_en hid_generic usbhid hid uas usb_storage ast i2c_algo_bit bnx2x 
ttm drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops mlx4_core drm 
ahci vxlan libahci ip6_udp_tunnel udp_tunnel mdio libcrc32c
  [  340.181331] CPU: 46 PID: 5252 Comm: qemu-system-ppc Not tainted 
4.4.0-89-generic #112-Ubuntu
  [  340.181382] task: c01e34c30b50 ti: c01e34ce4000 task.ti: 
c01e34ce4000
  [  340.181432] NIP: d000148b7f10 LR: d00014822a14 CTR: 
d000148b7e40
  [  340.181475] REGS: c01e34ce77b0 TRAP: 0f60   Not tainted  
(4.4.0-89-generic)
  [  340.181519] MSR: 90009033   CR: 22024848  
XER: 
  [  340.181629] CFAR: d000148b7ea4 SOFTE: 1 
  GPR00: d00014822a14 c01e34ce7a30 d000148cc018 c01e37bc 
  GPR04: c01db9ac c01e34ce7bc0   
  GPR08: 0001 c01e34c30b50 0001 d000148278f8 
  GPR12: d000148b7e40 cfb5b500  001f 
  GPR16: 3fff91c3 0080 3fffa8e34390 3fff9242f200 
  GPR20: 3fff92430010 01001de5c030 3fff9242eb60 100c1ff0 
  GPR24: 3fffc91fe990 3fff91c10028  c01e37bc 
  GPR28:  c01db9ac c01e37bc c01db9ac 
  [  340.182315] NIP [d000148b7f10] kvmppc_vcpu_run_hv+0xd0/0xff0 [kvm_hv]
  [  340.182357] LR [d00014822a14] kvmppc_vcpu_run+0x44/0x60 [kvm]
  [  340.182394] Call Trace:
  [  340.182413] [c01e34ce7a30] [c01e34ce7ab0] 0xc01e34ce7ab0 
(unreliable)
  [  340.182468] [c01e34ce7b70] [d00014822a14] 
kvmppc_vcpu_run+0x44/0x60 [kvm]
  [  340.182522] [c01e34ce7ba0] [d0001481f674] 
kvm_arch_vcpu_ioctl_run+0x64/0x170 [kvm]
  [  340.182581] [c01e34ce7be0] [d00014813918] 
kvm_vcpu_ioctl+0x528/0x7b0 [kvm]
  [  340.182634] [c01e34ce7d40] [c02fffa0] do_vfs_ioctl+0x480/0x7d0
  [  340.182678] [c01e34ce7de0] [c03003c4] SyS_ioctl+0xd4/0xf0
  [  340.182723] [c01e34ce7e30] [c0009204] system_call+0x38/0xb4
  [  340.182766] Instruction dump:
  [  340.182788] e92d02a0 e9290a50 e9290108 792a07e3 41820058 e92d02a0 e9290a50 
e9290108 
  [  340.182863] 7927e8a4 78e71f87 40820ed8 e92d02a0 <7d4022a6> f9490ee8 
e92d02a0 7d4122a6 
  [  340.182938] ---[ end trace bc5080cb7d18f102 ]---
  [  340.276202] 

  
  This was with the latest ubuntu cloud image. I get the same thing 

[Kernel-packages] [Bug 1695093] Re: arm64: "unsupported RELA relocation: 275" loading certain modules

2017-06-07 Thread Ryan Beisner
** Tags added: arm64 uosci

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1695093

Title:
  arm64: "unsupported RELA relocation: 275" loading certain modules

Status in linux package in Ubuntu:
  Confirmed
Status in linux source package in Xenial:
  Confirmed
Status in linux source package in Yakkety:
  Confirmed
Status in linux source package in Zesty:
  Confirmed

Bug description:
  With the hwe-z kernel:

  ubuntu@grotian:~$ sudo modprobe libceph
  modprobe: ERROR: could not insert 'libceph': Exec format error
  ubuntu@grotian:~$ dmesg
  [66988.470307] module libceph: unsupported RELA relocation: 275

  This symptom is similar to LP: #1533009 but, in that case it impacted
  all modules, and the fix for that appears to remain in place.
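
  Relocation type 275 is R_AARCH64_ADR_PREL_PG_HI21 in the AArch64 ELF ABI,
  so the failing module can be checked for it directly (a sketch; the path
  is the stock location of libceph.ko):

  readelf -r /lib/modules/$(uname -r)/kernel/net/ceph/libceph.ko \
    | grep -c ADR_PREL_PG_HI21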

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1695093/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1413540] Re: Trusty soft lockup issues with nested KVM

2015-03-25 Thread Ryan Beisner
@smb - after repeating the test a few times, I too ran out of space with
the default 8GB VM disk size, resulting in a paused VM.  You'll have to
re-create the VMs a little bit differently (--disk <GB>).

ex:
@L0:
sudo uvt-kvm destroy trusty-vm
sudo uvt-kvm create --memory 2048 --disk 40 trusty-vm release=trusty

@L1:
#repeat original repro

ref:
http://manpages.ubuntu.com/manpages/trusty/man1/uvt-kvm.1.html
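
If a VM lands in the paused state again, the cause can be confirmed before
re-creating it (a sketch; the image-pool path is uvtool's default and an
assumption here):

sudo virsh domstate --reason trusty-vm   # shows "paused (I/O error)" on ENOSPC
df -h /var/lib/uvtool/libvirt/images     # default uvtool volume pool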

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1413540

Title:
  Trusty soft lockup issues with nested KVM

Status in linux package in Ubuntu:
  Confirmed


[Kernel-packages] [Bug 1413540] Re: Trusty soft lockup issues with nested KVM

2015-03-23 Thread Ryan Beisner
Also FYI:  I was not able to reproduce this issue when using Vivid as
the bare metal L0.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1413540

Title:
  Trusty soft lockup issues with nested KVM

Status in linux package in Ubuntu:
  Confirmed

Bug description:
  [Impact]
  Users of nested KVM for testing openstack have soft lockups as follows:

  PID: 22262  TASK: 8804274bb000  CPU: 1   COMMAND: qemu-system-x86
   #0 [88043fd03d18] machine_kexec at 8104ac02
   #1 [88043fd03d68] crash_kexec at 810e7203
   #2 [88043fd03e30] panic at 81719ff4
   #3 [88043fd03ea8] watchdog_timer_fn at 8110d7c5
   #4 [88043fd03ed8] __run_hrtimer at 8108e787
   #5 [88043fd03f18] hrtimer_interrupt at 8108ef4f
   #6 [88043fd03f80] local_apic_timer_interrupt at 81043537
   #7 [88043fd03f98] smp_apic_timer_interrupt at 81733d4f
   #8 [88043fd03fb0] apic_timer_interrupt at 817326dd
  --- IRQ stack ---
   #9 [880426f0d958] apic_timer_interrupt at 817326dd
  [exception RIP: generic_exec_single+130]
  RIP: 810dbe62  RSP: 880426f0da00  RFLAGS: 0202
  RAX: 0002  RBX: 880426f0d9d0  RCX: 0001
  RDX: 8180ad60  RSI:   RDI: 0286
  RBP: 880426f0da30   R8: 8180ad48   R9: 88042713bc68
  R10: 7fe7d1f2dbd0  R11: 0206  R12: 8804274bb000
  R13:   R14: 880407670280  R15: 
  ORIG_RAX: ff10  CS: 0010  SS: 0018
  #10 [880426f0da38] smp_call_function_single at 810dbf75
  #11 [880426f0dab0] smp_call_function_many at 810dc3a6
  #12 [880426f0db10] native_flush_tlb_others at 8105c8f7
  #13 [880426f0db38] flush_tlb_mm_range at 8105c9cb
  #14 [880426f0db68] pmdp_splitting_flush at 8105b80d
  #15 [880426f0db88] __split_huge_page at 811ac90b
  #16 [880426f0dc20] split_huge_page_to_list at 811acfb8
  #17 [880426f0dc48] __split_huge_page_pmd at 811ad956
  #18 [880426f0dcc8] unmap_page_range at 8117728d
  #19 [880426f0dda0] unmap_single_vma at 81177341
  #20 [880426f0ddd8] zap_page_range at 811784cd
  #21 [880426f0de90] sys_madvise at 81174fbf
  #22 [880426f0df80] system_call_fastpath at 8173196d
  RIP: 7fe7ca2cc647  RSP: 7fe7be9febf0  RFLAGS: 0293
  RAX: 001c  RBX: 8173196d  RCX: 
  RDX: 0004  RSI: 007fb000  RDI: 7fe7be1ff000
  RBP:    R8:    R9: 7fe7d1cd2738
  R10: 7fe7d1f2dbd0  R11: 0206  R12: 7fe7be9ff700
  R13: 7fe7be9ff9c0  R14:   R15: 
  ORIG_RAX: 001c  CS: 0033  SS: 002b

  
  [Test Case]
  - Deploy openstack on openstack
  - Run tempest on L1 cloud
  - Check kernel log of L1 nova-compute nodes

  (Although this may not necessarily be related to nested KVM)
  Potentially related: https://lkml.org/lkml/2014/11/14/656

  --

  Original Description:

  When installing qemu-kvm on a VM, KSM is enabled.

  I have encountered this problem in trusty:$ lsb_release -a
  Distributor ID: Ubuntu
  Description:Ubuntu 14.04.1 LTS
  Release:14.04
  Codename:   trusty
  $ uname -a
  Linux juju-gema-machine-2 3.13.0-40-generic #69-Ubuntu SMP Thu Nov 13 
17:53:56 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

  The way to see the behaviour:
  1) $ more /sys/kernel/mm/ksm/run
  0
  2) $ sudo apt-get install qemu-kvm
  3) $ more /sys/kernel/mm/ksm/run
  1
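
  If KSM is the trigger, it can be ruled out without uninstalling qemu-kvm
  (a sketch; the /etc/default/qemu-kvm knob is the trusty packaging default
  and an assumption here):

  echo 2 | sudo tee /sys/kernel/mm/ksm/run   # 2 = stop KSM and unmerge pages
  sudo sed -i 's/^KSM_ENABLED=1/KSM_ENABLED=0/' /etc/default/qemu-kvm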

  To see the soft lockups, deploy a cloud on a virtualised env like ctsstack, 
run tempest on it, the compute nodes of the virtualised deployment will 
eventually stop responding with (run tempest 2 times at least):
   24096.072003] BUG: soft lockup - CPU#0 stuck for 23s! [qemu-system-x86:24791]
  [24124.072003] BUG: soft lockup - CPU#0 stuck for 23s! [qemu-system-x86:24791]
  [24152.072002] BUG: soft lockup - CPU#0 stuck for 22s! [qemu-system-x86:24791]
  [24180.072003] BUG: soft lockup - CPU#0 stuck for 22s! [qemu-system-x86:24791]
  [24208.072004] BUG: soft lockup - CPU#0 stuck for 22s! [qemu-system-x86:24791]
  [24236.072004] BUG: soft lockup - CPU#0 stuck for 22s! [qemu-system-x86:24791]
  [24264.072003] BUG: soft lockup - CPU#0 stuck for 22s! [qemu-system-x86:24791]

  I am not sure whether the problem is that we are enabling KSM on a VM
  or the problem is that nested KSM is not behaving properly. Either way
  I can easily reproduce, please contact me if you need further details.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1413540/+subscriptions

-- 
Mailing list: 

[Kernel-packages] [Bug 1413540] Re: Trusty soft lockup issues with nested KVM

2015-03-23 Thread Ryan Beisner
This does not appear to be specific to OpenStack, nor tempest.  I've
reproduced with Trusty on Trusty on Trusty, vanilla qemu/kvm.

Simplified reproducer, with an existing MAAS cluster:

@L0 baremetal:
 - Create a Trusty bare metal host from daily images.
 - sudo apt-get update -y && sudo apt-get -y install uvtool
 - sudo uvt-simplestreams-libvirt sync release=trusty arch=amd64
 - sudo uvt-simplestreams-libvirt query
 - ssh-keygen
 - sudo uvt-kvm create --memory 2048 trusty-vm release=trusty
 - sudo virsh shutdown trusty-vm
 - # edit the /etc/libvirt/qemu/trusty-vm.xml to enable serial console dump to file:
    <serial type='file'>
      <source path='/tmp/trusty-vm-console.log'/>
      <target port='0'/>
    </serial>
    <console type='file'>
      <source path='/tmp/trusty-vm-console.log'/>
      <target type='serial' port='0'/>
    </console>
 - sudo virsh define /etc/libvirt/qemu/trusty-vm.xml
 - sudo virsh start trusty-vm
 - # confirm console output:
 - sudo tailf /tmp/trusty-vm-console.log
 - # take note of the VM's IP:
 - sudo uvt-kvm ip trusty-vm
 - # ssh into the new vm.

@L1 trusty-vm:
 - sudo apt-get update -y && sudo apt-get -y install uvtool
 - sudo uvt-simplestreams-libvirt sync release=trusty arch=amd64
 - sudo uvt-simplestreams-libvirt query
 - ssh-keygen
 - # change .122. to .123. in /etc/libvirt/qemu/networks/default.xml
 - # make sure default.xml is static linked inside /etc/libvirt/qemu/networks
 - sudo reboot  # for good measure
 - sudo uvt-kvm create --memory 768 trusty-nest release=trusty
 - # take note of the nested VM's IP
 - sudo uvt-kvm ip trusty-nest
 - # ssh into the new vm.

@L2 trusty-nest:
 - sudo apt-get update && sudo apt-get install stress
 - stress -c 1 -i 1 -m 1 -d 1 -t 600

Now watch the trusty-vm console for:  [  496.076004] BUG: soft lockup
- CPU#0 stuck for 23s! [ksmd:36].  It happens to me within a couple of
minutes.  Then, both L1 and L2 become unreachable indefinitely, with two
cores on L0 stuck at 100%.
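
To capture evidence while this runs, something like the following on L0 is
enough (a sketch; the output file name is arbitrary):

sudo tail -f /tmp/trusty-vm-console.log | grep --line-buffered 'soft lockup' &
top -b -d 5 -n 120 > l0-top.log   # records which host cores sit at 100%
sudo virsh vcpuinfo trusty-vm     # maps the guest's stuck vCPUs to host CPUs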

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1413540

Title:
  Trusty soft lockup issues with nested KVM

Status in linux package in Ubuntu:
  Confirmed


[Kernel-packages] [Bug 1413540] Re: Trusty soft lockup issues with nested KVM

2015-03-23 Thread Ryan Beisner
** Summary changed:

- soft lockup issues with nested KVM VMs running tempest
+ Trusty soft lockup issues with nested KVM

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1413540

Title:
  Trusty soft lockup issues with nested KVM

Status in linux package in Ubuntu:
  Confirmed


To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1413540/+subscriptions

[Kernel-packages] [Bug 1413540] Re: Trusty soft lockup issues with nested KVM

2015-03-23 Thread Ryan Beisner
s/static/sym/  ;-)

[Kernel-packages] [Bug 1413540] Re: Trusty soft lockup issues with nested KVM

2015-03-23 Thread Ryan Beisner
A few hours later, those two L0 bare-metal host CPUs are still maxed. In
scenarios where L0 is hosting many VMs, such as in a cloud, this bug can
be expected to cause significant performance, consistency and capacity
issues on the host and in the cloud as a whole.
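
One way to see which qemu processes are spinning on the L0 host is a
sketch like this (pidstat is in the sysstat package; the trailing 5 is
the sampling interval in seconds):

$ sudo apt-get install sysstat
$ pidstat -u -p $(pgrep -d, -f qemu-system-x86) 5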

[Kernel-packages] [Bug 1413540] Re: Trusty soft lockup issues with nested KVM

2015-03-23 Thread Ryan Beisner
** Attachment added: "L0-baremetal-cpu-pegged.png"
   https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1413540/+attachment/4353983/+files/L0-baremetal-cpu-pegged.png

[Kernel-packages] [Bug 1413540] Re: Trusty soft lockup issues with nested KVM

2015-03-23 Thread Ryan Beisner
** Attachment added: "L1-console-log-soft-lockup.png"
   https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1413540/+attachment/4353984/+files/L1-console-log-soft-lockup.png

[Kernel-packages] [Bug 1413540] Re: Trusty soft lockup issues with nested KVM

2015-03-23 Thread Ryan Beisner
I've collected crash dumps and stored them on an internal Canonical
server, as they are 2 GB+. Feel free to ping me for access.
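
For anyone inspecting the dumps, a sketch of loading one with the crash
utility (the dump file name here is illustrative; the vmlinux comes from
the matching dbgsym package):

$ sudo apt-get install crash
$ crash /usr/lib/debug/boot/vmlinux-3.13.0-40-generic dump.201503230405
crash> bt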

[Kernel-packages] [Bug 1413540] Re: soft lockup issues with nested KVM VMs running tempest

2015-03-07 Thread Ryan Beisner
Also FWIW, RAM overcommit does not appear to be a factor on the affected
compute node:  http://paste.ubuntu.com/10556588/
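
A quick way to rule that out on a node is standard procfs data, roughly:

$ free -m
$ cat /proc/sys/vm/overcommit_memory
$ grep -E 'CommitLimit|Committed_AS' /proc/meminfo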

[Kernel-packages] [Bug 1413540] Re: soft lockup issues with nested KVM VMs running tempest

2015-03-07 Thread Ryan Beisner
We have begun to see a noticeable percentage of this in our OpenStack-
on-OpenStack automated testing. I've not yet dug too deeply, but the
symptom is: a deployment completes and tests successfully, then as
we're collecting logs from machines, one or more of the compute nodes
falls over with: http://paste.ubuntu.com/10556479/

Let me know if there's anything you'd like for us to try.   This is on
our private cloud serverstack environment.

ubuntu@chakora:~$ uname -a
Linux chakora 3.13.0-46-generic #77-Ubuntu SMP Mon Mar 2 18:23:39 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

ubuntu@chakora:~$ dpkg-query --show linux-generic libvirt-bin qemu-system nova-compute-kvm nova-compute-libvirt
libvirt-bin          1.2.2-0ubuntu13.1.9
linux-generic        3.13.0.46.53
nova-compute-kvm     1:2014.1.3-0ubuntu2
nova-compute-libvirt 1:2014.1.3-0ubuntu2
qemu-system          2.0.0+dfsg-2ubuntu1.10

[Kernel-packages] [Bug 1410363] Re: partition table updates require a reboot

2015-01-27 Thread Ryan Beisner
** Tags added: openstack uosci

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1410363

Title:
  partition table updates require a reboot

Status in linux package in Ubuntu:
  Incomplete

Bug description:
  We're seeing a problem in automated testing of ceph and swift in that
  partition table updates always require a reboot:

  2015-01-13 16:50:07 INFO mon-relation-changed Setting name!
  2015-01-13 16:50:07 INFO mon-relation-changed partNum is 1
  2015-01-13 16:50:07 INFO mon-relation-changed REALLY setting name!
  2015-01-13 16:50:07 INFO mon-relation-changed Warning: The kernel is still using the old partition table.
  2015-01-13 16:50:07 INFO mon-relation-changed The new table will be used at the next reboot.
  2015-01-13 16:50:07 INFO mon-relation-changed The operation has completed successfully.
  2015-01-13 16:50:09 INFO mon-relation-changed Setting name!
  2015-01-13 16:50:09 INFO mon-relation-changed partNum is 0
  2015-01-13 16:50:09 INFO mon-relation-changed REALLY setting name!
  2015-01-13 16:50:09 INFO mon-relation-changed Warning: The kernel is still using the old partition table.
  2015-01-13 16:50:09 INFO mon-relation-changed The new table will be used at the next reboot.
  2015-01-13 16:50:09 INFO mon-relation-changed The operation has completed successfully.
  2015-01-13 16:50:09 INFO mon-relation-changed mkfs.xfs: cannot open /dev/vdb1: Device or resource busy
  2015-01-13 16:50:09 INFO mon-relation-changed ceph-disk: Error: Command '['/sbin/mkfs', '-t', 'xfs', '-f', '-i', 'size=2048', '--', '/dev/vdb1']' returned non-zero exit status 1
  2015-01-13 16:50:09 ERROR juju-log mon:1: Unable to initialize device: /dev/vdb
  2015-01-13 16:50:09 INFO mon-relation-changed Traceback (most recent call last):
  2015-01-13 16:50:09 INFO mon-relation-changed   File "/var/lib/juju/agents/unit-ceph-0/charm/hooks/mon-relation-changed", line 381, in <module>
  2015-01-13 16:50:09 INFO mon-relation-changed hooks.execute(sys.argv)
  2015-01-13 16:50:09 INFO mon-relation-changed   File "/var/lib/juju/agents/unit-ceph-0/charm/hooks/charmhelpers/core/hookenv.py", line 528, in execute
  2015-01-13 16:50:09 INFO mon-relation-changed self._hooks[hook_name]()
  2015-01-13 16:50:09 INFO mon-relation-changed   File "/var/lib/juju/agents/unit-ceph-0/charm/hooks/mon-relation-changed", line 217, in mon_relation
  2015-01-13 16:50:09 INFO mon-relation-changed reformat_osd(), config('ignore-device-errors'))
  2015-01-13 16:50:09 INFO mon-relation-changed   File "/var/lib/juju/agents/unit-ceph-0/charm/hooks/ceph.py", line 327, in osdize
  2015-01-13 16:50:09 INFO mon-relation-changed osdize_dev(dev, osd_format, osd_journal, reformat_osd, ignore_errors)
  2015-01-13 16:50:09 INFO mon-relation-changed   File "/var/lib/juju/agents/unit-ceph-0/charm/hooks/ceph.py", line 375, in osdize_dev
  2015-01-13 16:50:09 INFO mon-relation-changed raise e
  2015-01-13 16:50:09 INFO mon-relation-changed subprocess.CalledProcessError: Command '['ceph-disk-prepare', '--fs-type', u'xfs', '--zap-disk', u'/dev/vdb']' returned non-zero exit status 1

  This is obviously blocking deployment and subsequent testing; previous
  Ubuntu releases have been OK.
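
  For reference, the usual ways to ask the kernel to re-read a changed
  partition table without a reboot (a sketch; the device name matches
  the log above):

  $ sudo partprobe /dev/vdb
  $ sudo blockdev --rereadpt /dev/vdb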

  ProblemType: Bug
  DistroRelease: Ubuntu 15.04
  Package: linux-image-generic (not installed)
  ProcVersionSignature: User Name 3.18.0-8.9-generic 3.18.1
  Uname: Linux 3.18.0-8-generic x86_64
  AlsaDevices: Error: command ['ls', '-l', '/dev/snd/'] failed with exit code 2: ls: cannot access /dev/snd/: No such file or directory
  AplayDevices: Error: [Errno 2] No such file or directory: 'aplay'
  ApportVersion: 2.15.1-0ubuntu2
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord'
  CRDA: Error: [Errno 2] No such file or directory: 'iw'
  Date: Tue Jan 13 16:51:36 2015
  Ec2AMI: ami-006e
  Ec2AMIManifest: FIXME
  Ec2AvailabilityZone: nova
  Ec2InstanceType: m1.small
  Ec2Kernel: aki-0002
  Ec2Ramdisk: ari-0002
  IwConfig: Error: [Errno 2] No such file or directory: 'iwconfig'
  Lsusb: Bus 001 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
  MachineType: OpenStack Foundation OpenStack Nova
  PciMultimedia:
   
  ProcFB:
   
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-3.18.0-8-generic root=LABEL=cloudimg-rootfs ro console=tty1 console=ttyS0
  RelatedPackageVersions:
   linux-restricted-modules-3.18.0-8-generic N/A
   linux-backports-modules-3.18.0-8-generic  N/A
   linux-firmwareN/A
  RfKill: Error: [Errno 2] No such file or directory: 'rfkill'
  SourcePackage: linux
  UpgradeStatus: No upgrade log present (probably fresh install)
  dmi.bios.date: 01/01/2011
  dmi.bios.vendor: Bochs
  dmi.bios.version: Bochs
  dmi.chassis.type: 1
  dmi.chassis.vendor: Bochs
  dmi.modalias: 

[Kernel-packages] [Bug 1408972] Re: openvswitch: failed to flow_del (No such file or directory)

2015-01-09 Thread Ryan Beisner
** Tags added: openstack uosci

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1408972

Title:
  openvswitch: failed to flow_del (No such file or directory)

Status in linux package in Ubuntu:
  Confirmed
Status in openvswitch package in Ubuntu:
  Invalid

Bug description:
  As part of the investigation into bug 1336555, we've noticed a large
  number of these error messages being logged in 14.04 OpenStack
  deployments:

  2015-01-09T10:50:53.643Z|03976|dpif|WARN|system@ovs-system: failed to flow_del (No such file or directory) skb_priority(0),in_port(4),skb_mark(0),eth(src=ce:2b:3a:c3:d7:d1,dst=33:33:ff:50:c2:61),eth_type(0x86dd),ipv6(src=::,dst=ff02::1:ff50:c261,label=0,proto=58,tclass=0,hlimit=255,frag=no),icmpv6(type=135,code=0),nd(target=fe80::5ccd:caff:fe50:c261)
  2015-01-09T10:50:54.644Z|03977|dpif|WARN|system@ovs-system: failed to flow_del (No such file or directory) skb_priority(0),in_port(4),skb_mark(0),eth(src=ce:2b:3a:c3:d7:d1,dst=33:33:ff:50:c2:61),eth_type(0x86dd),ipv6(src=::,dst=ff02::1:ff50:c261,label=0,proto=58,tclass=0,hlimit=255,frag=no),icmpv6(type=135,code=0),nd(target=fe80::5ccd:caff:fe50:c261)
  2015-01-09T10:50:55.645Z|03978|dpif|WARN|system@ovs-system: failed to flow_del (No such file or directory) skb_priority(0),in_port(4),skb_mark(0),eth(src=ce:2b:3a:c3:d7:d1,dst=33:33:ff:50:c2:61),eth_type(0x86dd),ipv6(src=::,dst=ff02::1:ff50:c261,label=0,proto=58,tclass=0,hlimit=255,frag=no),icmpv6(type=135,code=0),nd(target=fe80::5ccd:caff:fe50:c261)
  2015-01-09T10:50:56.645Z|03979|dpif|WARN|system@ovs-system: failed to flow_del (No such file or directory) skb_priority(0),in_port(4),skb_mark(0),eth(src=ce:2b:3a:c3:d7:d1,dst=33:33:ff:50:c2:61),eth_type(0x86dd),ipv6(src=::,dst=ff02::1:ff50:c261,label=0,proto=58,tclass=0,hlimit=255,frag=no),icmpv6(type=135,code=0),nd(target=fe80::5ccd:caff:fe50:c261)
  2015-01-09T10:50:57.646Z|03980|dpif|WARN|system@ovs-system: failed to flow_del (No such file or directory) skb_priority(0),in_port(4),skb_mark(0),eth(src=ce:2b:3a:c3:d7:d1,dst=33:33:ff:50:c2:61),eth_type(0x86dd),ipv6(src=::,dst=ff02::1:ff50:c261,label=0,proto=58,tclass=0,hlimit=255,frag=no),icmpv6(type=135,code=0),nd(target=fe80::5ccd:caff:fe50:c261)
  2015-01-09T10:50:58.646Z|03981|dpif|WARN|system@ovs-system: failed to flow_del (No such file or directory) skb_priority(0),in_port(4),skb_mark(0),eth(src=ce:2b:3a:c3:d7:d1,dst=33:33:ff:50:c2:61),eth_type(0x86dd),ipv6(src=::,dst=ff02::1:ff50:c261,label=0,proto=58,tclass=0,hlimit=255,frag=no),icmpv6(type=135,code=0),nd(target=fe80::5ccd:caff:fe50:c261)
  2015-01-09T10:50:59.645Z|03982|dpif|WARN|system@ovs-system: failed to flow_del (No such file or directory) skb_priority(0),in_port(4),skb_mark(0),eth(src=ce:2b:3a:c3:d7:d1,dst=33:33:ff:50:c2:61),eth_type(0x86dd),ipv6(src=::,dst=ff02::1:ff50:c261,label=0,proto=58,tclass=0,hlimit=255,frag=no),icmpv6(type=135,code=0),nd(target=fe80::5ccd:caff:fe50:c261)
  2015-01-09T10:51:00.646Z|03983|dpif|WARN|system@ovs-system: failed to flow_del (No such file or directory) skb_priority(0),in_port(4),skb_mark(0),eth(src=ce:2b:3a:c3:d7:d1,dst=33:33:ff:50:c2:61),eth_type(0x86dd),ipv6(src=::,dst=ff02::1:ff50:c261,label=0,proto=58,tclass=0,hlimit=255,frag=no),icmpv6(type=135,code=0),nd(target=fe80::5ccd:caff:fe50:c261)
  2015-01-09T10:51:01.646Z|03984|dpif|WARN|system@ovs-system: failed to flow_del (No such file or directory) skb_priority(0),in_port(4),skb_mark(0),eth(src=ce:2b:3a:c3:d7:d1,dst=33:33:ff:50:c2:61),eth_type(0x86dd),ipv6(src=::,dst=ff02::1:ff50:c261,label=0,proto=58,tclass=0,hlimit=255,frag=no),icmpv6(type=135,code=0),nd(target=fe80::5ccd:caff:fe50:c261)

  This was fixed upstream in the datapath DKMS module (see
  https://github.com/openvswitch/ovs/commit/3601bd879) and is fixed in
  the 3.16 kernel as well; please backport this fix to 3.13.
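
  A quick sketch for checking whether a node is still hitting this (the
  log path is the Ubuntu default for ovs-vswitchd):

  $ grep -c 'failed to flow_del' /var/log/openvswitch/ovs-vswitchd.log
  $ modinfo openvswitch | grep -E '^(version|vermagic)'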

  Thanks!

  ProblemType: Bug
  DistroRelease: Ubuntu 14.04
  Package: linux-image-generic 3.13.0.43.50
  ProcVersionSignature: Ubuntu 3.13.0-35.62-generic 3.13.11.6
  Uname: Linux 3.13.0-35-generic x86_64
  AlsaDevices:
   total 0
   crw-rw 1 root audio 116,  1 Dec 11 21:01 seq
   crw-rw 1 root audio 116, 33 Dec 11 21:01 timer
  AplayDevices: Error: [Errno 2] No such file or directory: 'aplay'
  ApportVersion: 2.14.1-0ubuntu3.6
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord'
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', '/dev/snd/timer'] failed with exit code 1:
  Date: Fri Jan  9 10:50:39 2015
  IwConfig: Error: [Errno 2] No such file or directory: 'iwconfig'
  MachineType: HP ProLiant DL360p Gen8
  PciMultimedia:
   
  ProcFB: 0 VESA VGA
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-3.13.0-35-generic root=UUID=f8e77ea1-c970-4e7a-9e49-48532d763167 ro console=tty0 console=ttyS1,38400 quiet
  RelatedPackageVersions: