[Touch-packages] [Bug 1828617] Re: Hosts randomly 'losing' disks, breaking ceph-osd service enumeration

2019-05-28 Thread Xav Paice
udevadm info -e >/tmp/1828617-2.out

~# ls -l /var/lib/ceph/osd/ceph*
-rw------- 1 ceph ceph 69 May 21 08:44 /var/lib/ceph/osd/ceph.client.osd-upgrade.keyring

/var/lib/ceph/osd/ceph-11:
total 24
lrwxrwxrwx 1 ceph ceph 93 May 28 22:12 block -> /dev/ceph-33de740d-bd8c-4b47-a601-3e6e634e489a/osd-block-33de740d-bd8c-4b47-a601-3e6e634e489a
-rw------- 1 ceph ceph 37 May 28 22:12 ceph_fsid
-rw------- 1 ceph ceph 37 May 28 22:12 fsid
-rw------- 1 ceph ceph 56 May 28 22:12 keyring
-rw------- 1 ceph ceph  6 May 28 22:12 ready
-rw------- 1 ceph ceph 10 May 28 22:12 type
-rw------- 1 ceph ceph  3 May 28 22:12 whoami

/var/lib/ceph/osd/ceph-18:
total 24
lrwxrwxrwx 1 ceph ceph 93 May 28 22:12 block -> /dev/ceph-eb5270dc-1110-420f-947e-aab7fae299c9/osd-block-eb5270dc-1110-420f-947e-aab7fae299c9
lrwxrwxrwx 1 ceph ceph 94 May 28 22:12 block.db -> /dev/ceph-wal-4de27554-2d05-440e-874a-9921dfc6f47e/osd-db-eb5270dc-1110-420f-947e-aab7fae299c9
lrwxrwxrwx 1 ceph ceph 95 May 28 22:12 block.wal -> /dev/ceph-wal-4de27554-2d05-440e-874a-9921dfc6f47e/osd-wal-eb5270dc-1110-420f-947e-aab7fae299c9
-rw------- 1 ceph ceph 37 May 28 22:12 ceph_fsid
-rw------- 1 ceph ceph 37 May 28 22:12 fsid
-rw------- 1 ceph ceph 56 May 28 22:12 keyring
-rw------- 1 ceph ceph  6 May 28 22:12 ready
-rw------- 1 ceph ceph 10 May 28 22:12 type
-rw------- 1 ceph ceph  3 May 28 22:12 whoami

/var/lib/ceph/osd/ceph-24:
total 24
lrwxrwxrwx 1 ceph ceph 93 May 28 22:12 block -> /dev/ceph-d38a7e91-cf06-4607-abbe-53eac89ac5ea/osd-block-d38a7e91-cf06-4607-abbe-53eac89ac5ea
-rw------- 1 ceph ceph 37 May 28 22:12 ceph_fsid
-rw------- 1 ceph ceph 37 May 28 22:12 fsid
-rw------- 1 ceph ceph 56 May 28 22:12 keyring
-rw------- 1 ceph ceph  6 May 28 22:12 ready
-rw------- 1 ceph ceph 10 May 28 22:12 type
-rw------- 1 ceph ceph  3 May 28 22:12 whoami

/var/lib/ceph/osd/ceph-31:
total 24
lrwxrwxrwx 1 ceph ceph 93 May 28 22:12 block -> /dev/ceph-053e000a-76ed-427e-98b3-e5373e263f2d/osd-block-053e000a-76ed-427e-98b3-e5373e263f2d
lrwxrwxrwx 1 ceph ceph 94 May 28 22:12 block.db -> /dev/ceph-wal-4de27554-2d05-440e-874a-9921dfc6f47e/osd-db-053e000a-76ed-427e-98b3-e5373e263f2d
lrwxrwxrwx 1 ceph ceph 95 May 28 22:12 block.wal -> /dev/ceph-wal-4de27554-2d05-440e-874a-9921dfc6f47e/osd-wal-053e000a-76ed-427e-98b3-e5373e263f2d
-rw------- 1 ceph ceph 37 May 28 22:12 ceph_fsid
-rw------- 1 ceph ceph 37 May 28 22:12 fsid
-rw------- 1 ceph ceph 56 May 28 22:12 keyring
-rw------- 1 ceph ceph  6 May 28 22:12 ready
-rw------- 1 ceph ceph 10 May 28 22:12 type
-rw------- 1 ceph ceph  3 May 28 22:12 whoami

/var/lib/ceph/osd/ceph-38:
total 24
lrwxrwxrwx 1 ceph ceph 93 May 28 22:12 block -> /dev/ceph-c2669da2-63aa-42e2-b049-cf00a478e076/osd-block-c2669da2-63aa-42e2-b049-cf00a478e076
lrwxrwxrwx 1 ceph ceph 94 May 28 22:12 block.db -> /dev/ceph-wal-4de27554-2d05-440e-874a-9921dfc6f47e/osd-db-c2669da2-63aa-42e2-b049-cf00a478e076
lrwxrwxrwx 1 ceph ceph 95 May 28 22:12 block.wal -> /dev/ceph-wal-4de27554-2d05-440e-874a-9921dfc6f47e/osd-wal-c2669da2-63aa-42e2-b049-cf00a478e076
-rw------- 1 ceph ceph 37 May 28 22:12 ceph_fsid
-rw------- 1 ceph ceph 37 May 28 22:12 fsid
-rw------- 1 ceph ceph 56 May 28 22:12 keyring
-rw------- 1 ceph ceph  6 May 28 22:12 ready
-rw------- 1 ceph ceph 10 May 28 22:12 type
-rw------- 1 ceph ceph  3 May 28 22:12 whoami

/var/lib/ceph/osd/ceph-4:
total 24
lrwxrwxrwx 1 ceph ceph 93 May 28 22:12 block -> /dev/ceph-7478edfc-f321-40a2-a105-8e8a2c8ca3f6/osd-block-7478edfc-f321-40a2-a105-8e8a2c8ca3f6
lrwxrwxrwx 1 ceph ceph 94 May 28 22:12 block.db -> /dev/ceph-wal-4de27554-2d05-440e-874a-9921dfc6f47e/osd-db-7478edfc-f321-40a2-a105-8e8a2c8ca3f6
lrwxrwxrwx 1 ceph ceph 95 May 28 22:12 block.wal -> /dev/ceph-wal-4de27554-2d05-440e-874a-9921dfc6f47e/osd-wal-7478edfc-f321-40a2-a105-8e8a2c8ca3f6
-rw------- 1 ceph ceph 37 May 28 22:12 ceph_fsid
-rw------- 1 ceph ceph 37 May 28 22:12 fsid
-rw------- 1 ceph ceph 55 May 28 22:12 keyring
-rw------- 1 ceph ceph  6 May 28 22:12 ready
-rw------- 1 ceph ceph 10 May 28 22:12 type
-rw------- 1 ceph ceph  2 May 28 22:12 whoami

/var/lib/ceph/osd/ceph-45:
total 24
lrwxrwxrwx 1 ceph ceph 93 May 28 22:12 block -> /dev/ceph-12e68fcb-d2b6-459f-97f2-d3eb4e28c75e/osd-block-12e68fcb-d2b6-459f-97f2-d3eb4e28c75e
lrwxrwxrwx 1 ceph ceph 94 May 28 22:12 block.db -> /dev/ceph-wal-4de27554-2d05-440e-874a-9921dfc6f47e/osd-db-12e68fcb-d2b6-459f-97f2-d3eb4e28c75e
lrwxrwxrwx 1 ceph ceph 95 May 28 22:12 block.wal -> /dev/ceph-wal-4de27554-2d05-440e-874a-9921dfc6f47e/osd-wal-12e68fcb-d2b6-459f-97f2-d3eb4e28c75e
-rw------- 1 ceph ceph 37 May 28 22:12 ceph_fsid
-rw------- 1 ceph ceph 37 May 28 22:12 fsid
-rw------- 1 ceph ceph 56 May 28 22:12 keyring
-rw------- 1 ceph ceph  6 May 28 22:12 ready
-rw------- 1 ceph ceph 10 May 28 22:12 type
-rw------- 1 ceph ceph  3 May 28 22:12 whoami


** Attachment added: "1828617-2.out"
   
https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1828617/+attachment/5267247/+files/1828617-2.out


[Touch-packages] [Bug 1828617] Re: Hosts randomly 'losing' disks, breaking ceph-osd service enumeration

2019-05-28 Thread Xav Paice
journalctl --no-pager -lu systemd-udevd.service >/tmp/1828617-1.out

Hostname obfuscated

lsblk:

NAME                                               MAJ:MIN  RM  SIZE RO TYPE  MOUNTPOINT
loop0                                                7:0     0 88.4M  1 loop  /snap/core/6964
loop1                                                7:1     0 89.4M  1 loop  /snap/core/6818
loop2                                                7:2     0  8.4M  1 loop  /snap/canonical-livepatch/77
sda                                                  8:0     0  1.8T  0 disk
├─sda1                                               8:1     0  476M  0 part  /boot/efi
├─sda2                                               8:2     0  3.7G  0 part  /boot
└─sda3                                               8:3     0  1.7T  0 part
  └─bcache7                                        252:896   0  1.7T  0 disk  /
sdb                                                  8:16    0  1.8T  0 disk
└─bcache0                                          252:0     0  1.8T  0 disk
sdc                                                  8:32    0  1.8T  0 disk
└─bcache6                                          252:768   0  1.8T  0 disk
  └─crypt-7478edfc-f321-40a2-a105-8e8a2c8ca3f6     253:0     0  1.8T  0 crypt
    └─ceph--7478edfc--f321--40a2--a105--8e8a2c8ca3f6-osd--block--7478edfc--f321--40a2--a105--8e8a2c8ca3f6 253:2 0 1.8T 0 lvm
sdd                                                  8:48    0  1.8T  0 disk
└─bcache4                                          252:512   0  1.8T  0 disk
  └─crypt-33de740d-bd8c-4b47-a601-3e6e634e489a     253:4     0  1.8T  0 crypt
    └─ceph--33de740d--bd8c--4b47--a601--3e6e634e489a-osd--block--33de740d--bd8c--4b47--a601--3e6e634e489a 253:5 0 1.8T 0 lvm
sde                                                  8:64    0  1.8T  0 disk
└─bcache3                                          252:384   0  1.8T  0 disk
  └─crypt-eb5270dc-1110-420f-947e-aab7fae299c9     253:1     0  1.8T  0 crypt
    └─ceph--eb5270dc--1110--420f--947e--aab7fae299c9-osd--block--eb5270dc--1110--420f--947e--aab7fae299c9 253:3 0 1.8T 0 lvm
sdf                                                  8:80    0  1.8T  0 disk
└─bcache1                                          252:128   0  1.8T  0 disk
  └─crypt-d38a7e91-cf06-4607-abbe-53eac89ac5ea     253:6     0  1.8T  0 crypt
    └─ceph--d38a7e91--cf06--4607--abbe--53eac89ac5ea-osd--block--d38a7e91--cf06--4607--abbe--53eac89ac5ea 253:7 0 1.8T 0 lvm
sdg                                                  8:96    0  1.8T  0 disk
└─bcache5                                          252:640   0  1.8T  0 disk
  └─crypt-053e000a-76ed-427e-98b3-e5373e263f2d     253:8     0  1.8T  0 crypt
    └─ceph--053e000a--76ed--427e--98b3--e5373e263f2d-osd--block--053e000a--76ed--427e--98b3--e5373e263f2d 253:9 0 1.8T 0 lvm
sdh                                                  8:112   0  1.8T  0 disk
└─bcache8                                          252:1024  0  1.8T  0 disk
  └─crypt-c2669da2-63aa-42e2-b049-cf00a478e076     253:25    0  1.8T  0 crypt


[Touch-packages] [Bug 1828617] Re: Hosts randomly 'losing' disks, breaking ceph-osd service enumeration

2019-05-22 Thread Xav Paice
Just one update: if I change the ownership of the symlink itself (chown -h),
the OSD will actually start.

After rebooting, however, the links I had made were gone again, and the
whole process needed repeating in order to start the OSD.
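
In case it helps anyone else hitting this: a oneshot unit can re-apply the
workaround on every boot until the root cause is fixed. This is only a sketch
(untested; the unit name, OSD id and UUID paths below are from this host's
OSD 4 and need adapting):

# /etc/systemd/system/ceph-osd-4-links.service
[Unit]
Description=Recreate block.db/block.wal links for ceph-osd 4 (LP#1828617 workaround)
After=lvm2-monitor.service
Before=ceph-osd@4.service

[Service]
Type=oneshot
ExecStart=/bin/ln -sfn /dev/ceph-wal-4de27554-2d05-440e-874a-9921dfc6f47e/osd-db-7478edfc-f321-40a2-a105-8e8a2c8ca3f6 /var/lib/ceph/osd/ceph-4/block.db
ExecStart=/bin/ln -sfn /dev/ceph-wal-4de27554-2d05-440e-874a-9921dfc6f47e/osd-wal-7478edfc-f321-40a2-a105-8e8a2c8ca3f6 /var/lib/ceph/osd/ceph-4/block.wal
ExecStart=/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-4/block.db /var/lib/ceph/osd/ceph-4/block.wal

[Install]
WantedBy=multi-user.target

Enable with `systemctl daemon-reload && systemctl enable
ceph-osd-4-links.service`.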

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to systemd in Ubuntu.
https://bugs.launchpad.net/bugs/1828617

Title:
  Hosts randomly 'losing' disks, breaking ceph-osd service enumeration

Status in systemd package in Ubuntu:
  Confirmed

Bug description:
  Ubuntu 18.04.2 Ceph deployment.

  Ceph OSD devices use LVM volumes on top of udev-provided physical devices.
  LVM is supposed to create PVs from the devices using the links in the
  /dev/disk/by-dname/ folder that are created by udev.
  However, on reboot it sometimes happens (not always; it looks like a race
  condition) that the Ceph services cannot start, and pvdisplay doesn't show
  any volumes. The /dev/disk/by-dname/ folder nevertheless has all the
  necessary links created by the end of the boot process.

  The behaviour can be fixed manually by running "/sbin/lvm pvscan
  --cache --activate ay /dev/nvme0n1" to re-activate the LVM
  components, after which the services can be started.
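
  If the manual pvscan reliably recovers the host, the same command can be
  run from a oneshot unit as a stopgap. A sketch only; /dev/nvme0n1 is the
  device from this report and will differ per host:

  # /etc/systemd/system/lvm-reactivate.service
  [Unit]
  Description=Re-activate LVM PVs that lost the boot-time udev race (LP#1828617)
  After=lvm2-monitor.service

  [Service]
  Type=oneshot
  ExecStart=/sbin/lvm pvscan --cache --activate ay /dev/nvme0n1

  [Install]
  WantedBy=multi-user.target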

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1828617/+subscriptions


[Touch-packages] [Bug 1828617] Re: Hosts randomly 'losing' disks, breaking ceph-osd service enumeration

2019-05-22 Thread Xav Paice
I'm seeing this in a slightly different manner, on Bionic/Queens.

We have encrypted LVs (thanks, Vault), and rebooting a host fairly
consistently results in at least one OSD not coming back. The LVs appear in
the list; the difference between a working and a non-working OSD is that the
non-working one is missing its block.db and block.wal symlinks.

See https://pastebin.canonical.com/p/rW3VgMMkmY/ for some info.
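
A quick check to spot the broken OSDs after a reboot (a sketch, assuming
every OSD on the host was deployed with separate db/wal LVs as above):

for d in /var/lib/ceph/osd/ceph-*/; do
  # OSD data dirs carry a 'block' link; flag any that lack db/wal links
  [ -h "${d}block" ] || continue
  { [ -h "${d}block.db" ] && [ -h "${d}block.wal" ]; } || echo "${d}: db/wal links missing"
done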

If I made the links manually:

cd /var/lib/ceph/osd/ceph-4
ln -s /dev/ceph-wal-4de27554-2d05-440e-874a-9921dfc6f47e/osd-db-7478edfc-f321-40a2-a105-8e8a2c8ca3f6 block.db
ln -s /dev/ceph-wal-4de27554-2d05-440e-874a-9921dfc6f47e/osd-wal-7478edfc-f321-40a2-a105-8e8a2c8ca3f6 block.wal

This resulted in a permissions error accessing the device:
"bluestore(/var/lib/ceph/osd/ceph-4) _open_db /var/lib/ceph/osd/ceph-4/block.db symlink exists but target unusable: (13) Permission denied"

ls -l /dev/ceph-wal-4de27554-2d05-440e-874a-9921dfc6f47e/
total 0
lrwxrwxrwx 1 ceph ceph 8 May 22 23:04 osd-db-053e000a-76ed-427e-98b3-e5373e263f2d -> ../dm-20
lrwxrwxrwx 1 ceph ceph 8 May 22 23:04 osd-db-12e68fcb-d2b6-459f-97f2-d3eb4e28c75e -> ../dm-24
lrwxrwxrwx 1 ceph ceph 8 May 22 23:04 osd-db-33de740d-bd8c-4b47-a601-3e6e634e489a -> ../dm-14
lrwxrwxrwx 1 root root 8 May 22 23:04 osd-db-7478edfc-f321-40a2-a105-8e8a2c8ca3f6 -> ../dm-12
lrwxrwxrwx 1 root root 8 May 22 23:04 osd-db-c2669da2-63aa-42e2-b049-cf00a478e076 -> ../dm-22
lrwxrwxrwx 1 root root 8 May 22 23:04 osd-db-d38a7e91-cf06-4607-abbe-53eac89ac5ea -> ../dm-18
lrwxrwxrwx 1 ceph ceph 8 May 22 23:04 osd-db-eb5270dc-1110-420f-947e-aab7fae299c9 -> ../dm-16
lrwxrwxrwx 1 ceph ceph 8 May 22 23:04 osd-wal-053e000a-76ed-427e-98b3-e5373e263f2d -> ../dm-19
lrwxrwxrwx 1 ceph ceph 8 May 22 23:04 osd-wal-12e68fcb-d2b6-459f-97f2-d3eb4e28c75e -> ../dm-23
lrwxrwxrwx 1 ceph ceph 8 May 22 23:04 osd-wal-33de740d-bd8c-4b47-a601-3e6e634e489a -> ../dm-13
lrwxrwxrwx 1 root root 8 May 22 23:04 osd-wal-7478edfc-f321-40a2-a105-8e8a2c8ca3f6 -> ../dm-11
lrwxrwxrwx 1 root root 8 May 22 23:04 osd-wal-c2669da2-63aa-42e2-b049-cf00a478e076 -> ../dm-21
lrwxrwxrwx 1 root root 8 May 22 23:04 osd-wal-d38a7e91-cf06-4607-abbe-53eac89ac5ea -> ../dm-17
lrwxrwxrwx 1 ceph ceph 8 May 22 23:04 osd-wal-eb5270dc-1110-420f-947e-aab7fae299c9 -> ../dm-15

I tried changing the ownership to ceph:ceph, but that made no difference.
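
Worth noting that chown without -h follows the symlink and changes the
/dev/dm-N node rather than the link itself, so the two forms touch different
things. Covering both looks like this (dm numbers for OSD 4 taken from the
listing above):

# ownership of the links themselves (needs -h)
chown -h ceph:ceph /dev/ceph-wal-4de27554-2d05-440e-874a-9921dfc6f47e/osd-*-7478edfc-*
# ownership of the underlying device nodes
chown ceph:ceph /dev/dm-11 /dev/dm-12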

I have also tried adding the following override to lvm2 (via `systemctl
edit lvm2-monitor.service`), but that hasn't changed the behaviour either:

# cat /etc/systemd/system/lvm2-monitor.service.d/override.conf 
[Service]
ExecStartPre=/bin/sleep 60
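
An alternative to the fixed sleep that might be worth testing is ordering
lvm2-monitor after udev settles; again just a sketch
(systemd-udev-settle.service exists on bionic, though it is deprecated in
later releases):

# cat /etc/systemd/system/lvm2-monitor.service.d/override.conf
[Unit]
Wants=systemd-udev-settle.service
After=systemd-udev-settle.service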


[Touch-packages] [Bug 1828617] Re: Hosts randomly 'losing' disks, breaking ceph-osd service enumeration

2019-05-22 Thread Xav Paice
** Tags added: canonical-bootstack


[Touch-packages] [Bug 1777070] Re: firefox plugin libwidevinecdm.so crashes due to apparmor denial

2018-06-17 Thread Xav Paice
Thanks!  I won't claim to understand what that change did, but adding
the two lines as requested does seem to resolve the issue. I opened
Netflix and was able to watch without the crash, and there weren't any
new entries in syslog.
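
For anyone else hitting this before the fix lands: judging by the denials in
the log (file_mmap on libwidevinecdm.so and ptrace between Firefox
processes), the rules in question are of roughly this shape. A sketch
inferred from the audit lines, not necessarily the exact two lines from the
earlier comment, and the local-override path is an assumption:

# /etc/apparmor.d/local/usr.bin.firefox (assumed local include path)
# allow mmap of the widevine CDM under the profile's home dir
owner @{HOME}/.mozilla/firefox/*/gmp-widevinecdm/*/libwidevinecdm.so m,
# allow firefox to trace its own plugin container processes
ptrace (trace) peer=/usr/lib/firefox/firefox{,*[^s][^h]},

Reload the profile afterwards with `apparmor_parser -r
/etc/apparmor.d/usr.bin.firefox`.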

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to apparmor in Ubuntu.
https://bugs.launchpad.net/bugs/1777070

Title:
  firefox plugin libwidevinecdm.so crashes due to apparmor denial

Status in apparmor package in Ubuntu:
  New
Status in firefox package in Ubuntu:
  New

Bug description:
  Ubuntu 18.04, Firefox 60.0.1+build2-0ubuntu0.18.04.1

  Running firefox, then going to netflix.com and attempting to play a
  movie. The widevinecdm plugin crashes, and the following is found in
  syslog:

  
  Jun 15 19:13:22 xplt kernel: [301351.553043] audit: type=1400 audit(1529046802.585:246): apparmor="DENIED" operation="file_mmap" profile="/usr/lib/firefox/firefox{,*[^s][^h]}" name="/home/xav/.mozilla/firefox/wiavokxk.default-1510977878171/gmp-widevinecdm/1.4.8.1008/libwidevinecdm.so" pid=16118 comm="plugin-containe" requested_mask="m" denied_mask="m" fsuid=1000 ouid=1000
  Jun 15 19:13:22 xplt kernel: [301351.553236] audit: type=1400 audit(1529046802.585:247): apparmor="DENIED" operation="ptrace" profile="/usr/lib/firefox/firefox{,*[^s][^h]}" pid=24714 comm="firefox" requested_mask="trace" denied_mask="trace" peer="/usr/lib/firefox/firefox{,*[^s][^h]}"
  Jun 15 19:13:22 xplt kernel: [301351.553259] plugin-containe[16118]: segfault at 0 ip 7fcdfdaa76af sp 7ffc1ff03e28 error 6 in libxul.so[7fcdfb77a000+6111000]
  Jun 15 19:13:22 xplt snmpd[2334]: error on subcontainer 'ia_addr' insert (-1)
  Jun 15 19:13:22 xplt /usr/lib/gdm3/gdm-x-session[6549]: ###!!! [Parent][MessageChannel::Call] Error: Channel error: cannot send/recv
  Jun 15 19:13:24 xplt kernel: [301353.960182] audit: type=1400 audit(1529046804.994:248): apparmor="DENIED" operation="file_mmap" profile="/usr/lib/firefox/firefox{,*[^s][^h]}" name="/home/xav/.mozilla/firefox/wiavokxk.default-1510977878171/gmp-widevinecdm/1.4.8.1008/libwidevinecdm.so" pid=16135 comm="plugin-containe" requested_mask="m" denied_mask="m" fsuid=1000 ouid=1000
  Jun 15 19:13:24 xplt kernel: [301353.960373] audit: type=1400 audit(1529046804.994:249): apparmor="DENIED" operation="ptrace" profile="/usr/lib/firefox/firefox{,*[^s][^h]}" pid=24714 comm="firefox" requested_mask="trace" denied_mask="trace" peer="/usr/lib/firefox/firefox{,*[^s][^h]}"
  Jun 15 19:13:24 xplt kernel: [301353.960398] plugin-containe[16135]: segfault at 0 ip 7fe3b57f46af sp 7ffe6dc0b488 error 6 in libxul.so[7fe3b34c7000+6111000]
  Jun 15 19:13:28 xplt kernel: [301357.859177] audit: type=1400 audit(1529046808.895:250): apparmor="DENIED" operation="file_mmap" profile="/usr/lib/firefox/firefox{,*[^s][^h]}" name="/home/xav/.mozilla/firefox/wiavokxk.default-1510977878171/gmp-widevinecdm/1.4.8.1008/libwidevinecdm.so" pid=16139 comm="plugin-containe" requested_mask="m" denied_mask="m" fsuid=1000 ouid=1000
  Jun 15 19:13:28 xplt kernel: [301357.859328] audit: type=1400 audit(1529046808.895:251): apparmor="DENIED" operation="ptrace" profile="/usr/lib/firefox/firefox{,*[^s][^h]}" pid=24714 comm="firefox" requested_mask="trace" denied_mask="trace" peer="/usr/lib/firefox/firefox{,*[^s][^h]}"
  Jun 15 19:13:28 xplt kernel: [301357.859349] plugin-containe[16139]: segfault at 0 ip 7fcf32ae06af sp 7ffeb8a136c8 error 6 in libxul.so[7fcf307b3000+6111000]
  Jun 15 19:13:25 xplt /usr/lib/gdm3/gdm-x-session[6549]: ###!!! [Parent][MessageChannel::Call] Error: Channel error: cannot send/recv
  Jun 15 19:13:29 xplt /usr/lib/gdm3/gdm-x-session[6549]: ERROR block_reap:328: [hamster] bad exit code 1
  Jun 15 19:13:29 xplt /usr/lib/gdm3/gdm-x-session[6549]: ###!!! [Parent][MessageChannel::Call] Error: Channel error: cannot send/recv
  Jun 15 19:13:29 xplt kernel: [301358.227635] audit: type=1400 audit(1529046809.263:252): apparmor="DENIED" operation="file_mmap" profile="/usr/lib/firefox/firefox{,*[^s][^h]}" name="/home/xav/.mozilla/firefox/wiavokxk.default-1510977878171/gmp-widevinecdm/1.4.8.1008/libwidevinecdm.so" pid=16188 comm="plugin-containe" requested_mask="m" denied_mask="m" fsuid=1000 ouid=1000
  Jun 15 19:13:29 xplt kernel: [301358.227811] audit: type=1400 audit(1529046809.263:253): apparmor="DENIED" operation="ptrace" profile="/usr/lib/firefox/firefox{,*[^s][^h]}" pid=24714 comm="firefox" requested_mask="trace" denied_mask="trace" peer="/usr/lib/firefox/firefox{,*[^s][^h]}"
  Jun 15 19:13:29 xplt kernel: [301358.227844] plugin-containe[16188]: segfault at 0 ip 7fe5667c66af sp 7fffe8cc0da8 error 6 in libxul.so[7fe564499000+6111000]
  Jun 15 19:13:31 xplt kernel: [301360.574177] audit: type=1400 audit(1529046811.608:254): apparmor="DENIED" operation="file_mmap" profile="/usr/lib/firefox/firefox{,*[^s][^h]}" 

[Touch-packages] [Bug 1389239] Re: apparmor is uninstalled when deploying icehouse nova-compute on Precise

2014-11-04 Thread Xav Paice
*** This bug is a duplicate of bug 1387251 ***
https://bugs.launchpad.net/bugs/1387251

** This bug has been marked a duplicate of bug 1387251
   apparmor conflict with precise cloud archive

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to apparmor in Ubuntu.
https://bugs.launchpad.net/bugs/1389239

Title:
  apparmor is uninstalled when deploying icehouse nova-compute on
  Precise

Status in “apparmor” package in Ubuntu:
  New
Status in “libvirt” package in Ubuntu:
  New
Status in “apparmor” source package in Precise:
  New
Status in “libvirt” source package in Precise:
  New

Bug description:
  When doing juju deploy nova-compute for icehouse with the latest charm
  on Ubuntu Precise, the apparmor package is uninstalled.

  Procedure to reproduce:
  $ juju switch local
  $ juju get-env | grep serie
  default-series: precise
  $ juju bootstrap
  $ bzr branch lp:charms/nova-compute
  $ juju deploy --repository=. local:nova-compute

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/apparmor/+bug/1389239/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp