[Bug 1852221] Re: ovs-vswitchd needs to be forced to reconfigure after adding protocols to bridges

2020-09-18 Thread Xav Paice
Re-reading this, the issue I was seeing was that the protocol wasn't
negotiated; I did not need to restart ovs to get the 'good' test result.
Apologies for the noise. It does actually look like this is also fixed
in 2.13.0-0ubuntu1, so the openvswitch (Ubuntu) task could probably be
updated accordingly.
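
For anyone reproducing this, the change and check involved look roughly
like the following (the bridge name br-int is only an example):

# allow an additional OpenFlow protocol version on an existing bridge
ovs-vsctl set bridge br-int protocols=OpenFlow10,OpenFlow13
# confirm which version was actually negotiated on that bridge
ovs-ofctl -O OpenFlow13 show br-int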


[Bug 1852221] Re: ovs-vswitchd needs to be forced to reconfigure after adding protocols to bridges

2020-09-18 Thread Xav Paice
Seeing this in Focal, openvswitch version 2.13.0-0ubuntu1


[Bug 1734204] Re: Insufficient free host memory pages available to allocate guest RAM with Open vSwitch DPDK in Newton

2020-08-19 Thread Xav Paice
Hi, just wondering if there's any update on the work to get this into
Bionic?


[Bug 1874344] [NEW] php 7.2 failure on Bionic install of icingaweb2

2020-04-22 Thread Xav Paice
Public bug reported:

When I use the icingaweb2 package 2.7.2-1.bionic, I hit
https://github.com/Icinga/icingaweb2/issues/3459.  If I use
2.7.3-1.bionic from the upstream repo https://packages.icinga.com/ubuntu
this is fixed - can we get a refresh?

Reproducer: follow
https://icinga.com/docs/icingaweb2/latest/doc/02-Installation/#installing-icinga-web-2-from-package
except do not add the packages.icinga.com repo; use the packages in main.
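
Roughly, the reproducer boils down to this on a Bionic host (a sketch
only; the exact PHP modules pulled in may vary):

sudo apt-get update
sudo apt-get install icingaweb2   # from the Ubuntu archive, i.e. without packages.icinga.com configured
# then complete the web setup wizard; the PHP 7.2 failure from upstream issue #3459 shows up there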

** Affects: icingaweb2 (Ubuntu)
 Importance: Undecided
 Status: New


[Bug 1078213] Re: logs are not logrotated

2019-06-18 Thread Xav Paice
** Tags added: canonical-bootstack

** Changed in: juju (Ubuntu)
   Status: Triaged => New

** Package changed: juju (Ubuntu) => juju


[Bug 1828617] Re: Hosts randomly 'losing' disks, breaking ceph-osd service enumeration

2019-06-04 Thread Xav Paice
The pvscan issue is likely something different, just wanted to make sure
folks are aware of it for completeness.

The logs /var/log/ceph/ceph-volume-systemd.log and ceph-volume.log are
empty.


[Bug 1828617] Re: Hosts randomly 'losing' disks, breaking ceph-osd service enumeration

2019-06-04 Thread Xav Paice
Let me word that last comment differently.

I went to the host and installed the PPA update, then rebooted.

When the box booted up, the PV which hosts the wal LVs wasn't listed in
lsblk or 'pvs' or lvs.  I then ran pvscan --cache, which brought the LVs
back online, but not the OSDs, so I rebooted.

After that reboot, the behavior of the OSDs was exactly the same as
prior to the update - I reboot, and some OSDs don't come online, and are
missing symlinks.
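
For the record, the manual recovery described above is roughly the
following (a sketch; the OSD id is an example):

# rescan so LVM picks the wal/db PV back up
pvscan --cache
# activate any LVs that are still inactive
vgchange -ay
# then restart the OSDs that are still down
systemctl restart ceph-osd@4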


[Bug 1828617] Re: Hosts randomly 'losing' disks, breaking ceph-osd service enumeration

2019-06-03 Thread Xav Paice
After installing that PPA update and rebooting, the PV for the wal
didn't come online till I ran pvscan --cache.  Seems a second reboot
didn't do that though, might have been a red herring from prior
attempts.

Unfortunately, the OSDs still failed to come online in exactly the same
way as before installing the update.


[Bug 1828617] Re: Hosts randomly 'losing' disks, breaking ceph-osd service enumeration

2019-05-29 Thread Xav Paice
Thanks, will do.  FWIW, the symlinks are in place before reboot.


[Bug 1828617] Re: Hosts randomly 'losing' disks, breaking ceph-osd service enumeration

2019-05-28 Thread Xav Paice
journalctl --no-pager -lu systemd-udevd.service >/tmp/1828617-1.out

Hostname obfuscated

lsblk:

NAME                                             MAJ:MIN   RM   SIZE RO TYPE  MOUNTPOINT
loop0                                              7:0      0  88.4M  1 loop  /snap/core/6964
loop1                                              7:1      0  89.4M  1 loop  /snap/core/6818
loop2                                              7:2      0   8.4M  1 loop  /snap/canonical-livepatch/77
sda                                                8:0      0   1.8T  0 disk
├─sda1                                             8:1      0   476M  0 part  /boot/efi
├─sda2                                             8:2      0   3.7G  0 part  /boot
└─sda3                                             8:3      0   1.7T  0 part
  └─bcache7                                      252:896    0   1.7T  0 disk  /
sdb                                                8:16     0   1.8T  0 disk
└─bcache0                                        252:0      0   1.8T  0 disk
sdc                                                8:32     0   1.8T  0 disk
└─bcache6                                        252:768    0   1.8T  0 disk
  └─crypt-7478edfc-f321-40a2-a105-8e8a2c8ca3f6   253:0      0   1.8T  0 crypt
    └─ceph--7478edfc--f321--40a2--a105--8e8a2c8ca3f6-osd--block--7478edfc--f321--40a2--a105--8e8a2c8ca3f6  253:2  0  1.8T  0 lvm
sdd                                                8:48     0   1.8T  0 disk
└─bcache4                                        252:512    0   1.8T  0 disk
  └─crypt-33de740d-bd8c-4b47-a601-3e6e634e489a   253:4      0   1.8T  0 crypt
    └─ceph--33de740d--bd8c--4b47--a601--3e6e634e489a-osd--block--33de740d--bd8c--4b47--a601--3e6e634e489a  253:5  0  1.8T  0 lvm
sde                                                8:64     0   1.8T  0 disk
└─bcache3                                        252:384    0   1.8T  0 disk
  └─crypt-eb5270dc-1110-420f-947e-aab7fae299c9   253:1      0   1.8T  0 crypt
    └─ceph--eb5270dc--1110--420f--947e--aab7fae299c9-osd--block--eb5270dc--1110--420f--947e--aab7fae299c9  253:3  0  1.8T  0 lvm
sdf                                                8:80     0   1.8T  0 disk
└─bcache1                                        252:128    0   1.8T  0 disk
  └─crypt-d38a7e91-cf06-4607-abbe-53eac89ac5ea   253:6      0   1.8T  0 crypt
    └─ceph--d38a7e91--cf06--4607--abbe--53eac89ac5ea-osd--block--d38a7e91--cf06--4607--abbe--53eac89ac5ea  253:7  0  1.8T  0 lvm
sdg                                                8:96     0   1.8T  0 disk
└─bcache5                                        252:640    0   1.8T  0 disk
  └─crypt-053e000a-76ed-427e-98b3-e5373e263f2d   253:8      0   1.8T  0 crypt
    └─ceph--053e000a--76ed--427e--98b3--e5373e263f2d-osd--block--053e000a--76ed--427e--98b3--e5373e263f2d  253:9  0  1.8T  0 lvm
sdh                                                8:112    0   1.8T  0 disk
└─bcache8                                        252:1024   0   1.8T  0 disk
  └─crypt-c2669da2-63aa-42e2-b049-cf00a478e076   253:25     0   1.8T  0 crypt


[Bug 1828617] Re: Hosts randomly 'losing' disks, breaking ceph-osd service enumeration

2019-05-28 Thread Xav Paice
udevadm info -e >/tmp/1828617-2.out

~# ls -l /var/lib/ceph/osd/ceph*
-rw------- 1 ceph ceph  69 May 21 08:44 /var/lib/ceph/osd/ceph.client.osd-upgrade.keyring

/var/lib/ceph/osd/ceph-11:
total 24
lrwxrwxrwx 1 ceph ceph 93 May 28 22:12 block -> /dev/ceph-33de740d-bd8c-4b47-a601-3e6e634e489a/osd-block-33de740d-bd8c-4b47-a601-3e6e634e489a
-rw------- 1 ceph ceph 37 May 28 22:12 ceph_fsid
-rw------- 1 ceph ceph 37 May 28 22:12 fsid
-rw------- 1 ceph ceph 56 May 28 22:12 keyring
-rw------- 1 ceph ceph  6 May 28 22:12 ready
-rw------- 1 ceph ceph 10 May 28 22:12 type
-rw------- 1 ceph ceph  3 May 28 22:12 whoami

/var/lib/ceph/osd/ceph-18:
total 24
lrwxrwxrwx 1 ceph ceph 93 May 28 22:12 block -> /dev/ceph-eb5270dc-1110-420f-947e-aab7fae299c9/osd-block-eb5270dc-1110-420f-947e-aab7fae299c9
lrwxrwxrwx 1 ceph ceph 94 May 28 22:12 block.db -> /dev/ceph-wal-4de27554-2d05-440e-874a-9921dfc6f47e/osd-db-eb5270dc-1110-420f-947e-aab7fae299c9
lrwxrwxrwx 1 ceph ceph 95 May 28 22:12 block.wal -> /dev/ceph-wal-4de27554-2d05-440e-874a-9921dfc6f47e/osd-wal-eb5270dc-1110-420f-947e-aab7fae299c9
-rw------- 1 ceph ceph 37 May 28 22:12 ceph_fsid
-rw------- 1 ceph ceph 37 May 28 22:12 fsid
-rw------- 1 ceph ceph 56 May 28 22:12 keyring
-rw------- 1 ceph ceph  6 May 28 22:12 ready
-rw------- 1 ceph ceph 10 May 28 22:12 type
-rw------- 1 ceph ceph  3 May 28 22:12 whoami

/var/lib/ceph/osd/ceph-24:
total 24
lrwxrwxrwx 1 ceph ceph 93 May 28 22:12 block -> /dev/ceph-d38a7e91-cf06-4607-abbe-53eac89ac5ea/osd-block-d38a7e91-cf06-4607-abbe-53eac89ac5ea
-rw------- 1 ceph ceph 37 May 28 22:12 ceph_fsid
-rw------- 1 ceph ceph 37 May 28 22:12 fsid
-rw------- 1 ceph ceph 56 May 28 22:12 keyring
-rw------- 1 ceph ceph  6 May 28 22:12 ready
-rw------- 1 ceph ceph 10 May 28 22:12 type
-rw------- 1 ceph ceph  3 May 28 22:12 whoami

/var/lib/ceph/osd/ceph-31:
total 24
lrwxrwxrwx 1 ceph ceph 93 May 28 22:12 block -> /dev/ceph-053e000a-76ed-427e-98b3-e5373e263f2d/osd-block-053e000a-76ed-427e-98b3-e5373e263f2d
lrwxrwxrwx 1 ceph ceph 94 May 28 22:12 block.db -> /dev/ceph-wal-4de27554-2d05-440e-874a-9921dfc6f47e/osd-db-053e000a-76ed-427e-98b3-e5373e263f2d
lrwxrwxrwx 1 ceph ceph 95 May 28 22:12 block.wal -> /dev/ceph-wal-4de27554-2d05-440e-874a-9921dfc6f47e/osd-wal-053e000a-76ed-427e-98b3-e5373e263f2d
-rw------- 1 ceph ceph 37 May 28 22:12 ceph_fsid
-rw------- 1 ceph ceph 37 May 28 22:12 fsid
-rw------- 1 ceph ceph 56 May 28 22:12 keyring
-rw------- 1 ceph ceph  6 May 28 22:12 ready
-rw------- 1 ceph ceph 10 May 28 22:12 type
-rw------- 1 ceph ceph  3 May 28 22:12 whoami

/var/lib/ceph/osd/ceph-38:
total 24
lrwxrwxrwx 1 ceph ceph 93 May 28 22:12 block -> /dev/ceph-c2669da2-63aa-42e2-b049-cf00a478e076/osd-block-c2669da2-63aa-42e2-b049-cf00a478e076
lrwxrwxrwx 1 ceph ceph 94 May 28 22:12 block.db -> /dev/ceph-wal-4de27554-2d05-440e-874a-9921dfc6f47e/osd-db-c2669da2-63aa-42e2-b049-cf00a478e076
lrwxrwxrwx 1 ceph ceph 95 May 28 22:12 block.wal -> /dev/ceph-wal-4de27554-2d05-440e-874a-9921dfc6f47e/osd-wal-c2669da2-63aa-42e2-b049-cf00a478e076
-rw------- 1 ceph ceph 37 May 28 22:12 ceph_fsid
-rw------- 1 ceph ceph 37 May 28 22:12 fsid
-rw------- 1 ceph ceph 56 May 28 22:12 keyring
-rw------- 1 ceph ceph  6 May 28 22:12 ready
-rw------- 1 ceph ceph 10 May 28 22:12 type
-rw------- 1 ceph ceph  3 May 28 22:12 whoami

/var/lib/ceph/osd/ceph-4:
total 24
lrwxrwxrwx 1 ceph ceph 93 May 28 22:12 block -> /dev/ceph-7478edfc-f321-40a2-a105-8e8a2c8ca3f6/osd-block-7478edfc-f321-40a2-a105-8e8a2c8ca3f6
lrwxrwxrwx 1 ceph ceph 94 May 28 22:12 block.db -> /dev/ceph-wal-4de27554-2d05-440e-874a-9921dfc6f47e/osd-db-7478edfc-f321-40a2-a105-8e8a2c8ca3f6
lrwxrwxrwx 1 ceph ceph 95 May 28 22:12 block.wal -> /dev/ceph-wal-4de27554-2d05-440e-874a-9921dfc6f47e/osd-wal-7478edfc-f321-40a2-a105-8e8a2c8ca3f6
-rw------- 1 ceph ceph 37 May 28 22:12 ceph_fsid
-rw------- 1 ceph ceph 37 May 28 22:12 fsid
-rw------- 1 ceph ceph 55 May 28 22:12 keyring
-rw------- 1 ceph ceph  6 May 28 22:12 ready
-rw------- 1 ceph ceph 10 May 28 22:12 type
-rw------- 1 ceph ceph  2 May 28 22:12 whoami

/var/lib/ceph/osd/ceph-45:
total 24
lrwxrwxrwx 1 ceph ceph 93 May 28 22:12 block -> /dev/ceph-12e68fcb-d2b6-459f-97f2-d3eb4e28c75e/osd-block-12e68fcb-d2b6-459f-97f2-d3eb4e28c75e
lrwxrwxrwx 1 ceph ceph 94 May 28 22:12 block.db -> /dev/ceph-wal-4de27554-2d05-440e-874a-9921dfc6f47e/osd-db-12e68fcb-d2b6-459f-97f2-d3eb4e28c75e
lrwxrwxrwx 1 ceph ceph 95 May 28 22:12 block.wal -> /dev/ceph-wal-4de27554-2d05-440e-874a-9921dfc6f47e/osd-wal-12e68fcb-d2b6-459f-97f2-d3eb4e28c75e
-rw------- 1 ceph ceph 37 May 28 22:12 ceph_fsid
-rw------- 1 ceph ceph 37 May 28 22:12 fsid
-rw------- 1 ceph ceph 56 May 28 22:12 keyring
-rw------- 1 ceph ceph  6 May 28 22:12 ready
-rw------- 1 ceph ceph 10 May 28 22:12 type
-rw------- 1 ceph ceph  3 May 28 22:12 whoami


** Attachment added: "1828617-2.out"
   
https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1828617/+attachment/5267247/+files/1828617-2.out


[Bug 1828617] Re: Hosts randomly 'losing' disks, breaking ceph-osd service enumeration

2019-05-28 Thread Xav Paice
Charm is cs:ceph-osd-284
Ceph version is 12.2.11-0ubuntu0.18.04.2

The udev rules are created by curtin during the maas install.

Here's an example udev rule:

cat bcache4.rules

# Written by curtin
SUBSYSTEM=="block", ACTION=="add|change", 
ENV{CACHED_UUID}=="7b0e872b-ac78-4c4e-af18-8ccdce5962f6", 
SYMLINK+="disk/by-dname/bcache4"

The problem here is that when the host boots, for some OSDs (random, and
changing each boot), there are no symlinks for block.db and block.wal in
/var/lib/ceph/osd/ceph-${thing}.  If I manually create those two
symlinks (and make sure the perms are right for the links themselves),
then the OSD starts.

Some of the OSDs do get those links though, and that's interesting
because on these hosts, the ceph wal and db for all the OSDs are LVs on
the same nvme device, in fact the same partition even.  The ceph OSD
block dev is an LV on a different device.
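
To make the moving parts explicit, the two things that need to line up
after boot are the curtin udev symlink and the ceph-volume activation,
roughly (a sketch; the bcache name is from the rule above):

# the by-dname link the curtin rule should create
ls -l /dev/disk/by-dname/bcache4
# re-running activation should recreate the block.db/block.wal symlinks for any OSD that lost them
ceph-volume lvm activate --all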


** Changed in: systemd (Ubuntu)
   Status: Incomplete => New


[Bug 1828617] Re: Hosts randomly 'losing' disks, breaking ceph-osd service enumeration

2019-05-22 Thread Xav Paice
Just one update: if I change the perms of the symlink I made (with
chown -h), the OSD will actually start.

After rebooting, however, I found that the links I had made had gone
again and the whole process needed repeating in order to start the OSD.
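
Spelling out that workaround (osd.4 is just an example id):

chown -h ceph:ceph /var/lib/ceph/osd/ceph-4/block.db
chown -h ceph:ceph /var/lib/ceph/osd/ceph-4/block.wal   # -h changes the link itself, not the target
systemctl start ceph-osd@4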


[Bug 1828617] Re: Hosts randomly 'losing' disks, breaking ceph-osd service enumeration

2019-05-22 Thread Xav Paice
Added field-critical: there's a cloud deploy ongoing where, until we
have a workaround, I currently can't reboot any hosts, nor get back some
of the OSDs from a host I already rebooted.


[Bug 1828617] Re: Hosts randomly 'losing' disks, breaking ceph-osd service enumeration

2019-05-22 Thread Xav Paice
I'm seeing this in a slightly different manner, on Bionic/Queens.

We have the LVs encrypted (thanks, Vault), and rebooting a host fairly
consistently results in at least one OSD not returning.  The LVs appear
in the list; however, the difference between a working and a non-working
OSD is the lack of links to block.db and block.wal on the non-working one.

See https://pastebin.canonical.com/p/rW3VgMMkmY/ for some info.

If I made the links manually:

cd /var/lib/ceph/osd/ceph-4
ln -s /dev/ceph-wal-4de27554-2d05-440e-874a-9921dfc6f47e/osd-db-7478edfc-f321-40a2-a105-8e8a2c8ca3f6 block.db
ln -s /dev/ceph-wal-4de27554-2d05-440e-874a-9921dfc6f47e/osd-wal-7478edfc-f321-40a2-a105-8e8a2c8ca3f6 block.wal

This resulted in a perms error accessing the device
"bluestore(/var/lib/ceph/osd/ceph-4) _open_db
/var/lib/ceph/osd/ceph-4/block.db symlink exists but target unusable:
(13) Permission denied"

ls -l /dev/ceph-wal-4de27554-2d05-440e-874a-9921dfc6f47e/
total 0
lrwxrwxrwx 1 ceph ceph 8 May 22 23:04 osd-db-053e000a-76ed-427e-98b3-e5373e263f2d -> ../dm-20
lrwxrwxrwx 1 ceph ceph 8 May 22 23:04 osd-db-12e68fcb-d2b6-459f-97f2-d3eb4e28c75e -> ../dm-24
lrwxrwxrwx 1 ceph ceph 8 May 22 23:04 osd-db-33de740d-bd8c-4b47-a601-3e6e634e489a -> ../dm-14
lrwxrwxrwx 1 root root 8 May 22 23:04 osd-db-7478edfc-f321-40a2-a105-8e8a2c8ca3f6 -> ../dm-12
lrwxrwxrwx 1 root root 8 May 22 23:04 osd-db-c2669da2-63aa-42e2-b049-cf00a478e076 -> ../dm-22
lrwxrwxrwx 1 root root 8 May 22 23:04 osd-db-d38a7e91-cf06-4607-abbe-53eac89ac5ea -> ../dm-18
lrwxrwxrwx 1 ceph ceph 8 May 22 23:04 osd-db-eb5270dc-1110-420f-947e-aab7fae299c9 -> ../dm-16
lrwxrwxrwx 1 ceph ceph 8 May 22 23:04 osd-wal-053e000a-76ed-427e-98b3-e5373e263f2d -> ../dm-19
lrwxrwxrwx 1 ceph ceph 8 May 22 23:04 osd-wal-12e68fcb-d2b6-459f-97f2-d3eb4e28c75e -> ../dm-23
lrwxrwxrwx 1 ceph ceph 8 May 22 23:04 osd-wal-33de740d-bd8c-4b47-a601-3e6e634e489a -> ../dm-13
lrwxrwxrwx 1 root root 8 May 22 23:04 osd-wal-7478edfc-f321-40a2-a105-8e8a2c8ca3f6 -> ../dm-11
lrwxrwxrwx 1 root root 8 May 22 23:04 osd-wal-c2669da2-63aa-42e2-b049-cf00a478e076 -> ../dm-21
lrwxrwxrwx 1 root root 8 May 22 23:04 osd-wal-d38a7e91-cf06-4607-abbe-53eac89ac5ea -> ../dm-17
lrwxrwxrwx 1 ceph ceph 8 May 22 23:04 osd-wal-eb5270dc-1110-420f-947e-aab7fae299c9 -> ../dm-15

I tried changing the ownership of those links to ceph:ceph, but it made no difference.

I have also tried (using `systemctl edit lvm2-monitor.service`) adding
the following to lvm2, but that's not changed the behavior either:

# cat /etc/systemd/system/lvm2-monitor.service.d/override.conf 
[Service]
ExecStartPre=/bin/sleep 60


[Bug 1828617] Re: Hosts randomly 'losing' disks, breaking ceph-osd service enumeration

2019-05-22 Thread Xav Paice
** Tags added: canonical-bootstack


[Bug 1784342] Re: AttributeError: 'Subnet' object has no attribute '_obj_network_id'

2019-05-02 Thread Xav Paice
Subscribed field-high and added the Ubuntu neutron package task, since
this has occurred at multiple production sites.

** Also affects: neutron (Ubuntu)
   Importance: Undecided
   Status: New


[Bug 1827159] Re: check_all_disks includes squashfs /snap/* which are 100%

2019-04-30 Thread Xav Paice
** Merge proposal linked:
   
https://code.launchpad.net/~xavpaice/nagios-charm/+git/nagios-charm/+merge/366740


[Bug 1827159] [NEW] check_all_disks includes squashfs /snap/* which are 100%

2019-04-30 Thread Xav Paice
Public bug reported:

When using nagios to monitor the Nagios host itself, if the host is not
a container, the template for checking the disk space on the Nagios host
does not exclude any snap filesystems.  This means we get a Critical
report if any snap is installed.

This can be changed by adding '-X squashfs' to the check_all_disks
command, but that command is defined in the nagios plugins package; a
sketch of the change follows.
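
For illustration, the change amounts to something like this in the
plugin-supplied command definition (the file path and existing arguments
are from memory, so treat this as a sketch):

# e.g. in /etc/nagios-plugins/config/disk.cfg
define command{
        command_name    check_all_disks
        command_line    /usr/lib/nagios/plugins/check_disk -w '$ARG1$' -c '$ARG2$' -e -X squashfs
        }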

** Affects: nagios-charm
 Importance: Undecided
 Status: New

** Affects: monitoring-plugins (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: canonical-bootstack

** Also affects: monitoring-plugins (Ubuntu)
   Importance: Undecided
   Status: New


[Bug 1820789] [NEW] Removing a hypervisor doesn't delete it entirely

2019-03-18 Thread Xav Paice
Public bug reported:

When removing a host (because it got rebuilt, for example), we use:

openstack compute service delete 

I expected that to remove the hostname cleanly from the database (or at
least mark it as deleted) so that the hostname can be re-used.  This
isn't the case: the host remained in the resource_providers table of the
nova_api database, and therefore the hostname could not be re-used.
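
For reference, the cleanup I expected to be possible looks roughly like
this (a sketch; it needs the osc-placement client plugin and the service
id is a placeholder):

openstack compute service delete <service-id>
# then remove the orphaned resource provider left behind in placement
openstack resource provider list --name host22.maas
openstack resource provider delete <uuid-from-the-list-above>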

Starting nova-compute on the host in this state resulted in:

2019-03-18 22:48:26.023 62597 ERROR nova.scheduler.client.report 
[req-445f587d-74e5-4a96-a5b5-4717f9095fb6 - - - - -] 
[req-1f1e781e-2ed8-4f6a-ac9e-93ecc450cec5] Failed to create resource provider 
record in placement API for UUID c6c4d923-1d0c-4f12-8505-d5af60c28ade. Got 409: 
{"errors": [{"status": 409, "request_id": 
"req-1f1e781e-2ed8-4f6a-ac9e-93ecc450cec5", "detail": "There was a conflict 
when trying to complete your request.\n\n Conflicting resource provider name: 
host22.maas already exists.  ", "title": "Conflict"}]}.
2019-03-18 22:48:26.024 62597 ERROR nova.compute.manager 
[req-445f587d-74e5-4a96-a5b5-4717f9095fb6 - - - - -] Error updating resources 
for node host22.maas.: ResourceProviderCreationFailed: Failed to create 
resource provider host22.maas
2019-03-18 22:48:26.024 62597 ERROR nova.compute.manager Traceback (most recent 
call last):
2019-03-18 22:48:26.024 62597 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 7345, in 
update_available_resource_for_node
2019-03-18 22:48:26.024 62597 ERROR nova.compute.manager 
rt.update_available_resource(context, nodename)
2019-03-18 22:48:26.024 62597 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/dist-packages/nova/compute/resource_tracker.py", line 689, 
in update_available_resource
2019-03-18 22:48:26.024 62597 ERROR nova.compute.manager 
self._update_available_resource(context, resources)
2019-03-18 22:48:26.024 62597 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 277, in 
inner
2019-03-18 22:48:26.024 62597 ERROR nova.compute.manager return f(*args, 
**kwargs)
2019-03-18 22:48:26.024 62597 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/dist-packages/nova/compute/resource_tracker.py", line 713, 
in _update_available_resource
2019-03-18 22:48:26.024 62597 ERROR nova.compute.manager 
self._init_compute_node(context, resources)
2019-03-18 22:48:26.024 62597 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/dist-packages/nova/compute/resource_tracker.py", line 572, 
in _init_compute_node
2019-03-18 22:48:26.024 62597 ERROR nova.compute.manager 
self._update(context, cn)
2019-03-18 22:48:26.024 62597 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/dist-packages/nova/compute/resource_tracker.py", line 887, 
in _update
2019-03-18 22:48:26.024 62597 ERROR nova.compute.manager inv_data,
2019-03-18 22:48:26.024 62597 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/dist-packages/nova/scheduler/client/__init__.py", line 68, 
in set_inventory_for_provider
2019-03-18 22:48:26.024 62597 ERROR nova.compute.manager 
parent_provider_uuid=parent_provider_uuid,
2019-03-18 22:48:26.024 62597 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/dist-packages/nova/scheduler/client/__init__.py", line 37, 
in __run_method
2019-03-18 22:48:26.024 62597 ERROR nova.compute.manager return 
getattr(self.instance, __name)(*args, **kwargs)
2019-03-18 22:48:26.024 62597 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/dist-packages/nova/scheduler/client/report.py", line 1104, 
in set_inventory_for_provider
2019-03-18 22:48:26.024 62597 ERROR nova.compute.manager 
parent_provider_uuid=parent_provider_uuid)
2019-03-18 22:48:26.024 62597 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/dist-packages/nova/scheduler/client/report.py", line 665, 
in _ensure_resource_provider
2019-03-18 22:48:26.024 62597 ERROR nova.compute.manager 
parent_provider_uuid=parent_provider_uuid)
2019-03-18 22:48:26.024 62597 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/dist-packages/nova/scheduler/client/report.py", line 64, in 
wrapper
2019-03-18 22:48:26.024 62597 ERROR nova.compute.manager return f(self, *a, 
**k)
2019-03-18 22:48:26.024 62597 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/dist-packages/nova/scheduler/client/report.py", line 612, 
in _create_resource_provider
2019-03-18 22:48:26.024 62597 ERROR nova.compute.manager raise 
exception.ResourceProviderCreationFailed(name=name)
2019-03-18 22:48:26.024 62597 ERROR nova.compute.manager 
ResourceProviderCreationFailed: Failed to create resource provider host22.maas
2019-03-18 22:48:26.024 62597 ERROR nova.compute.manager 

I was unable to clear the database entry:
mysql> delete from resource_providers where name='host22.maas';
ERROR 1451 (23000): Cannot delete or update a parent row: a foreign key 
constraint fails (`nova_api`.`resource_providers`, CONSTRAINT 

[Bug 1809454] Re: [SRU] nova rbd auth fallback uses cinder user with libvirt secret

2019-02-10 Thread Xav Paice
How do we go about getting this moving forward from
cloud-archive:queens-proposed to stable so we can run this in production?


[Bug 1809454] Re: [SRU] nova rbd auth fallback uses cinder user with libvirt secret

2018-12-22 Thread Xav Paice
** Tags added: canonical-bootstack


[Bug 1802226] Re: upgrade to 13.0.1-0ubuntu3~cloud0 caused loss of css

2018-11-07 Thread Xav Paice
fwiw, setting Horizon to run with debug appears to allow things to work
OK, but of course we don't want to leave it that way.


[Bug 1802226] [NEW] upgrade to 13.0.1-0ubuntu3~cloud0 caused loss of css

2018-11-07 Thread Xav Paice
Public bug reported:

Using Queens on Xenial.  We updated the packages to the current
versions:

~$ apt-cache policy openstack-dashboard-ubuntu-theme
openstack-dashboard-ubuntu-theme:
  Installed: 3:13.0.1-0ubuntu3~cloud0
  Candidate: 3:13.0.1-0ubuntu3~cloud0
  Version table:
 *** 3:13.0.1-0ubuntu3~cloud0 500
500 http://ubuntu-cloud.archive.canonical.com/ubuntu 
xenial-updates/queens/main amd64 Packages
100 /var/lib/dpkg/status
 2:9.1.2-0ubuntu5 500
500 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 Packages
 2:9.0.0-0ubuntu2 500
500 http://archive.ubuntu.com/ubuntu xenial/main amd64 Packages

Using the Ubuntu theme results in a page full of garbage because the CSS
is missing.  The browser reports the following in the console:

The requested URL /static/dashboard/css/6e9a9fafb1ba.css was not found
on this server.

If I use other themes, it seems OK, just this one.
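
In case it helps anyone else, the usual way to regenerate the compressed
static assets after a dashboard upgrade is something like the following
(Ubuntu packaging paths; a sketch rather than a confirmed fix for this bug):

cd /usr/share/openstack-dashboard
python manage.py collectstatic --noinput
python manage.py compress --force
systemctl restart apache2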

** Affects: horizon (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: canonical-bootstack

** Tags added: canonical-bootstack


[Bug 1452641] Re: Static Ceph mon IP addresses in connection_info can prevent VM startup

2018-09-20 Thread Xav Paice
Just a clarification on the process to 'move' ceph-mon units.  I added
ceph mons to the cluster, and removed the old ones - in this case it was
a 'juju add-unit' and 'juju remove-unit' but any process to achieve the
same thing would have the same result - the mons are now all on
different addresses.


[Bug 1737866] Re: Too many open files when large number of routers on a host

2018-08-16 Thread Xav Paice
Subscribed field-high because we have an active environment (possibly
more) that is affected by this on Xenial/Ocata, and we really need that
SRU released.


[Bug 1737866] Re: Too many open files when large number of routers on a host

2018-08-14 Thread Xav Paice
Any update on when we might land an SRU for Xenial?


[Bug 1744062] Re: [SRU] L3 HA: multiple agents are active at the same time

2018-07-12 Thread Xav Paice
Subscribed field-high.  This is affecting production environments.


[Bug 1731595] Re: L3 HA: multiple agents are active at the same time

2018-07-03 Thread Xav Paice
Corey, as far as I'm aware there isn't a bug open for the keepalived
package (for Xenial at least).  Are you suggesting that we open a bug
for a backport to the current cloud archive package?


[Bug 1731595] Re: L3 HA: multiple agents are active at the same time

2018-06-29 Thread Xav Paice
Comment for the folks who are seeing this as 'Fix Released' but are
still affected: see
https://github.com/acassen/keepalived/commit/e90a633c34fbe6ebbb891aa98bf29ce579b8b45c
for the rest of this fix; we need keepalived to be at least 1.4.0 in
order to have that commit.
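
A quick way to check what a given gateway host is actually running:

dpkg-query -W -f='${Version}\n' keepalived   # anything older than 1.4.0 lacks the commit above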


[Bug 1777070] Re: firefox plugin libwidevinecdm.so crashes due to apparmor denial

2018-06-17 Thread Xav Paice
Thanks!  I won't claim to understand what that change did, but adding
the two lines as requested does seem to resolve the issue.  I opened up
Netflix and was able to watch, without the crash, and there weren't any
new entries in syslog.
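
For anyone else hitting this before an updated package lands: the two
lines boil down to allowing the mmap of the Widevine library and the
ptrace between Firefox processes, roughly along these lines in the
profile's local include (a paraphrase, not the exact lines from the bug,
so check the comments there):

# /etc/apparmor.d/local/usr.bin.firefox
owner @{HOME}/.mozilla/**/gmp-widevinecdm/**/libwidevinecdm.so m,
ptrace (trace) peer=/usr/lib/firefox/firefox{,*[^s][^h]},

followed by reloading the profile with
'sudo apparmor_parser -r /etc/apparmor.d/usr.bin.firefox'.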


[Bug 1777070] [NEW] firefox plugin libwidevinecdm.so crashes due to apparmor denial

2018-06-15 Thread Xav Paice
Public bug reported:

Ubuntu 18.04, Firefox 60.0.1+build2-0ubuntu0.18.04.1

Running Firefox, then going to netflix.com and attempting to play a
movie.  The widevinecdm plugin crashes, and the following is found in
syslog:


Jun 15 19:13:22 xplt kernel: [301351.553043] audit: type=1400 
audit(1529046802.585:246): apparmor="DENIED" operation="file_mmap" 
profile="/usr/lib/firefox/firefox{,*[^s][^h]}" 
name="/home/xav/.mozilla/firefox/wiavokxk.default-1510977878171/gmp-widevinecdm/1.4.8.1008/libwidevinecdm.so"
 pid=16118 comm="plugin-containe" requested_mask="m" denied_mask="m" fsuid=1000 
ouid=1000
Jun 15 19:13:22 xplt kernel: [301351.553236] audit: type=1400 
audit(1529046802.585:247): apparmor="DENIED" operation="ptrace" 
profile="/usr/lib/firefox/firefox{,*[^s][^h]}" pid=24714 comm="firefox" 
requested_mask="trace" denied_mask="trace" 
peer="/usr/lib/firefox/firefox{,*[^s][^h]}"
Jun 15 19:13:22 xplt kernel: [301351.553259] plugin-containe[16118]: segfault 
at 0 ip 7fcdfdaa76af sp 7ffc1ff03e28 error 6 in 
libxul.so[7fcdfb77a000+6111000]
Jun 15 19:13:22 xplt snmpd[2334]: error on subcontainer 'ia_addr' insert (-1)
Jun 15 19:13:22 xplt /usr/lib/gdm3/gdm-x-session[6549]: ###!!! 
[Parent][MessageChannel::Call] Error: Channel error: cannot send/recv
Jun 15 19:13:24 xplt kernel: [301353.960182] audit: type=1400 
audit(1529046804.994:248): apparmor="DENIED" operation="file_mmap" 
profile="/usr/lib/firefox/firefox{,*[^s][^h]}" 
name="/home/xav/.mozilla/firefox/wiavokxk.default-1510977878171/gmp-widevinecdm/1.4.8.1008/libwidevinecdm.so"
 pid=16135 comm="plugin-containe" requested_mask="m" denied_mask="m" fsuid=1000 
ouid=1000
Jun 15 19:13:24 xplt kernel: [301353.960373] audit: type=1400 
audit(1529046804.994:249): apparmor="DENIED" operation="ptrace" 
profile="/usr/lib/firefox/firefox{,*[^s][^h]}" pid=24714 comm="firefox" 
requested_mask="trace" denied_mask="trace" 
peer="/usr/lib/firefox/firefox{,*[^s][^h]}"
Jun 15 19:13:24 xplt kernel: [301353.960398] plugin-containe[16135]: segfault 
at 0 ip 7fe3b57f46af sp 7ffe6dc0b488 error 6 in 
libxul.so[7fe3b34c7000+6111000]
Jun 15 19:13:28 xplt kernel: [301357.859177] audit: type=1400 
audit(1529046808.895:250): apparmor="DENIED" operation="file_mmap" 
profile="/usr/lib/firefox/firefox{,*[^s][^h]}" 
name="/home/xav/.mozilla/firefox/wiavokxk.default-1510977878171/gmp-widevinecdm/1.4.8.1008/libwidevinecdm.so"
 pid=16139 comm="plugin-containe" requested_mask="m" denied_mask="m" fsuid=1000 
ouid=1000
Jun 15 19:13:28 xplt kernel: [301357.859328] audit: type=1400 
audit(1529046808.895:251): apparmor="DENIED" operation="ptrace" 
profile="/usr/lib/firefox/firefox{,*[^s][^h]}" pid=24714 comm="firefox" 
requested_mask="trace" denied_mask="trace" 
peer="/usr/lib/firefox/firefox{,*[^s][^h]}"
Jun 15 19:13:28 xplt kernel: [301357.859349] plugin-containe[16139]: segfault 
at 0 ip 7fcf32ae06af sp 7ffeb8a136c8 error 6 in 
libxul.so[7fcf307b3000+6111000]
Jun 15 19:13:25 xplt /usr/lib/gdm3/gdm-x-session[6549]: ###!!! 
[Parent][MessageChannel::Call] Error: Channel error: cannot send/recv
Jun 15 19:13:29 xplt /usr/lib/gdm3/gdm-x-session[6549]: ERROR block_reap:328: 
[hamster] bad exit code 1
Jun 15 19:13:29 xplt /usr/lib/gdm3/gdm-x-session[6549]: ###!!! 
[Parent][MessageChannel::Call] Error: Channel error: cannot send/recv
Jun 15 19:13:29 xplt kernel: [301358.227635] audit: type=1400 
audit(1529046809.263:252): apparmor="DENIED" operation="file_mmap" 
profile="/usr/lib/firefox/firefox{,*[^s][^h]}" 
name="/home/xav/.mozilla/firefox/wiavokxk.default-1510977878171/gmp-widevinecdm/1.4.8.1008/libwidevinecdm.so"
 pid=16188 comm="plugin-containe" requested_mask="m" denied_mask="m" fsuid=1000 
ouid=1000
Jun 15 19:13:29 xplt kernel: [301358.227811] audit: type=1400 
audit(1529046809.263:253): apparmor="DENIED" operation="ptrace" 
profile="/usr/lib/firefox/firefox{,*[^s][^h]}" pid=24714 comm="firefox" 
requested_mask="trace" denied_mask="trace" 
peer="/usr/lib/firefox/firefox{,*[^s][^h]}"
Jun 15 19:13:29 xplt kernel: [301358.227844] plugin-containe[16188]: segfault 
at 0 ip 7fe5667c66af sp 7fffe8cc0da8 error 6 in 
libxul.so[7fe564499000+6111000]
Jun 15 19:13:31 xplt kernel: [301360.574177] audit: type=1400 
audit(1529046811.608:254): apparmor="DENIED" operation="file_mmap" 
profile="/usr/lib/firefox/firefox{,*[^s][^h]}" 
name="/home/xav/.mozilla/firefox/wiavokxk.default-1510977878171/gmp-widevinecdm/1.4.8.1008/libwidevinecdm.so"
 pid=16192 comm="plugin-containe" requested_mask="m" denied_mask="m" fsuid=1000 
ouid=1000
Jun 15 19:13:31 xplt kernel: [301360.574326] audit: type=1400 
audit(1529046811.608:255): apparmor="DENIED" operation="ptrace" 
profile="/usr/lib/firefox/firefox{,*[^s][^h]}" pid=24714 comm="firefox" 
requested_mask="trace" denied_mask="trace" 
peer="/usr/lib/firefox/firefox{,*[^s][^h]}"
Jun 15 19:13:31 xplt kernel: [301360.574352] plugin-containe[16192]: segfault 
at 0 ip 7f83507606af sp 7ffdb3d22f08 error 6 in 
libxul.so[7f834e433000+6111000]
Jun 15 

[Bug 1452641] Re: Static Ceph mon IP addresses in connection_info can prevent VM startup

2018-06-11 Thread Xav Paice
FWIW, in the cloud where we saw this, migrating the (stopped) instance
also updated the connection info; it was just that migrating hundreds of
instances wasn't practical.


[Bug 1313539] Re: [DisplayPort] monitor shows black screen and "no input signal" after turning the monitor off and on manually

2018-05-14 Thread Xav Paice
FWIW, I'm seeing this with any desktop environment, and at the login
screen.  I have tried i3, awesomewm, GNOME and Unity.


[Bug 1313539] Re: [DisplayPort] monitor shows black screen and "no input signal" after turning the monitor off and on manually

2018-05-14 Thread Xav Paice
Seeing this on Bionic also, with 2 external screens and the built in
laptop display.

   product: UX303UA
   vendor: ASUSTeK COMPUTER INC.
product: HD Graphics 520
configuration: driver=i915 latency=0

I've attached my Xorg.log if that's helpful.

On first boot, or after unplugging the external displays (e.g. to walk
away), then reconnecting them, one or the other screen (or both) refuse
to power on, even if I fiddle with the power button.  Sometimes
unplug/replug helps, mostly not.  Eventually, after turning the screens
on and off via xrandr/arandr, multiple times, I can usually get them
both back on.  I see the same thing if I connect two screens via a MST
hub, or one screen there and one via the HDMI port.  It's got that race
condition feeling about it, in that I can do the same thing several
times over and one of the times it works fine, but the others not.
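
For completeness, the sort of xrandr poking involved in getting a screen
back looks like this (output names are examples and will differ):

xrandr --output DP-1 --off
xrandr --output DP-1 --auto --right-of eDP-1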


[Bug 1313539] Re: [DisplayPort] monitor shows black screen and "no input signal" after turning the monitor off and on manually

2018-05-14 Thread Xav Paice
It really didn't like me trying to attach the log file.  Here's a
pastebin: https://paste.ubuntu.com/p/mm4cwkGv4z/


[Bug 1770040] Re: lbaas load balancer does not forward traffic unless agent restarted

2018-05-14 Thread Xav Paice
This was reproduced with a heat template, but just running the steps at
the start of the case from Horizon is enough.  Note that neutron-gateway
was deployed with aa-profile-mode set to complain, not the default
setting.

Changing this to 'disable' seems to have fixed the problem; more testing
is in progress.
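
The change referred to above, for anyone following along (a sketch,
using the charm option and application names from this deployment):

juju config neutron-gateway aa-profile-mode=disable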


[Bug 1770040] Re: lbaas load balancer does not forward traffic unless agent restarted

2018-05-13 Thread Xav Paice
AppArmor is in 'complain' mode; the logs show the same entries, but
allowed rather than denied.

Worth trying that change first, then installing -proposed if that makes
no difference.  This is a production site, after all.


[Bug 1770040] Re: lbaas load balancer does not forward traffic unless agent restarted

2018-05-10 Thread Xav Paice
Please note that this affects customers as follows:

- customer creates a lbaas, no backends come up
- we restart the service, and backends come to life
- customer creates another lbaas, the running one is fine but the new one has 
no backends
- we restart... etc

This means for every new load balancer, we need to restart the service
to get it actually forwarding traffic.


[Bug 1770040] Re: lbaas load balancer does not forward traffic unless agent restarted

2018-05-10 Thread Xav Paice
Due to customer impact, have subscribed field-high.


[Bug 1724173] Re: bcache makes the whole io system hang after long run time

2018-02-14 Thread Xav Paice
We're also seeing this with 4.4.0-111-generic (on Trusty), and a very
similar hardware profile.  The boxes in question are running Swift with
a large (millions) number of objects all approx 32k in size.

I'm currently fio'ing in a test environment to try to reproduce this
away from production.
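
For reference, a fio job along these lines approximates the workload
described (parameters are purely illustrative, not tuned):

fio --name=swift-like --directory=/srv/bcache-test --rw=randwrite --bs=32k \
    --size=4G --numjobs=8 --iodepth=16 --ioengine=libaio --direct=1 \
    --time_based --runtime=600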


[Bug 1724173] Re: bcache makes the whole io system hang after long run time

2018-02-14 Thread Xav Paice
** Tags added: canonical-bootstack


[Bug 1731595] Re: L3 HA: multiple agents are active at the same time

2017-12-12 Thread Xav Paice
We have installed the Ocata -proposed package, however the situation is
this:

- there's 464 routers configured, on 3 Neutron gateway hosts, using l3-ha, and 
each router is scheduled to all 3 hosts.
- we installed the package because were in a situation with a current incident 
with multiple l3 agents active, hoping the package update would solve the 
problem.  One of the gateway hosts was being rebooted at the time to also try 
to do a King Canute and halt the tidal wave of arp.
- We later found that openvswitch had run out of filehandles, see LP: #1737866
- Resolving that allowed ovs to create a ton more filehandles.
- Removing/ re-adding the routers to agents seemed to clean things up, we saw 
some routers with multiple agents active, and some with none active (all 3 
agents 'standby').
- After a few iterations of that, things cleaned up.
- 15-20 mins later, we saw more routers with multiple agents active (ones which 
weren't before), and ran through the same cleanup steps.  At this time, there 
were a large number of keepalived messages in syslog, particularly routers 
becoming MASTER then BACKUP again. (https://pastebin.canonical.com/205361/)
- after another hour or two, we're still clean.

I can't say at this stage whether the fix actually fixed the problem or
not; I need to dig further to find out whether some process could have
been running cleanups.
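
For reference, the remove/re-add cleanup mentioned in the list above is
done per router, roughly like this (IDs are placeholders):

neutron l3-agent-list-hosting-router <router-id>
neutron l3-agent-router-remove <agent-id> <router-id>
neutron l3-agent-router-add <agent-id> <router-id>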


[Bug 1731595] Re: L3 HA: multiple agents are active at the same time

2017-12-11 Thread Xav Paice
Please note, we now have a client affected by this running Mitaka as
well.


[Bug 1623658] Re: livestatus socket permission

2017-09-05 Thread Xav Paice
** Changed in: nagios-charm
   Status: Fix Committed => Fix Released


[Bug 1623658] Re: livestatus socket permission

2017-08-21 Thread Xav Paice
https://code.launchpad.net/~xavpaice/nagios-charm/+git/nagios-
charm/+merge/329344

** Merge proposal linked:
   
https://code.launchpad.net/~xavpaice/nagios-charm/+git/nagios-charm/+merge/329344

** Changed in: nagios-charm
   Status: In Progress => Fix Committed


[Bug 1623658] Re: livestatus socket permission

2017-08-07 Thread Xav Paice
** Changed in: nagios-charm
   Status: New => In Progress



[Bug 1623658] Re: livestatus socket permission

2017-08-07 Thread Xav Paice
https://code.launchpad.net/~xavpaice/nagios-charm/+git/nagios-
charm/+merge/328677



[Bug 1623658] Re: livestatus socket permission

2017-08-07 Thread Xav Paice
In https://git.launchpad.net/nagios-charm/tree/hooks/install the Nagios
charm creates the mklivestatus directory and sets perms.  We will need
to change this to add +x.
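
i.e. something along these lines in the install hook (only a sketch; the
path is a placeholder for whatever hooks/install actually creates):

chmod g+x,o+x <livestatus-directory>   # placeholder path; grant execute so clients can traverse into the socket's directory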



[Bug 1702595] Re: Upgrade neutron-plugin-openvswitch-agent package causes nova-compute to fall over

2017-07-05 Thread Xav Paice
Apologies for the vile wrapping.  For those with access,
https://pastebin.canonical.com/192695/ might be easier to read.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1702595

Title:
  Upgrade neutron-plugin-openvswitch-agent package causes nova-compute
  to fall over

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/neutron/+bug/1702595/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1702595] [NEW] Upgrade neutron-plugin-openvswitch-agent package causes nova-compute to fall over

2017-07-05 Thread Xav Paice
Public bug reported:

On upgrading neutron on a compute node, instances on that node wound up
losing some of their network plumbing via openvswitch.

This cloud: Mitaka, xenial, openvswitch with gre.

2017-07-05 16:17:52, the following auto-upgrade occurred: neutron-common:amd64 (2:8.4.0-0ubuntu2, 2:8.4.0-0ubuntu3), neutron-plugin-openvswitch-agent:amd64 (2:8.4.0-0ubuntu2, 2:8.4.0-0ubuntu3), neutron-openvswitch-agent:amd64 (2:8.4.0-0ubuntu2, 2:8.4.0-0ubuntu3), python-neutron:amd64 (2:8.4.0-0ubuntu2, 2:8.4.0-0ubuntu3)

The neutron logs:
2017-07-05 16:17:53.670 6156 CRITICAL neutron [req-1fcb315b-9f12-4657-aea8-1463f55f0106 - - - - -] ProcessExecutionError: Exit code: -15; Stdin: ; Stdout: ; Stderr: Signal 15 (TERM) caught by ps (procps-ng version 3.3.10).
ps:display.c:66: please report this bug

2017-07-05 16:17:53.670 6156 ERROR neutron Traceback (most recent call last):
2017-07-05 16:17:53.670 6156 ERROR neutron   File "/usr/bin/neutron-openvswitch-agent", line 10, in <module>
2017-07-05 16:17:53.670 6156 ERROR neutron sys.exit(main())
2017-07-05 16:17:53.670 6156 ERROR neutron   File "/usr/lib/python2.7/dist-packages/neutron/cmd/eventlet/plugins/ovs_neutron_agent.py", line 20, in main
2017-07-05 16:17:53.670 6156 ERROR neutron agent_main.main()
2017-07-05 16:17:53.670 6156 ERROR neutron   File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/openvswitch/agent/main.py", line 49, in main
2017-07-05 16:17:53.670 6156 ERROR neutron mod.main()
2017-07-05 16:17:53.670 6156 ERROR neutron   File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/ovs_ofctl/main.py", line 36, in main
2017-07-05 16:17:53.670 6156 ERROR neutron ovs_neutron_agent.main(bridge_classes)
2017-07-05 16:17:53.670 6156 ERROR neutron   File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 2164, in main
2017-07-05 16:17:53.670 6156 ERROR neutron agent.daemon_loop()
2017-07-05 16:17:53.670 6156 ERROR neutron   File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 2087, in daemon_loop
2017-07-05 16:17:53.670 6156 ERROR neutron self.rpc_loop(polling_manager=pm)
2017-07-05 16:17:53.670 6156 ERROR neutron   File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 2000, in rpc_loop
2017-07-05 16:17:53.670 6156 ERROR neutron if (self._agent_has_updates(polling_manager) or sync
2017-07-05 16:17:53.670 6156 ERROR neutron   File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 1752, in _agent_has_updates
2017-07-05 16:17:53.670 6156 ERROR neutron return (polling_manager.is_polling_required or
2017-07-05 16:17:53.670 6156 ERROR neutron   File "/usr/lib/python2.7/dist-packages/neutron/agent/common/base_polling.py", line 35, in is_polling_required
2017-07-05 16:17:53.670 6156 ERROR neutron polling_required = self._is_polling_required()
2017-07-05 16:17:53.670 6156 ERROR neutron   File "/usr/lib/python2.7/dist-packages/neutron/agent/linux/polling.py", line 69, in _is_polling_required
2017-07-05 16:17:53.670 6156 ERROR neutron return self._monitor.has_updates
2017-07-05 16:17:53.670 6156 ERROR neutron   File "/usr/lib/python2.7/dist-packages/neutron/agent/linux/ovsdb_monitor.py", line 74, in has_updates
2017-07-05 16:17:53.670 6156 ERROR neutron if not self.is_active():
2017-07-05 16:17:53.670 6156 ERROR neutron   File "/usr/lib/python2.7/dist-packages/neutron/agent/linux/async_process.py", line 100, in is_active
2017-07-05 16:17:53.670 6156 ERROR neutron self.pid, self.cmd_without_namespace)
2017-07-05 16:17:53.670 6156 ERROR neutron   File "/usr/lib/python2.7/dist-packages/neutron/agent/linux/async_process.py", line 159, in pid
2017-07-05 16:17:53.670 6156 ERROR neutron run_as_root=self.run_as_root)
2017-07-05 16:17:53.670 6156 ERROR neutron   File "/usr/lib/python2.7/dist-packages/neutron/agent/linux/utils.py", line 267, in get_root_helper_child_pid
2017-07-05 16:17:53.670 6156 ERROR neutron pid = find_child_pids(pid)[0]
2017-07-05 16:17:53.670 6156 ERROR neutron   File "/usr/lib/python2.7/dist-packages/neutron/agent/linux/utils.py", line 202, in find_child_pids
2017-07-05 16:17:53.670 6156 ERROR neutron return []
2017-07-05 16:17:53.670 6156 ERROR neutron   File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2017-07-05 16:17:53.670 6156 ERROR neutron self.force_reraise()
2017-07-05 16:17:53.670 6156 ERROR neutron   File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
2017-07-05 16:17:53.670 6156 ERROR neutron six.reraise(self.type_, self.value, self.tb)
2017-07-05 16:17:53.670 6156 ERROR neutron   File "/usr/lib/python2.7/dist-packages/neutron/agent/linux/utils.py", line 194, in find_child_pids

[Bug 1623658] Re: livestatus socket permission

2017-06-29 Thread Xav Paice
** Package changed: nagios (Juju Charms Collection) => check-mk (Ubuntu)

** Also affects: nagios-charm
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1623658

Title:
  livestatus socket permission

To manage notifications about this bug go to:
https://bugs.launchpad.net/nagios-charm/+bug/1623658/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1679823] Re: bond0: Invalid MTU 9000 requested, hw max 1500 with kernel 4.10 (or 4.8.0-49, xenial-hwe)

2017-05-12 Thread Xav Paice
** Tags added: canonical-bootstack

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1679823

Title:
  bond0: Invalid MTU 9000 requested, hw max 1500 with kernel 4.10 (or
  4.8.0-49, xenial-hwe)

To manage notifications about this bug go to:
https://bugs.launchpad.net/linux/+bug/1679823/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1588391] Re: ceilometer charm creates world-readable /etc/ceilometer/ceilometer.conf, exposing credentials

2017-02-08 Thread Xav Paice
** Tags added: canonical-bootstack

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1588391

Title:
  ceilometer charm creates world-readable
  /etc/ceilometer/ceilometer.conf, exposing credentials

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ceilometer/+bug/1588391/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1403152] Re: unregister_netdevice: waiting for lo to become free. Usage count

2016-10-24 Thread Xav Paice
From the logs it looks like the patch is now part of
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1627730, which hit
4.4.0-46.67~14.04.1 (proposed) on 22nd Oct?

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1403152

Title:
  unregister_netdevice: waiting for lo to become free. Usage count

To manage notifications about this bug go to:
https://bugs.launchpad.net/linux/+bug/1403152/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1474667] Re: log dir permissions are incorrect for use with swift

2015-10-01 Thread Xav Paice
Comment #8 was pretty clear that this isn't regarded as something that
needs fixing.  If that's still the case, this should be closed as
wontfix.

I don't know of another way around it, though.  If changing the package
is the wrong approach, I would like to know the right approach so we
can take the preferred action.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1474667

Title:
  log dir permissions are incorrect for use with swift

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ceilometer/+bug/1474667/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1474667] Re: log dir permissions are incorrect

2015-07-21 Thread Xav Paice
I see your point, and agree in principle, but the effect of this is that
we cannot use the Ubuntu Cloud Archive until Swift is changed to use the
Ceilometer code in some other way.  That's a pretty significant change,
and a departure from the approach described by the Ceilometer developers
in their own documentation.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1474667

Title:
  log dir permissions are incorrect

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ceilometer/+bug/1474667/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1474667] Re: log dir permissions are incorrect

2015-07-15 Thread Xav Paice
I'm happy to submit a patch if someone could please point me at how.  I
looked at https://wiki.ubuntu.com/ServerTeam/OpenStack#Submitting_a_Patch
but get "bzr: ERROR: development focus
https://api.launchpad.net/1.0/ceilometer/liberty has no branch" in
return.  No doubt this is some newbie problem; a pointer in the right
direction would be great.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1474667

Title:
  log dir permissions are incorrect

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ceilometer/+bug/1474667/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1474667] [NEW] log dir permissions are incorrect

2015-07-14 Thread Xav Paice
Public bug reported:

In ceilometer-common.postinst, permissions for the dir
/var/log/ceilometer are set to 750.

In http://docs.openstack.org/developer/ceilometer/install/manual.html
it is stated: "Note: Please make sure that ceilometer’s logging
directory (if it’s configured) is read and write accessible for the
user swift is started by."

That means the perms need to be 770, not 750.

With 750, parts of swift fail to run: swift loads ceilometer's egg to
log usage information to ceilometer, and cannot write to the log
directory.
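
Purely as a sketch of the requested end state (the actual change would
be in the package's maintainer script; the group membership noted in
the comment below is an assumption about the deployment):

import os
import stat

LOG_DIR = "/var/log/ceilometer"

# 770: owner and group get read/write/execute, others get nothing.
# For swift to benefit, the user swift runs as must be a member of the
# directory's group; that membership is assumed here, not something
# ceilometer-common sets up by itself.
os.chmod(LOG_DIR, stat.S_IRWXU | stat.S_IRWXG)  # i.e. 770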

** Affects: ceilometer (Ubuntu)
 Importance: Undecided
 Status: Confirmed

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1474667

Title:
  log dir permissions are incorrect

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ceilometer/+bug/1474667/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1474667] [NEW] log dir permissions are incorrect

2015-07-14 Thread Xav Paice
Public bug reported:

In ceilometer-common.postinst, permissions for the dir
/var/log/ceilometer are set to 750.

In http://docs.openstack.org/developer/ceilometer/install/manual.html
it is stated: "Note: Please make sure that ceilometer’s logging
directory (if it’s configured) is read and write accessible for the
user swift is started by."

That means the perms need to be 770, not 750.

With 750, parts of swift fail to run: swift loads ceilometer's egg to
log usage information to ceilometer, and cannot write to the log
directory.

** Affects: ceilometer (Ubuntu)
 Importance: Undecided
 Status: Confirmed

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to ceilometer in Ubuntu.
https://bugs.launchpad.net/bugs/1474667

Title:
  log dir permissions are incorrect

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ceilometer/+bug/1474667/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1389239] Re: apparmor is uninstalled when deploying icehouse nova-compute on Precise

2014-11-04 Thread Xav Paice
*** This bug is a duplicate of bug 1387251 ***
https://bugs.launchpad.net/bugs/1387251

** This bug has been marked a duplicate of bug 1387251
   apparmor conflict with precise cloud archive

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to libvirt in Ubuntu.
https://bugs.launchpad.net/bugs/1389239

Title:
  apparmor is uninstalled when deploying icehouse nova-compute on
  Precise

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/apparmor/+bug/1389239/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1389239] Re: apparmor is uninstalled when deploying icehouse nova-compute on Precise

2014-11-04 Thread Xav Paice
*** This bug is a duplicate of bug 1387251 ***
https://bugs.launchpad.net/bugs/1387251

** This bug has been marked a duplicate of bug 1387251
   apparmor conflict with precise cloud archive

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1389239

Title:
  apparmor is uninstalled when deploying icehouse nova-compute on
  Precise

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/apparmor/+bug/1389239/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs