[Bug 1488426] Re: High CPU usage of kworker/ksoftirqd

2019-02-10 Thread JuanJo Ciarlante
FYI this is still happening to me on 18.04 with the HWE kernel,
with similar behavior to comment #38: a kworker thread with steady
high CPU usage after un-docking; re-docking didn't solve it, though.

kernel: 4.18.0-15-generic
hardware: Thinkpad x270, Thinkpad Ultra Dock
network: enp0s31f6 (dock eth) and wlp3s0 up
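
In case it helps others debugging this, a sketch to see what the busy
kworker is actually doing (assumes perf from linux-tools and a mounted
debugfs; <PID> is the busy kworker's pid from top):

# sample the kworker for 10s and show the kernel paths it burns time in
sudo perf record -g -p <PID> -- sleep 10
sudo perf report --stdio | head -30
# alternatively, trace which work items get queued
sudo sh -c 'echo 1 > /sys/kernel/debug/tracing/events/workqueue/workqueue_queue_work/enable'
sudo head -20 /sys/kernel/debug/tracing/trace_pipe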

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1488426

Title:
  High CPU usage of kworker/ksoftirqd

To manage notifications about this bug go to:
https://bugs.launchpad.net/hwe-next/+bug/1488426/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1670959] Re: systemd-resolved using 100% CPU

2019-01-30 Thread JuanJo Ciarlante
For other souls facing this "Medium" issue,
a hammer-ish workaround that works for me: 

1) Run:
apt-get install cpulimit

2) edit /lib/systemd/system/systemd-resolved.service:

2a) Comment out:
#Type=notify

2b) Replace the ExecStart line as below (you may want to drop the -k,
which kills the process when it exceeds the limit, and let cpulimit
throttle it instead):
#ExecStart=!!/lib/systemd/systemd-resolved
ExecStart=!!/usr/bin/cpulimit -f -q -k -l 50 -- /lib/systemd/systemd-resolved

3) Run:

systemctl daemon-reload
systemctl restart systemd-resolved
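
FYI the same can be done as a drop-in override instead of editing the
packaged unit file (a sketch, same cpulimit invocation as above; the
empty ExecStart= clears the packaged command before replacing it):

systemctl edit systemd-resolved
# in the editor, add:
#   [Service]
#   Type=simple
#   ExecStart=
#   ExecStart=!!/usr/bin/cpulimit -f -q -l 50 -- /lib/systemd/systemd-resolved
# then:
systemctl daemon-reload
systemctl restart systemd-resolved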

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1670959

Title:
  systemd-resolved using 100% CPU

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/dnsmasq/+bug/1670959/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1670959] Re: systemd-resolved using 100% CPU

2019-01-23 Thread JuanJo Ciarlante
Also happening to me after 16.04 -> 18.04 LTS upgrade (via do-release-upgrade).

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1670959

Title:
  systemd-resolved using 100% CPU

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/dnsmasq/+bug/1670959/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1742602] Re: Blank screen when starting X after upgrading from 4.10 to 4.13.0-26

2018-02-19 Thread JuanJo Ciarlante
FYI this is also happening for me on LTS 16.04.3 + HWE (kernel and xorg packages),
Thinkpad x270 with Integrated Graphics Chipset: Intel(R) HD Graphics 620.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1742602

Title:
  Blank screen when starting X after upgrading from 4.10 to 4.13.0-26

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1742602/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1426589] Re: tc class statistics rates are all zero after upgrade to Trusty

2017-06-07 Thread JuanJo Ciarlante
Indeed, that had been the case; thanks for replying.

** Changed in: iproute2 (Ubuntu)
   Status: Confirmed => Invalid

** Changed in: linux (Ubuntu)
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1426589

Title:
  tc class statistics rates are all zero after upgrade to Trusty

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/iproute2/+bug/1426589/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1695929] [NEW] vlan on top of bond interfaces assumes bond[0-9]+ naming

2017-06-05 Thread JuanJo Ciarlante
Public bug reported:

Context: deploying nodes via MAAS with bonds and VLANs
on top of them, in particular for the example below:
bond-stg and bond-stg.600

~# dpkg-query -W vlan
vlan    1.9-3.2ubuntu1.16.04.1

* /etc/network/interfaces excerpt as set up by MAAS
  (IP and MAC addresses obfuscated)

[...]
auto bond-stg
iface bond-stg inet static
address x.x.x.x/x
bond-xmit_hash_policy layer2
bond-slaves none
hwaddress ether xx:xx:xx:xx:xx:xx
bond-miimon 100
bond-lacp_rate slow
mtu 9000
bond-mode 802.3ad

[...]
auto bond-stg.600
iface bond-stg.600 inet static
address x.x.x.x/x
vlan_id 600
vlan-raw-device bond-stg
mtu 9000


* FYI bond-stg is properly brought up

* upping bond-stg.600 fails with:

~# ifup bond-stg.600
/etc/network/if-pre-up.d/mtuipv6: line 9: /sys/class/net/bond-stg.600/mtu: No such file or directory
/etc/network/if-pre-up.d/mtuipv6: line 10: /proc/sys/net/ipv6/conf/bond-stg.600/mtu: No such file or directory
Set name-type for VLAN subsystem. Should be visible in /proc/net/vlan/config
iface bond-mgmt inet manual
cat: /sys/class/net/bond-stg.600/mtu: No such file or directory
Device "bond-stg.600" does not exist.
bond-stg.600 does not exist, unable to create bond-stg.600
run-parts: /etc/network/if-pre-up.d/vlan exited with return code 1
Failed to bring up bond-stg.600.

- this happens because /etc/network/if-pre-up.d/vlan
  hardcodes bond interface names to match bond[0-9][0-9]*

* changing the above pattern to bond[^.]* fixes it
  (modulo the mtu errors):

~# ifup bond-stg.600
/etc/network/if-pre-up.d/mtuipv6: line 9: /sys/class/net/bond-stg.600/mtu: No such file or directory
/etc/network/if-pre-up.d/mtuipv6: line 10: /proc/sys/net/ipv6/conf/bond-stg.600/mtu: No such file or directory
Set name-type for VLAN subsystem. Should be visible in /proc/net/vlan/config
Added VLAN with VID == 600 to IF -:bond-stg:-

* diff for above changes to /etc/network/if-pre-up.d/vlan
- http://paste.ubuntu.com/24784957/
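
For reference, the shape of the change (an illustrative sketch only;
the exact surrounding script lines may differ, see the paste above for
the actual diff): the shell glob used to recognize a bond raw device is
widened so non-numeric names are accepted:

--- /etc/network/if-pre-up.d/vlan
+++ /etc/network/if-pre-up.d/vlan
@@ (in the glob that matches bond raw devices)
-    bond[0-9][0-9]*)
+    bond[^.]*)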

** Affects: vlan (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: canonical-bootstack

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1695929

Title:
  vlan on top of bond interfaces assumes bond[0-9]+ naming

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/vlan/+bug/1695929/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1679823] Re: bond0: Invalid MTU 9000 requested, hw max 1500 with kernel 4.8 / 4.10 in XENIAL LTS

2017-05-12 Thread JuanJo Ciarlante
We really need xenial added for its HWE kernels: we have
several BootStacks running them, mainly to get the latest
drivers they need while staying on the LTS (Mellanox for
VNFs, as an example). All of these are now obviously at
risk on the next reboot.

Note also that recovering from this issue usually requires
OOB access to the node, as bonded interfaces are typically
used for both the mgmt and data planes.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1679823

Title:
  bond0: Invalid MTU 9000 requested, hw max 1500 with kernel 4.8 / 4.10
  in XENIAL LTS

To manage notifications about this bug go to:
https://bugs.launchpad.net/linux/+bug/1679823/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1602057] Re: [SRU] (libvirt) KeyError updating resources for some node, guest.uuid is not in BDM list

2017-04-07 Thread JuanJo Ciarlante
FYI we're also hitting this on trusty/mitaka, for what look
like incompletely deleted instances:

* still running at the hypervisor, ie
virsh dominfo <UUID>  # shows it ok

* deleted from both the nova 'instances' and 'block_device_mapping' tables.

Once certain it's still running at the hypervisor,
our workaround is to revive the instance at the nova DB
with something like:

mysql> begin work;
mysql> update instances
  set vm_state='active', deleted=0, deleted_at=NULL
  where uuid='<UUID>';
mysql> update block_device_mapping
  set deleted=0, deleted_at=NULL
  where instance_uuid='<UUID>';
mysql> commit work;

Note also it has happened to us from failed migrations
(ie the instance shown at the 'wrong' host in the nova DB);
we've fixed those by adding to the 1st SQL statement:

 host='<host>', node='<node>',

with the above hostnames taken as:
- <host> from nova service-list
- <node> from nova hypervisor-list
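
ie the combined 1st statement for the failed-migration case would look
like (same placeholders as above):

mysql> update instances
  set vm_state='active', deleted=0, deleted_at=NULL,
      host='<host>', node='<node>'
  where uuid='<UUID>';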

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1602057

Title:
  [SRU] (libvirt) KeyError updating resources for some node, guest.uuid
  is not in BDM list

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1602057/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1668123] Re: lxc fails to start with cgroup error

2017-03-09 Thread JuanJo Ciarlante
FYI: because of other maintenance I had to do on the affected nodes,
I ended up upgrading to linux-generic-lts-xenial 4.4.0-66-generic,
and this issue hasn't shown up since.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1668123

Title:
  lxc fails to start  with cgroup error

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1668123/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1582394] Re: [16.04, lxc] Failed to reset devices.list on ...

2017-02-08 Thread JuanJo Ciarlante
I can't make it work even after manually installing squashfuse
(FYI the lxc was created by: juju deploy cs:ubuntu --to lxc:1 )

root@juju-machine-1-lxc-14:~# uname -a
Linux juju-machine-1-lxc-14 4.8.0-34-generic #36~16.04.1-Ubuntu SMP Wed Dec 21 18:55:08 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

root@juju-machine-1-lxc-14:~# apt-cache policy squashfuse
squashfuse:
  Installed: 0.1.100-0ubuntu1~ubuntu16.04.1
  Candidate: 0.1.100-0ubuntu1~ubuntu16.04.1
  Version table:
 *** 0.1.100-0ubuntu1~ubuntu16.04.1 500
500 http://archive.ubuntu.com/ubuntu xenial-updates/universe amd64 Packages
100 /var/lib/dpkg/status

root@juju-machine-1-lxc-14:~# snap install hello-world
- Mount snap "core" (888) ([start snap-core-888.mount] failed with exit status 1: Job for snap-core-888.mount failed. See "systemctl status snap-core-888.mount" and "journalctl -xe" for details.

root@juju-machine-1-lxc-14:~# journalctl -xe
Feb 08 22:54:21 juju-machine-1-lxc-14 systemd[1]: Failed to reset devices.list on /system.slice/snap-core-888.mount: Operation not permitted
Feb 08 22:54:21 juju-machine-1-lxc-14 systemd[1]: Mounting Mount unit for core...
-- Subject: Unit snap-core-888.mount has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- Unit snap-core-888.mount has begun starting up.
Feb 08 22:54:21 juju-machine-1-lxc-14 systemd[1]: Failed to reset devices.list on /init.scope: Operation not permitted
Feb 08 22:54:21 juju-machine-1-lxc-14 systemd[1]: Failed to reset devices.list on /user.slice: Operation not permitted
Feb 08 22:54:21 juju-machine-1-lxc-14 systemd[1]: Failed to reset devices.list on /system.slice/dbus.service: Operation not permitted
Feb 08 22:54:21 juju-machine-1-lxc-14 systemd[1]: Failed to reset devices.list on /system.slice/ondemand.service: Operation not permitted
Feb 08 22:54:21 juju-machine-1-lxc-14 systemd[1]: Failed to reset devices.list on /system.slice/sys-kernel-debug.mount: Operation not permitted
[...]

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1582394

Title:
  [16.04, lxc] Failed to reset devices.list on ...

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1582394/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1638700] Re: hio: SSD data corruption under stress test

2016-11-21 Thread JuanJo Ciarlante
FTR/FYI (as per chatter with kamal): we're waiting for >= 4.8.0-28
to become available at https://launchpad.net/ubuntu/+source/linux-hwe-edge

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1638700

Title:
  hio: SSD data corruption under stress test

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1638700/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1582278] Re: [SR-IOV][CPU Pinning] nova compute can try to boot VM with CPUs from one NUMA node and PCI device from another NUMA node.

2016-11-17 Thread JuanJo Ciarlante
** Tags added: canonical-bootstack

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1582278

Title:
  [SR-IOV][CPU Pinning] nova compute can try to boot VM with CPUs from
  one NUMA node and PCI device from another NUMA node.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1582278/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1635223] [NEW] please include mlx5_core modules in linux-image-generic package

2016-10-20 Thread JuanJo Ciarlante
Public bug reported:

Because the linux-image-generic package doesn't include mlx5_core,
stock Ubuntu cloud images can't be used by VM guests using
Mellanox VFs, forcing the creation of an ad-hoc cloud image
with linux-image-extra-virtual added.
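
A quick way to check whether the running kernel package ships the
module (a sketch; run inside a booted guest):

dpkg -L linux-image-$(uname -r) | grep -q mlx5_core \
  && echo "mlx5_core shipped" \
  || echo "mlx5_core missing (shipped by the -extra package instead)"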

** Affects: cloud-images
 Importance: Undecided
 Status: New

** Affects: linux (Ubuntu)
 Importance: Undecided
 Status: Confirmed


** Tags: canonical-bootstack

** Also affects: cloud-images
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1635223

Title:
  please include mlx5_core modules in linux-image-generic package

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-images/+bug/1635223/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1601986] Re: RuntimeError: osrandom engine already registered

2016-07-12 Thread JuanJo Ciarlante
** Also affects: python-cryptography (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1601986

Title:
  RuntimeError: osrandom engine already registered

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1601986/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1596866] Re: NMI watchdog: Watchdog detected hard LOCKUP on cpu 0 - Xenial - Python

2016-06-29 Thread JuanJo Ciarlante
A somewhat recent, similar finding, in case it helps:
  https://github.com/TobleMiner/wintron7.0/issues/2
- worked around with the clocksource=tsc boot parameter; I'd guess
ntpq would also show a large drift.
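
A sketch to try that workaround:

# check current/available clocksources
cat /sys/devices/system/clocksource/clocksource0/current_clocksource
cat /sys/devices/system/clocksource/clocksource0/available_clocksource
# then add clocksource=tsc to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub,
# run update-grub, and reboot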

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1596866

Title:
  NMI watchdog: Watchdog detected hard LOCKUP on cpu 0 - Xenial - Python

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1596866/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1570657] Re: Bash completion needed for versioned juju commands

2016-06-20 Thread JuanJo Ciarlante
See https://bugs.launchpad.net/juju-core/+bug/1588403/comments/5

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1570657

Title:
  Bash completion needed for versioned juju commands

To manage notifications about this bug go to:
https://bugs.launchpad.net/juju-core/+bug/1570657/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1588403] Re: Tab completion missing in Juju 2.0 betas

2016-06-20 Thread JuanJo Ciarlante
See https://github.com/juju/juju/pull/5057 for *beta9*
and above (ie "applications" instead of "services");
you can try it with:


sudo -i # become root

rm /etc/bash_completion.d/juju2
wget -O /etc/bash_completion.d/juju-2.0 \
  https://raw.githubusercontent.com/jjo/juju/master/etc/bash_completion.d/juju-2.0
wget -O /etc/bash_completion.d/juju-version \
  https://raw.githubusercontent.com/jjo/juju/master/etc/bash_completion.d/juju-version

. /etc/bash_completion

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1588403

Title:
  Tab completion missing in Juju 2.0 betas

To manage notifications about this bug go to:
https://bugs.launchpad.net/juju-core/+bug/1588403/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1570657] Re: Bash completion needed for versioned juju commands

2016-06-20 Thread JuanJo Ciarlante
See updated https://github.com/juju/juju/pull/5057; in short it adds
the two files marked (added) below to /etc/bash_completion.d/
(sort order == load order):

juju-2.0 (added)
juju-core (existing, from juju1)
juju-version (added)

so that:

* juju-2.0: completion for `juju-2.0`, but also plain `juju`
  (ie self-contained)
* juju-core: overwrites completion for `juju`
* juju-version: overwrites completion for `juju`, with runtime
  logic to use v1 xor v2.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1570657

Title:
  Bash completion needed for versioned juju commands

To manage notifications about this bug go to:
https://bugs.launchpad.net/juju-core/+bug/1570657/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1567895] Re: openstack volume create does not support --snapshot

2016-06-05 Thread JuanJo Ciarlante
** Also affects: python-openstackclient (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1567895

Title:
  openstack volume create does not support --snapshot

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-openstackclient/+bug/1567895/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1518430] Re: liberty: ~busy loop on epoll_wait being called with zero timeout

2016-04-25 Thread JuanJo Ciarlante
For experimentation purposes / to measure the difference against what
would be better-behaved epoll_wait usage, I created:
https://github.com/jjo/dl-hack-lp1518430
, which hooks epoll_wait() and select() (via LD_PRELOAD) to limit
the rate of calls with zero timeout.

It worked for me on an experimental stack: on one node hosting all the
OpenStack API LXCs plus nova-compute, I measured a 700K/s -> 120K/s
reduction in epoll_wait calls (as shown by: sysdig -c topscalls).
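
A usage sketch (the .so path/name here is illustrative; check the repo
for the actual build and run steps):

# preload the hook library into a single service to compare behavior
LD_PRELOAD=/usr/local/lib/dl-hack-lp1518430.so /usr/bin/nova-compute &
# then measure the syscall rate with and without the preload
sudo sysdig -c topscalls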

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1518430

Title:
  liberty: ~busy loop on epoll_wait being called with zero timeout

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1518430/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1543046] Re: thermald spamming kernel log when updating powercap RAPL powerlimit

2016-04-11 Thread JuanJo Ciarlante
FYI I'm running trusty with linux-generic-lts-xenial 4.4.0.18.10, and
was getting the same thermald spamming until I manually upgraded to
the package as per comment #8 above:
- upgraded thermald:amd64 1.4.3-5~14.04.2 -> 1.4.3-5~14.04.3
After that, no more kern.log spamming. Thanks!

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1543046

Title:
  thermald spamming kernel log when updating powercap RAPL  powerlimit

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/thermald/+bug/1543046/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1531963] Re: [SRU] trusty/icehouse neutron-plugin-openvswitch-agent: lvm.tun_ofports.remove crashes with KeyError

2016-03-02 Thread JuanJo Ciarlante
Chris: confirming this bug was indeed most likely fixed by
1:2014.1.5-0ubuntu3, as there have been no further alerts for missing
tun_ids since it got installed 1 week ago (recall we had been getting
several of those per week).

Thanks! :) --J

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to neutron in Ubuntu.
https://bugs.launchpad.net/bugs/1531963

Title:
  [SRU] trusty/icehouse neutron-plugin-openvswitch-agent:
  lvm.tun_ofports.remove crashes with KeyError

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/neutron/+bug/1531963/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1531963] Re: [SRU] trusty/icehouse neutron-plugin-openvswitch-agent: lvm.tun_ofports.remove crashes with KeyError

2016-02-24 Thread JuanJo Ciarlante
Thanks Chris for the updates - FYI we've upgraded all of our compute nodes
1:2014.1.5-0ubuntu3 from proposed, no (extra)issues so far after some hours,
FYI this stack has ~30 nodes, ~1k+ active instances.

We expect this change to (obviously) stop those KeyError messages at log,
and likely also stop nodes from missing tun_ids - FYI we regularly get alerted
for the latter  (~several times a week), I'll add an update next week on how
it went.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1531963

Title:
  [SRU] trusty/icehouse neutron-plugin-openvswitch-agent:
  lvm.tun_ofports.remove crashes with KeyError

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/neutron/+bug/1531963/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1531963] [NEW] trusty/icehouse neutron-plugin-openvswitch-agent: lvm.tun_ofports.remove crashes with KeyError

2016-01-07 Thread JuanJo Ciarlante
Public bug reported:

Filing this on the ubuntu/neutron package, as neutron itself is EOL'd
for Icehouse.

FYI this is a non-HA icehouse/trusty deploy using the server team's
juju charms.

On one of our production environments with a rather high rate of API
calls (especially for transient VMs from CI), we frequently get neutron
OVS breakage on compute nodes¹, which we've been able to more or less
correlate with errors like the following in
/var/log/neutron/openvswitch-agent.log:

2016-01-07 06:33:48.917 18357 TRACE neutron.openstack.common.rpc.amqp   File "/usr/lib/python2.7/dist-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py", line 399, in _del_fdb_flow
2016-01-07 06:33:48.917 18357 TRACE neutron.openstack.common.rpc.amqp lvm.tun_ofports.remove(ofport)
2016-01-07 06:33:48.917 18357 TRACE neutron.openstack.common.rpc.amqp KeyError: '13'

Detailed log: http://paste.ubuntu.com/14431656/ - note the same time of
occurrence on the 3 different compute nodes shown there.

¹ What we then observe is missing tun_ids from:
  ovs-ofctl dump-flows br-tun
ie the provider:segmentation_id is not present at the compute node for
a VM on a neutron network that has it.

AFAICS this has been fixed upstream (lp#1421105):
https://git.openstack.org/cgit/openstack/neutron/commit/?id=841b2f58f375df53b380cf5796bb31c82cd09260
Please consider backporting it to Icehouse; it's a pretty trivial fix.

** Affects: neutron (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: canonical-bootstack

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1531963

Title:
  trusty/icehouse neutron-plugin-openvswitch-agent:
  lvm.tun_ofports.remove crashes with KeyError

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/neutron/+bug/1531963/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1460164] Re: upgrade of openvswitch-switch can sometimes break neutron-plugin-openvswitch-agent

2015-12-17 Thread JuanJo Ciarlante
FYI today's openvswitch-switch upgrade triggered a cluster-wide outage
on one (or more) of our production openstacks.

** Tags added: canonical-bootstack

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1460164

Title:
  upgrade of openvswitch-switch can sometimes break neutron-plugin-
  openvswitch-agent

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/neutron/+bug/1460164/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1521279] Re: check_haproxy.sh is broken for openstack liberty's haproxy 1.5.14

2015-11-30 Thread JuanJo Ciarlante
** Also affects: keystone (Juju Charms Collection)
   Importance: Undecided
   Status: New

** Also affects: cinder (Juju Charms Collection)
   Importance: Undecided
   Status: New

** Also affects: glance (Ubuntu)
   Importance: Undecided
   Status: New

** No longer affects: glance (Ubuntu)

** Also affects: glance (Juju Charms Collection)
   Importance: Undecided
   Status: New

** Also affects: openstack-dashboard (Juju Charms Collection)
   Importance: Undecided
   Status: New

** Also affects: nova-cloud-controller (Juju Charms Collection)
   Importance: Undecided
   Status: New

** Also affects: neutron-api (Juju Charms Collection)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to glance in Ubuntu.
https://bugs.launchpad.net/bugs/1521279

Title:
  check_haproxy.sh is broken for openstack liberty's haproxy 1.5.14

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-helpers/+bug/1521279/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1518430] [NEW] liberty: ~busy loop on epoll_wait being called with zero timeout

2015-11-20 Thread JuanJo Ciarlante
Public bug reported:

Context: openstack juju/maas deploy using the 1510 charms release
on trusty, with:
  openstack-origin: "cloud:trusty-liberty"
  source: "cloud:trusty-updates/liberty"

* Several openstack nova- and neutron- services, at least:
nova-compute, neutron-server, nova-conductor,
neutron-openvswitch-agent, neutron-vpn-agent
show near-busy looping on epoll_wait() calls, most frequently
with a zero timeout.
- nova-compute (chosen because it's single-process) strace and ltrace captures:
  http://paste.ubuntu.com/13371248/ (ltrace, strace)

As a comparison, this is how it looks on a kilo deploy:
- http://paste.ubuntu.com/13371635/

* 'top' sample from a nova-cloud-controller unit on
  this completely idle stack:
  http://paste.ubuntu.com/13371809/

FYI we're *not* seeing this behavior on keystone, glance, cinder,
ceilometer-api.

As this issue is present in several components, it likely comes
from common libraries (oslo concurrency?); FYI I filed the bug against
nova itself as a starting point for debugging.
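
A quick way to quantify the busy-looping (a sketch; <PID> is the
service's pid, and the awk patterns assume default strace output, where
the 4th epoll_wait argument is the timeout):

sudo timeout 10 strace -q -f -e trace=epoll_wait -p <PID> 2>&1 \
  | awk '/epoll_wait/ {total++} /, 0\) += / {zero++}
         END {print total, "epoll_wait calls in 10s,", zero, "with zero timeout"}'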

** Affects: nova (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: canonical-bootstack

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1518430

Title:
  liberty: ~busy loop on epoll_wait being called with zero timeout

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1518430/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1511495] [NEW] lxc 1.0.7-0ubuntu0.9 has buggy apparmor profiles

2015-10-29 Thread JuanJo Ciarlante
Public bug reported:

lxc packages:
 *** 1.0.7-0ubuntu0.9 0
500 http://archive.ubuntu.com//ubuntu/ trusty-updates/main amd64 Packages

lxc apparmor profiles loading fails with:
root@host:~# apparmor_parser -r /etc/apparmor.d/lxc/lxc-default-with-mounting
Found reference to variable PROC, but is never declared
root@host:~# echo $?
1
root@host:~# aa-enforce /etc/apparmor.d/lxc/lxc-default-with-mounting
[...]
raise apparmor.AppArmorException(cmd_info[1])
apparmor.common.AppArmorException: 'Found reference to variable PROC, but is never declared\n'

FYI adding '#include '  fixes it.

** Affects: lxc (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: canonical-bootstack

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1511495

Title:
  lxc 1.0.7-0ubuntu0.9 has buggy apparmor profiles

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1511495/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1511495] Re: lxc 1.0.7-0ubuntu0.9 has buggy apparmor profiles

2015-10-29 Thread JuanJo Ciarlante
** Changed in: lxc (Ubuntu)
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to lxc in Ubuntu.
https://bugs.launchpad.net/bugs/1511495

Title:
  lxc 1.0.7-0ubuntu0.9 has buggy apparmor profiles

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1511495/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1497812] Re: i40e bug: non physical MAC outbound frames appear as copied back inbound (mirrored)

2015-09-30 Thread JuanJo Ciarlante
w000T! \o/ Using @jsalisbury's kernel from comment #7
(3.19.0-30-generic #33~lp1497812),
I can't reproduce the failing behavior under the same host + setup:
- no mirrored frames or related dmesg entries
- container networking ok

Comparison between stock vivid
3.19.0-30-generic #33~14.04.1-Ubuntu and the above:
- http://paste.ubuntu.com/12627042/

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1497812

Title:
  i40e bug: non physical MAC outbound frames appear as copied back
  inbound  (mirrored)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1497812/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1500981] Re: juju-db segfault while syncing with replicas

2015-09-30 Thread JuanJo Ciarlante
@gz: I got this at our staging environment, where we re-deploy
HA'd juju + openstacks several times a week (or day); it's the 1st time
I've positively observed this behavior, so I'd guess it's unfortunately
a subtle race condition or similar.

I did save /var/lib/juju/db/, /var/log/syslog and /var/log/juju/machine-?.log
from the 3 state servers, and can provide them.

** Changed in: juju-core
   Status: Incomplete => New

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to juju-mongodb in Ubuntu.
https://bugs.launchpad.net/bugs/1500981

Title:
  juju-db segfault while syncing with replicas

To manage notifications about this bug go to:
https://bugs.launchpad.net/juju-core/+bug/1500981/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs



[Bug 1497812] Re: i40e bug: non physical MAC outbound frames appear as copied back inbound (mirrored)

2015-09-29 Thread JuanJo Ciarlante
Confirming I'm _not_ observing the reported issue on an
equivalent setup with LXC frames hitting the phy interfaces
(bridged towards br0 -> bond0 -> {eth3, eth4}):

* linux 4.2.0-12-generic #14~14.04.1-Ubuntu (from canonical-kernel-team/ppa)
* i40e version 1.3.4-k

# ethtool -i eth3
driver: i40e
version: 1.3.4-k
firmware-version: f4.33.31377 a1.2 n4.41 e1863

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1497812

Title:
  i40e bug: non physical MAC outbound frames appear as copied back
  inbound  (mirrored)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1497812/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1497812] Re: i40e bug: non physical MAC outbound frames appear as copied back inbound (mirrored)

2015-09-21 Thread JuanJo Ciarlante
FYI we found these issues while deploying openstack via juju/maas
over a pool of 8 nodes having 4x i40e NICs, where we also found
linux-hwe-generic-trusty (lts-utopic) to be unreliable due to its old
i40e driver (0.4.10-k).

Below is a summary of our i40e findings using lts-vivid and lts-utopic,
re: successfully completed deploys:

#1 3.19.0-28-generic w/stock 1.2.2-k: non-phy mirrored frames (this bug)
#2 3.16.0-49-generic w/stock 0.4.10-k: unreliable deploys
#3 3.19.0-28-generic w/built 2.2.48: OK
#4 3.16.0-49-generic w/built 2.2.48: OK

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1497812

Title:
  i40e bug: non physical MAC outbound frames appear as copied back
  inbound  (mirrored)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux-lts-vivid/+bug/1497812/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1497812] Re: i40e bug: non physical MAC outbound frames appear as copied back inbound (mirrored)

2015-09-21 Thread JuanJo Ciarlante
ERRATA for comment #2: the OK i40e driver version is 1.2.48,
as per the original report URL.

Comment #2 table is actually:

#1 3.19.0-28-generic w/stock 1.2.2-k: non-phy mirrored frames (this bug)
#2 3.16.0-49-generic w/stock 0.4.10-k: unreliable deploys
#3 3.19.0-28-generic w/built 1.2.48: OK  (*)
#4 3.16.0-49-generic w/built 1.2.48: OK  (*)

(*) corrected to be 1.2.48

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1497812

Title:
  i40e bug: non physical MAC outbound frames appear as copied back
  inbound  (mirrored)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux-lts-vivid/+bug/1497812/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1497812] [NEW] i40e bug: non physical MAC outbound frames appear as copied back inbound (mirrored)

2015-09-20 Thread JuanJo Ciarlante
Public bug reported:

Using 3.19.0-28-generic #30~14.04.1-Ubuntu with the stock i40e
driver (version 1.2.2-k) makes every outbound frame from a 'non
physical' MAC appear as copied back on input, as if the switch were
doing frame 'mirroring' (and/or hair-pinning).

FYI the same setup with i40e upgraded to 1.2.48 from
http://downloadmirror.intel.com/25282/eng/i40e-1.2.48.tar.gz
behaves OK; FYI also, for debugging we set up port mirroring at the
switch towards a different physical port, and didn't observe these
frames to be physically present on the wire.

See tcpdump -P in/out and more details at
http://paste.ubuntu.com/12511680/
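
A capture sketch of the symptom (interface and MAC are illustrative;
-P in/out as in the paste above):

tcpdump -e -n -P out -i eth3 ether src aa:bb:cc:dd:ee:ff
tcpdump -e -n -P in  -i eth3 ether src aa:bb:cc:dd:ee:ff
# with the bug present, the same outbound frames show up again
# in the second, inbound capture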

ProblemType: Bug
DistroRelease: Ubuntu 14.04
Package: linux-image-3.19.0-28-generic 3.19.0-28.30~14.04.1
ProcVersionSignature: Ubuntu 3.19.0-28.30~14.04.1-generic 3.19.8-ckt5
Uname: Linux 3.19.0-28-generic x86_64
ApportVersion: 2.14.1-0ubuntu3.13
Architecture: amd64
Date: Mon Sep 21 02:05:28 2015
ProcEnviron:
 TERM=screen
 PATH=(custom, no user)
 LANG=en_US.UTF-8
 SHELL=/bin/bash
SourcePackage: linux-lts-vivid
UpgradeStatus: No upgrade log present (probably fresh install)

** Affects: linux-lts-vivid (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: amd64 apport-bug canonical-bootstack trusty uec-images

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1497812

Title:
  i40e bug: non physical MAC outbound frames appear as copied back
  inbound  (mirrored)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux-lts-vivid/+bug/1497812/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1490727] Re: "Invalid IPC credentials" after corosync, pacemaker service restarts

2015-09-04 Thread JuanJo Ciarlante
Thanks for the quick turnaround, could you please backport the fix
to 1507 trunk ?
We have several stacks where we need to manually apply
above workaround for corosync/pacemaker to behave properly,
and several coming down the line before 1510.

FYI I while fixing hacluster trunk (essentially came out with the same
changes), had to add a line to test_hacluster_utils.py to pass unittests:
http://paste.ubuntu.com/12272628/

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to pacemaker in Ubuntu.
https://bugs.launchpad.net/bugs/1490727

Title:
  "Invalid IPC credentials" after corosync, pacemaker service restarts

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/pacemaker/+bug/1490727/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs



[Bug 1439649] Re: Pacemaker unable to communicate with corosync on restart under lxc

2015-08-31 Thread JuanJo Ciarlante
After trying several corosync/pacemaker restarts without luck,
I was able to work around this by adding an 'uidgid'
entry for hacluster:haclient:

* from /var/log/syslog:
Aug 31 18:33:18 juju-machine-3-lxc-3 corosync[901082]:  [MAIN  ] Denied connection attempt from 108:113
$ getent passwd 108
hacluster:x:108:113::/var/lib/heartbeat:/bin/false
$ getent group 113
haclient:x:113:

* add uidgid config:
# echo $'uidgid {\n  uid: hacluster\n  gid: haclient\n}' > /etc/corosync/uidgid.d/hacluster
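
ie the resulting /etc/corosync/uidgid.d/hacluster reads:

uidgid {
  uid: hacluster
  gid: haclient
}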

* restart => Ok (crm status, etc)

I can't explain why other units are working ok without
this ACL addition (a race at service setup/start?).

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to lxc in Ubuntu.
https://bugs.launchpad.net/bugs/1439649

Title:
  Pacemaker unable to communicate with corosync on restart under lxc

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1439649/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1439649] Re: Pacemaker unable to communicate with corosync on restart under lxc

2015-08-31 Thread JuanJo Ciarlante
FYI to re-check the workaround (then a possible actual fix), I kicked
corosync+pacemaker on the cinder and glance services deployed with juju:

$ juju run --service=cinder,glance "service corosync restart; service pacemaker restart"

, which broke pacemaker start on all of them, with the same "Invalid IPC credentials":
http://paste.ubuntu.com/12240477/ , then obviously failing 'crm status' etc.

Fixing using the comment #14 workaround:
$ juju run --service=glance,cinder "echo -e 'uidgid {\n uid: hacluster\n gid: haclient\n}' > /etc/corosync/uidgid.d/hacluster; service corosync restart; service pacemaker restart"

$ juju run --service=glance,cinder "crm status"
=> Ok

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to lxc in Ubuntu.
https://bugs.launchpad.net/bugs/1439649

Title:
  Pacemaker unable to communicate with corosync on restart under lxc

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1439649/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1439649] Re: Pacemaker unable to communicate with corosync on restart under lxc

2015-08-31 Thread JuanJo Ciarlante
After trying several corosync/pacemaker restarts without luck,
I was able to workaround this by adding an 'uidgid'
entry for hacluster:haclient:

* from /var/log/syslog:
Aug 31 18:33:18 juju-machine-3-lxc-3 corosync[901082]:  [MAIN  ] Denied 
connection attempt from 108:113
$ getent passwd 108
hacluster:x:108:113::/var/lib/heartbeat:/bin/false
$ getent group 113
haclient:x:113:

* add uidgid config:
# echo $'uidgid {\n  uid: hacluster\n  gid: haclient\n}' > /etc/corosync/uidgid.d/hacluster
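
For reference, the resulting file content is:
uidgid {
  uid: hacluster
  gid: haclient
}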

* restart => Ok (crm status, etc)

I can't explain why other units are working ok without
this ACL addition (a race at service setup/start?).

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1439649

Title:
  Pacemaker unable to communicate with corosync on restart under lxc

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1439649/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1439649] Re: Pacemaker unable to communicate with corosync on restart under lxc

2015-08-31 Thread JuanJo Ciarlante
FYI, to re-check the workaround (and then a possible actual fix), I restarted
corosync+pacemaker on the cinder and glance services deployed with juju:

$ juju run --service=cinder,glance  "service corosync restart; service
pacemaker restart"

, which broke pacemaker startup on all of them, with the same "Invalid IPC
credentials" error:
http://paste.ubuntu.com/12240477/ , then obviously 'crm status' etc. failed as well.

Fixed using the workaround from comment #14:
$ juju run --service=glance,cinder "echo -e 'uidgid {\n uid: hacluster\n gid: haclient\n}' > /etc/corosync/uidgid.d/hacluster; service corosync restart; service pacemaker restart"

$ juju run --service=glance,cinder "crm status"
=> Ok

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1439649

Title:
  Pacemaker unable to communicate with corosync on restart under lxc

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1439649/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1487190] Re: nicstat fails on more than ~30 interfaces

2015-08-20 Thread JuanJo Ciarlante
Fixed by
https://github.com/jjo/nicstat/commit/3c2407da66c2fd2914e7f362f41f729cc21ff1e4;
see the strace comparison (stock vs. compiled with the above) on a host
with ~270 interfaces:
http://paste.ubuntu.com/12137566/

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1487190

Title:
  nicstat fails on more than ~30 interfaces

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nicstat/+bug/1487190/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1487190] [NEW] nicstat fails on more than ~30 interfaces

2015-08-20 Thread JuanJo Ciarlante
Public bug reported:

nicstat wrongly assumes that a single read from /proc/net/dev
will return its entire content (even when using a fairly large buffer, 128K).
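
A quick way to see the short-read behavior on a host with many interfaces
(a sketch, not from the report; the two byte counts differ when a single
128K read comes up short):
 dd if=/proc/net/dev bs=131072 count=1 2>/dev/null | wc -c
 wc -c < /proc/net/dev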

** Affects: nicstat (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: canonical-bootstack

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1487190

Title:
  nicstat fails on more than ~30 interfaces

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nicstat/+bug/1487190/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1350947] Re: apparmor: no working rule to allow making a mount private

2015-08-19 Thread JuanJo Ciarlante
FYI I'm able to successfully drive netns inside LXC, manually and then also
via openstack neutron-gateways, using this crafted apparmor profile:
/etc/apparmor.d/lxc/lxc-default-with-netns -
https://gist.github.com/jjo/ff32b08e48e4a52bfc36

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1350947

Title:
  apparmor: no working rule to allow making a mount private

To manage notifications about this bug go to:
https://bugs.launchpad.net/apparmor/+bug/1350947/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1350947] Re: apparmor: no working rule to allow making a mount private

2015-08-19 Thread JuanJo Ciarlante
FYI I'm able to successfully drive netns inside LXC, manually and then also
via openstack neutron-gateways, using this crafted apparmor profile:
/etc/apparmor.d/lxc/lxc-default-with-netns -
https://gist.github.com/jjo/ff32b08e48e4a52bfc36

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to lxc in Ubuntu.
https://bugs.launchpad.net/bugs/1350947

Title:
  apparmor: no working rule to allow making a mount private

To manage notifications about this bug go to:
https://bugs.launchpad.net/apparmor/+bug/1350947/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1476428] [NEW] haproxy service handling failing to stop old instances

2015-07-20 Thread JuanJo Ciarlante
Public bug reported:

On an openstack HA kilo deployment using charm trunks,
several services are failing to properly restart haproxy, leaving
old instances running; cinder/0 shown as an example:

$ juju ssh cinder/0 'pgrep -f haproxy | xargs ps -o pid,ppid,lstart,cmd -p; egrep St.*ing.haproxy /var/log/juju/unit-cinder-0.log'
PIDPPID  STARTED CMD
  13913   1 Sun Jul 19 01:02:11 2015 /usr/sbin/haproxy -f 
/etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid
  14448   1 Sun Jul 19 01:04:59 2015 /usr/sbin/haproxy -f 
/etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid
  14848   1 Sun Jul 19 01:05:03 2015 /usr/sbin/haproxy -f 
/etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid
  21437   1 Sun Jul 19 01:07:00 2015 /usr/sbin/haproxy -f 
/etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid
  27656   1 Sun Jul 19 01:08:33 2015 /usr/sbin/haproxy -f 
/etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid
  32073   1 Sun Jul 19 01:09:27 2015 /usr/sbin/haproxy -f 
/etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid
  39752   1 Sun Jul 19 01:10:59 2015 /usr/sbin/haproxy -f 
/etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid
  39829   1 Sun Jul 19 01:11:00 2015 /usr/sbin/haproxy -f 
/etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid
  44200   1 Sun Jul 19 01:18:57 2015 /usr/sbin/haproxy -f 
/etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid
 299019  299018 Mon Jul 20 23:38:30 2015 bash -c pgrep -f haproxy | xargs ps -o 
pid,ppid,lstart,cmd -p; egrep St.*ing.haproxy /var/log/juju/unit-cinder-0.log
2015-07-19 01:02:14 INFO config-changed  * Stopping haproxy haproxy
2015-07-19 01:02:15 INFO config-changed  * Starting haproxy haproxy
2015-07-19 01:05:03 INFO cluster-relation-changed  * Stopping haproxy haproxy
2015-07-19 01:05:03 INFO cluster-relation-changed  * Starting haproxy haproxy
2015-07-19 01:05:07 INFO cluster-relation-changed  * Stopping haproxy haproxy
2015-07-19 01:05:07 INFO cluster-relation-changed  * Starting haproxy haproxy
2015-07-19 01:11:03 INFO identity-service-relation-changed  * Stopping haproxy 
haproxy
2015-07-19 01:11:03 INFO identity-service-relation-changed  * Starting haproxy 
haproxy
(copied also to http://paste.ubuntu.com/11911818/)

As shown above, new haproxy instances ~correlate with the
Starting/Stopping lines in the juju log, as expected.

FYI the same issue was found for (HA) services using haproxy; positively confirmed on:
nova-cloud-controller, keystone, glance, cinder,
openstack-dashboard, swift-proxy, ceilometer
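
A blunt manual cleanup for an affected unit (a sketch, not charm code):
kill the leftover daemons, then start a single fresh instance:
 sudo pkill -f '^/usr/sbin/haproxy' ; sudo service haproxy start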

** Affects: ceilometer (Juju Charms Collection)
 Importance: Undecided
 Status: New

** Affects: cinder (Juju Charms Collection)
 Importance: Undecided
 Status: New

** Affects: glance (Juju Charms Collection)
 Importance: Undecided
 Status: New

** Affects: keystone (Juju Charms Collection)
 Importance: Undecided
 Status: New

** Affects: neutron-api (Juju Charms Collection)
 Importance: Undecided
 Status: New

** Affects: nova-cloud-controller (Juju Charms Collection)
 Importance: Undecided
 Status: New

** Affects: openstack-dashboard (Juju Charms Collection)
 Importance: Undecided
 Status: New

** Affects: swift-proxy (Juju Charms Collection)
 Importance: Undecided
 Status: New


** Tags: canonical-bootstack canonical-is

** Also affects: keystone (Juju Charms Collection)
   Importance: Undecided
   Status: New

** Also affects: cinder (Juju Charms Collection)
   Importance: Undecided
   Status: New

** Also affects: glance (Ubuntu)
   Importance: Undecided
   Status: New

** No longer affects: glance (Ubuntu)

** Also affects: glance (Juju Charms Collection)
   Importance: Undecided
   Status: New

** Also affects: openstack-dashboard (Juju Charms Collection)
   Importance: Undecided
   Status: New

** Also affects: neutron-api (Juju Charms Collection)
   Importance: Undecided
   Status: New

** Also affects: swift-proxy (Juju Charms Collection)
   Importance: Undecided
   Status: New

** Also affects: ceilometer (Juju Charms Collection)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to glance in Ubuntu.
https://bugs.launchpad.net/bugs/1476428

Title:
  haproxy service handling failing to stop old instances

To manage notifications about this bug go to:
https://bugs.launchpad.net/charms/+source/ceilometer/+bug/1476428/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1476428] [NEW] haproxy service handling failing to stop old instances

2015-07-20 Thread JuanJo Ciarlante
Public bug reported:

On an openstack HA kilo deployment using charm trunks,
several services are failing to properly restart haproxy, leaving
old instances running; cinder/0 shown as an example:

$ juju ssh cinder/0 'pgrep -f haproxy | xargs ps -o pid,ppid,lstart,cmd -p; egrep St.*ing.haproxy /var/log/juju/unit-cinder-0.log'
PIDPPID  STARTED CMD
  13913   1 Sun Jul 19 01:02:11 2015 /usr/sbin/haproxy -f 
/etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid
  14448   1 Sun Jul 19 01:04:59 2015 /usr/sbin/haproxy -f 
/etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid
  14848   1 Sun Jul 19 01:05:03 2015 /usr/sbin/haproxy -f 
/etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid
  21437   1 Sun Jul 19 01:07:00 2015 /usr/sbin/haproxy -f 
/etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid
  27656   1 Sun Jul 19 01:08:33 2015 /usr/sbin/haproxy -f 
/etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid
  32073   1 Sun Jul 19 01:09:27 2015 /usr/sbin/haproxy -f 
/etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid
  39752   1 Sun Jul 19 01:10:59 2015 /usr/sbin/haproxy -f 
/etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid
  39829   1 Sun Jul 19 01:11:00 2015 /usr/sbin/haproxy -f 
/etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid
  44200   1 Sun Jul 19 01:18:57 2015 /usr/sbin/haproxy -f 
/etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid
 299019  299018 Mon Jul 20 23:38:30 2015 bash -c pgrep -f haproxy | xargs ps -o 
pid,ppid,lstart,cmd -p; egrep St.*ing.haproxy /var/log/juju/unit-cinder-0.log
2015-07-19 01:02:14 INFO config-changed  * Stopping haproxy haproxy
2015-07-19 01:02:15 INFO config-changed  * Starting haproxy haproxy
2015-07-19 01:05:03 INFO cluster-relation-changed  * Stopping haproxy haproxy
2015-07-19 01:05:03 INFO cluster-relation-changed  * Starting haproxy haproxy
2015-07-19 01:05:07 INFO cluster-relation-changed  * Stopping haproxy haproxy
2015-07-19 01:05:07 INFO cluster-relation-changed  * Starting haproxy haproxy
2015-07-19 01:11:03 INFO identity-service-relation-changed  * Stopping haproxy 
haproxy
2015-07-19 01:11:03 INFO identity-service-relation-changed  * Starting haproxy 
haproxy
(copied also to http://paste.ubuntu.com/11911818/)

As shown above, new haproxy instances ~correlate with the
Starting/Stopping lines in the juju log, as expected.

FYI the same issue was found for (HA) services using haproxy; positively confirmed on:
nova-cloud-controller, keystone, glance, cinder,
openstack-dashboard, swift-proxy, ceilometer
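
A blunt manual cleanup for an affected unit (a sketch, not charm code):
kill the leftover daemons, then start a single fresh instance:
 sudo pkill -f '^/usr/sbin/haproxy' ; sudo service haproxy start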

** Affects: ceilometer (Juju Charms Collection)
 Importance: Undecided
 Status: New

** Affects: cinder (Juju Charms Collection)
 Importance: Undecided
 Status: New

** Affects: glance (Juju Charms Collection)
 Importance: Undecided
 Status: New

** Affects: keystone (Juju Charms Collection)
 Importance: Undecided
 Status: New

** Affects: neutron-api (Juju Charms Collection)
 Importance: Undecided
 Status: New

** Affects: nova-cloud-controller (Juju Charms Collection)
 Importance: Undecided
 Status: New

** Affects: openstack-dashboard (Juju Charms Collection)
 Importance: Undecided
 Status: New

** Affects: swift-proxy (Juju Charms Collection)
 Importance: Undecided
 Status: New


** Tags: canonical-bootstack canonical-is

** Also affects: keystone (Juju Charms Collection)
   Importance: Undecided
   Status: New

** Also affects: cinder (Juju Charms Collection)
   Importance: Undecided
   Status: New

** Also affects: glance (Ubuntu)
   Importance: Undecided
   Status: New

** No longer affects: glance (Ubuntu)

** Also affects: glance (Juju Charms Collection)
   Importance: Undecided
   Status: New

** Also affects: openstack-dashboard (Juju Charms Collection)
   Importance: Undecided
   Status: New

** Also affects: neutron-api (Juju Charms Collection)
   Importance: Undecided
   Status: New

** Also affects: swift-proxy (Juju Charms Collection)
   Importance: Undecided
   Status: New

** Also affects: ceilometer (Juju Charms Collection)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1476428

Title:
  haproxy service handling failing to stop old instances

To manage notifications about this bug go to:
https://bugs.launchpad.net/charms/+source/ceilometer/+bug/1476428/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1356392] Re: lacks sw raid1 install support

2015-07-09 Thread JuanJo Ciarlante
With maas 1.9 deprecating d-i, what option is left for maas swraid installs?
Please consider re-prioritizing.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1356392

Title:
  lacks sw raid1 install support

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/curtin/+bug/1356392/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1356392] Re: lacks sw raid1 install support

2015-07-09 Thread JuanJo Ciarlante
With maas 1.9 deprecating d-i, what option is left for maas swraid installs?
Please consider re-prioritizing.

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to curtin in Ubuntu.
https://bugs.launchpad.net/bugs/1356392

Title:
  lacks sw raid1 install support

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/curtin/+bug/1356392/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1462466] [NEW] bcache-tools udev rules on trusty lacking util-linux 2.24+ workaround

2015-06-05 Thread JuanJo Ciarlante
Public bug reported:

We're using bcache under trusty HWE kernel  (3.16.0-38-generic)
with bcache-tools 1.0.7-0ubuntu1 (built from src).

As trusty has util-linux 2.20.1, the udev rules for auto-registering
bcache devices are skipped:

 # blkid was run by the standard udev rules
 # It recognised bcache (util-linux 2.24+)
 ENV{ID_FS_TYPE}=="bcache", GOTO="bcache_backing_found"

We're manually cowboy'ing the following line alongside the above:
 KERNEL=="nvme*", GOTO="bcache_backing_found"

, but it would be great if something like trying bcache-register
on non-rotational devices were added to the udev rules:

 ENV{DEVTYPE}=="disk", ATTR{queue/rotational}=="0", GOTO="bcache_backing_found"
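
A sketch of how that could sit in bcache-tools' shipped rules (filename
and label assumed from the package's 69-bcache.rules):

 # /lib/udev/rules.d/69-bcache.rules (excerpt, sketch)
 ENV{ID_FS_TYPE}=="bcache", GOTO="bcache_backing_found"
 # fallback for util-linux < 2.24: also try non-rotational disks
 ENV{DEVTYPE}=="disk", ATTR{queue/rotational}=="0", GOTO="bcache_backing_found"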

** Affects: bcache-tools (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: canonical-bootstack

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to bcache-tools in Ubuntu.
https://bugs.launchpad.net/bugs/1462466

Title:
  bcache-tools udev rules on trusty lacking util-linux 2.24+ workaround

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/bcache-tools/+bug/1462466/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1462466] [NEW] bcache-tools udev rules on trusty lacking util-linux 2.24+ workaround

2015-06-05 Thread JuanJo Ciarlante
Public bug reported:

We're using bcache under trusty HWE kernel  (3.16.0-38-generic)
with bcache-tools 1.0.7-0ubuntu1 (built from src).

As trusty has util-linux 2.20.1, the udev rules for auto-registering
bcache devices are skipped:

 # blkid was run by the standard udev rules
 # It recognised bcache (util-linux 2.24+)
 ENV{ID_FS_TYPE}=="bcache", GOTO="bcache_backing_found"

We're manually cowboy'ing the following line alongside the above:
 KERNEL=="nvme*", GOTO="bcache_backing_found"

, but it would be great if something like trying bcache-register
on non-rotational devices were added to the udev rules:

 ENV{DEVTYPE}=="disk", ATTR{queue/rotational}=="0", GOTO="bcache_backing_found"
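
A sketch of how that could sit in bcache-tools' shipped rules (filename
and label assumed from the package's 69-bcache.rules):

 # /lib/udev/rules.d/69-bcache.rules (excerpt, sketch)
 ENV{ID_FS_TYPE}=="bcache", GOTO="bcache_backing_found"
 # fallback for util-linux < 2.24: also try non-rotational disks
 ENV{DEVTYPE}=="disk", ATTR{queue/rotational}=="0", GOTO="bcache_backing_found"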

** Affects: bcache-tools (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: canonical-bootstack

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1462466

Title:
  bcache-tools udev rules on trusty lacking util-linux 2.24+ workaround

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/bcache-tools/+bug/1462466/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1426589] Re: tc class statistics rates are all zero after upgrade to Trusty

2015-05-20 Thread JuanJo Ciarlante
** Changed in: linux (Ubuntu)
   Status: Incomplete => Confirmed

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1426589

Title:
  tc class statistics rates are all zero after upgrade to Trusty

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/iproute2/+bug/1426589/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1426589] Re: tc class statistics rates are all zero after upgrade to Trusty

2015-05-20 Thread JuanJo Ciarlante
** Changed in: linux (Ubuntu)
   Status: Incomplete => Confirmed

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to iproute2 in Ubuntu.
https://bugs.launchpad.net/bugs/1426589

Title:
  tc class statistics rates are all zero after upgrade to Trusty

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/iproute2/+bug/1426589/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1426589] Re: tc class statistics rates are all zero after upgrade to Trusty

2015-05-01 Thread JuanJo Ciarlante
With kernels from http://kernel.ubuntu.com/~kernel-ppa/mainline/,
I've narrowed down to:
* OK:  tc-class-stats.3.10.76-031076-generic.txt: rate 1600bit 2pps backlog 0b 0p requeues 0
* BAD: tc-class-stats.3.11.0-031100rc1-generic.txt: rate 0bit 0pps backlog 0b 0p requeues 0
* BAD: tc-class-stats.3.11.0-031100-generic.txt: rate 0bit 0pps backlog 0b 0p requeues 0

Will update the tags as per the above comment, thanks.


** Tags added: kernel-bug-exists-upstream kernel-bug-exists-upstream-3.11rc1 
kernel-bug-exists-upstream-4.1-rc1 kernel-fixed-upstream-3.10

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1426589

Title:
  tc class statistics rates are all zero after upgrade to Trusty

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/iproute2/+bug/1426589/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1426589] Re: tc class statistics rates are all zero after upgrade to Trusty

2015-05-01 Thread JuanJo Ciarlante
With kernels from http://kernel.ubuntu.com/~kernel-ppa/mainline/,
I've narrowed down to:
* OK:  tc-class-stats.3.10.76-031076-generic.txt: rate 1600bit 2pps backlog 0b 0p requeues 0
* BAD: tc-class-stats.3.11.0-031100rc1-generic.txt: rate 0bit 0pps backlog 0b 0p requeues 0
* BAD: tc-class-stats.3.11.0-031100-generic.txt: rate 0bit 0pps backlog 0b 0p requeues 0

Will update the tags as per the above comment, thanks.


** Tags added: kernel-bug-exists-upstream kernel-bug-exists-upstream-3.11rc1 
kernel-bug-exists-upstream-4.1-rc1 kernel-fixed-upstream-3.10

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to iproute2 in Ubuntu.
https://bugs.launchpad.net/bugs/1426589

Title:
  tc class statistics rates are all zero after upgrade to Trusty

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/iproute2/+bug/1426589/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1426589] Re: tc class statistics rates are all zero after upgrade to Trusty

2015-05-01 Thread JuanJo Ciarlante
As per comment #13, I've added the following tags:
* kernel-fixed-upstream-3.10 
* kernel-bug-exists-upstream kernel-bug-exists-upstream-3.11rc1 
kernel-bug-exists-upstream-4.1-rc1

Please correct them if I misunderstood the naming convention.

FYI my narrowed bisect corresponds to:
*** OK ***:
linux (3.10.76-031076.201504291035) saucy; urgency=low

  * Mainline build at commit: v3.10.76


*** BAD ***:
linux (3.11.0-031100rc1.201307141935) saucy; urgency=low

  * Mainline build at commit: v3.11-rc1

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to iproute2 in Ubuntu.
https://bugs.launchpad.net/bugs/1426589

Title:
  tc class statistics rates are all zero after upgrade to Trusty

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/iproute2/+bug/1426589/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1426589] Re: tc class statistics rates are all zero after upgrade to Trusty

2015-05-01 Thread JuanJo Ciarlante
As per comment #13, I've added the following tags:
* kernel-fixed-upstream-3.10 
* kernel-bug-exists-upstream kernel-bug-exists-upstream-3.11rc1 
kernel-bug-exists-upstream-4.1-rc1

Please correct them if I misunderstood the naming convention.

FYI my narrowed bisect corresponds to:
*** OK ***:
linux (3.10.76-031076.201504291035) saucy; urgency=low

  * Mainline build at commit: v3.10.76


*** BAD ***:
linux (3.11.0-031100rc1.201307141935) saucy; urgency=low

  * Mainline build at commit: v3.11-rc1

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1426589

Title:
  tc class statistics rates are all zero after upgrade to Trusty

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/iproute2/+bug/1426589/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1426589] Re: tc class statistics rates are all zero after upgrade to Trusty

2015-05-01 Thread JuanJo Ciarlante
FYI, peeking at patch-3.11-rc1 shows:

[...]
-   struct gnet_stats_rate_est  tcfc_rate_est;
+   struct gnet_stats_rate_est64tcfc_rate_est;

with its corresponding addition:

+ * struct gnet_stats_rate_est64 - rate estimator
+ * @bps: current byte rate
+ * @pps: current packet rate
+ */
+struct gnet_stats_rate_est64 {
+   __u64   bps;
+   __u64   pps;
+};


FYI, modding iproute2 3.19.0 to print EST64 vs EST(32) confirms
tc is using EST32:
http://paste.ubuntu.com/10963208/
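
A quick way to check that the exported userspace headers carry the 64-bit
estimator (a sketch; header path from linux-libc-dev assumed):
 grep -n gnet_stats_rate_est64 /usr/include/linux/gen_stats.h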

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to iproute2 in Ubuntu.
https://bugs.launchpad.net/bugs/1426589

Title:
  tc class statistics rates are all zero after upgrade to Trusty

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/iproute2/+bug/1426589/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1426589] Re: tc class statistics rates are all zero after upgrade to Trusty

2015-05-01 Thread JuanJo Ciarlante
FYI, peeking at patch-3.11-rc1 shows:

[...]
-   struct gnet_stats_rate_est  tcfc_rate_est;
+   struct gnet_stats_rate_est64tcfc_rate_est;

with its corresponding addition:

+ * struct gnet_stats_rate_est64 - rate estimator
+ * @bps: current byte rate
+ * @pps: current packet rate
+ */
+struct gnet_stats_rate_est64 {
+   __u64   bps;
+   __u64   pps;
+};


FYI, modding iproute2 3.19.0 to print EST64 vs EST(32) confirms
tc is using EST32:
http://paste.ubuntu.com/10963208/
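
A quick way to check that the exported userspace headers carry the 64-bit
estimator (a sketch; header path from linux-libc-dev assumed):
 grep -n gnet_stats_rate_est64 /usr/include/linux/gen_stats.h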

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1426589

Title:
  tc class statistics rates are all zero after upgrade to Trusty

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/iproute2/+bug/1426589/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1426589] Re: tc class statistics rates are all zero after upgrade to Trusty

2015-05-01 Thread JuanJo Ciarlante
@peanlvch: FYI, as per comment #4, I already tested v4.1-rc1-vivid, with the
same bad results.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1426589

Title:
  tc class statistics rates are all zero after upgrade to Trusty

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/iproute2/+bug/1426589/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1426589] Re: tc class statistics rates are all zero after upgrade to Trusty

2015-05-01 Thread JuanJo Ciarlante
@peanlvch: FYI, as per comment #4, I already tested v4.1-rc1-vivid, with the
same bad results.

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to iproute2 in Ubuntu.
https://bugs.launchpad.net/bugs/1426589

Title:
  tc class statistics rates are all zero after upgrade to Trusty

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/iproute2/+bug/1426589/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1426589] Re: tc class statistics rates are all zero after upgrade to Trusty

2015-04-28 Thread JuanJo Ciarlante
FYI tried iproute2-3.19.0, same zero rate output.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1426589

Title:
  tc class statistics rates are all zero after upgrade to Trusty

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/iproute2/+bug/1426589/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1426589] Re: tc class statistics rates are all zero after upgrade to Trusty

2015-04-28 Thread JuanJo Ciarlante
FYI there are several changes at
https://www.kernel.org/pub/linux/kernel/v3.0/ChangeLog-3.10.12
that refer to htb rate handling.

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to iproute2 in Ubuntu.
https://bugs.launchpad.net/bugs/1426589

Title:
  tc class statistics rates are all zero after upgrade to Trusty

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/iproute2/+bug/1426589/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1426589] Re: tc class statistics rates are all zero after upgrade to Trusty

2015-04-28 Thread JuanJo Ciarlante
FYI this has also been reported to Debian (kernel 3.16):
https://lists.debian.org/debian-kernel/2014/11/msg00288.html

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1426589

Title:
  tc class statistics rates are all zero after upgrade to Trusty

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/iproute2/+bug/1426589/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1426589] Re: tc class statistics rates are all zero after upgrade to Trusty

2015-04-28 Thread JuanJo Ciarlante
FYI this has also been reported to Debian (kernel 3.16):
https://lists.debian.org/debian-kernel/2014/11/msg00288.html

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to iproute2 in Ubuntu.
https://bugs.launchpad.net/bugs/1426589

Title:
  tc class statistics rates are all zero after upgrade to Trusty

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/iproute2/+bug/1426589/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1426589] Re: tc class statistics rates are all zero after upgrade to Trusty

2015-04-28 Thread JuanJo Ciarlante
FYI linux-image-4.1.0-040100rc1-generic_4.1.0-040100rc1.201504270235_i386.deb
(from ~kernel-ppa) failed the same way.

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to iproute2 in Ubuntu.
https://bugs.launchpad.net/bugs/1426589

Title:
  tc class statistics rates are all zero after upgrade to Trusty

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/iproute2/+bug/1426589/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1426589] Re: tc class statistics rates are all zero after upgrade to Trusty

2015-04-28 Thread JuanJo Ciarlante
FYI linux-image-4.1.0-040100rc1-generic_4.1.0-040100rc1.201504270235_i386.deb
(from ~kernel-ppa) failed the same way.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1426589

Title:
  tc class statistics rates are all zero after upgrade to Trusty

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/iproute2/+bug/1426589/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1426589] Re: tc class statistics rates are all zero after upgrade to Trusty

2015-04-28 Thread JuanJo Ciarlante
By installing different kernel versions (trusty, manual download and dpkg -i),
I narrowed this down to:
- linux-image-3.8.0-44-generic: OK
- linux-image-3.11.0-26-generic: BAD (zero rate counters).

FYI I used this script:
# cat htb.sh
/sbin/tc qdisc del dev eth0 root
/sbin/tc qdisc add dev eth0 root handle 1: htb default 30
/sbin/tc class add dev eth0 parent 1: classid 1:1 htb rate 1kbit burst 15k
/sbin/tc class add dev eth0 parent 1:1 classid 1:10 htb rate 2500kbit burst 15k
/sbin/tc filter add dev eth0 protocol ip parent 1: prio 1 u32 match ip src 0/0 flowid 1:10
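
The regression then shows in the per-class stats, which is what this bug
is about:
 /sbin/tc -s class show dev eth0
OK kernels report non-zero rate/pps after some traffic; BAD ones stay at
rate 0bit 0pps.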

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1426589

Title:
  tc class statistics rates are all zero after upgrade to Trusty

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/iproute2/+bug/1426589/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1426589] Re: tc class statistics rates are all zero after upgrade to Trusty

2015-04-28 Thread JuanJo Ciarlante
By installing different kernel versions (trusty, manual download and dpkg -i),
I narrowed this down to:
- linux-image-3.8.0-44-generic: OK
- linux-image-3.11.0-26-generic: BAD (zero rate counters).

FYI I used this script:
# cat htb.sh
/sbin/tc qdisc del dev eth0 root
/sbin/tc qdisc add dev eth0 root handle 1: htb default 30
/sbin/tc class add dev eth0 parent 1: classid 1:1 htb rate 1kbit burst 15k
/sbin/tc class add dev eth0 parent 1:1 classid 1:10 htb rate 2500kbit burst 15k
/sbin/tc filter add dev eth0 protocol ip parent 1: prio 1 u32 match ip src 0/0 flowid 1:10
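
The regression then shows in the per-class stats, which is what this bug
is about:
 /sbin/tc -s class show dev eth0
OK kernels report non-zero rate/pps after some traffic; BAD ones stay at
rate 0bit 0pps.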

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to iproute2 in Ubuntu.
https://bugs.launchpad.net/bugs/1426589

Title:
  tc class statistics rates are all zero after upgrade to Trusty

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/iproute2/+bug/1426589/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1426589] Re: tc class statistics rates are all zero after upgrade to Trusty

2015-04-28 Thread JuanJo Ciarlante
FYI tried iproute2-3.19.0, same zero rate output.

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to iproute2 in Ubuntu.
https://bugs.launchpad.net/bugs/1426589

Title:
  tc class statistics rates are all zero after upgrade to Trusty

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/iproute2/+bug/1426589/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1426589] Re: tc class statistics rates are all zero after upgrade to Trusty

2015-04-28 Thread JuanJo Ciarlante
FYI there are several changes at
https://www.kernel.org/pub/linux/kernel/v3.0/ChangeLog-3.10.12
that refer to htb rate handling.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1426589

Title:
  tc class statistics rates are all zero after upgrade to Trusty

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/iproute2/+bug/1426589/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1426589] Re: tc class statistics rates are all zero after upgrade to Trusty

2015-04-28 Thread JuanJo Ciarlante
** Also affects: linux-meta (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to iproute2 in Ubuntu.
https://bugs.launchpad.net/bugs/1426589

Title:
  tc class statistics rates are all zero after upgrade to Trusty

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/iproute2/+bug/1426589/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1426589] Re: tc class statistics rates are all zero after upgrade to Trusty

2015-04-28 Thread JuanJo Ciarlante
** Also affects: linux-meta (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1426589

Title:
  tc class statistics rates are all zero after upgrade to Trusty

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/iproute2/+bug/1426589/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1379567] Re: maas-proxy is an open proxy with no ACLs and listening on all interfaces

2015-02-11 Thread JuanJo Ciarlante
** Tags added: canonical-bootstack

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to maas in Ubuntu.
https://bugs.launchpad.net/bugs/1379567

Title:
  maas-proxy is an open proxy with no ACLs and listening on all
  interfaces

To manage notifications about this bug go to:
https://bugs.launchpad.net/maas/+bug/1379567/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1379567] Re: maas-proxy is an open proxy with no ACLs and listening on all interfaces

2015-02-11 Thread JuanJo Ciarlante
** Tags added: canonical-bootstack

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1379567

Title:
  maas-proxy is an open proxy with no ACLs and listening on all
  interfaces

To manage notifications about this bug go to:
https://bugs.launchpad.net/maas/+bug/1379567/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1393444] Re: machine unit connects to apiserver but stays in agent-state: pending

2015-01-30 Thread JuanJo Ciarlante
@sinzui: closing this as invalid, as I later confirmed this to be an MTU
issue.

** Changed in: juju-core
   Status: Triaged => Invalid

** Changed in: juju-core (Ubuntu)
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to juju-core in Ubuntu.
https://bugs.launchpad.net/bugs/1393444

Title:
  machine unit connects to apiserver but stays in agent-state: pending

To manage notifications about this bug go to:
https://bugs.launchpad.net/juju-core/+bug/1393444/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1393444] Re: machine unit connects to apiserver but stays in agent-state: pending

2015-01-30 Thread JuanJo Ciarlante
@sinzui: closing this as invalid, as I later confirmed this to be an MTU
issue.

** Changed in: juju-core
   Status: Triaged => Invalid

** Changed in: juju-core (Ubuntu)
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1393444

Title:
  machine unit connects to apiserver but stays in agent-state: pending

To manage notifications about this bug go to:
https://bugs.launchpad.net/juju-core/+bug/1393444/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1355813] Re: Interface MTU management across MAAS/juju

2015-01-05 Thread JuanJo Ciarlante
** Tags added: canonical-bootstack

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1355813

Title:
  Interface MTU management across MAAS/juju

To manage notifications about this bug go to:
https://bugs.launchpad.net/juju-core/+bug/1355813/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1355813] Re: Interface MTU management across MAAS/juju

2015-01-05 Thread JuanJo Ciarlante
** Tags added: canonical-bootstack

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to juju-core in Ubuntu.
https://bugs.launchpad.net/bugs/1355813

Title:
  Interface MTU management across MAAS/juju

To manage notifications about this bug go to:
https://bugs.launchpad.net/juju-core/+bug/1355813/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1393444] Re: machine unit connects to apiserver but stays in agent-state: pending

2014-11-19 Thread JuanJo Ciarlante
This deployment has 2 metal nodes hosting LXC units (machines
0 and 18); 'juju deploy cs:ubuntu --to lxc:0' works fine, while
'--to lxc:18' was consistently failing as described above.

FYI I've worked around this by removing machine 18 down to
'maas ready' and reacquiring it from juju; now all new LXC
units there behave normally.

IMO it's still worth digging into which leftover state bits for that
machine were triggering this issue; I copied a juju backup
tarball to ~natefinch in case this is feasible.

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to juju-core in Ubuntu.
https://bugs.launchpad.net/bugs/1393444

Title:
  machine unit connects to apiserver but stays in agent-state: pending

To manage notifications about this bug go to:
https://bugs.launchpad.net/juju-core/+bug/1393444/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1393444] Re: machine unit connects to apiserver but stays in agent-state: pending

2014-11-19 Thread JuanJo Ciarlante
This deployment has 2 metal nodes hosting LXC units (machines
0 and 18); 'juju deploy cs:ubuntu --to lxc:0' works fine, while
'--to lxc:18' was consistently failing as described above.

FYI I've worked around this by removing machine 18 down to
'maas ready' and reacquiring it from juju; now all new LXC
units there behave normally.

IMO it's still worth digging into which leftover state bits for that
machine were triggering this issue; I copied a juju backup
tarball to ~natefinch in case this is feasible.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1393444

Title:
  machine unit connects to apiserver but stays in agent-state: pending

To manage notifications about this bug go to:
https://bugs.launchpad.net/juju-core/+bug/1393444/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1393444] [NEW] machine unit connects to apiserver but stays in agent-state: pending

2014-11-17 Thread JuanJo Ciarlante
Public bug reported:

FYI this is the same environment from lp#1392810 (1.18 -> 1.19 -> 1.20),
juju version: 1.20.11-trusty-amd64

New units deployed (to LXC over maas) stay at agent-state: pending:
http://paste.ubuntu.com/9057045/

#1 TCP connects ok to node0:17070
- at the unit:
ubuntu@juju-machine-18-lxc-5:~$ netstat -tn
tcp   0  0 x.x.x.167:57937 x.x.x.8:17070   ESTABLISHED

- at node0:
ubuntu@node0:~$ sudo netstat -tnp|grep 167
tcp6  0   3807 x.x.x.8:17070   x.x.x.167:57937 ESTABLISHED 1993/jujud

What's interesting there is that node0's socket TCP receive queue (3807 bytes)
is not being read by jujud.

#2 machine-0.log:
- nothing shows at the unit's connection time
(i.e. on restarting jujud-machine-18-lxc-5)

- after 4~5 minutes, the connection drops, and this is logged:
2014-11-17 14:28:56 ERROR juju.state.apiserver.common resource.go:102 error 
stopping *apiserver.pingTimeout resource: ping timeout

** Affects: juju-core
 Importance: High
 Status: Triaged

** Affects: juju-core (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: canonical-bootstack canonical-is lxc upgrade-juju

** Tags added: canonical-bootstack

** Tags added: canonical-is

** Summary changed:

- machine unit connects to apiserver but doesn't deploy service
+ machine unit connects to apiserver but stays in agent-state: pending

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to juju-core in Ubuntu.
https://bugs.launchpad.net/bugs/1393444

Title:
  machine unit connects to apiserver but stays in agent-state: pending

To manage notifications about this bug go to:
https://bugs.launchpad.net/juju-core/+bug/1393444/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1393444] Re: machine unit connects to apiserver but stays in agent-state: pending

2014-11-17 Thread JuanJo Ciarlante
/var/log/juju/machine-18-lxc-5.log: http://paste.ubuntu.com/9057287/
NOTE: the repeated log stanzas there are due to my manual restarts.

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to juju-core in Ubuntu.
https://bugs.launchpad.net/bugs/1393444

Title:
  machine unit connects to apiserver but stays in agent-state: pending

To manage notifications about this bug go to:
https://bugs.launchpad.net/juju-core/+bug/1393444/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1393444] Re: machine unit connects to apiserver but stays in agent-state: pending

2014-11-17 Thread JuanJo Ciarlante
strace on both sides (grepped for the specific sockets):
http://paste.ubuntu.com/9057691/,
mind the sub-second timestamp diff.

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to juju-core in Ubuntu.
https://bugs.launchpad.net/bugs/1393444

Title:
  machine unit connects to apiserver but stays in agent-state: pending

To manage notifications about this bug go to:
https://bugs.launchpad.net/juju-core/+bug/1393444/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1393444] [NEW] machine unit connects to apiserver but stays in agent-state: pending

2014-11-17 Thread JuanJo Ciarlante
Public bug reported:

FYI this is the same environment from lp#1392810 (1.18 -> 1.19 -> 1.20),
juju version: 1.20.11-trusty-amd64

New units deployed (to LXC over maas) stay at agent-state: pending:
http://paste.ubuntu.com/9057045/

#1 TCP connects ok to node0:17070
- at the unit:
ubuntu@juju-machine-18-lxc-5:~$ netstat -tn
tcp   0  0 x.x.x.167:57937 x.x.x.8:17070   ESTABLISHED

- at node0:
ubuntu@node0:~$ sudo netstat -tnp|grep 167
tcp6  0   3807 x.x.x.8:17070   x.x.x.167:57937 ESTABLISHED 1993/jujud

What's interesting there is that node0's socket TCP receive queue (3807 bytes)
is not being read by jujud.

#2 machine-0.log:
- nothing shows at the unit's connection time
(i.e. on restarting jujud-machine-18-lxc-5)

- after 4~5 minutes, the connection drops, and this is logged:
2014-11-17 14:28:56 ERROR juju.state.apiserver.common resource.go:102 error 
stopping *apiserver.pingTimeout resource: ping timeout

** Affects: juju-core
 Importance: High
 Status: Triaged

** Affects: juju-core (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: canonical-bootstack canonical-is lxc upgrade-juju

** Tags added: canonical-bootstack

** Tags added: canonical-is

** Summary changed:

- machine unit connects to apiserver but doesn't deploy service
+ machine unit connects to apiserver but stays in agent-state: pending

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1393444

Title:
  machine unit connects to apiserver but stays in agent-state: pending

To manage notifications about this bug go to:
https://bugs.launchpad.net/juju-core/+bug/1393444/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1393444] Re: machine unit connects to apiserver but stays in agent-state: pending

2014-11-17 Thread JuanJo Ciarlante
/var/log/juju/machine-18-lxc-5.log: http://paste.ubuntu.com/9057287/
NOTE: the repeated log stanzas there are due to my manual restarts.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1393444

Title:
  machine unit connects to apiserver but stays in agent-state: pending

To manage notifications about this bug go to:
https://bugs.launchpad.net/juju-core/+bug/1393444/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1393444] Re: machine unit connects to apiserver but stays in agent-state: pending

2014-11-17 Thread JuanJo Ciarlante
strace on both sides (grepped for the specific sockets):
http://paste.ubuntu.com/9057691/,
mind the sub-second timestamp diff.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1393444

Title:
  machine unit connects to apiserver but stays in agent-state: pending

To manage notifications about this bug go to:
https://bugs.launchpad.net/juju-core/+bug/1393444/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1377964] Re: maas-proxy logrotate permission denied

2014-11-06 Thread JuanJo Ciarlante
Also affected by this issue: 1.7rc1 (upgraded from 1.5.2); from /var/log/syslog:
Nov  6 06:25:01 duck squid3: Cannot open stdio:/var/log/maas/proxy/store.log: 
(13) Permission denied

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to maas in Ubuntu.
https://bugs.launchpad.net/bugs/1377964

Title:
  maas-proxy logrotate permission denied

To manage notifications about this bug go to:
https://bugs.launchpad.net/maas/+bug/1377964/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1377964] Re: maas-proxy logrotate permission denied

2014-11-06 Thread JuanJo Ciarlante
Also affected by this issue: 1.7rc1 (upgraded from 1.5.2); from /var/log/syslog:
Nov  6 06:25:01 duck squid3: Cannot open stdio:/var/log/maas/proxy/store.log: 
(13) Permission denied

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1377964

Title:
  maas-proxy logrotate permission denied

To manage notifications about this bug go to:
https://bugs.launchpad.net/maas/+bug/1377964/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1382190] Re: LXCs assigned IPs by MAAS DHCP lack DNS PTR entries

2014-10-17 Thread JuanJo Ciarlante
To clarify what's happening with the rabbitmq charm: for its units to be
able to cluster together, they need to refer to each other by hostname; see
[0], which was done based on the observed pattern per comments #4 and #7 above.

[0] https://code.launchpad.net/~jjo/charms/trusty/rabbitmq-server/fix-
nodename-to-host-dns-PTR
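
A quick sanity check on a unit, given that clustering needs working forward
and reverse lookups (a sketch, not charm code):
 getent hosts "$(hostname)"                          # forward (A) lookup
 dig +short -x "$(hostname -I | awk '{print $1}')"   # reverse (PTR) lookup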

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to maas in Ubuntu.
https://bugs.launchpad.net/bugs/1382190

Title:
  LXCs assigned IPs by MAAS DHCP lack DNS PTR entries

To manage notifications about this bug go to:
https://bugs.launchpad.net/maas/+bug/1382190/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1382190] Re: LXCs assigned IPs by MAAS DHCP lack DNS PTR entries

2014-10-17 Thread JuanJo Ciarlante
To clarify what's happening with the rabbitmq charm: for its units to be
able to cluster together, they need to refer to each other by hostname; see
[0], which was done based on the observed pattern per comments #4 and #7 above.

[0] https://code.launchpad.net/~jjo/charms/trusty/rabbitmq-server/fix-
nodename-to-host-dns-PTR
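
A quick sanity check on a unit, given that clustering needs working forward
and reverse lookups (a sketch, not charm code):
 getent hosts "$(hostname)"                          # forward (A) lookup
 dig +short -x "$(hostname -I | awk '{print $1}')"   # reverse (PTR) lookup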

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1382190

Title:
  LXCs assigned IPs by MAAS DHCP lack DNS PTR entries

To manage notifications about this bug go to:
https://bugs.launchpad.net/maas/+bug/1382190/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1274947] Re: juju lxc instances deployed via MAAS don't have resolvable hostnames

2014-10-16 Thread JuanJo Ciarlante
FYI this is preventing the current trusty/rabbitmq charm from deploying on MaaS
1.7beta + LXC; 1.5 at least had PTR resolution for every dhcp'd IP, e.g.:
IN PTR 10-1-57-22.maas. , while 1.7beta has none AFAICT.
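
A quick way to verify PTR resolution against the MAAS DNS (IP taken from the
example above):
 dig +short -x 10.1.57.22
1.5 would answer 10-1-57-22.maas.; on 1.7beta this returns nothing.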

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to maas in Ubuntu.
https://bugs.launchpad.net/bugs/1274947

Title:
  juju lxc instances deployed via MAAS don't have resolvable hostnames

To manage notifications about this bug go to:
https://bugs.launchpad.net/maas/+bug/1274947/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs

